We expect to occasionally have to deal with fairly large messages being sent to Kafka. We will of course set the fetch size appropriately high, but we are concerned about the behavior when a message exceeds the fetch size. As far as I can tell, the current behavior when an oversized message is encountered is to pretend it is not there and not notify the consumer in any way. IMO it would be better to throw an exception than to silently ignore the issue (with the current code, one can't really distinguish a large message from no data at all).
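To make the complaint concrete, here is a minimal sketch (hypothetical code, not the actual Kafka consumer internals) contrasting the two behaviors: the "silent" mode returns an empty batch when an oversized message sits at the head of the log, which looks exactly like "no data", while the proposed mode raises instead. The function and exception names are made up for illustration.

```python
class MessageTooLargeError(Exception):
    """Raised when the next message cannot fit within the fetch size."""

def fetch(messages, fetch_size, raise_on_oversize=True):
    """Return the messages that fit in one fetch of `fetch_size` bytes.

    With raise_on_oversize=False this mimics the behavior described in
    the thread: an oversized message at the head of the log yields an
    empty result, indistinguishable from having no data at all.
    """
    batch, used = [], 0
    for msg in messages:
        if used + len(msg) > fetch_size:
            # An oversized message blocks the very first slot: either
            # surface it loudly, or silently return nothing.
            if not batch and raise_on_oversize:
                raise MessageTooLargeError(
                    f"message of {len(msg)} bytes exceeds "
                    f"fetch size of {fetch_size} bytes")
            break
        batch.append(msg)
        used += len(msg)
    return batch
```

For example, `fetch([b"x" * 20], 10, raise_on_oversize=False)` returns `[]`, while the default mode raises `MessageTooLargeError`, letting the consumer distinguish the two cases.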
Re: Consumer behavior when message exceeds fetch.message.max.bytes
Yes. It's good to enforce that. Could you file a jira and attach your patch there?
Jun

On Thu, Aug 1, 2013 at 7:39 AM, Sam Meder <[EMAIL PROTECTED]> wrote: