I've been doing some testing to understand how max.message.bytes works
with respect to sending batches of messages.  In a previous discussion,
there seemed to be a suggestion that one work-around when hitting a
MessageSizeTooLargeException is to reduce the batch size and resubmit the
batch in smaller sub-batches.
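For concreteness, the work-around as I understood it would look roughly
like the sketch below: split the original batch into smaller sub-batches
and send each one separately.  The splitting helper here is hypothetical
(it is not part of any Kafka API), and the sub-batch size would be
something you'd tune yourself:

```java
import java.util.ArrayList;
import java.util.List;

public class SubBatch {
    // Hypothetical helper: split a batch into sub-batches of at most
    // subBatchSize elements, preserving order.  Each sub-batch would then
    // be passed to producer.send(...) separately.
    static <T> List<List<T>> split(List<T> batch, int subBatchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < batch.size(); i += subBatchSize) {
            out.add(new ArrayList<>(
                batch.subList(i, Math.min(i + subBatchSize, batch.size()))));
        }
        return out;
    }
}
```

That is, on catching the exception you would retry with a smaller
subBatchSize rather than resending the whole batch at once.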

However, so far in my testing I'm not seeing a way to trigger a
MessageSizeTooLargeException with a large batch of messages whose
cumulative size is greater than max.message.bytes but where no individual
message exceeds the max.  In fact, I'm able to send through some very
large batches, e.g. 200 messages of 500000 bytes each in a single batch
(where max.message.bytes is 1000000).  However, if any one of the messages
in the batch exceeds that limit, the whole batch is rejected.
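Just to spell out the numbers from my test (the constants below are the
ones from my setup, not defaults): each message is well under the limit,
while the batch as a whole is 100x over it, and the broker still accepted
it:

```java
public class BatchSizeCheck {
    public static void main(String[] args) {
        int maxMessageBytes = 1_000_000;   // broker max.message.bytes in my test
        int messageSize = 500_000;         // size of each individual message
        int batchCount = 200;              // messages per batch

        long cumulative = (long) messageSize * batchCount;  // 100,000,000 bytes

        System.out.println("per-message under limit: "
                + (messageSize <= maxMessageBytes));        // true
        System.out.println("cumulative bytes: " + cumulative);
        System.out.println("cumulative over limit: "
                + (cumulative > maxMessageBytes));          // true, yet accepted
    }
}
```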

Is this the expected behavior?  If so, what was being referred to in the
previous discussions about retrying smaller batch sizes as a work-around
for this exception?

I haven't been using compression in any of these tests; does that make a
difference?

