hi all, i've got an exception:

kafka.common.MessageSizeTooLargeException: payload size of 1772597 larger than 1000000
        at kafka.message.ByteBufferMessageSet.verifyMessageSize(ByteBufferMessageSet.scala:93)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:122)
        at kafka.producer.ProducerPool$$anonfun$send$1.apply$mcVI$sp(ProducerPool.scala:114)
        at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:100)
        at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:100)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
        at kafka.producer.ProducerPool.send(ProducerPool.scala:100)
        at kafka.producer.Producer.zkSend(Producer.scala:140)
        at kafka.producer.Producer.send(Producer.scala:99)
        at kafka.javaapi.producer.Producer.send(Producer.scala:103)
There is a setting that controls the maximum message size. This is to ensure that messages can be read on the server and by all consumers without running out of memory or exceeding the consumer fetch size. In 0.7.x this setting is controlled by the broker configuration max.message.size.
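As a sketch, raising that limit would go in the broker's server.properties; the 2000000-byte value here is just an illustration sized to clear the 1772597-byte payload from the stack trace, not a recommended setting:

```
# server.properties (0.7.x broker)
# Maximum size in bytes of a single message the broker will accept.
# The default is 1000000 (~1 MB), which is what the exception above hit.
max.message.size=2000000
```

After changing this, the consumer fetch size has to be raised to match, or consumers will be unable to read the larger messages.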
-Jay On Tue, Jan 29, 2013 at 12:18 AM, Bo Sun <[EMAIL PROTECTED]> wrote:
At LinkedIn, what is the largest payload size per message you guys have in production? My app might have messages around 20-100 kilobytes in size and I am hoping to get an idea if others have large messages like this for any production use case. On Tue, Jan 29, 2013 at 11:35 AM, Neha Narkhede <[EMAIL PROTECTED]>wrote:
WRT how to set the maximum there are two considerations: 1. It should be smaller than the fetch size your consumers use 2. Messages are fully instantiated in memory so obscenely large messages (say hundreds of MB) will cause a lot of memory allocation churn/problems.
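Following consideration 1, the consumer-side setting has to keep pace with the broker limit. A sketch of the matching 0.7.x consumer properties (the fetch.size property name is from the 0.7 consumer config; the value is illustrative and simply chosen to exceed the broker's max.message.size):

```
# consumer.properties (0.7.x consumer)
# Must be at least as large as the broker's max.message.size,
# otherwise oversized messages can never be fetched and the
# consumer will stall on that partition.
fetch.size=2097152
```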
-Jay On Tue, Jan 29, 2013 at 8:57 AM, S Ahmed <[EMAIL PROTECTED]> wrote: