Puneet Mehta 2012-09-24, 22:51
Swapnil Ghike 2012-09-24, 23:06
Puneet Mehta 2012-09-24, 23:22
Re: Max message size & fetch size related?
Yes, that should work as long as your maxMessageSize limit is
appropriately set in producer and server.

Thanks,
Swapnil
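
To make the advice above concrete, here is a minimal sketch of aligning the three limits for roughly 5 MB messages on Kafka 0.7. The property keys used below ("max.message.size" for the producer and "fetch.size" for the consumer) are assumptions mapped from the maxMessageSize and fetchSize config fields named in this thread, so double-check them against your 0.7 configuration before relying on them.

import java.util.Properties;

public class MessageSizeSettings {

    // Rule of thumb from this thread:
    //   producer maxMessageSize <= broker maxMessageSize <= consumer fetchSize
    static final int MAX_MESSAGE_SIZE = 5000000;          // 5,000,000 bytes, as proposed below
    static final int FETCH_SIZE       = 5 * 1024 * 1024;  // slightly larger than the max message

    public static void main(String[] args) {
        // Producer side; the broker would carry a matching max.message.size in server.properties.
        Properties producerProps = new Properties();
        producerProps.put("max.message.size", Integer.toString(MAX_MESSAGE_SIZE));

        // Consumer side.
        Properties consumerProps = new Properties();
        consumerProps.put("fetch.size", Integer.toString(FETCH_SIZE));

        System.out.println("producer max.message.size = " + MAX_MESSAGE_SIZE);
        System.out.println("consumer fetch.size       = " + FETCH_SIZE);
    }
}

These Properties would then be passed to producer.ProducerConfig and consumer.ConsumerConfig along with whatever other settings are already in use.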

On 9/24/12 4:22 PM, "Puneet Mehta" <[EMAIL PROTECTED]> wrote:

>Hi Swapnil,
>
>Thanks for the quick response.
>
>So if I am using a SimpleConsumer, I will need to create the fetch
>request like the following, right?
>
>// Fetch size of 5 MB
>int fetchSize = 5 * 1024 * 1024;
>FetchRequest fetchRequest = new FetchRequest("test", 0, offset, fetchSize);
>Thanks,
>
>Puneet Mehta
>
>
>
>On Monday, September 24, 2012 at 4:06 PM, Swapnil Ghike wrote:
>
>> Hi Puneet,
>>
>> Yes, you will need to bump up the maxMessageSize in server.KafkaConfig
>> and fetchSize in consumer.ConsumerConfig.
>>
>> server.KafkaConfig.maxMessageSize can be the same as
>> producer.ProducerConfig.maxMessageSize.
>>
>> You can set consumer.ConsumerConfig.fetchSize to a value greater than or
>> equal to producer.ProducerConfig.maxMessageSize.
>>
>> Thanks,
>> Swapnil
>>
>> On 9/24/12 3:51 PM, "Puneet Mehta" <[EMAIL PROTECTED]> wrote:
>>
>> > Hi all,
>> >
>> > We are using Kafka 0.7.1.
>> >
>> > I am seeing this error in the producer ->
>> >
>> > kafka.common.MessageSizeTooLargeException
>> >   at kafka.producer.SyncProducer$$anonfun$kafka$producer$SyncProducer$$verifyMessageSize$1.apply(SyncProducer.scala:141)
>> >   at kafka.producer.SyncProducer$$anonfun$kafka$producer$SyncProducer$$verifyMessageSize$1.apply(SyncProducer.scala:139)
>> >   at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
>> >   at kafka.message.MessageSet.foreach(MessageSet.scala:87)
>> >   at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$verifyMessageSize(SyncProducer.scala:139)
>> >   at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
>> >   at kafka.producer.ProducerPool$$anonfun$send$1.apply$mcVI$sp(ProducerPool.scala:116)
>> >   at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:102)
>> >   at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:102)
>> >   at kafka.producer.ProducerPool.send(ProducerPool.scala:102)
>> >   at kafka.producer.Producer.zkSend(Producer.scala:143)
>> >   at kafka.producer.Producer.send(Producer.scala:105)
>> >   at kafka.javaapi.producer.Producer.send(Producer.scala:104)
>> >
>> > We are using the max message size as 1000000 bytes.
>> >
>> > I am planning to bump this up to say 5000000 bytes.
>> >
>> > I am just wondering: do I need to change any other properties in the
>> > producer/broker/consumer that may be impacted by this bump?
>> >
>> > Also, I came across this thread, which relates fetch size to max
>> > message size -> https://issues.apache.org/jira/browse/KAFKA-247
>> >
>> > Could any of you advise me on which places may be impacted and need to
>> > be changed accordingly?
>> >
>> >
>> > Thanks,
>> > Puneet Mehta
>> >
>>
>>
>>
>
>
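
For completeness, a slightly fuller version of the SimpleConsumer snippet quoted above, with the 5 MB fetch size wired in. The SimpleConsumer constructor and fetch() call follow the 0.7-era javaapi as best recalled here, so treat this as a sketch rather than verified code; the host, port, timeout, and topic are placeholders.

import kafka.api.FetchRequest;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.javaapi.message.ByteBufferMessageSet;
import kafka.message.MessageAndOffset;

public class LargeMessageFetch {
    public static void main(String[] args) {
        // 5 MB fetch size so a 5,000,000-byte message fits in a single fetch.
        int fetchSize = 5 * 1024 * 1024;
        long offset = 0L;

        // Placeholder connection details: host, port, socket timeout, buffer size.
        SimpleConsumer consumer = new SimpleConsumer("localhost", 9092, 30 * 1000, fetchSize);

        // Same constructor shape as the snippet in the thread: topic, partition, offset, maxSize.
        FetchRequest fetchRequest = new FetchRequest("test", 0, offset, fetchSize);
        ByteBufferMessageSet messages = consumer.fetch(fetchRequest);

        for (MessageAndOffset messageAndOffset : messages) {
            // Advance to the offset of the next message.
            offset = messageAndOffset.offset();
        }
        consumer.close();
    }
}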