Kafka user mailing list >> GZIPCompressionCodec


Thread:
  Corbin Hoenes 2012-08-10, 18:07
  jjian fan 2012-08-13, 09:21
  Corbin Hoenes 2012-08-16, 03:35
  Neha Narkhede 2012-08-16, 04:33

Re: GZIPCompressionCodec
It's only happening under production load/data but I'll try to see if we can figure something out.

On Aug 15, 2012, at 10:33 PM, Neha Narkhede wrote:

> Hi Corbin,
>
> It will help if you can file a JIRA and attach a reproducible test
> case to it. We have seen this issue occasionally, but haven't been
> able to reproduce it.
>
> Thanks,
> Neha
>
> On Wed, Aug 15, 2012 at 8:35 PM, Corbin Hoenes <[EMAIL PROTECTED]> wrote:
>> Side note: KAFKA-411 didn't seem to help.  Still getting invalid message exceptions.
>>
>>
>> On Aug 13, 2012, at 3:21 AM, jjian fan wrote:
>>
>>> Please check KAFKA-411: https://issues.apache.org/jira/browse/KAFKA-411
>>>
>>> 2012/8/11 Corbin Hoenes <[EMAIL PROTECTED]>
>>>
>>>> Guys, I am getting loads of these exceptions.  I am currently pushing
>>>> loads of data through and still getting my head wrapped around how to
>>>> debug issues like this.  Anyone have a clue what might be going on here?
>>>>
>>>> using: kafka-0.7.1-incubating-candidate-2
>>>>
>>>> kafka.message.InvalidMessageException: message is invalid, compression
>>>> codec: GZIPCompressionCodec size: 63426 curr offset: 0 init offset: 0
>>>>       at kafka.message.ByteBufferMessageSet$$anon$1.makeNextOuter(ByteBufferMessageSet.scala:130)
>>>>       at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:166)
>>>>       at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:100)
>>>>       at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:59)
>>>>       at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:51)
>>>>       at scala.collection.Iterator$class.foreach(Iterator.scala:631)
>>>>       at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
>>>>       at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
>>>>       at kafka.message.MessageSet.foreach(MessageSet.scala:87)
>>>> --
>>>>       at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
>>>>       at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>>>>       at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
>>>>       at kafka.server.KafkaRequestHandlers.handleMultiProducerRequest(KafkaRequestHandlers.scala:62)
>>>>       at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$4.apply(KafkaRequestHandlers.scala:41)
>>>>       at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$4.apply(KafkaRequestHandlers.scala:41)
>>>>       at kafka.network.Processor.handle(SocketServer.scala:296)
>>>>       at kafka.network.Processor.read(SocketServer.scala:319)
>>>>       at kafka.network.Processor.run(SocketServer.scala:214)
>>>>       at java.lang.Thread.run(Thread.java:619)
>>
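[Editor's note] The stack trace above is thrown while the broker iterates a GZIP-compressed message set, so one plausible class of failure is a compressed payload that arrives truncated or corrupted. The following is a minimal, self-contained Java sketch (not Kafka code; the class name GzipTruncation and the payload are illustrative) showing that a gzip stream cut short fails loudly at decompression time, which may help in building the reproducible test case Neha asked for:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipTruncation {

    // Compress a byte array with gzip.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Decompress a gzip byte array; throws IOException if the stream
    // is truncated or otherwise invalid.
    static byte[] gunzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(data))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "an example message body".getBytes("UTF-8");
        byte[] compressed = gzip(payload);

        // An intact stream round-trips cleanly.
        System.out.println(Arrays.equals(gunzip(compressed), payload));

        // Drop the last 4 bytes of the compressed stream, as a broker
        // might see if a message set were cut off mid-write.
        byte[] truncated = Arrays.copyOf(compressed, compressed.length - 4);
        try {
            gunzip(truncated);
            System.out.println("no error");
        } catch (IOException e) {
            System.out.println("decompress failed");
        }
    }
}
```

If the exception in production is reproducible with a captured payload, feeding the raw message bytes through a standalone decompressor like this can at least separate "payload corrupted on the wire or on disk" from "bug in the broker's message-set iteration".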