Re: Maybe a bug?
I realize that this issue is complaining about the on-disk data. However,
if something invalid was accepted, it would cause invalid data to be
recorded both on disk and in ZooKeeper, which would tie together those two
seemingly unrelated things.

I may have fixed this with https://issues.apache.org/jira/browse/KAFKA-371

-Jonathan
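
A note for anyone hitting the same trace: the failure is in
kafka.utils.Utils.getTopicPartition, which expects every directory under the
broker's log directory to be named "topic-partition". The sketch below is an
assumption reconstructed from the stack trace, not the actual Kafka source; it
reproduces the "String index out of range: -1" error and adds a small scan you
could point at a log directory such as the /var/lib/kafka path mentioned
further down.

// Hedged sketch: assumed parse logic, not kafka.utils.Utils itself.
// Splits a log directory name at the last '-' the way the stack trace
// suggests, and flags directory names a 0.7 broker would choke on at startup.
import java.io.File
import scala.util.Try

object TopicPartitionParseSketch {
  def getTopicPartition(dirName: String): (String, Int) = {
    val sep = dirName.lastIndexOf('-')     // -1 when the name contains no '-'
    val topic = dirName.substring(0, sep)  // substring(0, -1) throws "String index out of range: -1"
    val partition = dirName.substring(sep + 1).toInt
    (topic, partition)
  }

  // List entries under a Kafka log directory that are not "topic-partition" shaped.
  def scanLogDir(logDir: File): Unit =
    for (dir <- Option(logDir.listFiles()).getOrElse(Array.empty[File]) if dir.isDirectory) {
      val ok = Try(getTopicPartition(dir.getName)).isSuccess
      println(s"${dir.getName} -> " + (if (ok) "ok" else "would abort broker startup"))
    }

  def main(args: Array[String]): Unit = {
    println(getTopicPartition("test-0"))    // (test,0): well-formed, like the directory in the log
    scanLogDir(new File("/var/lib/kafka"))  // path taken from the thread; adjust as needed
    getTopicPartition("marketing")          // StringIndexOutOfBoundsException: -1, as in the log below
  }
}

In other words, any stray directory whose name lacks a "-<partition>" suffix
(like the 'marketing' directory in the log below) aborts broker startup.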

On Fri, Jul 20, 2012 at 4:30 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:

> Jonathan,
>
> This issue is not coming from the ZooKeeper data; it is coming from
> the on-disk data. What do the contents of your Kafka log directory
> look like?
> The error is complaining that a partition directory, which should look
> like "topic-partition", is malformed.
>
> Thanks,
> Neha
>
> On Mon, Jul 16, 2012 at 10:59 AM, Jonathan Creasy <[EMAIL PROTECTED]> wrote:
> > I believe it was "marketing".
> >
> > The Zookeeper structure looked like:
> >
> > /brokers/topics/marketing/log_hdfs_marketing
> >
> > instead of
> >
> > /brokers/topics/marketing/1
> >
> > -Jonathan
> >
> > On Sat, Jul 14, 2012 at 11:41 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
> >> It seems that the error is introduced when parsing the topic string. Do
> >> you know what topic triggered the error?
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >> On Thu, Jul 12, 2012 at 6:12 PM, Jonathan Creasy <[EMAIL PROTECTED]> wrote:
> >>
> >>> I think a poorly implemented Producer in our dev environment may have
> >>> triggered a bug in Kafka.
> >>>
> >>> We are running 0.7.1.
> >>>
> >>> The error:
> >>>
> >>> 2012-07-11 12:20:09,034 -  [main:Logging$class@58] - Starting Kafka server...
> >>> 2012-07-11 12:20:09,046 -  [main:Logging$class@58] - Loading log 'test-0'
> >>> 2012-07-11 12:20:09,062 -  [main:Logging$class@58] - Loading the last segment /var/lib/kafka/test-0/00000000000000000000.kafka in mutable mode, recovery false
> >>> 2012-07-11 12:20:09,082 -  [main:Logging$class@58] - Loading log 'marketing'
> >>> 2012-07-11 12:20:09,088 -  [main:Logging$class@96] - Fatal error during KafkaServerStable startup. Prepare to shutdown
> >>> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> >>>   at java.lang.String.substring(String.java:1937)
> >>>   at kafka.utils.Utils$.getTopicPartition(Utils.scala:558)
> >>>   at kafka.log.LogManager$$anonfun$4.apply(LogManager.scala:71)
> >>>   at kafka.log.LogManager$$anonfun$4.apply(LogManager.scala:65)
> >>>   at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> >>>   at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> >>>   at kafka.log.LogManager.<init>(LogManager.scala:65)
> >>>   at kafka.server.KafkaServer.startup(KafkaServer.scala:58)
> >>>   at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> >>>   at kafka.Kafka$.main(Kafka.scala:50)
> >>>   at kafka.Kafka.main(Kafka.scala)
> >>>
> >>> Other Symptoms:
> >>>
> >>> 1) We blew away /var/lib/kafka as our two brokers did not have the same
> >>> contents but were both displaying this error on different parts. I saved
> >>> this data from one system for future review if useful. Kafka still did
> >>> not start.
> >>>
> >>> 2) I logged into ZooKeeper, and in the directory structure
> >>> /brokers/topics/<topic>/<broker_id> there were strings matching topic
> >>> names where the <broker_id> nodes should be. Manually clearing out
> >>> /brokers/topics and restarting Kafka on both brokers did resolve the
> >>> issue (see the ZooKeeper sketch below, after the quoted thread).
> >>>
> >>> Obviously there would have been data loss had this been a production
> >>> environment. Is this a known issue where we are simply not using Kafka
> >>> properly, or have we accidentally written a malformed producer that did
> >>> something the broker did not expect?
> >>>
> >>> We are not sure what the Producers we are working on may have done to
> >>> cause this. We have two developers working on independent producers to
> >>> integrate into our applications.
> >>>
> >>> -Jonathan
> >>>
>
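
The ZooKeeper side can be checked before resorting to deleting /brokers/topics.
Below is a rough sketch using the plain ZooKeeper Java client from Scala; the
connect string, session timeout, and the object name InspectBrokerTopics are
assumptions, and the expectation that each topic's children are numeric broker
ids (holding a partition count) follows the 0.7 layout described in the thread.
It simply lists each /brokers/topics/<topic>/<child> node and flags non-numeric
children such as the log_hdfs_marketing entry above.

// Hedged sketch: inspect /brokers/topics and flag children that are not broker ids.
import java.util.concurrent.CountDownLatch
import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}
import org.apache.zookeeper.Watcher.Event.KeeperState
import scala.jdk.CollectionConverters._

object InspectBrokerTopics {
  def main(args: Array[String]): Unit = {
    val connected = new CountDownLatch(1)
    // Connect string and timeout are assumptions; adjust for your ensemble.
    val zk = new ZooKeeper("localhost:2181", 30000, new Watcher {
      def process(event: WatchedEvent): Unit =
        if (event.getState == KeeperState.SyncConnected) connected.countDown()
    })
    connected.await()
    try {
      for (topic <- zk.getChildren("/brokers/topics", false).asScala;
           child <- zk.getChildren(s"/brokers/topics/$topic", false).asScala) {
        // Expected: child is a numeric broker id whose value is that broker's partition count.
        val verdict = if (child.forall(_.isDigit)) "ok (broker id)" else "UNEXPECTED entry"
        println(s"/brokers/topics/$topic/$child -> $verdict")
      }
    } finally zk.close()
  }
}

Only after confirming which znodes are bogus would clearing out /brokers/topics
entirely (as described above) be the last resort, since that also removes the
legitimate topic registrations.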