Kafka >> mail # user >> simple producer (kafka 0.7.2) exception


Re: simple producer (kafka 0.7.2) exception
Yes, the Kafka broker writes data to disk. There are time-based and size-based
retention policies that determine how long the data is kept.

Thanks,

Jun
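
For reference, the retention Jun mentions is configured per broker in server.properties. The values below are only illustrative, and the property names are the 0.7-era ones as best I recall them, so verify them against the config file shipped with your broker:

```properties
# Delete log segments once they are older than 7 days...
log.retention.hours=168
# ...or once a partition's log exceeds roughly 1 GB
# (0.7-era name; later releases renamed this to log.retention.bytes)
log.retention.size=1073741824
```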
On Mon, Apr 8, 2013 at 3:23 AM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:

> Yes, I resolved this by changing the dataDir=/tmp/zookeeper path in the
> zookeeper properties. I debugged the Scala code and found that I had a
> couple of topics left over from previous executions. One of those topics
> caused the exception.
>    By the way: do I understand correctly that Kafka serializes the data to
> disk?
> What is the serialization policy?
> If I want to delete/remove a topic, how can I do it using the API?
>
> Thanks
> Oleg.
>
>
> On Mon, Apr 8, 2013 at 6:38 AM, Swapnil Ghike <[EMAIL PROTECTED]> wrote:
>
> > Was a kafka broker running when your producer got this exception?
> >
> > Thanks,
> > Swapnil
> >
> > On 4/7/13 3:15 AM, "Oleg Ruchovets" <[EMAIL PROTECTED]> wrote:
> >
> > >I tried to run kafka 0.7.2 and got this exception:
> > >
> > >
> > >log4j:WARN No appenders could be found for logger
> > >(org.I0Itec.zkclient.ZkConnection).
> > >log4j:WARN Please initialize the log4j system properly.
> > >Exception in thread "main" java.lang.NumberFormatException: null
> > >    at java.lang.Integer.parseInt(Integer.java:417)
> > >    at java.lang.Integer.parseInt(Integer.java:499)
> > >    at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:208)
> > >    at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
> > >    at kafka.producer.ZKBrokerPartitionInfo$$anonfun$kafka$producer$ZKBrokerPartitionInfo$$getZKTopicPartitionInfo$1$$anonfun$5.apply(ZKBrokerPartitionInfo.scala:167)
> > >    at kafka.producer.ZKBrokerPartitionInfo$$anonfun$kafka$producer$ZKBrokerPartitionInfo$$getZKTopicPartitionInfo$1$$anonfun$5.apply(ZKBrokerPartitionInfo.scala:167)
> > >    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> > >    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> > >    at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> > >    at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:549)
> > >    at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
> > >    at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:596)
> > >    at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> > >    at scala.collection.JavaConversions$JListWrapper.map(JavaConversions.scala:596)
> > >    at kafka.producer.ZKBrokerPartitionInfo$$anonfun$kafka$producer$ZKBrokerPartitionInfo$$getZKTopicPartitionInfo$1.apply(ZKBrokerPartitionInfo.scala:167)
> > >    at kafka.producer.ZKBrokerPartitionInfo$$anonfun$kafka$producer$ZKBrokerPartitionInfo$$getZKTopicPartitionInfo$1.apply(ZKBrokerPartitionInfo.scala:163)
> > >    at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> > >    at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:549)
> > >    at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
> > >    at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:596)
> > >    at kafka.producer.ZKBrokerPartitionInfo.kafka$producer$ZKBrokerPartitionInfo$$getZKTopicPartitionInfo(ZKBrokerPartitionInfo.scala:163)
> > >    at kafka.producer.ZKBrokerPartitionInfo.<init>(ZKBrokerPartitionInfo.scala:65)
> > >    at kafka.producer.Producer.<init>(Producer.scala:47)
> > >    at kafka.javaapi.producer.Producer.<init>(Producer.scala:33)
> > >    at kafka.javaapi.producer.Producer.<init>(Producer.scala:40)
> > >    at kafka.example.Producer.main(Producer.java:66)
> > >Disconnected from the target VM, address: '127.0.0.1:49086', transport:
> > >'socket'
> > >
> > >Please advise.
> > >
> > >Thanks
> > >Oleg.
> >
> >
>
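
A note on the trace above: `NumberFormatException: null` is the message `Integer.parseInt` produces when it is handed a null string — here, most likely a missing partition-count value read back from one of the stale topic nodes in ZooKeeper, which matches the leftover-topics diagnosis in the thread. A minimal reproduction of just that failure mode (the class name is made up for illustration):

```java
// Demonstrates the exception at the top of the stack trace:
// Integer.parseInt(null) throws NumberFormatException with message "null".
public class ParseNullDemo {
    public static void main(String[] args) {
        // Stands in for a partition-count string missing from ZooKeeper.
        String partitionCount = null;
        try {
            Integer.parseInt(partitionCount);
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}
```

So the exception says nothing about the number format itself; the value Kafka expected to parse simply was not there.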
