Kafka >> mail # user >> expected exceptions?


Re: expected exceptions?
If these exceptions are expected, does it make sense to log them as exceptions at all?  Can we
instead log something meaningful to the console, like:

"No leader was available, one will now be created"

or

"ConsumerConnector has shutdown"

etc.
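Until the messages themselves change, a test can mute this noise through its own log4j configuration. A sketch (the logger names are taken from the classes in the stack traces quoted below; the choice of levels is illustrative, not an official recommendation):

```
# log4j.properties fragment: quiet the expected startup/shutdown noise
# (logger names match the classes emitting the quoted stack traces)
log4j.logger.kafka.producer.async.DefaultEventHandler=FATAL
log4j.logger.kafka.consumer.SimpleConsumer=FATAL
log4j.logger.kafka.server.AbstractFetcherThread=FATAL
```

Note this hides real failures from those classes too, so it is only suitable for tests where the noise is known to be benign.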

Should I file JIRAs for these?

Jason
On Wed, May 8, 2013 at 8:22 AM, Jun Rao <[EMAIL PROTECTED]> wrote:

> Yes, both are expected.
>
> Thanks,
>
> Jun
>
>
> On Wed, May 8, 2013 at 12:16 AM, Jason Rosenberg <[EMAIL PROTECTED]> wrote:
>
> > I'm porting some unit tests from 0.7.2 to 0.8.0.  The test does the
> > following, all embedded in the same java process:
> >
> > -- spins up a zk instance
> > -- spins up a kafka server using a fresh log directory
> > -- creates a producer and sends a message
> > -- creates a high-level consumer and verifies that it can consume the
> > message
> > -- shuts down the consumer
> > -- stops the kafka server
> > -- stops zk
> >
> > The test seems to be working fine now; however, I consistently see the
> > following exceptions (which, from poking around the mailing list, seem
> > to be expected?).  If these are expected, can we suppress the logging
> > of these exceptions, since it clutters the output of tests and,
> > presumably, the logs of the running server/consumers, during clean
> > startup and shutdown...
> >
> > When I call producer.send(), I get:
> >
> > kafka.common.LeaderNotAvailableException: No leader for any partition
> >   at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartition(DefaultEventHandler.scala:212)
> >   at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
> >   at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:148)
> >   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> >   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> >   at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:148)
> >   at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:94)
> >   at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
> >   at kafka.producer.Producer.send(Producer.scala:74)
> >   at kafka.javaapi.producer.Producer.send(Producer.scala:32)
> >   ...
> >
> > When I call consumerConnector.shutdown(), I get:
> >
> > java.nio.channels.ClosedByInterruptException
> >   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
> >   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:543)
> >   at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
> >   at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:47)
> >   at kafka.consumer.SimpleConsumer.reconnect(SimpleConsumer.scala:60)
> >   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
> >   at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:73)
> >   at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
> >   at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
> >   at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
> >   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> >   at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
> >   at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
> >   at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
> >   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> >   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
> >   at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
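For the first exception quoted above (LeaderNotAvailableException on the first send), an alternative to suppressing the log line is to let the 0.8 producer ride out the initial leader election by retrying. A sketch of the relevant producer properties (the values here are illustrative, not recommendations):

```
# producer config fragment: retry the send while the leader is being elected
message.send.max.retries=5
retry.backoff.ms=200
```

With retries enabled, the first send in a freshly started embedded broker should eventually succeed once a leader exists, though the exception may still be logged on the early attempts.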

 