The weird part is this. If the consumers are consuming, the following
fetcher thread shouldn't be blocked on enqueuing the data. Could you turn
on TRACE level logging in kafka.server.KafkaRequestHandlers and check whether
any fetch requests are issued to the broker when the consumer threads get stuck?
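In case it helps, enabling that logger is usually a one-line change in the
broker's log4j.properties (the file path and appender names may differ in your
setup), along the lines of:

    log4j.logger.kafka.server.KafkaRequestHandlers=TRACE

With that in place you should see a TRACE entry per fetch request, so you can
tell whether the broker is still receiving fetches while the consumers appear stuck.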

"FetchRunnable-1" prio=10 tid=0x00007fcbc902b800 nid=0x2064 waiting on
condition [0x00007fcb833eb000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x00000006809e8000> (a
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
        at java.util.concurrent.LinkedBlockingQueue.put(
LinkedBlockingQueue.java:306)
        at kafka.consumer.PartitionTopicInfo.enqueue(
PartitionTopicInfo.scala:61)
        at kafka.consumer.FetcherRunnable$$anonfun$run$
5.apply(FetcherRunnable.scala:79)
        at kafka.consumer.FetcherRunnable$$anonfun$run$
5.apply(FetcherRunnable.scala:65)
        at scala.collection.LinearSeqOptimized$class.
foreach(LinearSeqOptimized.scala:61)
        at scala.collection.immutable.List.foreach(List.scala:45)
        at kafka.consumer.FetcherRunnable.run(FetcherRunnable.scala:65)
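
For what it's worth, the trace shows the fetcher parked inside
LinkedBlockingQueue.put, called from PartitionTopicInfo.enqueue. put() only
blocks when the bounded queue is full, which is why a fetcher stuck here
implies the consumer side isn't draining chunks. A minimal standalone sketch
of that behavior (not Kafka code, just the queue semantics, using a
hypothetical capacity of 1):

    import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

    object BlockingPutDemo extends App {
      // Bounded queue, analogous to the consumer's chunk queue.
      val queue = new LinkedBlockingQueue[String](1)

      queue.put("chunk-1") // succeeds; queue is now full

      val fetcher = new Thread(() => {
        // Parks here until something drains the queue -- the same state
        // FetcherRunnable is in above.
        queue.put("chunk-2")
        println("second put completed")
      })
      fetcher.start()

      TimeUnit.SECONDS.sleep(1)
      println(s"fetcher state after 1s: ${fetcher.getState}") // WAITING

      queue.take() // a "consumer" drains one chunk; the fetcher unblocks
      fetcher.join()
    }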

Thanks

Jun
On Wed, Jul 10, 2013 at 8:30 AM, Nihit Purwar <[EMAIL PROTECTED]> wrote:
 