Thanks. I know I can write a SimpleConsumer to do this, but it feels like
the High Level consumer is _so_ close to being robust enough to handle
what I'd think people want to do in most applications. I'm going to submit
an enhancement request.

I'm trying to understand the level of data loss in this situation, so I
looked deeper into the KafkaStream logic: it looks like a KafkaStream
includes a BlockingQueue for transferring the messages to my code from
Kafka. If I call shutdown() when I detect the problem, are the messages
already sitting in the BlockingQueue considered 'read' by Kafka, or does
shutdown peek into the queue to see what is still there before updating
the consumed offsets?

My concern is that if that queue is not empty, I'll be losing more than
the one message that led to the failure.
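To make the concern concrete, here's a self-contained sketch using plain java.util.concurrent (this is a stand-in model, not the actual Kafka internals; the class and names are mine). It simulates a fetcher thread having already queued several messages, processing stopping at a bad message, and then draining the queue to see what shutdown would abandon:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDrainSketch {
    /**
     * Consume from the queue until a message equal to badMsg is hit
     * (simulating a processing failure), then drain and return whatever
     * is still queued -- i.e. already fetched but never processed.
     */
    static List<String> unprocessedAfterFailure(BlockingQueue<String> queue,
                                                String badMsg) {
        while (!queue.isEmpty()) {
            String msg = queue.poll();
            if (msg.equals(badMsg)) {
                break; // processing "failed" here; caller would now shut down
            }
        }
        List<String> leftover = new ArrayList<>();
        queue.drainTo(leftover); // peek at what a shutdown would abandon
        return leftover;
    }

    public static void main(String[] args) {
        // Stand-in for the queue backing a KafkaStream: pretend the fetcher
        // thread has already pulled these four messages off the broker.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        queue.add("msg-1");
        queue.add("msg-2"); // suppose this one triggers the failure
        queue.add("msg-3");
        queue.add("msg-4");

        System.out.println(unprocessedAfterFailure(queue, "msg-2"));
        // -> [msg-3, msg-4]: more than just the failing message is at risk
    }
}
```

If the real consumer's offset tracking counts those queued-but-unprocessed messages as consumed, they are exactly the extra loss described above.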

I'm also curious how others are handling this situation. Do you assume the
message that caused the problem is lost, or do you somehow track it so you
can go get it later? I'd think others would have run into this too.



On Tue, Jul 9, 2013 at 3:23 PM, Philip O'Toole <[EMAIL PROTECTED]> wrote: