Thanks, Felix, for sharing your work. The contrib hadoop-consumer looks like it
works the same way.

I think I need to really understand this offset stuff. So far I have used
only the high-level consumer. When the consumer is done reading all the
messages, I used to kill the process (because it won't exit on its own).

Then I used the Producer to pump more messages and the Consumer to read the
new messages (a new consumer process, since I had killed the last one).

But I never saw messages getting duplicated.

Now it's not clear to me how offsets are tracked, specifically when I
re-launch the consumer.
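For context, my high-level consumer is configured roughly like this (a minimal sketch; the property names are what I recall from the 0.7-era docs, and the ZooKeeper address and group name here are made up for illustration):

```properties
# consumer.properties -- high-level consumer settings for my test
zk.connect=localhost:2181      # ZooKeeper the consumer registers with
groupid=my-test-group          # the consumer group my progress is tracked under
autocommit.enable=true         # periodically record how far I have consumed
```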
And why does the retention policy not take effect when used with
SimpleConsumer? For my experiment I set it to 4 hours.
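For reference, this is how I shortened the retention on the broker (assuming the standard server.properties file; the default is a week, I dropped it to 4 hours):

```properties
# server.properties -- broker config for my retention experiment
log.retention.hours=4
```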

Please help me understand.

On Tue, Jan 15, 2013 at 4:12 AM, Felix GV <[EMAIL PROTECTED]> wrote: