I think you may be misunderstanding the way Kafka works.

A Kafka broker never clears messages simply because a consumer has
read them.

The Kafka broker instead clears messages after their retention period
ends, though it will not delete the messages at the exact moment they
expire. Instead, a background process periodically deletes batches of
expired messages. The retention policies guarantee a minimum retention
time, not an exact retention time.
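For reference, time-based retention is controlled by broker settings along these lines (the values are illustrative, not defaults, and the exact property names vary between Kafka versions):

```properties
# Keep log segments for at least 7 days before they become eligible for deletion
log.retention.hours=168
# How often the background cleaner checks for expired log segments (0.8-era name)
log.cleanup.interval.mins=10
```

Because deletion happens at segment granularity on the cleaner's schedule, messages typically linger somewhat past their nominal expiry.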

It is the responsibility of each consumer to keep track of which messages
it has already consumed (by recording an offset for each consumed
partition). The high-level consumer stores these offsets in ZooKeeper. The
simple consumer has no built-in capability to store and manage offsets, so
that is the developer's responsibility. In the case of the Hadoop consumer
in the contrib package, these offsets are stored in offset files within
HDFS.
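As a rough illustration of what the simple consumer leaves to the application, here is a minimal sketch of per-partition offset tracking. This is plain Python, not the Kafka API; the file name, message layout, and helper names are all invented for the example. The idea is the same as the contrib consumer's offset files: persist how far you got per partition, and on the next run only read past that point.

```python
import json
import os

OFFSETS_FILE = "offsets.json"  # hypothetical location for the offset state


def load_offsets(path=OFFSETS_FILE):
    """Return {partition: next_offset_to_read}, empty if no state exists yet."""
    if os.path.exists(path):
        with open(path) as f:
            return {int(k): v for k, v in json.load(f).items()}
    return {}


def save_offsets(offsets, path=OFFSETS_FILE):
    """Persist the per-partition offsets so the next run can resume."""
    with open(path, "w") as f:
        json.dump(offsets, f)


def consume(messages, offsets):
    """Consume only messages past the recorded offset for each partition.

    messages: {partition: [msg, msg, ...]} -- stands in for a fetch response.
    offsets is updated in place to point just past the last consumed message.
    """
    new = []
    for partition, msgs in messages.items():
        start = offsets.get(partition, 0)
        new.extend(msgs[start:])
        offsets[partition] = len(msgs)
    return new
```

A second run with the saved offsets then skips everything already seen, which is exactly the incremental-consumption behavior discussed below.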

I wrote a blog post a while ago that explains how to use the offset files
generated by the contrib consumer to do incremental consumption (so that
you don't get duplicate messages by re-consuming everything in subsequent
runs).

I'm not sure how up to date this is, regarding the current Kafka versions,
but it may still give you some useful pointers...


On Mon, Jan 14, 2013 at 1:34 PM, navneet sharma <[EMAIL PROTECTED]> wrote: