It depends on how you process a batch of compressed messages. In 0.7, the
message offset only advances at the compressed message set boundary. So, if
you always finish processing all messages in a compressed set, there
shouldn't be any duplicates. If, say, you stop after consuming only 3
messages in a compressed set of 10, the whole set will be re-delivered
when you refetch, so you will see the first 3 messages again.
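To make the semantics concrete, here is a minimal sketch (a hypothetical simulation, not the actual Kafka 0.7 API) of a fetch that only advances at compressed-set boundaries, and of client-side dedup by tracking the last processed index within the set:

```python
# Hypothetical model of Kafka 0.7 offset semantics for compressed
# message sets: the fetch offset only advances at set boundaries, so a
# consumer that stops mid-set re-receives the entire set on refetch.

def fetch(log, offset):
    """Return the whole compressed set starting at `offset`, or nothing."""
    for start, messages in log:
        if start == offset:
            return messages
    return []

# A log containing one compressed set of 10 messages at offset 0.
log = [(0, [f"msg-{i}" for i in range(10)])]

# First pass: consume only 3 messages, then stop. The only offset we
# can safely record is still 0, since no set boundary was crossed.
consumed = fetch(log, 0)[:3]
saved_offset = 0

# Refetch from the saved offset: the same set comes back in full,
# so the first 3 messages are delivered again as duplicates.
refetched = fetch(log, saved_offset)
duplicates = [m for m in refetched if m in consumed]

# Client-side dedup: remember the index of the last message handled
# in the first pass and skip everything at or below it.
last_processed = 2
deduped = refetched[last_processed + 1:]
```

Recording the in-set position yourself, as in the last two lines, is one way a 0.7 consumer can avoid reprocessing the duplicated prefix.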


On Fri, Jan 10, 2014 at 11:17 PM, Xuyen On <[EMAIL PROTECTED]> wrote: