Kafka, mail # user - Re: Duplicate records in Kafka 0.7 - 2014-01-13, 04:03
Re: Duplicate records in Kafka 0.7
It depends on how you process a batch of compressed messages. In 0.7, the
message offset only advances at the compressed message set boundary. So, if
you always finish processing all messages in a compressed set, there
shouldn't be any duplicates. If, say, you stop after consuming only 3
messages in a compressed set of 10, then when you refetch, you will get the
first 3 messages again.
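
The behavior described above can be sketched as a small simulation (illustrative only, not the actual Kafka 0.7 API): the saved offset advances only when a whole compressed set has been processed, so stopping mid-set means a refetch redelivers messages already seen.

```python
# Simulation of Kafka 0.7 offset semantics: offsets are per compressed
# message set, not per message, so a checkpoint can only land on a
# set boundary. Names and structure here are hypothetical.

def fetch(log, offset):
    """Return all compressed sets at or after the given set offset."""
    return [s for s in log if s["offset"] >= offset]

def consume(log, start_offset, stop_after=None):
    """Process messages, advancing the saved offset only at set
    boundaries. Returns (messages processed, saved offset)."""
    processed = []
    saved = start_offset
    for s in fetch(log, start_offset):
        for m in s["messages"]:
            if stop_after is not None and len(processed) >= stop_after:
                # Stopped mid-set: saved offset has NOT advanced.
                return processed, saved
            processed.append(m)
        saved = s["offset"] + 1  # whole set done: offset advances
    return processed, saved

# A log of two compressed sets (10 messages, then 5).
log = [
    {"offset": 0, "messages": [f"m{i}" for i in range(10)]},
    {"offset": 1, "messages": [f"m{i}" for i in range(10, 15)]},
]

# Stop after only 3 of the 10 messages in the first set.
first_run, saved = consume(log, 0, stop_after=3)

# Refetching from the saved offset redelivers those same 3 messages.
second_run, _ = consume(log, saved)
```

Running the sketch, `first_run` is `["m0", "m1", "m2"]` but `saved` is still 0, so `second_run` starts with the same three messages: the duplicates the thread is asking about. Processing every message in a fetched set before checkpointing avoids this.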

Thanks,

Jun
On Fri, Jan 10, 2014 at 11:17 PM, Xuyen On <[EMAIL PROTECTED]> wrote: