As I understand it, if the broker says the message is committed, it is
guaranteed to have been committed per your ack config. If it says the message
was not committed, it is very hard to tell whether that was just a
false error. Since there is no concept of unique ids for messages, a replay of
the same message will result in duplication. I think this is reasonable
behaviour, considering Kafka prefers to append data to partitions for
performance reasons.
The best way to deal with duplicate messages right now is to build the
processing engine (the layer where your consumer sits) to handle the
at-least-once semantics of the broker.
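A common way to do that is to make the processing layer idempotent. A minimal sketch, assuming the producer embeds an application-level id in each message (Kafka itself does not assign one) and that `IdempotentProcessor` is a hypothetical name:

```python
class IdempotentProcessor:
    """Skips messages whose id has already been processed,
    so at-least-once redelivery does not apply an effect twice."""

    def __init__(self):
        self.seen = set()    # in production this would be a persistent store
        self.results = []

    def process(self, msg_id, payload):
        if msg_id in self.seen:
            return False     # duplicate replay after a false error: skip
        self.seen.add(msg_id)
        self.results.append(payload)
        return True

p = IdempotentProcessor()
p.process("order-1", "credit $10")
p.process("order-1", "credit $10")   # redelivered; applied only once
p.process("order-2", "credit $5")
```

The key point is that the dedup check and the state change happen in the same layer, so a replayed message is a no-op rather than a second application of the effect.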
On 25 Oct 2013 23:23, "Kane Kane" <[EMAIL PROTECTED]> wrote: