Kafka, mail # user - Re: testing issue with reliable sending - 2013-10-05, 06:52
Re: testing issue with reliable sending
Neha,

I'm not sure I understand.  I would have thought that if the leader
acknowledges receipt of a message and is then shut down cleanly (with
controlled shutdown enabled), it would reliably persist any in-memory
buffered messages (and replicate them) before shutting down.  Shouldn't
that be part of the contract?  It should be able to make sure this happens
before shutting down, no?
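
For context, here is a minimal sketch of the broker-side settings I have in
mind; the property names are from the 0.8 broker config and the values are
just illustrative:

  # server.properties (illustrative values)
  controlled.shutdown.enable=true
  controlled.shutdown.max.retries=3
  controlled.shutdown.retry.backoff.ms=5000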

I could understand a message being dropped if it were a hard shutdown.

I'm not sure, then, how to implement reliable delivery semantics while
allowing a rolling restart of the broker cluster (or even tolerating a
single node failure, where one node might be down for a while and need to
be replaced or have a disk repaired).  In this case, if we need to use
request.required.acks=-1, that will pretty much prevent any successful
message producing while any of the brokers for a partition is unavailable.
So I don't think that's an option (not to mention the performance
degradation).
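
To make the setup concrete, here is a minimal sketch of the kind of producer
properties I mean (0.8 producer config names; the values are illustrative
and the broker hosts are placeholders):

  # producer properties (illustrative values; hosts are placeholders)
  metadata.broker.list=broker1:9092,broker2:9092,broker3:9092
  producer.type=sync
  # 1 = leader-only ack; -1 = wait for all in-sync replicas
  request.required.acks=1
  message.send.max.retries=3
  retry.backoff.ms=100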

Is there not a way to make this work more reliably with leader-only
acknowledgment and clean/controlled shutdown?

My test does succeed, as expected, with request.required.acks=-1, at least
for the 100 or so iterations I've let it run so far.  It does occasionally
send duplicates (but that's OK with me).

Jason
On Fri, Oct 4, 2013 at 6:38 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
 