Hi Tim,

While your comments regarding durability are accurate for Kafka 0.7, the
picture is a bit greyer with 0.8.  In 0.8 you can configure Kafka for the
durability you need.  This is what I was referring to with the link to Jun’s
ApacheCon slides (
http://www.slideshare.net/junrao/kafka-replication-apachecon2013).

If you look at slide 21, titled ‘Data Flow in Replication’, you see the
three possible durability configurations, which trade off latency for
stronger persistence guarantees.
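
To make that concrete, my reading (which may well be off) is that the three
rows map to the producer-side request.required.acks setting in 0.8, roughly:

    # no ack from the broker at all (lowest latency, weakest guarantee)
    request.required.acks=0
    # ack once the leader has the message
    request.required.acks=1
    # ack only after the leader and the in-sync replicas have the message
    request.required.acks=-1

Happy to be corrected if that mapping isn’t right.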

The third row is the ‘no data loss’ configuration, where the producer only
receives an ack from the broker once the message(s) are committed by the
leader and its peers (mirrors, as you call them) and flushed to disk.  That
seems very similar to the scenario you describe in Rabbit, no?
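
For example, I believe (someone please check me) a sync producer configured
along these lines would give the third-row behaviour in 0.8; the broker list
and topic name here are just placeholders:

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class NoDataLossProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // placeholder broker list
            props.put("metadata.broker.list", "broker1:9092,broker2:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            // block on each send instead of batching in the background
            props.put("producer.type", "sync");
            // -1 = wait for the leader and the in-sync replicas to ack
            props.put("request.required.acks", "-1");

            Producer<String, String> producer =
                    new Producer<String, String>(new ProducerConfig(props));
            // send() should not return until the brokers have acked as configured above
            producer.send(new KeyedMessage<String, String>("my-topic", "hello"));
            producer.close();
        }
    }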

Jun or Neha, can you please confirm that my understanding of 0.8 durability
is correct and that there is no data loss in the scenario I describe?  I know
there is a separate configuration setting, log.flush.interval.messages, but I
thought that in sync mode the producer doesn’t receive an ack until the
message(s) are committed and flushed to disk.  Please correct me if my
understanding is incorrect.
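
For reference, the broker-side setting I mean is the one below; my
(unconfirmed) understanding is that a very low value forces a flush per
message, which is why I’m unsure whether it is even needed when the producer
is in sync mode:

    # broker server.properties (illustrative value, not a recommendation)
    log.flush.interval.messages=1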

Thanks!
On Tue, Jun 11, 2013 at 8:20 AM, Tim Watson <[EMAIL PROTECTED]> wrote:
 