Kafka, mail # user - Re: Arguments for Kafka over RabbitMQ ? - 2013-06-11, 16:50
Earlier messages in this thread:
Dragos Manolescu 2013-06-06, 18:41
Jonathan Hodges 2013-06-06, 19:29
Marc Labbe 2013-06-07, 01:09
Alexis Richardson 2013-06-07, 12:54
Marc Labbe 2013-06-07, 13:31
Alexis Richardson 2013-06-07, 13:31
Jun Rao 2013-06-07, 15:24
Alexis Richardson 2013-06-07, 15:58
Jonathan Hodges 2013-06-07, 18:04
Jonathan Hodges 2013-06-07, 18:42
Dragos Manolescu 2013-06-07, 20:52
Sybrandy, Casey 2013-06-07, 20:56
Alexis Richardson 2013-06-07, 22:41
Alexis Richardson 2013-06-07, 22:49
Jonathan Hodges 2013-06-08, 01:09
Alexis Richardson 2013-06-08, 08:08
Jonathan Hodges 2013-06-08, 11:53
Alexis Richardson 2013-06-08, 20:09
Alexis Richardson 2013-06-08, 20:20
Alexis Richardson 2013-06-08, 21:27
Jonathan Hodges 2013-06-08, 23:03
Mark 2013-06-09, 15:59
Jonathan Hodges 2013-06-10, 12:13
Tim Watson 2013-06-10, 12:40
Jonathan Hodges 2013-06-10, 13:19
Tim Watson 2013-06-11, 14:20
Jonathan Hodges 2013-06-11, 16:50
Re: Arguments for Kafka over RabbitMQ ?
Hi Tim,

While your comments regarding durability are accurate for the 0.7 version of
Kafka, the picture is a bit greyer with 0.8.  In 0.8 you have the ability to
configure Kafka for the durability you need.  This is what I was
referring to with the link to Jun’s ApacheCon slides
(http://www.slideshare.net/junrao/kafka-replication-apachecon2013).

If you look at slide 21, titled ‘Data Flow in Replication’, you see the
three possible durability configurations, which trade off latency for
stronger persistence guarantees.

The third row is the ‘no data loss’ configuration, where the producer
only receives an ack from the broker once the message(s) are committed by
the leader and its peers (the mirrors, as you call them) and flushed to
disk.  This seems very similar to the scenario you describe in Rabbit, no?
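
To make that concrete, here is a minimal sketch of what I believe the
producer side of that third configuration looks like with the 0.8 Java
producer API.  The broker list, topic name, and key/value are placeholders,
and the mapping of request.required.acks values to the three rows on slide
21 is just my reading of the slides, so treat this as an illustration rather
than gospel:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class DurableProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list; point this at your own 0.8 brokers.
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Sync mode: send() does not return until the broker request completes.
        props.put("producer.type", "sync");
        // request.required.acks picks the latency/durability trade-off:
        //   0  -> no ack (fire and forget)
        //   1  -> ack once the leader has the message
        //  -1  -> ack only after all in-sync replicas have the message
        props.put("request.required.acks", "-1");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        // Topic, key, and value are illustrative only.
        producer.send(new KeyedMessage<String, String>("events", "key", "hello"));
        producer.close();
    }
}

If I have the flush semantics right, acks of -1 together with the broker-side
flush settings is what gives the ‘no data loss’ behaviour in that third row,
but that is exactly the part I would like confirmed.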

Jun or Neha, can you please confirm that my understanding of 0.8 durability
is correct and that there is no data loss in the scenario I describe?  I know
there is a separate configuration setting, log.flush.interval.messages, but I
thought that in sync mode the producer doesn’t receive an ack until the
message(s) are committed and flushed to disk.  Please correct me if my
understanding is incorrect.

Thanks!
On Tue, Jun 11, 2013 at 8:20 AM, Tim Watson <[EMAIL PROTECTED]> wrote:
 
Later replies in this thread:
Alexis Richardson 2013-06-13, 14:45
Jonathan Hodges 2013-06-13, 15:23