Kafka >> mail # user >> Arguments for Kafka over RabbitMQ ?


Re: Arguments for Kafka over RabbitMQ ?
We also went through the same decision making, and our arguments for Kafka
were along the same lines as those Jonathan mentioned. The fact that we have
heterogeneous consumers was really the deciding factor. Our requirements were
to avoid losing messages at all costs while having multiple consumers
reading the same data at different paces. On one side, we have a few
consumers being fed data coming in from most, if not all, topics. On
the other side, we have a good number of consumers reading from only a
single topic. The big consumers can take their time to read, while the smaller
ones mostly handle near-real-time events, so they need to keep up with the pace
of incoming messages.
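To illustrate what makes this work in Kafka, here is a toy model in plain Python (the class names are illustrative, not from any Kafka client library): each consumer keeps only its own offset into a shared, append-only log, so a slow consumer never blocks a fast one and never forces the broker to duplicate data.

```python
# Toy model of Kafka-style consumption: one shared append-only log,
# with each consumer tracking its own read position (offset).

class Log:
    """A topic partition: messages are stored once, append-only."""
    def __init__(self):
        self.messages = []

    def append(self, msg):
        self.messages.append(msg)

class Consumer:
    """Each consumer owns only an integer offset into the shared log."""
    def __init__(self, log):
        self.log = log
        self.offset = 0

    def poll(self, max_records=1):
        # Read up to max_records starting at this consumer's own offset.
        batch = self.log.messages[self.offset:self.offset + max_records]
        self.offset += len(batch)
        return batch

log = Log()
for i in range(5):
    log.append(f"event-{i}")

fast = Consumer(log)  # near-real-time consumer keeps up
slow = Consumer(log)  # large consumer reads at its own pace

print(fast.poll(5))   # drains all five events at once
print(slow.poll(2))   # reads only two, independently of the fast consumer
```

Both consumers read the same stored bytes; nothing about the slow consumer's pace affects the fast one.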

RabbitMQ stores data on disk only if you tell it to, while Kafka persists by
design. From the beginning, we decided we would try to use both systems the
same way: pub/sub with a routing key on an exchange in RabbitMQ, or a topic
in Kafka, persisted to disk and replicated.
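As a rough sketch of what "persists by design" means in practice: Kafka keeps every message on disk for a configurable retention period, whether or not anyone has consumed it yet. These broker settings come from a standard Kafka `server.properties`; the values shown are illustrative, not recommendations.

```properties
# Kafka broker retention settings (server.properties) - data stays on
# disk until retention expires, independent of consumer progress.
log.retention.hours=168        # keep data for 7 days
log.retention.bytes=-1         # no size-based cap (illustrative choice)
log.segment.bytes=1073741824   # roll log segments at 1 GiB
```

In RabbitMQ, by contrast, persistence is opt-in per queue (durable) and per message (persistent delivery mode), and messages are dropped once acknowledged.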

One of our scenarios was to see how the system would cope with the largest
consumer being down for a while, forcing the brokers to keep the data
for a long period. In the case of RabbitMQ, this consumer has its own queue,
and its data grows on disk, which is not really a problem if you plan
capacity accordingly. But since RabbitMQ has to keep track of every
outstanding message, the Mnesia database it uses as the message index also
grows pretty big. At that point, the amount of RAM necessary to keep
the level of performance we need becomes very large. In our tests, we found
that this had an adverse effect on ALL the brokers, thus affecting all
consumers. You can always say that you'll monitor the consumers to make sure
it won't happen. That's fine if you can. I wasn't ready to make that bet.

Another point is that, since we wanted to use pub/sub with an
exchange in RabbitMQ, we would have ended up with a lot of data duplication:
if a message is routed to multiple consumers, it gets copied into
the queue of each of those consumers. Kafka wins on that side too, since
every consumer reads from the same source.
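The storage cost difference can be sketched with a toy model (not any client API): a RabbitMQ-style fanout copies each message into every bound queue, while a Kafka-style log stores it once however many consumers read it.

```python
# Toy comparison of broker storage cost: fanout-per-queue vs. shared log.

def fanout_storage(messages, n_consumers):
    """RabbitMQ-style pub/sub: each consumer gets its own queue,
    so every message is duplicated once per consumer."""
    queues = [list(messages) for _ in range(n_consumers)]
    return sum(len(q) for q in queues)

def log_storage(messages, n_consumers):
    """Kafka-style: one shared log; each consumer stores only an
    integer offset, so messages are kept exactly once."""
    return len(messages)

msgs = [f"event-{i}" for i in range(1000)]
print(fanout_storage(msgs, 10))  # 10000 stored message copies
print(log_storage(msgs, 10))     # 1000 - one copy, regardless of consumers
```

With ten consumers on one topic, the fanout model stores ten times the data; the log model's overhead is one offset per consumer.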

The downsides of Kafka were the language issues (we mostly use Python
and C#): 0.8 is very new, and few drivers are available at this point. Also,
we will have to work at getting as close as possible to a once-and-only-once
guarantee. Those are two areas where RabbitMQ would have given us less
work out of the box than Kafka. RabbitMQ also provides a set of
management tools that makes it rather attractive too.

In the end, looking at throughput is a pretty nifty thing, but being sure
that I'll be able to manage the beast as it grows will let me sleep
far more easily.
On Thu, Jun 6, 2013 at 3:28 PM, Jonathan Hodges <[EMAIL PROTECTED]> wrote:
 