I think the problem you are describing is that if a single broker is slow
all producers will come to a halt (because they all talk to this broker).

We don't have a great solution for this at the moment.

In our own usage, for the first tier of data collection, each producer
connects to a single broker and sends all its data there; if that broker
dies, the producer reconnects. This moderates the problem somewhat: if only
1 of n brokers is slow, only 1/nth of the producers are impacted. It does
not allow any semantic partitioning by key, though. You should be able to
accomplish the same thing with a custom partitioner that chooses a random
partition and sticks with it instead of round-robining.
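A minimal sketch of that "pick a random partition and stick with it" idea.
The class and method names here are illustrative (loosely modeled on a
producer partitioner interface), not the actual Kafka API:

```java
import java.util.concurrent.ThreadLocalRandom;

// Chooses one partition at random on first use and returns it for every
// subsequent send, so a given producer only ever talks to one broker.
public class StickyPartitioner {
    private int chosen = -1; // partition selected lazily on first call

    public int partition(Object key, int numPartitions) {
        if (chosen < 0) {
            // Pick once at random; all later sends go to the same
            // partition (and hence the same broker) until this producer
            // instance is restarted.
            chosen = ThreadLocalRandom.current().nextInt(numPartitions);
        }
        return chosen;
    }
}
```

Because the choice is random per producer instance, a fleet of producers
still spreads load across brokers in aggregate, while each individual
producer is only exposed to one broker's slowness.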

A more sophisticated solution might detect slow brokers and shoot them in
the head. If the detection works correctly and the underlying cause is a
hardware problem or another process on the machine, then just killing the
node would fix the problem. However, if the problem is just load, this
will probably make things worse. It is also a bit tricky to define what
"slow" means and to have the user configure that accurately. It would be
easy to imagine a half-assed implementation causing more problems than it
fixed.
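To make the detection difficulty concrete, here is one hypothetical way
such a detector could look: an exponentially weighted moving average of
send latency per broker, flagged once it crosses a user-configured
threshold. None of this is a real Kafka API; the smoothing factor and
threshold are exactly the knobs the user would have to tune correctly:

```java
// Hypothetical per-broker slowness detector: EWMA of observed send
// latencies compared against a fixed threshold.
public class SlowBrokerDetector {
    private final double alpha;       // EWMA smoothing factor, in (0, 1]
    private final double thresholdMs; // latency above which we call it "slow"
    private double avgMs = 0.0;
    private boolean initialized = false;

    public SlowBrokerDetector(double alpha, double thresholdMs) {
        this.alpha = alpha;
        this.thresholdMs = thresholdMs;
    }

    // Record one observed send latency for this broker.
    public void record(double latencyMs) {
        if (!initialized) {
            avgMs = latencyMs;
            initialized = true;
        } else {
            avgMs = alpha * latencyMs + (1 - alpha) * avgMs;
        }
    }

    // The hard part the text describes: an ordinary load spike pushes the
    // average over the threshold just as easily as a genuinely sick node.
    public boolean isSlow() {
        return initialized && avgMs > thresholdMs;
    }
}
```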

On Fri, Jan 4, 2013 at 12:24 AM, Raghu Angadi <[EMAIL PROTECTED]> wrote: