I think what you are asking for is backpressure from the broker to the
producer: when the broker gets close to full, it starts slowing down the
producer so the consumer can catch up. This is fairly typical behavior
for a message broker.

Our approach is different, though. We have found that most use cases
cannot tolerate backpressure, because messages are produced by a live
service that cannot stop. So we have instead focused on scaling the data
that is retained and the consumption. In Kafka it is very reasonable to
retain 5TB per server, and you can add servers to provide a sufficiently
large buffer. If you feel there are use cases where backpressure is
preferable, it would be good for us to hear about them, because so far we
haven't seen a need for it.
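To make the retention-based buffering concrete, here is a sketch of the relevant broker settings in server.properties (the values are illustrative, not recommendations): retention is bounded by time and/or size per partition, and data is simply dropped from the tail once a limit is hit rather than ever blocking producers.

```properties
# Keep data for up to 7 days...
log.retention.hours=168

# ...or until a partition reaches ~1TB on disk, whichever comes first.
# With several such partitions per server, multi-TB buffers are practical.
log.retention.bytes=1099511627776
```

Because the limit is enforced by deleting old segments rather than by throttling writes, a slow consumer eats into the retention window instead of stalling the producing service.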

On Wed, Dec 11, 2013 at 11:42 PM, xingcan <[EMAIL PROTECTED]> wrote: