Thank you for your comments. I'll reply point by point for clarity.

1. We were aware of the migration tool, but since we haven't used Kafka in
production yet, we started with the 0.8 version directly.

2. I hadn't seen those particular slides; very interesting. I'm not sure
we're testing the same thing, though. In our case we vary the number of
physical machines, but each one has 10 threads accessing a pool of Kafka
producer objects, so in theory a single machine is enough to saturate the
brokers (which our test mostly confirms). Also, assuming the slides are
based on the built-in producer performance tool: I know we started getting
very different numbers once we switched to "real" messages (actual
production logs). Compression may also be a factor if it wasn't configured
the same way in those tests.
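For context, the per-machine setup follows roughly the pattern below. This is a minimal, self-contained sketch with a hypothetical counting stand-in for the producer (the real code borrows kafka.javaapi.producer.Producer instances; the class names and counts here are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class ProducerPoolSketch {
    // Generic blocking pool; in our setup P would be a Kafka producer,
    // here it is a Runnable stand-in that just counts "sends".
    static class Pool<P> {
        private final BlockingQueue<P> q;
        Pool(List<P> items) {
            q = new ArrayBlockingQueue<>(items.size(), false, items);
        }
        P take() throws InterruptedException { return q.take(); }
        void release(P p) throws InterruptedException { q.put(p); }
    }

    // Runs nThreads sender threads, each "sending" perThread messages
    // through a pool of poolSize stand-in producers; returns total sent.
    static int run(int nThreads, int perThread, int poolSize)
            throws InterruptedException {
        AtomicInteger sent = new AtomicInteger();
        List<Runnable> producers = new ArrayList<>();
        for (int i = 0; i < poolSize; i++) producers.add(sent::incrementAndGet);
        Pool<Runnable> pool = new Pool<>(producers);

        List<Thread> threads = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) {
            Thread th = new Thread(() -> {
                for (int m = 0; m < perThread; m++) {
                    try {
                        Runnable p = pool.take(); // borrow a producer
                        p.run();                  // "send" one message
                        pool.release(p);          // return it to the pool
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
            threads.add(th);
            th.start();
        }
        for (Thread th : threads) th.join();
        return sent.get();
    }

    public static void main(String[] args) throws Exception {
        // 10 threads per machine, as in our test.
        System.out.println("messages sent: " + run(10, 100, 4));
    }
}
```

The pool exists because the 0.8 producer is shared rather than created per message; contention on pool.take() is negligible compared to the send itself.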

3. The latency section contains two tests, one for average latency and one
for maximum latency. Each test has two graphs presenting exactly the same
data at different zoom levels. The first graph shows the small variations
in latency while target throughput <= actual throughput; the second shows
the overall shape of the curve once latency starts growing, i.e. when
target throughput > actual throughput. I hope that makes sense.
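The shape of the second graph is what a simple queueing argument predicts: once messages arrive faster than they can be processed, the backlog (and hence latency) grows without bound. A toy single-server model (an assumption for illustration, not our actual test harness) shows both regimes:

```java
public class LatencyGrowthSketch {
    // Toy single-server queue (illustrative assumption, not the real
    // harness): messages arrive every arrivalGapMs and each takes
    // serviceMs to process; returns the max latency over n messages.
    static double maxLatency(double arrivalGapMs, double serviceMs, int n) {
        double finish = 0, max = 0;
        for (int i = 0; i < n; i++) {
            double arrival = i * arrivalGapMs;
            // A message starts after it arrives and after the previous
            // one finishes, whichever is later.
            finish = Math.max(arrival, finish) + serviceMs;
            max = Math.max(max, finish - arrival);
        }
        return max;
    }

    public static void main(String[] args) {
        // Target throughput <= actual: latency stays flat at serviceMs.
        System.out.println(maxLatency(2.0, 1.0, 1000)); // prints 1.0
        // Target throughput > actual: latency grows with every message.
        System.out.println(maxLatency(1.0, 2.0, 1000)); // prints 1001.0
    }
}
```

Below saturation the first (zoomed) graph only shows noise around the service time; above it the max grows roughly linearly with test length, which is why the second graph needs the wider scale.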

4. That sounds great, looking forward to it.


On Mon, Apr 8, 2013 at 9:48 PM, Jun Rao <[EMAIL PROTECTED]> wrote: