On Sun, Apr 21, 2013 at 8:53 AM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:

No, I mean spreading the data across the two partitions, so 50% goes
in one, and 50% goes in the other. Have your Producer always write to
partition "-1", which will tell Kafka to select a partition at random
for each message.

Then one of the Consumers will consume partition 0, the other partition 1.
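The split described above can be sketched in plain Python — this is a toy model of random partitioning, not the Kafka client API; the partition count and message count are illustrative assumptions:

```python
import random

# Toy model of the behavior described above: the producer sends to
# partition "-1", meaning "let Kafka pick a partition at random",
# so messages end up spread roughly 50/50 across two partitions.
# This simulates that choice with a uniform random pick; it does
# not use the actual Kafka producer API.
NUM_PARTITIONS = 2
partitions = [[] for _ in range(NUM_PARTITIONS)]

def produce(message):
    # Partition -1 in the old producer API means "broker's choice";
    # modeled here as a uniform random assignment.
    p = random.randrange(NUM_PARTITIONS)
    partitions[p].append(message)

for i in range(10_000):
    produce(f"msg-{i}")

# Each consumer then owns exactly one partition.
consumer_0_messages = partitions[0]
consumer_1_messages = partitions[1]
print(len(consumer_0_messages), len(consumer_1_messages))
```

With enough messages the two partitions receive close to equal shares, which is why one consumer per partition gives you the 50/50 spread of work.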