I am at the POC stage, so I can configure the producer to write to different
partitions. But how will that help me process the same data with two consumers?

The effect I am trying to get is:
  I receive the data and store it in Kafka.

I have 2 consumers:
   1) a real-time consumer that reads the data, for example, every 10 seconds.
   2) a consumer that moves the data to HDFS, for example, every 1 hour.

But if I use 2 partitions, won't each consumer process only part of the data
(50%)? Is that correct?
I need each of the 2 consumers to receive 100% of the data.
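For context on the question above: in Kafka, consumers that share a group id split a topic's partitions between them, while consumers in *different* consumer groups each independently receive the full stream. A minimal pure-Python simulation of that delivery rule (no broker involved; the topic layout, group ids, and consumer names are all illustrative, not from the original message):

```python
# Sketch of Kafka consumer-group delivery semantics (illustrative only).
# Rule: partitions are divided among consumers WITHIN one group, but
# every group independently receives the full stream.

def deliver(messages_by_partition, groups):
    """Assign each partition's messages to one consumer per group.

    messages_by_partition: {partition_id: [msg, ...]}
    groups: {group_id: [consumer_name, ...]}
    Returns {consumer_name: [msg, ...]}.
    """
    received = {c: [] for members in groups.values() for c in members}
    for members in groups.values():
        # Round-robin partition assignment within the group, mimicking
        # Kafka's partition-to-consumer balancing.
        for i, (partition, msgs) in enumerate(sorted(messages_by_partition.items())):
            owner = members[i % len(members)]
            received[owner].extend(msgs)
    return received

topic = {0: ["m1", "m2"], 1: ["m3", "m4"]}  # a topic with 2 partitions

# Two consumers in the SAME group: the partitions (and data) are split.
same_group = deliver(topic, {"g1": ["realtime", "hdfs"]})

# Two consumers in DIFFERENT groups: each one sees 100% of the data.
diff_groups = deliver(topic, {"realtime-group": ["realtime"],
                              "hdfs-group": ["hdfs"]})
```

So with a real Kafka deployment, giving the real-time consumer and the HDFS consumer distinct group ids would let both read all of the data, regardless of the partition count.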

Please advise.
On Sun, Apr 21, 2013 at 12:00 PM, Philip O'Toole <[EMAIL PROTECTED]> wrote: