The idea is that each mapper connects to only a single Kafka broker. Each
line in the input file specifies the broker URI, topic, partition, and
offset.
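
For illustration only, an input line and the way a mapper might parse it
could look like the sketch below. The delimiter, field order, and the
example broker/topic names are assumptions, not the exact format used by
the contrib code:

    public class KafkaInputLine {
        public static void main(String[] args) {
            // Hypothetical input line: broker uri, topic, partition, offset
            String line = "tcp://broker1:9092 my-topic 0 12345";
            String[] f = line.split("\\s+");
            String brokerUri = f[0];
            String topic = f[1];
            int partition = Integer.parseInt(f[2]);
            long offset = Long.parseLong(f[3]);
            // A mapper would then open a consumer against brokerUri and
            // fetch from (topic, partition) starting at offset.
            System.out.printf("broker=%s topic=%s partition=%d offset=%d%n",
                    brokerUri, topic, partition, offset);
        }
    }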

The Hadoop consumer in contrib is probably a bit outdated. The one that
LinkedIn uses now can be found at https://github.com/linkedin/camus

Thanks,

Jun
On Tue, Jun 4, 2013 at 7:29 AM, Samir Madhavan <
[EMAIL PROTECTED]> wrote:
 