Kafka, mail # user - Re: error recovery in multiple thread reading from Kafka with HighLevel api - 2014-08-08, 18:12
Maybe I could batch the messages before committing, e.g. commit every 10
seconds. That is what auto commit does anyway, and I could live with the
duplicate data.
What do you think?
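
Something along these lines is what I have in mind (just a rough sketch against
the 0.8 high-level consumer; the topic, group and ZooKeeper address are
placeholders):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class BatchedCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");   // placeholder
        props.put("group.id", "my-group");            // placeholder
        props.put("auto.commit.enable", "false");     // commit manually instead

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
        ConsumerIterator<byte[], byte[]> it =
            streams.get("my-topic").get(0).iterator();

        long lastCommit = System.currentTimeMillis();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msg = it.next();
            process(msg.message());  // downstream handling

            // Commit roughly every 10 seconds; anything processed since the
            // last commit can be re-delivered after a crash (at-least-once).
            if (System.currentTimeMillis() - lastCommit >= 10000) {
                connector.commitOffsets();
                lastCommit = System.currentTimeMillis();
            }
        }
    }

    private static void process(byte[] payload) {
        // placeholder for the real processing logic
    }
}

Anything processed after the last commit would get re-delivered on restart,
which is the duplication I said I can live with.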

It seems I would then also need a monitoring daemon that checks the consumer
lag and restarts the consumer after a machine crash.
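
Roughly like this (fetchLag() and restartConsumer() are only placeholders; the
lag could come from kafka.tools.ConsumerOffsetChecker or from comparing the
consumer offsets stored in ZooKeeper with the broker log-end offsets):

public class ConsumerLagWatchdog {
    private static final long MAX_LAG = 100000;          // assumed threshold
    private static final long CHECK_INTERVAL_MS = 60000; // check once a minute

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            long lag = fetchLag();
            if (lag > MAX_LAG) {
                restartConsumer();
            }
            Thread.sleep(CHECK_INTERVAL_MS);
        }
    }

    // Placeholder: e.g. run kafka.tools.ConsumerOffsetChecker
    // (--zookeeper <zk> --group <group>) and sum the Lag column, or compare
    // the offsets stored in ZooKeeper with the brokers' log-end offsets.
    private static long fetchLag() {
        return 0L;
    }

    // Placeholder: bounce the consumer process (supervisor, init script, ...).
    private static void restartConsumer() {
    }
}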
On Fri, Aug 8, 2014 at 10:40 AM, Chen Wang <[EMAIL PROTECTED]>
wrote:
 