Kafka, mail # user - Re: is it possible to commit offsets on a per stream basis? - 2013-09-02, 20:36
Will this work if we are using a TopicFilter that can map to multiple
topics?  Can I create multiple connectors and have each use the same regex
for the TopicFilter?  Will each connector share the set of available
topics?  Is this safe to do?

Or is it necessary to create mutually non-intersecting regexes for each
connector?

It seems I have a similar issue.  I have been using auto-commit mode, but
it doesn't guarantee that all committed offsets correspond to messages that
have been successfully processed.  (It seems a change to the connector
itself might expose a way to use auto offset commit and have it never
commit a message until it is processed, but that would be a change to
ZookeeperConsumerConnector.)  Essentially, it would be great if, after
processing each message, we could mark the message as 'processed', and then
use that status to determine the max offset to commit each time the auto
offset commit background thread wakes up.
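The bookkeeping described above could be sketched roughly as follows.  This
is not part of Kafka's API; `ProcessedOffsetTracker` is a hypothetical
helper that records per-partition offsets marked 'processed' (possibly out
of order) and computes the highest offset that is safe to commit, i.e. the
end of the contiguous processed prefix.  A periodic commit thread would
call `safeCommitOffset()` and pass the result to the connector:

```java
import java.util.TreeSet;

// Hypothetical sketch, not Kafka code: track offsets marked 'processed'
// (one tracker per partition) and expose the largest offset such that
// every offset up to and including it has been processed.
class ProcessedOffsetTracker {
    private final TreeSet<Long> processed = new TreeSet<>();
    private long committed = -1L;  // last offset deemed safe to commit

    // Called by the consumer thread after a message is fully handled.
    synchronized void markProcessed(long offset) {
        processed.add(offset);
    }

    // Called by the periodic commit thread: advance through the
    // contiguous run of processed offsets and return the new safe
    // commit point (-1 if nothing is committable yet).
    synchronized long safeCommitOffset() {
        while (processed.contains(committed + 1)) {
            processed.remove(committed + 1);
            committed++;
        }
        return committed;
    }
}
```

With this, a gap (e.g. offset 2 still in flight while 3 is done) holds the
commit point back, so a crash never skips an unprocessed message; at-least-
once delivery is preserved at the cost of possible reprocessing.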

Jason
On Thu, Aug 29, 2013 at 11:58 AM, Yu, Libo <[EMAIL PROTECTED]> wrote:
 