Kafka user mailing list: are commitOffsets batched to zookeeper?


RE: are commitOffsets batched to zookeeper?
Can a request be made to zookeeper for this feature?

Thanks,
rob

> -----Original Message-----
> From: Neha Narkhede [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 16, 2013 9:53 PM
> To: [EMAIL PROTECTED]
> Subject: Re: are commitOffsets batched to zookeeper?
>
> Currently Kafka depends on zookeeper 3.3.4, which doesn't have a batch write
> api. So if you commit after every message at a high rate, it will be slow and
> inefficient. Besides, it will cause zookeeper performance to degrade.
>
> Thanks,
> Neha
> On May 16, 2013 6:54 PM, "Rob Withers" <[EMAIL PROTECTED]> wrote:
>
> > We are calling commitOffsets after each message is consumed.  It
> > looks to be ~60% slower, with 29 partitions.  If a single KafkaStream
> > thread is from a connector, and there are 29 partitions, then
> > commitOffsets sends 29 offset updates, correct?  Are these offset
> > updates batched in one send to zookeeper?
> >
> > thanks,
> > rob
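
Following up on the question above: a minimal sketch of the pattern being discussed, assuming the 0.8-style high-level consumer API and property names, with hypothetical topic and group names. Instead of calling commitOffsets() after every message, commits are throttled to every N messages, so the per-partition ZooKeeper writes happen once per batch rather than once per message.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ThrottledCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumed ZooKeeper address
        props.put("group.id", "example-group");           // hypothetical consumer group
        props.put("auto.commit.enable", "false");         // commit manually instead of auto-commit

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("example-topic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("example-topic").get(0).iterator();

        int sinceLastCommit = 0;
        while (it.hasNext()) {
            byte[] payload = it.next().message();
            // ... process payload ...

            // Commit every 100 messages instead of every message. Each commitOffsets()
            // call writes one offset znode per owned partition, so per-message commits
            // multiply the ZooKeeper write load by the partition count (29 in the
            // scenario above).
            if (++sinceLastCommit >= 100) {
                connector.commitOffsets();
                sinceLastCommit = 0;
            }
        }
    }
}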
 
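On the follow-up question about a batch-write feature: ZooKeeper 3.4 added a multi-update (transaction) API, which is the kind of call that would let a consumer push all of its partition offsets to ZooKeeper in a single round trip. A rough sketch of what that could look like, assuming the usual /consumers/<group>/offsets/<topic>/<partition> offset layout, hypothetical group and topic names, and znodes that already exist:

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.zookeeper.Op;
import org.apache.zookeeper.ZooKeeper;

public class BatchedOffsetWrite {
    // Write every partition's offset in one request using ZooKeeper 3.4+ multi().
    static void commitAll(ZooKeeper zk, String group, String topic,
                          Map<Integer, Long> offsets) throws Exception {
        List<Op> ops = new ArrayList<>();
        for (Map.Entry<Integer, Long> e : offsets.entrySet()) {
            String path = "/consumers/" + group + "/offsets/" + topic + "/" + e.getKey();
            byte[] data = e.getValue().toString().getBytes(StandardCharsets.UTF_8);
            ops.add(Op.setData(path, data, -1)); // -1 = ignore znode version
        }
        zk.multi(ops); // one round trip instead of one setData per partition
    }
}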