Kafka >> mail # user >> High Level Consumer commit offset


Re: High Level Consumer commit offset
That I can manage. Thanks so much.
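
For readers arriving via search: the conclusion of the thread below can be sketched roughly as follows — `commitOffsets()` on a high-level `ConsumerConnector` commits only for the partitions that connector currently owns. This is a minimal sketch against the Kafka 0.8 API; the ZooKeeper address and topic name are illustrative, not from the thread.

```java
import java.util.Collections;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class PerTopicConnector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181/kafka"); // illustrative address
        props.put("group.id", "event1");
        props.put("auto.commit.enable", "false");   // commit manually below
        props.put("auto.offset.reset", "smallest"); // start from earliest if no committed offset

        // One connector per topic, as discussed in the thread.
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        connector.createMessageStreams(Collections.singletonMap("meetme", 1));

        // Commits offsets only for the partitions this connector owns; note
        // that the owned set can change over time as the group rebalances.
        connector.commitOffsets();
        connector.shutdown();
    }
}
```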
On Tue, Nov 5, 2013 at 6:46 AM, Neha Narkhede <[EMAIL PROTECTED]> wrote:

> Yes, it will commit offsets only for the partitions that the consumer owns.
> But over time, the set of partitions that a consumer owns can change.
>
> Thanks,
> Neha
> On Nov 5, 2013 12:17 AM, "Vadim Keylis" <[EMAIL PROTECTED]> wrote:
>
> > I am creating a Consumer.createJavaConsumerConnector (Kafka 0.8) for
> > each topic/partition. Would it be safe to assume that the commit offset
> > will apply only to the stream/partition managed by that connector?
> >
> > Thanks,
> > Vadim
> >
> >
> > On Mon, Nov 4, 2013 at 8:43 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
> >
> > > You need to set "auto.offset.reset"="smallest". By default, the consumer
> > > will start consuming the latest messages.
> > >
> > > Thanks,
> > > Neha
> > >
> > >
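
The property Neha mentions lives in the consumer configuration. A minimal fragment (the comment lines are editorial, not from the thread):

```properties
# Kafka 0.8 consumer: when no committed offset exists, "smallest" starts from
# the earliest available message; the default "largest" starts from new
# messages only.
auto.offset.reset=smallest
```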
> > > On Mon, Nov 4, 2013 at 4:38 PM, Guozhang Wang <[EMAIL PROTECTED]> wrote:
> > >
> > > > Any exceptions you saw from the broker end, in the server log?
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Mon, Nov 4, 2013 at 4:27 PM, Vadim Keylis <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Thanks for confirming, but that is not the behavior I observe. My
> > > > > consumer does not commit offsets to Kafka. It does get the messages
> > > > > sent to Kafka, but once restarted I should have gotten the messages
> > > > > previously received by the consumer; on the contrary, I got none.
> > > > > The logs confirm the initial offset being -1. What am I doing wrong?
> > > > >
> > > > > 04 Nov 2013 16:03:11,570 DEBUG meetme_Consumer_pkey_1062739249349868 kafka.consumer.PartitionTopicInfo - initial consumer offset of meetme:0: fetched offset = -1: consumed offset = -1 is -1
> > > > > 04 Nov 2013 16:03:11,570 DEBUG meetme_Consumer_pkey_1062739249349868 kafka.consumer.PartitionTopicInfo - initial fetch offset of meetme:0: fetched offset = -1: consumed offset = -1 is -1
> > > > >
> > > > > 04 Nov 2013 16:03:11,879 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.network.BlockingChannel - Created socket with SO_TIMEOUT = 30000 (requested 30000), SO_RCVBUF = 65536 (requested 65536), SO_SNDBUF = 11460 (requested -1).
> > > > > 04 Nov 2013 16:03:11,895 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.PartitionTopicInfo - reset fetch offset of ( meetme:0: fetched offset = 99000: consumed offset = -1 ) to 99000
> > > > > 04 Nov 2013 16:03:11,896 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.PartitionTopicInfo - reset consume offset of meetme:0: fetched offset = 99000: consumed offset = 99000 to 99000
> > > > > 04 Nov 2013 16:03:11,897 INFO event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1383609790333] Adding fetcher for partition [meetme,0], initOffset -1 to broker 9 with fetcherId 0
> > > > >
> > > > > Here is my property file:
> > > > > zookeeper.connect=dzoo01.tag-dev.com:2181/kafka
> > > > > zookeeper.connectiontimeout.ms=1000000
> > > > > group.id=event1
> > > > > auto.commit.enable=false
> > > > >
> > > > >
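
The -1 initial offsets above match what Guozhang explains below: with auto.commit.enable=false, no offset is ever written to ZooKeeper unless the application commits explicitly. A minimal sketch of such a loop against the Kafka 0.8 high-level consumer API (the ZooKeeper path, topic name, and processing step are illustrative assumptions):

```java
import java.util.Collections;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ExplicitCommitLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "dzoo01.tag-dev.com:2181/kafka");
        props.put("group.id", "event1");
        props.put("auto.commit.enable", "false");

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        KafkaStream<byte[], byte[]> stream = connector
            .createMessageStreams(Collections.singletonMap("meetme", 1))
            .get("meetme").get(0);

        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msg = it.next();
            // ... process msg.message() here (application logic) ...
            // Without this call, the stored offset stays at -1 across restarts:
            connector.commitOffsets();
        }
    }
}
```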
> > > > > On Mon, Nov 4, 2013 at 3:32 PM, Guozhang Wang <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > That is correct. If auto.commit.enable is set to false, the offsets
> > > > > > will not be committed at all unless the consumer calls the commit
> > > > > > function explicitly.
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > >
> > > > > > On Mon, Nov 4, 2013 at 2:42 PM, Vadim Keylis <
 