Kafka >> mail # user >> High Level Consumer commit offset


Vadim Keylis 2013-11-04, 22:42
Guozhang Wang 2013-11-04, 23:33
Vadim Keylis 2013-11-05, 00:27
Re: High Level Consumer commit offset
Any exceptions you saw from the broker end, in the server log?

Guozhang
On Mon, Nov 4, 2013 at 4:27 PM, Vadim Keylis <[EMAIL PROTECTED]> wrote:

> Thanks for confirming, but that is not the behavior I observe. My consumer
> does not commit offsets to Kafka. It receives the messages sent to Kafka,
> and once restarted I should have gotten the messages the consumer had
> previously received, but on the contrary I got none. The logs confirm the
> initial offset is -1. What am I doing wrong?
>
> 04 Nov 2013 16:03:11,570 DEBUG meetme_Consumer_pkey_1062739249349868 kafka.consumer.PartitionTopicInfo - initial consumer offset of meetme:0: fetched offset = -1: consumed offset = -1 is -1
> 04 Nov 2013 16:03:11,570 DEBUG meetme_Consumer_pkey_1062739249349868 kafka.consumer.PartitionTopicInfo - initial fetch offset of meetme:0: fetched offset = -1: consumed offset = -1 is -1
>
> 04 Nov 2013 16:03:11,879 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.network.BlockingChannel - Created socket with SO_TIMEOUT = 30000 (requested 30000), SO_RCVBUF = 65536 (requested 65536), SO_SNDBUF = 11460 (requested -1).
> 04 Nov 2013 16:03:11,895 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.PartitionTopicInfo - reset fetch offset of ( meetme:0: fetched offset = 99000: consumed offset = -1 ) to 99000
> 04 Nov 2013 16:03:11,896 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.PartitionTopicInfo - reset consume offset of meetme:0: fetched offset = 99000: consumed offset = 99000 to 99000
> 04 Nov 2013 16:03:11,897 INFO event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1383609790333] Adding fetcher for partition [meetme,0], initOffset -1 to broker 9 with fetcherId 0
>
>
> Here is my property file:
> zookeeper.connect=dzoo01.tag-dev.com:2181/kafka
> zookeeper.connectiontimeout.ms=1000000
> group.id=event1
> auto.commit.enable=false
>
>
> On Mon, Nov 4, 2013 at 3:32 PM, Guozhang Wang <[EMAIL PROTECTED]> wrote:
>
> > That is correct. If auto.commit.enable is set to false, the offsets will
> > not be committed at all unless the consumer calls the commit function
> > explicitly.
> >
> > Guozhang
> >
> >
> > On Mon, Nov 4, 2013 at 2:42 PM, Vadim Keylis <[EMAIL PROTECTED]>
> > wrote:
> >
> > > Good afternoon. I was under the impression that if auto commit is set
> > > to false, then once the consumer is restarted the logs would be
> > > replayed from the beginning. Is that correct?
> > >
> > > Thanks,
> > > Vadim
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>

--
-- Guozhang
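
[Editor's note on the log lines above — my reading, not stated in this message: with no committed offset, the high-level consumer's fetched offset starts at -1 and falls back to auto.offset.reset, which defaults to "largest" in Kafka 0.8. The fetch position therefore jumps to the log end (99000 here), and previously delivered messages are not replayed. To replay from the earliest available offset when no committed offset exists, the consumer properties would need this additional setting:]

```properties
# assumption: added to the consumer properties from the thread;
# start from the earliest available offset when no committed offset is found
auto.offset.reset=smallest
```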

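[Editor's note: to make the commit path concrete, here is a minimal sketch — my own, not from the thread — of the consumer properties discussed above, built with plain java.util.Properties. With auto-commit disabled, the Kafka 0.8 high-level consumer persists offsets only when the application explicitly calls commitOffsets() on the connector:]

```java
import java.util.Properties;

public class ManualCommitConfig {
    // Consumer properties from the thread; auto-commit is disabled, so the
    // high-level consumer never writes offsets to ZooKeeper on its own.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "dzoo01.tag-dev.com:2181/kafka");
        props.put("zookeeper.connectiontimeout.ms", "1000000");
        props.put("group.id", "event1");
        props.put("auto.commit.enable", "false");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps();
        System.out.println(props.getProperty("auto.commit.enable")); // prints "false"
        // With the Kafka 0.8 jar on the classpath, the consumer would be
        // created and committed roughly like this (sketch, not compiled here):
        //   ConsumerConnector connector =
        //       Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        //   ... consume messages ...
        //   connector.commitOffsets();  // explicit commit, required when
        //                               // auto.commit.enable=false
    }
}
```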
Neha Narkhede 2013-11-05, 04:44
Vadim Keylis 2013-11-05, 06:03
Shafaq 2013-11-05, 06:42
Vadim Keylis 2013-11-05, 08:17
Neha Narkhede 2013-11-05, 14:47
Vadim Keylis 2013-11-05, 15:32