Re: OffsetOutOfRangeException with 0 retention
Neha Narkhede 2013-03-14, 15:05
If you are never able to commit the offset, the consumer will always try to
consume from its initial fetch offset. Eventually, that offset will be garbage
collected from the broker, and the consumer will automatically reset its fetch
offset to the earliest or latest offset available on the broker. Whether it
resets to the earliest or the latest offset depends on the autooffset.reset
config on the consumer.
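As a rough sketch of that reset decision (illustrative names only, not Kafka's actual internals; in 0.7 the autooffset.reset values are "smallest" and "largest"):

```java
// Sketch of the reset behavior described above. The method and class names
// are assumptions for illustration, not Kafka's real code.
public class OffsetResetSketch {
    /**
     * Returns the offset a consumer falls back to when its saved offset
     * is no longer within the range the broker has retained.
     */
    static long resetOffset(long requested, long earliest, long latest, String policy) {
        if (requested >= earliest && requested <= latest) {
            return requested; // still valid, no reset needed
        }
        if ("smallest".equals(policy)) {
            return earliest; // oldest offset still on the broker
        }
        return latest; // "largest": skip ahead to the newest data
    }

    public static void main(String[] args) {
        // Offset 3004960 was garbage collected; suppose the broker now
        // retains [5000000, 6000000].
        System.out.println(resetOffset(3004960L, 5000000L, 6000000L, "smallest")); // 5000000
        System.out.println(resetOffset(3004960L, 5000000L, 6000000L, "largest"));  // 6000000
    }
}
```

Note that "smallest" re-reads everything still on the broker, while "largest" silently skips the messages that were missed; which is right depends on whether the application tolerates duplicates or gaps.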
On Wed, Mar 13, 2013 at 7:59 PM, Nicolas Berthet
> Sadly, I don't have access to those logs anymore, I don't have access to
> environment. Though I remember seeing some exception during offset writing,
> most probably due to zookeeper connection issue.
> What would be the side effects of not being able to write the consumer
> offset, besides seeing this exception? As long as my service doesn't restart
> and I do not recreate the consumer, would the consumer continue to work?
> Would I get duplicates of messages when it reconnects to ZK?
> Basically, I'm interested in whatever could go wrong with our kafka
> consumers, what would be the symptoms, what would be the possible
> -----Original Message-----
> From: Neha Narkhede [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, March 13, 2013 9:19
> To: [EMAIL PROTECTED]
> Subject: Re: OffsetOutOfRangeException with 0 retention
> Looks like your consumers have never updated their offsets and are unable to
> reset the offset to the earliest/latest on startup. Can you pass around the
> entire consumer log?
> On Mon, Mar 11, 2013 at 6:34 PM, Nicolas Berthet
> <[EMAIL PROTECTED]>wrote:
> > Neha,
> > Thanks for the reply. I'm using the high level consumer. By the way, I'm
> > using Kafka 0.7.2 (we built it with Scala 2.10); the consumer is using
> > default values with a high ZK timeout value.
> > As far as I know, my consumers didn't restart; they're running on
> > services that were not restarted (unless the consumer itself
> > reconnected after some time).
> > I don't know if it could be part of the reason, but some of my consumers
> > are at remote sites; they have high latency and experience ZK timeouts
> > here and there. I have ZK observers at the remote sites with rather high
> > timeout values, and they still disconnect from time to time from the main
> > site due to timeouts.
> > Due to the ZK timeouts I noticed the consumers fail to write their offsets.
> > PS: Sorry for the previous spamming, my mail client went crazy and by
> > the time I realized it was too late.
> > Kindly,
> > Nicolas
> > -----Original Message-----
> > From: Neha Narkhede [mailto:[EMAIL PROTECTED]]
> > Sent: Monday, March 11, 2013 23:52
> > To: [EMAIL PROTECTED]
> > Subject: Re: OffsetOutOfRangeException with 0 retention
> > Nicolas,
> > It seems that you started a consumer from the earliest offset, then
> > shut it down for a long time, and tried restarting it. At that point,
> > you will see OffsetOutOfRange exceptions, since the offset that
> > your consumer is trying to fetch has been garbage collected from the
> > server (due to it being too old). If you are using the high level
> > consumer (ZookeeperConsumerConnector), the consumer will automatically
> > reset the offset to the earliest or latest depending on the
> > autooffset.reset config value.
> > Which consumer are you using in this test?
> > Thanks,
> > Neha
> > On Mon, Mar 11, 2013 at 2:12 AM, Nicolas Berthet
> > <[EMAIL PROTECTED]>wrote:
> > > Hi,
> > >
> > >
> > >
> > > I'm currently seeing a lot of OffsetOutOfRangeException in my server
> > > logs (it's not something that appeared recently, I simply didn't use
> > > Kafka before). I tried to find information on the mailing-list, but
> > > nothing seems to match my case.
> > >
> > >
> > >
> > > ERROR error when processing request FetchRequest(topic:test-topic,
> > > part:0 offset:3004960 maxSize:1048576) (kafka.server.KafkaRequestHandlers)
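The ERROR above is the broker-side symptom: with retention effectively 0, old log segments are deleted quickly, so a fetch at a previously valid offset (3004960 here) can fall outside the range the broker still retains. A minimal sketch of that validation, with assumed names (not Kafka's actual code):

```java
// Illustrative sketch of the broker-side range check that produces the
// OffsetOutOfRangeException logged above. Class and method names are
// assumptions for this example, not Kafka internals.
public class FetchCheckSketch {
    /** Accepts the fetch offset if it is still retained, else rejects it. */
    static long fetch(long offset, long earliest, long latest) {
        if (offset < earliest || offset > latest) {
            // The broker rejects the request; the high-level consumer then
            // falls back to its autooffset.reset policy.
            throw new RuntimeException("OffsetOutOfRangeException: offset " + offset
                    + " outside retained range [" + earliest + ", " + latest + "]");
        }
        return offset;
    }

    public static void main(String[] args) {
        // Suppose aggressive retention has deleted everything below 3100000:
        // the fetch at 3004960 from the log line above would be rejected.
        try {
            fetch(3004960L, 3100000L, 3200000L);
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```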