Kafka, mail # user - Where is broker 'current offset' stored in ZooKeeper?

Chris Curtin 2013-02-18, 15:55
Jun Rao 2013-02-18, 17:04

Re: Where is broker 'current offset' stored in ZooKeeper?
Chris Curtin 2013-02-18, 17:57
Thanks Jun.

Looks like there are a couple of ways to do this, depending on how operations
wants to manage things.

Thanks,

Chris
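
One of those ways is the ConsumerLag JMX bean Jun points to below. A minimal
sketch of polling it over a remote JMX connection, assuming the consumer JVM
exposes JMX on port 9999 and that the lag gauges show up under the
kafka.server domain with a 'Value' attribute (the ObjectName pattern and the
attribute name are assumptions here, not taken from this thread):

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ConsumerLagJmxCheck {
    public static void main(String[] args) throws Exception {
        // Assumes the consumer was started with -Dcom.sun.management.jmxremote.port=9999
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // List everything registered under kafka.server and keep the lag gauges.
            Set<ObjectName> names = mbs.queryNames(new ObjectName("kafka.server:*"), null);
            for (ObjectName name : names) {
                if (name.toString().contains("ConsumerLag")) {
                    // Gauge value = number of messages this fetcher is behind.
                    Object lag = mbs.getAttribute(name, "Value");
                    System.out.println(name + " -> " + lag);
                }
            }
        } finally {
            connector.close();
        }
    }
}

The other way is the ConsumerOffsetChecker command line tool Jun mentions,
run via kafka-run-class.sh with the consumer group and ZooKeeper connect
string (exact flag names vary by release).
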
On Mon, Feb 18, 2013 at 12:04 PM, Jun Rao <[EMAIL PROTECTED]> wrote:

> All zk paths in 0.8 are documented in
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper
>
> In 0.8, there is a jmx bean (ConsumerLag) in the consumer under
> kafka.server that monitors the lag of each partition in terms of messages.
> We also have a command line tool ConsumerOffsetChecker.
>
> Thanks,
>
> Jun
>
> On Mon, Feb 18, 2013 at 7:54 AM, Chris Curtin <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > A few items our operations teams monitor today in our
> > JMS infrastructure are 'number of messages processed' and 'number of
> > messages in queue'.
> >
> > Since Kafka changes the paradigm away from 'messages in queue', how do we
> > give operations an idea of whether our consumers are running behind?
> >
> > One thought is to query the zookeeper storage and get the current offset
> > for each topic/partition/consumer group and compare it to the latest
> > offset created by the broker. That will tell us if one or more consumer
> > groups and specific partition consumers are not keeping up.
> >
> > The only problem is that I can't figure out where the broker is storing
> > the high water mark. I looked at the 0.8 ZooKeeper document on the wiki
> > and didn't see it there (which is where I see the consumer group
> > information).
> >
> > First question: is the high water mark stored in ZooKeeper?
> >
> > Second, is there a different way of monitoring how consumers are doing
> > relative to what has been stored by the broker?
> >
> > Thanks,
> >
> > Chris
> >
>
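
For completeness, a rough sketch of the check Chris describes in the original
question above: read the group's committed offset from ZooKeeper and compare
it to the partition's latest offset fetched from the broker with an offset
request. It assumes the 0.8 Java SimpleConsumer API and the
/consumers/<group>/offsets/<topic>/<partition> path from the wiki page Jun
linked; host names, ports, group, topic and partition are placeholders, and
error handling is omitted.

import java.util.HashMap;
import java.util.Map;

import org.apache.zookeeper.ZooKeeper;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class ConsumerLagFromZk {
    public static void main(String[] args) throws Exception {
        String group = "my-group", topic = "my-topic", clientId = "lag-checker";
        int partition = 0;

        // 1. The group's committed offset, as stored by the ZooKeeper-based consumer.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
        // (A real tool should wait for the SyncConnected event before reading.)
        byte[] data = zk.getData(
            "/consumers/" + group + "/offsets/" + topic + "/" + partition, false, null);
        long committed = Long.parseLong(new String(data, "UTF-8"));
        zk.close();

        // 2. The latest offset on the broker that leads this partition.
        SimpleConsumer consumer =
            new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, clientId);
        TopicAndPartition tp = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        requestInfo.put(tp,
            new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientId);
        OffsetResponse response = consumer.getOffsetsBefore(request);
        long latest = response.offsets(topic, partition)[0];
        consumer.close();

        // 3. Lag = how far this consumer group is behind on this partition.
        System.out.println("partition " + partition + " lag: " + (latest - committed));
    }
}

This lines up with the wiki page: the consumer group's committed offset is in
ZooKeeper, but the broker's latest offset is not, so it has to be asked of the
broker itself.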