Kafka, mail # user - Dynamic broker discovery not working for me


Re: Dynamic broker discovery not working for me
navneet sharma 2012-04-19, 07:16
I am using ZooKeeper-based discovery only (and not a static broker list).
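
For reference, the only difference between the two modes on the producer side
is which bootstrap property is set. Below is a minimal sketch against the 0.7
Java producer API as I understand it; the ZooKeeper address, broker list and
message value are placeholders, not my exact setup:

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.javaapi.producer.ProducerData;
    import kafka.producer.ProducerConfig;

    public class ProducerDiscoverySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // ZooKeeper-based discovery: the producer learns about brokers and
            // their partitions from ZooKeeper.
            props.put("zk.connect", "localhost:2181");
            // Static alternative (mutually exclusive with zk.connect); brokers
            // are listed as brokerId:host:port, e.g.:
            // props.put("broker.list", "1:localhost:9092,2:localhost:9095");
            props.put("serializer.class", "kafka.serializer.StringEncoder");

            Producer<String, String> producer =
                    new Producer<String, String>(new ProducerConfig(props));
            // Send one string message to one of the topics (cartTopic, as in
            // the logs quoted below).
            producer.send(new ProducerData<String, String>("cartTopic", "test message"));
            producer.close();
        }
    }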

Looks like there are 2 workarounds:
1) Either create the topic directory on the new broker and restart it (see
the sketch below),
2) Or shut down all the brokers except the new broker, so that requests
start flowing to the new one, and then start all of them again.
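
For workaround 1, the idea, as I understand Jun's note below, is that a 0.7
broker keeps each partition as a directory named <topic>-<partition> under its
log.dir and, on startup, registers in ZooKeeper only the topics it finds there.
Here is a sketch of creating that empty directory on the new broker; the
log.dir path and the single partition are assumptions about my setup, so
adjust to yours:

    import java.io.File;

    public class CreateTopicDirSketch {
        public static void main(String[] args) {
            // Assumed log.dir of the newly added broker -- check its server.properties.
            File logDir = new File("/tmp/kafka-logs-2");
            // 0.7 partition directories are named <topic>-<partition>; partition 0 here.
            File topicDir = new File(logDir, "cartTopic-0");
            if (topicDir.mkdirs()) {
                System.out.println("Created " + topicDir + "; now restart that broker.");
            } else {
                System.out.println("Could not create " + topicDir + " (it may already exist).");
            }
        }
    }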

The best way is to get a fix into the next release.

On Wed, Apr 18, 2012 at 10:10 PM, Jun Rao <[EMAIL PROTECTED]> wrote:

> Navneet,
>
> Yes, this is a bug in the producer logic. Basically, it only bootstraps
> partitions on the 1st new broker. This problem will be fixed in 0.8, since
> partitions are no longer tied to physical brokers. For now, the quick
> solution is to create a topic directory on disk for the new topic on each
> of the brokers where you want the topic to reside (and restart those
> brokers).
>
> Thanks,
>
> Jun
>
> On Wed, Apr 18, 2012 at 2:00 AM, navneet sharma <
> [EMAIL PROTECTED]
> > wrote:
>
> > Hi All,
> >
> > I was trying the following scenario:
> > 1) Start ZooKeeper.
> > 2) Start server 1, i.e. broker 1. It will connect to ZooKeeper.
> > 3) Start the producer standalone Java program. It pushes string messages
> > read from a file line by line, dividing them across 3 topics before
> > pushing.
> > 4) Start the consumer standalone Java program. This is a set of 3
> > consumers, each dedicated to one of the 3 topics.
> >
> > After this:
> > 5) I started another broker server on a different port.
> > 6) But it was never discovered by the producer, which kept pushing
> > everything to the first broker only.
> >
> > I could see this in the producer logs:
> > 13:42:43,227 [main] DEBUG kafka.producer.Producer  - Getting the number of broker partitions registered for topic: cartTopic
> > 13:42:43,227 [main] DEBUG kafka.producer.Producer  - Broker partitions registered for topic: cartTopic = List(1-0)
> > 13:42:43,227 [main] DEBUG kafka.producer.Producer  - Sending message to broker 127.0.1.1:9095 on partition 0
> > 13:42:43,227 [main] DEBUG kafka.producer.ProducerPool  - Fetching sync producer for broker id: 1
> > 13:42:43,227 [main] DEBUG kafka.message.ByteBufferMessageSet  - makeNext() in deepIterator: innerDone = true
> > 13:42:43,228 [main] DEBUG kafka.message.ByteBufferMessageSet  - Message is uncompressed. Valid byte count = 0
> > 13:42:43,228 [main] DEBUG kafka.message.ByteBufferMessageSet  - makeNext() in deepIterator: innerDone = true
> > 13:42:43,228 [main] DEBUG kafka.producer.ProducerPool  - Sending message to broker 1
> >
> > In fact, I suspected that the producer might sync up with ZooKeeper only
> > when it starts up. So, with ZooKeeper and both brokers up, I re-ran the
> > producer and the consumer, but it gave me the same result.
> >
> > Am I missing anything?
> >
> > Thanks,
> > Navneet Sharma
> >
>