[Kafka users] Getting LeaderNotAvailableException in console producer after increasing partitions from 4 to 16


Re: Getting LeaderNotAvailableException in console producer after increasing partitions from 4 to 16.
Cool! You can follow the process of creating a JIRA here:

http://kafka.apache.org/contributing.html

And submit a patch here:

https://cwiki.apache.org/confluence/display/KAFKA/Git+Workflow

It would be great if you could also add an entry for this issue to the FAQ,
since I think this is a common question:

https://cwiki.apache.org/confluence/display/KAFKA/FAQ

Guozhang
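
A minimal sketch of the patch-file workflow that wiki page describes, assuming
the JIRA-based process in use at the time (the KAFKA-XXXX id, branch name, and
commit message below are placeholders, not a real ticket):

  # create a working branch off trunk for the fix
  git checkout -b KAFKA-XXXX origin/trunk
  # ...edit the console producer to add the new options...
  git commit -am "KAFKA-XXXX: add message.send.max.retries to console producer"
  # generate a patch file to attach to the JIRA ticket
  git format-patch origin/trunk --stdout > KAFKA-XXXX.patch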
On Tue, Aug 27, 2013 at 2:38 PM, Rajasekar Elango <[EMAIL PROTECTED]> wrote:

> Thanks Guozhang, changing max retries to 5 worked. Since I am changing the
> console producer code, I can also submit a patch adding both
> message.send.max.retries and retry.backoff.ms to the console producer. Can
> you let me know the process for submitting a patch?
>
> Thanks,
> Raja.
>
>
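
Once such a patch lands, the invocation could look roughly like the following.
The two flag names simply mirror the producer config keys Raja mentions and
are an assumption here, not options that existed at the time of this thread:

  # hypothetical console producer flags mirroring message.send.max.retries
  # and retry.backoff.ms; broker address and topic are placeholders
  ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-60 \
      --message-send-max-retries 5 --retry-backoff-ms 100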
> On Tue, Aug 27, 2013 at 4:03 PM, Guozhang Wang <[EMAIL PROTECTED]> wrote:
>
> > Hello Rajasekar,
> >
> > The "Removing fetcher" log entries are normal when partitions are added,
> > since they indicate that some leader changes have happened, and brokers
> > are closing the fetchers to the old leaders.
> >
> > I just realized that the console producer does not have the
> > message.send.max.retries option yet. Could you file a JIRA for this, and
> > I will follow up to add this option? For now, you can change the
> > hard-coded default value from 3 to a larger number.
> >
> > Guozhang
> >
> >
> > On Tue, Aug 27, 2013 at 12:37 PM, Rajasekar Elango
> > <[EMAIL PROTECTED]> wrote:
> >
> > > Thanks Neha & Guozhang,
> > >
> > > When I ran StateChangeLogMerger, I am seeing this message repeated 16
> > > times for each partition:
> > >
> > > [2013-08-27 12:30:02,535] INFO [ReplicaFetcherManager on broker 1]
> > > Removing fetcher for partition [test-60,13]
> > > (kafka.server.ReplicaFetcherManager)
> > > [2013-08-27 12:30:02,536] INFO [Log Manager on Broker 1] Created log
> > > for partition [test-60,13] in
> > > /home/relango/dev/mandm/kafka/main/target/dist/mandm-kafka/kafka-data.
> > > (kafka.log.LogManager)
> > >
> > > I am also seeing .log and .index files created for this topic in the
> > > data dir. Also, the list topic command shows leaders, replicas, and ISRs
> > > for all partitions. Do you still think increasing the number of retries
> > > would help, or is it some other issue? Also, the console producer
> > > doesn't seem to have an option to set the number of retries. Is there a
> > > way to configure the number of retries for the console producer?
> > >
> > > Thanks,
> > > Raja.
> > >
> > >
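
For reference, the "list topic" check Raja mentions can be run like this in
0.8 (tool and flag names as of 0.8.0; the ZooKeeper host and topic name are
placeholders):

  # show leader, replicas, and ISR for every partition of the topic
  ./bin/kafka-list-topic.sh --zookeeper localhost:2181 --topic test-60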
> > > On Tue, Aug 27, 2013 at 12:52 PM, Neha Narkhede
> > > <[EMAIL PROTECTED]> wrote:
> > >
> > > > As Guozhang said, your producer might give up sooner than the leader
> > > > election completes for the new topic. To confirm whether your producer
> > > > gave up too soon, you can run the state change log merge tool for this
> > > > topic and see when the leader election finished for all partitions:
> > > >
> > > > ./bin/kafka-run-class.sh kafka.tools.StateChangeLogMerger \
> > > >     --logs <location of all state change logs> --topic <topic>
> > > >
> > > > Note that this tool requires you to give the state change logs for all
> > > > brokers in the cluster.
> > > >
> > > >
> > > > Thanks,
> > > > Neha
> > > >
> > > >
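
A concrete invocation of the merge tool Neha describes might look like the
sketch below. The log file paths are hypothetical; --logs takes a
comma-separated list that must cover every broker in the cluster:

  # merge the state change logs from both brokers, filtered to one topic
  ./bin/kafka-run-class.sh kafka.tools.StateChangeLogMerger \
      --logs /var/kafka-logs/broker1/state-change.log,/var/kafka-logs/broker2/state-change.log \
      --topic test-60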
> > > > On Tue, Aug 27, 2013 at 9:45 AM, Guozhang Wang <[EMAIL PROTECTED]>
> > > > wrote:
> > > >
> > > > > Hello Rajasekar,
> > > > >
> > > > > In 0.8, producers keep a cache of the partition -> leader_broker_id
> > > > > map, which is used to determine which brokers the messages should be
> > > > > sent to. After new partitions are added, the cache on the producer
> > > > > has not been populated yet, hence it will throw this exception. The
> > > > > producer will then try to refresh its cache by asking the brokers
> > > > > "who are the leaders of these new partitions that I did not know of
> > > > > before?". At first the brokers also do not know this information;
> > > > > they will only get it from the controller, which will only propagate
> > > > > the leader information after the leader election for the new
> > > > > partitions has completed.
 
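The window Guozhang describes opens at the moment partitions are expanded; in
0.8 that expansion is a separate tool, sketched here with placeholder values
(the kafka-add-partitions.sh tool and its flag semantics are as of 0.8.0, so
treat them as an assumption):

  # grow the topic from 4 to 16 partitions; --partition is assumed here to
  # count the partitions being added, i.e. 12 new ones on top of 4
  ./bin/kafka-add-partitions.sh --zookeeper localhost:2181 \
      --topic test-60 --partition 12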