Re: Producer will pick one of the two brokers, but never both at the same time [0.8]
Any error in state-change.log? Also, are you using the latest code in the
0.8 branch?
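
A quick way to check, assuming a stock install (log4j writes the
state-change log to state-change.log under the broker's logs/ directory):

  grep -i error logs/state-change.log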

Thanks,

Jun
On Wed, Jun 12, 2013 at 9:27 AM, Alexandre Rodrigues <
[EMAIL PROTECTED]> wrote:

> Hi Jun,
>
> Thanks for your prompt answer. The producer yields those errors right at
> startup, so I think the topic metadata refresh has nothing to do with it.
>
> The problem is that one of the brokers isn't the leader of any partition
> assigned to it, and because the topics were created with a replication
> factor of 1, the producer never connects to that broker at all. What I
> don't understand is why that broker doesn't assume leadership of those
> partitions.
>
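> (For what it's worth, the 0.8 preferred-replica-election tool should be
> able to hand leadership back to the assigned replica; run without a JSON
> file it covers all partitions. ZK address as in my setup:)
>
> bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181
>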
> I deleted all the topics and tried now with a replication factor of two
>
> topic: A    partition: 0    leader: 1    replicas: 1,0    isr: 1
> topic: A    partition: 1    leader: 0    replicas: 0,1    isr: 0,1
> topic: B    partition: 0    leader: 0    replicas: 0,1    isr: 0,1
> topic: B    partition: 1    leader: 1    replicas: 1,0    isr: 1
> topic: C    partition: 0    leader: 1    replicas: 1,0    isr: 1
> topic: C    partition: 1    leader: 0    replicas: 0,1    isr: 0,1
>
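> (For reference, the recreate step, assuming the stock 0.8 create-topic
> script; repeated once per topic:)
>
> bin/kafka-create-topic.sh --zookeeper localhost:2181 --topic A --partition 2 --replica 2
>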
>
> Now the producer doesn't yield errors. However, one of the brokers
> (broker 0) generates lots of lines like this:
>
> [2013-06-12 16:19:41,805] WARN [KafkaApi-0] Produce request with
> correlation id 404999 from client  on partition [B,0] failed due to
> Partition [B,0] doesn't exist on 0 (kafka.server.KafkaApis)
>
> There should be a replica there, so I don't know why it complains about
> that message.
>
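> (To double-check what ZooKeeper actually holds for that partition: the
> 0.8 layout keeps leader/ISR state under
> /brokers/topics/<topic>/partitions/<n>/state, e.g.:)
>
> bin/zookeeper-shell.sh localhost:2181 get /brokers/topics/B/partitions/0/state
>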
> Have you ever found anything like this?
>
>
>
> On 12 June 2013 16:27, Jun Rao <[EMAIL PROTECTED]> wrote:
>
> > If the leaders exist on both brokers, the producer should be able to
> > connect to both of them, assuming you don't provide any key when sending
> > the data. Could you try restarting the producer? If there have been broker
> > failures, it may take topic.metadata.refresh.interval.ms for the producer
> > to pick up the newly available partitions (see
> > http://kafka.apache.org/08/configuration.html for details).
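> > On the producer side that's just a config property; an illustrative
> > snippet (broker list and interval are made-up values):
> >
> > metadata.broker.list=broker0:9092,broker1:9092
> > # default is 600000 (10 min); lower it to pick up new partitions sooner
> > topic.metadata.refresh.interval.ms=60000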
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Wed, Jun 12, 2013 at 8:01 AM, Alexandre Rodrigues <
> > [EMAIL PROTECTED]> wrote:
> >
> > > Hi,
> > >
> > > I have a Kafka 0.8 cluster with two nodes connected to three ZKs, with
> > > the same configuration except for the brokerId (one is 0 and the other
> > > 1). I created three topics A, B and C with 4 partitions and a
> > > replication factor of 1. My idea was to have 2 partitions per topic in
> > > each broker. However, when I connect a producer, I can't get both
> > > brokers to write at the same time and I don't know what's going on.
> > >
> > > My server.config has the following entries:
> > >
> > > auto.create.topics.enable=true
> > > num.partitions=2
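> > >
> > > (Note: auto-created topics take their replication factor from
> > > default.replication.factor, which defaults to 1, so getting replicated
> > > topics by default would need something like this as well:)
> > >
> > > default.replication.factor=2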
> > >
> > >
> > > When I run bin/kafka-list-topic.sh --zookeeper localhost:2181 I get the
> > > following partition leader assignments:
> > >
> > > topic: A    partition: 0    leader: 1    replicas: 1    isr: 1
> > > topic: A    partition: 1    leader: 0    replicas: 0    isr: 0
> > > topic: A    partition: 2    leader: 1    replicas: 1    isr: 1
> > > topic: A    partition: 3    leader: 0    replicas: 0    isr: 0
> > > topic: B    partition: 0    leader: 0    replicas: 0    isr: 0
> > > topic: B    partition: 1    leader: 1    replicas: 1    isr: 1
> > > topic: B    partition: 2    leader: 0    replicas: 0    isr: 0
> > > topic: B    partition: 3    leader: 1    replicas: 1    isr: 1
> > > topic: C    partition: 0    leader: 0    replicas: 0    isr: 0
> > > topic: C    partition: 1    leader: 1    replicas: 1    isr: 1
> > > topic: C    partition: 2    leader: 0    replicas: 0    isr: 0
> > > topic: C    partition: 3    leader: 1    replicas: 1    isr: 1
> > >
> > >
> > > I've forced reassignment using the kafka-reassign-partitions tool with
> > > the following JSON:
> > >
> > > {"partitions":  [
> > >    {"topic": "A", "partition": 1, "replicas": [0] },