Kafka, mail # user - Mirror maker doesn't replicate new topics


Re: Mirror maker doesn't replicate new topics
Guozhang Wang 2013-09-09, 22:02
Hi Raja,

So just to summarize the scenario:

1) The consumer of the mirror maker is successfully consuming all partitions
of the newly created topic.
2) The producer of the mirror maker is not producing the new messages
immediately when the topic is created (observed from the ProducerSendThread
log).
3) The producer of the mirror maker starts producing the new messages only
when more messages are sent to the source cluster.

If 1) is true then KAFKA-1030 is excluded, since the consumer successfully
recognizes all the partitions and starts consuming.
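To double-check 1), the consumer side can be inspected with the ConsumerOffsetChecker tool shipped with Kafka 0.8. This is only an illustrative invocation against a live cluster; the ZooKeeper address, group name, and topic name below are placeholders for your setup:

```shell
# Hypothetical invocation: substitute the source cluster's ZooKeeper quorum,
# the mirror maker's consumer group, and the newly created topic's name.
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zookeeper source-zk:2181 \
  --group mirror-group \
  --topic new-topic
```

A lag of 0 here, together with data present at the source, would confirm the consumer side is healthy.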

If both 2) and 3) are true, I would wonder whether the batch size of the
mirror maker producer is large, so that it will not send until enough
messages have accumulated in the producer queue.
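If batching is the culprit, the buffering knobs of the 0.8 async producer could be tightened in the mirror maker's producer properties file. These values are an illustrative sketch, not a recommendation:

```properties
# Mirror maker producer config (Kafka 0.8 async producer) -- example values only
producer.type=async
# Flush the queue at most every 100 ms instead of the 5000 ms default
queue.buffering.max.ms=100
# Send a batch as soon as 10 messages accumulate (default is 200)
batch.num.messages=10
```

Lowering queue.buffering.max.ms bounds how long the first messages of a new topic can sit in the queue before being produced to the target cluster.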

Guozhang
On Mon, Sep 9, 2013 at 2:36 PM, Rajasekar Elango <[EMAIL PROTECTED]> wrote:

> Yes, the data exists in the source cluster, but not in the target cluster.
> I can't replicate this problem in the dev environment; it happens only in
> the prod environment. I turned on debug logging, but was not able to
> identify the problem. Basically, whenever I send data to a new topic, I
> don't see any log messages from ProducerSendThread in the mirror maker log,
> so the messages are not produced to the target cluster. If I send more
> messages to the same topic, the producer send thread kicks off and
> replicates the messages. But whatever messages are sent the first time get
> lost. How can I troubleshoot this problem further? Even if this could be
> due to the known issue https://issues.apache.org/jira/browse/KAFKA-1030,
> how can I confirm that? Is there any config tweaking I can do to work
> around this? ConsumerOffsetChecker helps to track consumers. Is there any
> other tool we can use to track producers in the mirror maker?
>
> Thanks in advance for help.
>
> Thanks,
> Raja.
>
>
>
>
> On Fri, Sep 6, 2013 at 3:50 AM, Swapnil Ghike <[EMAIL PROTECTED]> wrote:
>
> > Hi Rajasekar,
> >
> > You said that ConsumerOffsetChecker shows that new topics are
> > successfully consumed and the lag is 0. If that's the case, can you
> > verify that there is data on the source cluster for these new topics?
> > If there is no data at the source, MirrorMaker will only assign consumer
> > streams to the new topic, but the lag will be 0.
> >
> > This could otherwise be related to
> > https://issues.apache.org/jira/browse/KAFKA-1030.
> >
> > Swapnil
> >
> >
> >
> > On 9/5/13 8:38 PM, "Guozhang Wang" <[EMAIL PROTECTED]> wrote:
> >
> > >Could you let me know the process of reproducing this issue?
> > >
> > >Guozhang
> > >
> > >
> > >On Thu, Sep 5, 2013 at 5:04 PM, Rajasekar Elango
> > ><[EMAIL PROTECTED]> wrote:
> > >
> > >> Yes, Guozhang.
> > >>
> > >> Sent from my iPhone
> > >>
> > >> On Sep 5, 2013, at 7:53 PM, Guozhang Wang <[EMAIL PROTECTED]> wrote:
> > >>
> > >> > Hi Rajasekar,
> > >> >
> > >> > Is auto.create.topics.enable set to true in your target cluster?
> > >> >
> > >> > Guozhang
> > >> >
> > >> >
> > >> > On Thu, Sep 5, 2013 at 4:39 PM, Rajasekar Elango
> > >> > <[EMAIL PROTECTED]> wrote:
> > >> >
> > >> >> We are having an issue where the mirror maker no longer replicates
> > >> >> newly created topics. It continues to replicate data for existing
> > >> >> topics, but new topics don't get created on the target cluster.
> > >> >> ConsumerOffsetChecker shows that the new topics are successfully
> > >> >> consumed and the lag is 0, but those topics don't get created in
> > >> >> the target cluster. I also don't see mbeans for the new topic under
> > >> >> kafka.producer.ProducerTopicMetrics.<topic name>metric. In the logs
> > >> >> I see warnings for NotLeaderForPartition, but don't see any major
> > >> >> errors. What else can we look at to troubleshoot this further?
> > >> >>
> > >> >> --
> > >> >> Thanks,
> > >> >> Raja.
> > >> >
> > >> >
> > >> >
> > >> > --
> > >> > -- Guozhang
> > >>
> > >
> > >
> > >
> > >--
> > >-- Guozhang
> >
> >
>
>
> --
> Thanks,
> Raja.
>

--
-- Guozhang