

Re: Kafka 155
Ok thanks Neha :) ...

--
Felix

On Wed, Apr 11, 2012 at 6:14 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:

> >> Is there a way to achieve something like that?
>
> This is going to be somewhat painful in 0.7. There are a couple of other
> ways to make a broker read-only:
>
> 1. Changing VIP configuration to remove that broker from the list.
> This will work if you are using a hardware load balancer between the
> producers and the brokers.
> OR
> 2. Changing the broker.list configuration on the producer to remove the
> read-only broker. But this will involve restarting the producers to
> pick up the new config.
>
> Thanks,
> Neha
>
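
To make the second option above concrete, here is a minimal sketch of a
producer pinned to a static broker list, assuming the 0.7 Java producer API
(kafka.javaapi.producer.Producer) and 0.7's "brokerId:host:port" broker.list
format; the hostnames, broker ids, and topic are hypothetical:

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.javaapi.producer.ProducerData;
    import kafka.producer.ProducerConfig;

    public class StaticBrokerListProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Static broker list in "brokerId:host:port" form. Broker 3
            // (the one being made read-only) has been removed, so this
            // producer stops sending it new messages once restarted.
            props.put("broker.list",
                    "1:kafka1.example.com:9092,2:kafka2.example.com:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");

            Producer<String, String> producer =
                    new Producer<String, String>(new ProducerConfig(props));
            producer.send(new ProducerData<String, String>("my-topic", "hello"));
            producer.close();
        }
    }

This assumes the producers use a static broker.list rather than
ZooKeeper-based discovery (zk.connect); producers discovering brokers via
ZooKeeper would not pick up this change, which is why the load-balancer
approach is the alternative.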
>
> On Wed, Apr 11, 2012 at 2:02 PM, Felix GV <[EMAIL PROTECTED]> wrote:
> > Intra-cluster replication is great and would alleviate (or probably
> > eliminate) the need for graceful decommissioning.
> >
> > But that still does not answer the question: if one had to gracefully
> > decommission a broker today in 0.7 (or in trunk or w/ patches), how would
> > one do it?
> >
> > How can we make a broker read-only?
> >
> > Wouldn't there be a way to do something like this: bring the consumers
> > down, bring down the brokers that need to be decommissioned, restart
> > those decommissioned brokers on another network where no producers are
> > pushing any content into them, restart the consumers inside that other
> > network to consume the stuff they had not consumed yet, and then move
> > (stop and restart) the consumers back to their original network, where
> > they could see and consume from the other brokers (the ones that were
> > not decommissioned and were still receiving stuff from the producers on
> > that network)?
> >
> > That seems awfully convoluted, and I'm not even sure it would work.
> > Plus, it would imply some downtime for the consumers (which matters for
> > real-time-oriented consumers, but less so for batch-oriented ones...).
> > But still, I can't believe there wouldn't be any way at all...
> >
> > Is there a way to achieve something like that?
> >
> > --
> > Felix
> >
> >
> >
> > On Thu, Apr 5, 2012 at 9:17 PM, Bateman, Matt <[EMAIL PROTECTED]>
> > wrote:
> >
> >> Hi Jun,
> >>
> >> That would definitely solve the issue. I guess it's just a matter of
> >> timing for 0.8...
> >>
> >> Thanks,
> >>
> >> Matt
> >>
> >> -----Original Message-----
> >> From: Jun Rao [mailto:[EMAIL PROTECTED]]
> >> Sent: Thursday, April 05, 2012 6:09 PM
> >> To: [EMAIL PROTECTED]
> >> Subject: Re: Kafka 155
> >>
> >> Matt,
> >>
> >> The main motivation for decommissioning is that one can let consumers
> >> drain messages from a broker before taking it out. In 0.8, we are adding
> >> intra-cluster replication, so a broker can be taken out at any time
> >> without affecting consumers. Do you still see a need for decommissioning
> >> then?
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >> On Thu, Apr 5, 2012 at 2:13 PM, Bateman, Matt <[EMAIL PROTECTED]>
> >> wrote:
> >>
> >> > Hi Guys,
> >> >
> >> > I see that KAFKA-155 isn't targeted for a release. It deals with gracefully
> >> > decommissioning a broker. Is there another way to achieve this? In
> >> > other words, is there a way to prevent publishing to a broker but
> >> > still allow consumers to pull messages until the broker is "empty"?
> >> >
> >> > Thanks,
> >> >
> >> > Matt
> >> >
> >> >
> >>
>
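
On the question of telling when a draining broker is "empty": one possible
sketch, assuming the 0.7 SimpleConsumer API (kafka.javaapi.consumer.SimpleConsumer
and kafka.api.OffsetRequest), is to read each partition's latest offset on the
broker being drained and wait until the consumer groups' committed offsets
catch up to it. The host, topic, and partition count are hypothetical:

    import kafka.api.OffsetRequest;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class DrainCheck {
        public static void main(String[] args) {
            // Broker being decommissioned (hypothetical host/port).
            SimpleConsumer consumer =
                    new SimpleConsumer("kafka3.example.com", 9092, 10000, 64 * 1024);
            int numPartitions = 4; // num.partitions on the draining broker
            for (int partition = 0; partition < numPartitions; partition++) {
                // Ask the broker for its latest (log-end) offset.
                long[] offsets = consumer.getOffsetsBefore("my-topic", partition,
                        OffsetRequest.LatestTime(), 1);
                System.out.println("partition " + partition
                        + " log-end offset: " + offsets[0]);
            }
            consumer.close();
        }
    }

The broker can be considered drained once, for every consumer group, the
committed offset stored in ZooKeeper (under
/consumers/<group>/offsets/<topic>/<brokerId>-<partition> in 0.7's layout)
has reached the log-end offset printed above.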