Re: Graceful termination of kafka broker after draining all the data consumed
Jun Rao 2013-02-18, 16:51
In 0.7, it's not easy to decommission a broker when using ZK-based
producers. It's possible to do it with a VIP (but then you can't do
custom partitioning). In 0.8 (probably 0.8.1), you will be able to use a
tool to move all partitions off a broker first and then decommission it.
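The 0.8-era flow can be sketched as follows. This is only an illustration: the planning logic, the helper name `reassignment_plan`, and the JSON layout are assumptions modeled on the later `kafka-reassign-partitions.sh` tooling, not something 0.7 or early 0.8 ships.

```python
import json

def reassignment_plan(assignments, decommissioned_broker, live_brokers):
    """Build a reassignment plan that moves every replica off the broker
    being decommissioned.

    assignments: {(topic, partition): [replica broker ids]}
    Assumes at least one live broker is available as a replacement.
    """
    moved = []
    for (topic, partition), replicas in sorted(assignments.items()):
        if decommissioned_broker not in replicas:
            continue
        # Replace the decommissioned broker with a live broker that is
        # not already a replica of this partition.
        candidates = [b for b in live_brokers
                      if b != decommissioned_broker and b not in replicas]
        new_replicas = [candidates[0] if b == decommissioned_broker else b
                        for b in replicas]
        moved.append({"topic": topic, "partition": partition,
                      "replicas": new_replicas})
    return {"version": 1, "partitions": moved}

plan = reassignment_plan(
    assignments={("clicks", 0): [1, 2], ("clicks", 1): [2, 3]},
    decommissioned_broker=1,
    live_brokers=[2, 3, 4],
)
print(json.dumps(plan))
```

Only partitions that actually have a replica on the departing broker are touched; once the plan has been applied, the broker holds no data and can be shut down.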
On Sun, Feb 17, 2013 at 2:19 AM, Guodong Wang <[EMAIL PROTECTED]> wrote:
> Hi Jun,
> If we use the high-level producer based on ZooKeeper, how can we decommission
> a broker without message loss?
> Since we want to partition the log by IP, if all the brokers sit behind the
> same VIP, we cannot use a customized partitioning strategy.
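The per-IP partitioning being discussed can be sketched as below. In 0.7 this logic would live in a custom partitioner class configured on the producer (which is why a ZK-based producer supports it while a VIP front-end, which picks the broker for you, does not); the function name and the crc32 choice here are illustrative assumptions.

```python
import zlib

def partition_for_ip(ip, num_partitions):
    """Pick a partition deterministically from the client IP, so that all
    messages from one host land in the same partition.  Mirrors the
    key -> partition contract of a custom Kafka producer partitioner.
    """
    # crc32 gives a stable, non-negative 32-bit hash of the IP string.
    return zlib.crc32(ip.encode("ascii")) % num_partitions

# The same IP always maps to the same partition; different IPs spread out.
p = partition_for_ip("10.0.0.7", 8)
assert p == partition_for_ip("10.0.0.7", 8)
assert 0 <= p < 8
```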
> On Mon, Jan 7, 2013 at 12:52 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
> > In 0.7, one way to do this is to use a VIP. All producers send data to the
> > VIP. To decommission a broker, you first take the broker out of the VIP so
> > that no new data will be produced to it. Then you let the consumers drain
> > the data (you can use ConsumerOffsetChecker to check whether all data has
> > been consumed). Finally, you can shut down the broker.
> > This will be much easier in 0.8 because of replication.
> > Thanks,
> > Jun
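The drain check above boils down to comparing, per partition, the log-end offset against the committed consumer offset, the same lag that ConsumerOffsetChecker reports. A minimal sketch of that arithmetic, with hypothetical helper names:

```python
def total_lag(log_end_offsets, consumer_offsets):
    """Remaining messages = log-end offset minus committed consumer
    offset, summed over partitions.  A partition the consumer has never
    committed for counts as fully unconsumed (offset 0).
    """
    return sum(log_end_offsets[p] - consumer_offsets.get(p, 0)
               for p in log_end_offsets)

def drained(log_end_offsets, consumer_offsets):
    """The broker is safe to shut down once every partition's lag is zero."""
    return total_lag(log_end_offsets, consumer_offsets) == 0

# 10 messages still unread on partition 0, partition 1 fully caught up.
assert total_lag({0: 100, 1: 40}, {0: 90, 1: 40}) == 10
assert drained({0: 100, 1: 40}, {0: 100, 1: 40})
```

In practice you would poll this check in a loop after pulling the broker out of the VIP, and only stop the broker process once it returns true.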
> > On Sat, Jan 5, 2013 at 11:34 PM, Bae, Jae Hyeon <[EMAIL PROTECTED]>
> > wrote:
> > > Hi
> > >
> > > I want to terminate a kafka broker gracefully: before termination, it
> > > should stop receiving traffic from producers and wait until all of its
> > > data has been consumed.
> > >
> > > I don't think kafka 0.7.x supports this feature. If I want to implement
> > > it myself, could you give me a brief sketch of the implementation?
> > >
> > > Thank you
> > > Best, Jae
> > >
> Guodong Wang