Re: Graceful termination of kafka broker after draining all the data consumed
王国栋 2013-02-17, 10:20
If we use the high-level, ZooKeeper-based producer, how can we decommission a
broker without message loss?
Since we want to partition the log by IP, and all the brokers sit behind the
same VIP, we cannot use a customized partitioning strategy.
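For context, an IP-based partitioning strategy of the kind mentioned above typically hashes the client IP to pick a partition, so that all events from one IP land on one broker. A minimal, Kafka-independent sketch of that idea (the function name and inputs here are illustrative, not part of Kafka's API):

```python
import hashlib

def partition_for_ip(ip: str, num_partitions: int) -> int:
    """Map a client IP to a partition deterministically.

    Uses a stable hash (md5) rather than Python's built-in hash(),
    which is randomized across interpreter runs.
    """
    digest = hashlib.md5(ip.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# All events from the same IP map to the same partition.
assert partition_for_ip("10.0.0.1", 4) == partition_for_ip("10.0.0.1", 4)
assert 0 <= partition_for_ip("192.168.1.7", 4) < 4
```

The point of the question is that a VIP hides broker identity from the producer, so this kind of deterministic routing cannot be applied on the producer side.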
On Mon, Jan 7, 2013 at 12:52 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
> In 0.7, one way to do this is to use a VIP. All producers send data to the
> VIP. To decommission a broker, you first take the broker out of the VIP so
> that no new data is produced to it. Then you let the consumers drain the data
> (you can use ConsumerOffsetChecker to verify that all data has been consumed).
> Finally, you can shut down the broker.
> This will be much easier in 0.8 because of replication.
> On Sat, Jan 5, 2013 at 11:34 PM, Bae, Jae Hyeon <[EMAIL PROTECTED]> wrote:
> > Hi
> > If I want to terminate a Kafka broker gracefully, then before termination
> > it should stop receiving traffic from producers and wait until all
> > data has been consumed.
> > I don't think Kafka 0.7.x supports this feature. If I want
> > to implement it myself, could you give me a brief sketch
> > of the implementation?
> > Thank you
> > Best, Jae
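The drain check Jun describes amounts to comparing each partition's log-end offset with the consumer's committed offset; the broker is safe to shut down once the lag is zero everywhere. A minimal sketch of that logic, independent of the ConsumerOffsetChecker tool itself (the offset values below are made-up inputs for illustration):

```python
def is_drained(log_end_offsets: dict, consumer_offsets: dict) -> bool:
    """Return True once the consumer has caught up on every partition.

    log_end_offsets:  partition -> last offset written on the broker
    consumer_offsets: partition -> offset the consumer has committed
    """
    return all(
        consumer_offsets.get(partition, 0) >= end
        for partition, end in log_end_offsets.items()
    )

# Partition 1 still has 5 unconsumed messages, so the broker is not drained.
ends = {0: 100, 1: 250}
committed = {0: 100, 1: 245}
assert is_drained(ends, committed) is False

# Once the consumer catches up, shutdown is safe.
committed[1] = 250
assert is_drained(ends, committed) is True
```

In practice you would poll this condition (e.g. by re-running ConsumerOffsetChecker) until it holds, then stop the broker process.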