Kafka >> mail # user >> Graceful termination of kafka broker after draining all the data consumed


Bae, Jae Hyeon 2013-01-06, 07:35
Jun Rao 2013-01-06, 16:52
Neha Narkhede 2013-01-06, 18:36
Bae, Jae Hyeon 2013-01-07, 19:18
王国栋 (Guodong Wang) 2013-02-17, 10:20
Re: Graceful termination of kafka broker after draining all the data consumed
In 0.7, it's not easy to decommission a broker when using ZK-based
producers. It's possible to do it with a vip (but then you can't do custom
partitioning). In 0.8 (probably 0.8.1), you can use a tool to move all
partitions off a broker first and then decommission it.

Thanks,

Jun
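
The 0.8.1 tool Jun mentions became kafka-reassign-partitions.sh. A minimal
sketch of moving partitions off a broker before shutting it down follows;
the ZooKeeper address, topic name, file names, and broker ids are
illustrative (here broker 2 is being retired and brokers 0 and 1 remain),
and the commands need a running cluster:

```shell
# List the topics whose partitions should be moved off the retiring broker
# (file name and topic are illustrative):
cat > topics-to-move.json <<'EOF'
{"topics": [{"topic": "my-topic"}], "version": 1}
EOF

# Generate a proposed assignment that places all partitions on brokers 0
# and 1, i.e. off broker 2:
bin/kafka-reassign-partitions.sh --zookeeper zk:2181 \
  --topics-to-move-json-file topics-to-move.json \
  --broker-list "0,1" --generate

# Save the proposed assignment to reassignment.json, then execute it:
bin/kafka-reassign-partitions.sh --zookeeper zk:2181 \
  --reassignment-json-file reassignment.json --execute

# Poll with --verify until every partition reports the reassignment as
# completed, then shut broker 2 down:
bin/kafka-reassign-partitions.sh --zookeeper zk:2181 \
  --reassignment-json-file reassignment.json --verify
```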
On Sun, Feb 17, 2013 at 2:19 AM, 王国栋 <[EMAIL PROTECTED]> wrote:

> Hi Jun,
>
> If we use the high-level, zookeeper-based producer, how can we decommission
> a broker without message loss?
>
> Since we want to partition the log by IP, if all the brokers sit behind the
> same vip, we cannot use a customized partitioning strategy.
>
> Thanks.
>
>
> On Mon, Jan 7, 2013 at 12:52 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
>
> > In 0.7, one way to do this is to use a vip. All producers send data to
> the
> > vip. To decommission a broker,  you first take the broker out of vip so
> no
> > new data will be produced to it. Then you let the consumer drain the data
> > (you can use ConsumerOffsetChecker to check if all data has been
> consumed).
> > Finally, you can shut down the broker.
> >
> > This will be much easier in 0.8 because of replication.
> >
> > Thanks,
> >
> > Jun
> >
> > On Sat, Jan 5, 2013 at 11:34 PM, Bae, Jae Hyeon <[EMAIL PROTECTED]>
> > wrote:
> >
> > > Hi
> > >
> > > I want to terminate a kafka broker gracefully. Before termination, it
> > > should stop receiving traffic from producers and wait until all data
> > > has been consumed.
> > >
> > > I don't think kafka 0.7.x supports this feature. If I wanted to
> > > implement it myself, could you give me a brief sketch of the
> > > implementation?
> > >
> > > Thank you
> > > Best, Jae
> > >
> >
>
>
>
> --
> Guodong Wang
> 王国栋
>

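The drain-check step Jun describes in the quoted message uses
ConsumerOffsetChecker. A minimal sketch, assuming the ZooKeeper address and
consumer group name (both illustrative) and a running 0.7 cluster:

```shell
# For each partition, report the log-end offset, the consumer group's
# committed offset, and the lag between them:
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zookeeper zk:2181 --group my-consumer-group

# Once the broker is out of the vip, repeat until the Lag column reads 0
# for every partition; at that point all produced data has been consumed
# and the broker can be shut down.
```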
王国栋 (Guodong Wang) 2013-02-19, 01:41