Kafka >> mail # user >> Graceful termination of kafka broker after draining all the data consumed


Re: Graceful termination of kafka broker after draining all the data consumed
In 0.7, one way to do this is to use a VIP. All producers send data to the
VIP. To decommission a broker, you first take the broker out of the VIP so
that no new data is produced to it. Then you let the consumers drain the
remaining data (you can use ConsumerOffsetChecker to verify that all data
has been consumed). Finally, you can shut down the broker.
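To make the drain check concrete, here is a rough Java sketch of what
ConsumerOffsetChecker does for a single consumer group and topic/partition:
read the group's committed offset from ZooKeeper and compare it with the
broker's latest log offset. The ZooKeeper path layout, the DrainCheck class,
and the method names here are assumptions for illustration based on the 0.7
offset layout and javaapi, not the shipped tool itself.

import org.apache.zookeeper.ZooKeeper;
import kafka.javaapi.consumer.SimpleConsumer;

// Hypothetical drain check for one group/topic/partition on the broker being
// decommissioned. Paths are illustrative, based on the 0.7 ZooKeeper layout:
// /consumers/<group>/offsets/<topic>/<brokerId>-<partition>
public class DrainCheck {

    public static boolean isDrained(ZooKeeper zk, String group, String topic,
                                    int brokerId, int partition,
                                    String brokerHost, int brokerPort)
            throws Exception {
        // Offset committed by the 0.7 high-level consumer for this partition.
        String offsetPath = "/consumers/" + group + "/offsets/" + topic
                            + "/" + brokerId + "-" + partition;
        long consumed =
            Long.parseLong(new String(zk.getData(offsetPath, false, null)));

        // Latest offset in the broker's log for this partition; -1L means
        // "latest" in the 0.7 offset API.
        SimpleConsumer consumer =
            new SimpleConsumer(brokerHost, brokerPort, 10000, 64 * 1024);
        try {
            long[] offsets = consumer.getOffsetsBefore(topic, partition, -1L, 1);
            long logEnd = offsets.length > 0 ? offsets[0] : 0L;
            // Drained (for this group/partition) once the consumer has
            // caught up with the end of the broker's log.
            return consumed >= logEnd;
        } finally {
            consumer.close();
        }
    }
}

You would repeat this check for every topic/partition hosted on the broker
and every consumer group, and only shut the broker down once all of them
report drained.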

This will be much easier in 0.8 because of replication.

Thanks,

Jun

On Sat, Jan 5, 2013 at 11:34 PM, Bae, Jae Hyeon <[EMAIL PROTECTED]> wrote:

> Hi
>
> I'd like to terminate a Kafka broker gracefully. Before termination, it
> should stop receiving traffic from producers and wait until all data has
> been consumed.
>
> I don't think Kafka 0.7.x supports this feature. If I want to implement
> it myself, could you give me a brief sketch of the implementation?
>
> Thank you
> Best, Jae
>
