Kafka, mail # user - Re: Graceful termination of kafka broker after draining all the data consumed - 2013-01-06, 16:52
Re: Graceful termination of kafka broker after draining all the data consumed
In 0.7, one way to do this is to use a vip. All producers send data to the
vip. To decommission a broker, you first take the broker out of the vip so
no new data will be produced to it. Then you let the consumers drain the
data (you can use ConsumerOffsetChecker to check whether all data has been
consumed). Finally, you can shut down the broker.
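For reference, the drain check with ConsumerOffsetChecker looks roughly like this (the group name and ZooKeeper address below are placeholders; adjust for your setup):

```shell
# Report per-partition offsets and lag for a consumer group.
# The broker is safe to shut down once lag is 0 for every
# partition it hosts.
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zkconnect zk1:2181 \
  --group my-consumer-group
```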

This will be much easier in 0.8 because of replication.

Thanks,

Jun

On Sat, Jan 5, 2013 at 11:34 PM, Bae, Jae Hyeon <[EMAIL PROTECTED]> wrote:
 