In 0.8, if you enable replication, it may not matter too much if a broker takes a long time to start up, since data can still be served from the replicas. It may be possible to improve this by maintaining a flush checkpoint file on disk; we could then use that information to reduce the amount of data that needs to be recovered.
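To illustrate the idea, here is a minimal, hypothetical sketch of such a flush checkpoint file (the file format, class, and method names are assumptions, not Kafka's actual implementation): the broker records the last flushed offset per partition after each flush, and on restart only the data past the checkpointed offset needs to be scanned and recovered.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.HashMap;
import java.util.Map;

// Hypothetical flush checkpoint: one "topic partition offset" line per
// partition, rewritten atomically after each flush. On unclean shutdown,
// recovery only has to scan log data beyond the checkpointed offsets
// instead of replaying every log from the beginning.
public class FlushCheckpoint {
    private final Path file;

    public FlushCheckpoint(Path file) {
        this.file = file;
    }

    // Persist the latest flushed offsets; write to a temp file and rename
    // so a crash mid-write never leaves a torn checkpoint behind.
    public void write(Map<String, Long> flushedOffsets) throws IOException {
        Path tmp = file.resolveSibling(file.getFileName() + ".tmp");
        try (BufferedWriter w = Files.newBufferedWriter(tmp)) {
            for (Map.Entry<String, Long> e : flushedOffsets.entrySet()) {
                w.write(e.getKey() + " " + e.getValue());
                w.newLine();
            }
        }
        Files.move(tmp, file,
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }

    // Read the checkpoint back on startup; an absent file means we have no
    // flush information and must fall back to full recovery.
    public Map<String, Long> read() throws IOException {
        Map<String, Long> offsets = new HashMap<>();
        if (!Files.exists(file)) {
            return offsets;
        }
        for (String line : Files.readAllLines(file)) {
            String[] parts = line.split(" ");
            // Key is "topic partition", value is the last flushed offset.
            offsets.put(parts[0] + " " + parts[1], Long.parseLong(parts[2]));
        }
        return offsets;
    }
}
```

The atomic rename matters: if the broker dies while writing the checkpoint, the old (still valid, merely stale) checkpoint survives, so recovery is at worst a little slower, never incorrect.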
Jun

On Mon, May 6, 2013 at 3:07 PM, Jason Rosenberg <[EMAIL PROTECTED]> wrote:
Recently, we had an issue where our Kafka brokers were shut down hard (and so did not write out the clean shutdown file). Thus, on restart, they went through all the logs and ran recovery on them.
Unfortunately, this took a long time (on the order of 30 minutes). We have a lot of topics (~1000 or so). Is there any way this can be done more quickly, say in parallel?
Also, could it be done as a background process, so that the server can start up and begin receiving messages, logs for incoming topics are prioritized in the recovery process, and messages can perhaps still be buffered in memory while log recovery is happening?
It seems onerous to block all activity for 30 minutes while a slow, serial recovery job runs.
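Since each topic log recovers independently, the parallel variant Jason asks about could, in principle, look like this hypothetical sketch (recoverLog is a stand-in for the per-log scan-and-truncate work; the class and method names are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of recovering many topic logs in parallel instead of
// serially: each log directory is an independent recovery task, so a fixed
// thread pool sized to the disk/CPU budget cuts wall-clock recovery time
// roughly by the parallelism factor.
public class ParallelRecovery {
    public static int recoverAll(List<String> logDirs, int threads)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Callable<String>> tasks = new ArrayList<>();
        for (String dir : logDirs) {
            tasks.add(() -> {
                recoverLog(dir);
                return dir;
            });
        }
        // invokeAll blocks until every recovery task has completed.
        List<Future<String>> done = pool.invokeAll(tasks);
        pool.shutdown();
        return done.size();
    }

    // Placeholder for real recovery work: verify checksums and truncate
    // any partial trailing entries in this log's segments.
    static void recoverLog(String dir) {
        System.out.println("recovered " + dir);
    }
}
```

Running the per-log work in the background, as the question suggests, would additionally require serving only already-recovered partitions until the pool drains, which is a larger change than the thread pool itself.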
Jason Rosenberg 2013-05-06, 22:07