I've checked it; it's per partition. What I'm talking about is more of a global log size limit: if I have only 200 GB, I want to set a global limit on total log size, not a per-partition one, so I won't have to change it later if I add topics or partitions.
On Tue, Nov 5, 2013 at 8:37 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
So, I've hit this issue with Kafka: when the disk is full, Kafka stops with this exception:

FATAL [KafkaApi-1] Halting due to unrecoverable I/O error while handling produce request: (kafka.server.KafkaApis)
kafka.common.KafkaStorageException: I/O exception in append to log 'perf1-2'
I think it would be useful if we could put an overall limit on total log size, so the disk doesn't fill up. Also, what is the recovery strategy in this case? Is it possible to recover from this state, or do I have to delete all the data?
On Tue, Nov 5, 2013 at 9:11 PM, Kane Kane <[EMAIL PROTECTED]> wrote:
We don't have this global setting right now, so the guideline is to capacity-plan: set your log retention settings to fit the disk you have, and set alerts on disk space accordingly.
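Since retention is enforced per partition, the capacity planning above amounts to dividing your disk budget by the total partition count. A minimal sketch of that arithmetic (the helper name and the 15% headroom figure are illustrative assumptions, not anything from Kafka itself; headroom matters because retention is checked periodically and active segments aren't deleted, so actual usage can overshoot the configured limit):

```python
# Hypothetical sizing helper: Kafka's log.retention.bytes applies per
# partition, so a global disk budget must be divided across partitions
# manually before writing it into the broker config.
def retention_bytes_per_partition(disk_bytes, total_partitions, headroom=0.15):
    """Return a per-partition log.retention.bytes value that keeps total
    log usage under disk_bytes, reserving headroom for open segments,
    index files, and the delay before retention kicks in."""
    usable = int(disk_bytes * (1 - headroom))
    return usable // total_partitions

# Example: 200 GB disk, 50 partitions across all topics, 15% headroom.
limit = retention_bytes_per_partition(200 * 10**9, 50)
print(limit)  # 3400000000, i.e. ~3.4 GB per partition
```

The catch the original poster is pointing out still stands: if you later add topics or partitions, this value has to be recalculated and the config updated by hand.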
To recover, you should be able to change your retention settings, but the log cleaner is not invoked on startup, so that won't work currently (https://issues.apache.org/jira/browse/KAFKA-1063).
On Wed, Nov 06, 2013 at 05:04:25PM -0800, Kane Kane wrote: