So, I've hit this issue with Kafka: when the disk is full, Kafka halts
with the following exception:
FATAL [KafkaApi-1] Halting due to unrecoverable I/O error while
handling produce request:  (kafka.server.KafkaApis)
kafka.common.KafkaStorageException: I/O exception in append to log 'perf1-2'

I think it would be useful if we could set an overall limit on total log
size, so the disk doesn't fill up.
Also, what is the recovery strategy in this case? Is it possible to
recover from this state, or do I have to delete all the data?
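For reference, the closest thing I've found so far are the per-topic retention settings in the broker config. A sketch of what I mean (note these are per-partition caps, not a true total-disk limit, which is why an overall limit would still help):

```properties
# server.properties (broker config) - retention-based size limits
# Delete old segments once a partition's log exceeds this size.
# Note: this caps each partition, so total usage is roughly
# log.retention.bytes * number_of_partitions per broker.
log.retention.bytes=1073741824

# Time-based retention still applies alongside the size limit;
# whichever threshold is hit first triggers deletion.
log.retention.hours=168

# Smaller segments let the retention check reclaim space sooner,
# since only closed segments are eligible for deletion.
log.segment.bytes=536870912
```

With many partitions on one broker, the per-partition cap can still overshoot the disk, so it's only a partial workaround.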


On Tue, Nov 5, 2013 at 9:11 PM, Kane Kane <[EMAIL PROTECTED]> wrote:
