So, I've hit this issue with Kafka - when the disk is full, the broker halts:
FATAL [KafkaApi-1] Halting due to unrecoverable I/O error while
handling produce request: (kafka.server.KafkaApis)
kafka.common.KafkaStorageException: I/O exception in append to log 'perf1-2'
I think it would be useful if we could put an overall limit on total log
size, so the disk doesn't get full.
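I realize I can sort of bound it today by doing the per-partition math myself,
something like this (just a sketch of the arithmetic - the numbers are made up
for a ~200GB disk and ~100 partition replicas on the broker):

    # server.properties - example values only, not a recommendation
    log.retention.bytes=1610612736   # ~1.5GB per partition
    log.segment.bytes=536870912      # 512MB segments
    # each partition can use up to its retention plus roughly one active
    # segment, so total is about 100 * (1.5GB + 0.5GB) = 200GB

but that calculation breaks as soon as I add topics/partitions, which is why a
broker-wide cap would help.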
Also, what is the recovery strategy in this case? Is it possible to
recover from this state, or do I have to delete all the data?
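Here is what I was thinking of trying, but I'd like to confirm it's safe.
Everything below is just a guess, assuming the segment files themselves aren't
corrupted; paths and values are only examples:

    # free enough space on the log volume for the broker to start, e.g.
    # remove non-Kafka files or temporarily move a partition directory
    df -h /data/kafka-logs                 # path is just an example
    # tighten retention in server.properties so the cleaner reclaims space
    log.retention.bytes=1073741824         # example value, per partition
    # restart the broker and let it run its normal log recovery on startup
    bin/kafka-server-start.sh config/server.properties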
On Tue, Nov 5, 2013 at 9:11 PM, Kane Kane <[EMAIL PROTECTED]> wrote:
> I've checked it, and it's per partition. What I'm talking about is more of a
> global log size limit: if I have only 200GB, I want to set a global
> limit on log size, not per partition, so I won't have to change it
> later if I add topics/partitions.
> On Tue, Nov 5, 2013 at 8:37 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
>> You are probably looking for log.retention.bytes. Refer to
>> On Tue, Nov 5, 2013 at 3:10 PM, Kane Kane <[EMAIL PROTECTED]> wrote:
>>> What would happen if the disk is full? Does it make sense to have an
>>> additional variable to set the maximum size for all logs combined?