We don't have this global setting right now, so the guideline is to
capacity-plan your log retention settings and set alerts on disk
space accordingly.
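As a sketch of what that capacity planning looks like: the property names below are the standard broker retention settings, but the values are purely illustrative and have to be sized for your own disk and partition count.

```properties
# server.properties (example values, not recommendations)
log.retention.hours=168         # delete segments older than 7 days
log.retention.bytes=1073741824  # cap each partition's log at ~1 GB
log.segment.bytes=536870912     # roll segments at 512 MB
```

Note that log.retention.bytes is enforced per partition, not per broker, which is what the rest of this thread is about.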
In order to recover, you should be able to change your retention
settings, but the log cleaner is not invoked on startup, so that won't
currently work (https://issues.apache.org/jira/browse/KAFKA-1063).
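To put numbers on the per-partition vs. global distinction discussed below: since log.retention.bytes applies to each partition log separately, a global disk budget has to be divided by the partition count by hand. A rough sizing sketch, where the 200 GB figure comes from the thread but the partition count and headroom are made-up assumptions:

```shell
# Hypothetical sizing: divide a 200 GB disk across 200 partition logs,
# keeping 20% headroom for in-flight segments, index files, and recovery.
DISK_BYTES=$((200 * 1024 * 1024 * 1024))
PARTITIONS=200
HEADROOM_PCT=80   # use at most 80% of the disk for log data
PER_PARTITION=$((DISK_BYTES * HEADROOM_PCT / 100 / PARTITIONS))
echo "log.retention.bytes=${PER_PARTITION}"
# prints log.retention.bytes=858993459 (~819 MB per partition)
```

This also illustrates the complaint in the thread: the per-partition figure has to be recomputed and rolled out whenever topics or partitions are added.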
On Wed, Nov 06, 2013 at 05:04:25PM -0800, Kane Kane wrote:
> So, I've hit this issue with Kafka - when the disk is full, Kafka
> stops with an exception:
> FATAL [KafkaApi-1] Halting due to unrecoverable I/O error while
> handling produce request: (kafka.server.KafkaApis)
> kafka.common.KafkaStorageException: I/O exception in append to log 'perf1-2'
> I think it would be useful if we could put an overall limit on total
> log size, so the disk doesn't get full.
> Also, what is the recovery strategy in this case? Is it possible to
> recover from this state, or do I have to delete all the data?
> On Tue, Nov 5, 2013 at 9:11 PM, Kane Kane <[EMAIL PROTECTED]> wrote:
> > I've checked it, and it's per partition. What I'm talking about is more
> > of a global log size limit: if I have only 200 GB, I want to set a global
> > limit on log size, not a per-partition one, so I won't have to change it
> > later if I add topics/partitions.
> > Thanks.
> > On Tue, Nov 5, 2013 at 8:37 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
> >> You are probably looking for log.retention.bytes. Refer to
> >> http://kafka.apache.org/documentation.html#brokerconfigs
> >> On Tue, Nov 5, 2013 at 3:10 PM, Kane Kane <[EMAIL PROTECTED]> wrote:
> >>> Hello,
> >> What would happen if the disk is full? Does it make sense to have an
> >> additional variable to set the maximum size for all logs combined?
> >>> Thanks.