Yes, Kafka keeps all log files open indefinitely. There is no inherent
reason this needs to be the case, though; it would be possible to LRU out
old file descriptors, close them if they are not accessed for a few
hours, and reopen them on the first access. We just haven't implemented
anything like that.
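To make the LRU idea concrete, here is a minimal sketch (not Kafka code; the class and constant names are made up for illustration) using an access-ordered LinkedHashMap that closes the least-recently-used channel once a cap is exceeded and transparently reopens it on the next access:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of an LRU file-descriptor cache. MAX_OPEN is set
// absurdly low here just to demonstrate eviction.
public class FdCache {
    private static final int MAX_OPEN = 2;

    // access-order=true makes get() move entries to the tail, so the
    // head is always the least-recently-used channel.
    private final Map<File, FileChannel> cache =
        new LinkedHashMap<File, FileChannel>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<File, FileChannel> eldest) {
                if (size() > MAX_OPEN) {
                    try { eldest.getValue().close(); } catch (IOException ignored) {}
                    return true; // drop the closed channel from the map
                }
                return false;
            }
        };

    // Cheap map hit while cached; reopen on first access after eviction.
    public FileChannel channelFor(File f) throws IOException {
        FileChannel ch = cache.get(f);
        if (ch == null || !ch.isOpen()) {
            ch = new RandomAccessFile(f, "r").getChannel();
            cache.put(f, ch);
        }
        return ch;
    }

    public static void main(String[] args) throws IOException {
        FdCache fds = new FdCache();
        File a = File.createTempFile("seg-a", ".log");
        File b = File.createTempFile("seg-b", ".log");
        File c = File.createTempFile("seg-c", ".log");
        FileChannel ca = fds.channelFor(a);
        fds.channelFor(b);
        fds.channelFor(c); // exceeds MAX_OPEN, evicts and closes a's channel
        System.out.println("a open after eviction: " + ca.isOpen());
        System.out.println("a reopened on access: " + fds.channelFor(a).isOpen());
        a.delete(); b.delete(); c.delete();
    }
}
```

A real implementation would need to handle concurrent access and in-flight reads on an evicted channel, which is part of why it hasn't been done yet.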

It would be good to understand this a little better. Does xfs pre-allocate
space for all open files? Perhaps just closing the file on log roll and
reopening it read-only would solve the issue? Is this at all related to the
use of sparse files for the indexes (i.e. RandomAccessFile.setLength(10MB)
when we create the index)? Does this affect other filesystems or just xfs?
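For context, the index pre-allocation mentioned above boils down to something like the following (a standalone demo, not the actual Kafka code path): setLength extends the file's logical size without writing any data, so on filesystems that support sparse files no blocks are allocated until pages are actually written.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Demonstrates sparse pre-allocation: the file reports a 10MB logical
// length, but a sparse-file-aware filesystem allocates no data blocks
// for the unwritten range.
public class SparseIndexDemo {
    public static void main(String[] args) throws IOException {
        File idx = File.createTempFile("segment", ".index");
        try (RandomAccessFile raf = new RandomAccessFile(idx, "rw")) {
            raf.setLength(10 * 1024 * 1024); // logical size: 10MB
        }
        System.out.println("logical length: " + idx.length());
        idx.delete();
    }
}
```

Whether the filesystem treats this as sparse (and what it does on top of that, e.g. xfs speculative preallocation on open files) is exactly the question being asked here.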

On Fri, Jul 26, 2013 at 12:42 AM, Jason Rosenberg <[EMAIL PROTECTED]> wrote: