What is the recommendation for this problem...

   When the HDFS sink has trouble closing the tmp file on HDFS, the tmp file
lingers around without being closed or renamed. If the HDFS sink is configured
to use compression, the data in this tmp file is not going to be recoverable,
since the compression stream is never finalized.

Does it make sense for Flume to keep retrying the close()
operation itself in case of failure?
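
For what it's worth, recent Flume versions appear to expose retry settings
for exactly this on the HDFS sink (hdfs.closeTries and hdfs.retryInterval);
a minimal sketch, assuming those parameters are available in your version
(the agent and sink names here are hypothetical):

```properties
# Hypothetical agent/sink names; hdfs.closeTries and hdfs.retryInterval
# are assumed to exist in the Flume version in use.
agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events
agent1.sinks.hdfsSink.hdfs.fileType = CompressedStream
agent1.sinks.hdfsSink.hdfs.codeC = gzip
# Retry the close/rename up to 5 times, waiting between attempts;
# a value of 0 is supposed to mean retry indefinitely.
agent1.sinks.hdfsSink.hdfs.closeTries = 5
agent1.sinks.hdfsSink.hdfs.retryInterval = 180
```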

