Hi Sam,

1. I am sorry, I didn't quite understand "how many methods could clean it correctly?" Could you rephrase your question?

Since this directory contains only temporary files, it should get
cleaned up after your jobs finish. If unnecessary data is still
present there, you can delete it. Make sure no jobs are running while you
clean this directory.
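The check-then-clean step above could be sketched like this. The paths and commands are assumptions for a typical Hadoop 1.x setup (hadoop.tmp.dir defaults to /tmp/hadoop-${USER}); verify them against your own configuration before deleting anything:

```shell
# Hedged sketch: only clean the temp directory when no jobs are running.

safe_to_clean() {
  # $1 holds the output of a job listing; non-empty means jobs are running.
  [ -z "$1" ]
}

# On a real cluster you would capture running jobs first, e.g.:
#   running=$(hadoop job -list 2>/dev/null | tail -n +2)
running=""   # simulated here: no jobs running

if safe_to_clean "$running"; then
  echo "temp dir is safe to clean"
  # rm -rf /tmp/hadoop-${USER}/*   # local temp files (path is an assumption)
else
  echo "jobs still running; skipping cleanup"
fi
```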

2. All the daemons use log4j with DailyRollingFileAppender, which does not
have retention settings. You can change this behavior by configuring an
appender of your choice in the *log4j.properties* files under the
*HADOOP_HOME/conf* directory.
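As a sketch, swapping in log4j's RollingFileAppender gives you size-based retention via MaxBackupIndex. The appender name DRFA and the file settings below are assumptions based on Hadoop's stock *log4j.properties*; check the names in your own conf file first:

```properties
# Replace the daily appender with a size-bounded one so old logs are pruned.
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.MaxFileSize=256MB
log4j.appender.DRFA.MaxBackupIndex=10
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```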

3. You must never touch the contents of these 2 directories. This is the actual
HDFS *data + metadata*, which you don't want to lose.

You can find more details in the log files.


*Warm regards,*
*Mohammad Tariq*
*cloudfront.blogspot.com <http://cloudfront.blogspot.com>*
On Wed, May 7, 2014 at 9:10 AM, sam liu <[EMAIL PROTECTED]> wrote: