How to change logging from DRFA to RFA? Is it a good idea? (Hadoop user mailing list)


Leo Alekseyev 2010-09-27, 23:12
Re: How to change logging from DRFA to RFA? Is it a good idea?
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Roll at 1MB; keep at most 30 rolled files per daemon
log4j.appender.RFA.MaxFileSize=1MB
log4j.appender.RFA.MaxBackupIndex=30

# The appender also needs a layout or log4j will complain; this
# pattern matches the one in the stock Hadoop log4j.properties
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

hadoop.root.logger=INFO,RFA
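
For the daemons to actually pick this up, hadoop.root.logger has to
reach them at startup.  A minimal sketch, assuming a 0.20-era install
where the startup scripts honor the HADOOP_ROOT_LOGGER variable (some
versions hard-code INFO,DRFA in bin/hadoop-daemon.sh instead, in which
case edit that line):

# in conf/hadoop-env.sh
export HADOOP_ROOT_LOGGER="INFO,RFA"

Then restart the daemons so the new settings take effect.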
On 9/27/10 4:12 PM, "Leo Alekseyev" <[EMAIL PROTECTED]> wrote:

We are looking for ways to prevent Hadoop daemon logs from piling up
(over time they can reach several tens of GB and become a nuisance).
Unfortunately, the log4j DRFA (DailyRollingFileAppender) class doesn't
seem to provide an easy way to limit the number of files it creates.
I would like to try switching to RFA (RollingFileAppender) with
MaxFileSize and MaxBackupIndex set, since it looks like that will
solve the log accumulation problem, but I can't figure out how to
change the default logging class for the daemons.  Can anyone give me
some hints on how to do it?

Alternatively, please let me know if there's a better solution to
control log accumulation.

Thanks,
--Leo
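
For context on why RFA helps: DRFA rolls by date and has no
MaxBackupIndex, so the dated files accumulate indefinitely, while RFA
rolls by size and deletes the oldest backup once the index cap is
reached.  Roughly, with hypothetical file names:

# DRFA: one file per day, never deleted
hadoop-hadoop-namenode-host.log.2010-09-25
hadoop-hadoop-namenode-host.log.2010-09-26
...
# RFA with the config above: the live log plus at most 30 ~1MB backups
hadoop-hadoop-namenode-host.log
hadoop-hadoop-namenode-host.log.1
...
hadoop-hadoop-namenode-host.log.30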

Also in this thread:
Leo Alekseyev 2010-09-28, 21:13
Alex Kozlov 2010-09-28, 23:12
Steve Loughran 2010-09-29, 09:07
Leo Alekseyev 2010-09-29, 04:09