Hadoop >> mail # user >> How to change logging from DRFA to RFA? Is it a good idea?


Re: How to change logging from DRFA to RFA? Is it a good idea?
# In conf/log4j.properties, define a size-capped RollingFileAppender:
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

# Roll when the file reaches 1MB, keeping at most 30 rotated files, so each
# log is capped at roughly 31MB (the current file plus 30 backups):
log4j.appender.RFA.MaxFileSize=1MB
log4j.appender.RFA.MaxBackupIndex=30

# Route the root logger to RFA. Note that the daemons typically override this
# default via the HADOOP_ROOT_LOGGER variable (set to INFO,DRFA in the
# bin/hadoop-daemon.sh startup script), so that may need changing as well:
hadoop.root.logger=INFO,RFA
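The reason RFA solves the accumulation problem is the hard cap: disk usage per log file is bounded by MaxFileSize * (MaxBackupIndex + 1). Purely as an illustration of that rotation scheme (Hadoop itself uses log4j, not Python), the Python standard library's RotatingFileHandler implements the same maxBytes/backupCount behavior:

```python
# Sketch of size-capped rotation, analogous to log4j's RollingFileAppender:
# maxBytes plays the role of MaxFileSize, backupCount of MaxBackupIndex.
import glob
import logging
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, "daemon.log")

handler = logging.handlers.RotatingFileHandler(
    logfile, maxBytes=1024, backupCount=3)
logger = logging.getLogger("rotation-demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Write far more than (backupCount + 1) * maxBytes worth of messages;
# old data is discarded instead of accumulating without bound.
for i in range(1000):
    logger.info("message %d: some daemon chatter", i)
handler.close()

# Only backupCount + 1 files ever exist: daemon.log plus .1, .2, .3.
files = sorted(glob.glob(logfile + "*"))
print(len(files))
```

A DailyRollingFileAppender, by contrast, creates one file per rollover period forever, which is exactly the unbounded growth described below.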
On 9/27/10 4:12 PM, "Leo Alekseyev" <[EMAIL PROTECTED]> wrote:

We are looking for ways to prevent Hadoop daemon logs from piling up
(over time they can reach several tens of GB and become a nuisance).
Unfortunately, the log4j DRFA (DailyRollingFileAppender) class doesn't
seem to provide an easy way to limit the number of files it creates.  I
would like to try switching to RFA (RollingFileAppender) with MaxFileSize
and MaxBackupIndex set, since it looks like that would solve the log
accumulation problem, but I can't figure out how to change the default
logging class for the daemons.  Can anyone give me some hints on how to
do it?

Alternatively, please let me know if there's a better solution to
control log accumulation.

Thanks,
--Leo