You can find the job-specific logs in two places. The first is the job's
HDFS output directory; the second is under $HADOOP_HOME/logs/history
($HADOOP_HOME/logs/history/done). Both places hold the config file and
the job logs for each submitted job.
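For example, here is a minimal sketch that lists the per-job history
files from Java. It assumes the history files sit under a _logs/history
subdirectory of the job's output path (which, as far as I know, is the
default layout when hadoop.job.history.user.location isn't overridden)
and that the output path is passed as the first argument:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Lists the job history files (the job config XML plus the job
    // history log) that Hadoop leaves under <output dir>/_logs/history.
    public class ListJobHistory {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // args[0] is the job's HDFS output directory (assumed)
            Path history = new Path(args[0], "_logs/history");
            for (FileStatus st : fs.listStatus(history)) {
                System.out.println(st.getPath() + " (" + st.getLen() + " bytes)");
            }
        }
    }

The same check can of course be done from the shell with
hadoop fs -ls on that _logs/history path.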
On Sep 23, 2011, at 12:52 AM, Bejoy KS <[EMAIL PROTECTED]> wrote:
> Hi All
> I have a query on maintaining Hadoop map-reduce logs.
> By default the logs appear on the respective task tracker nodes, which
> you can easily drill down to from the job tracker web UI in the event
> of a failure (the approach I was following till now). Now I need to get
> to the next level and manage the logs corresponding to individual jobs.
> In my logs I'm dumping some key business parameters that could be used
> for business-level debugging/analysis in the future if required. For
> this purpose I need a central log file per job, not one file per task
> tracker, because as the cluster grows the number of log files
> corresponding to a job also grows. A single point of reference makes
> things handy for analysis by business folks.
> I think managing and archiving the logs of each job execution is a
> generic requirement of any enterprise application, so there are likely
> best practices and standards identified and maintained by the core
> enterprise Hadoop users. Could you please share some of the better
> options for managing Hadoop map-reduce logs? That would greatly help me
> choose the practice that suits my environment and application needs.
> Thank You