Very useful, thanks Binglin for sharing it!
On Tue, Oct 30, 2012 at 8:54 AM, Binglin Chang <[EMAIL PROTECTED]> wrote:
> I think you want to analyze hadoop job logs in jobtracker history folder?
> These logs are in a centralized folder, so you don't need tools like Flume
> or Scribe to gather them.
> I once wrote a simple Python script to parse those log files and generate
> CSV/JSON reports. Basically you can use it to get the execution time,
> counters, and status of jobs, tasks, and attempts, and you could modify it
> to meet your needs.
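
A minimal sketch of such a parser, assuming the Hadoop 1.x job history format where each line is an event type followed by KEY="value" pairs (the sample lines and attribute names below are illustrative; check the actual files in your jobtracker history folder):

```python
import re
from collections import defaultdict

# Illustrative sample in the assumed Hadoop 1.x history format: an event
# type followed by KEY="value" pairs, one event per line.
SAMPLE_LOG = '''\
Task TASKID="task_201210300001_0001_m_000000" TASK_TYPE="MAP" START_TIME="1000" .
Task TASKID="task_201210300001_0001_m_000000" TASK_TYPE="MAP" FINISH_TIME="5000" TASK_STATUS="SUCCESS" .
Task TASKID="task_201210300001_0001_r_000000" TASK_TYPE="REDUCE" START_TIME="2000" .
Task TASKID="task_201210300001_0001_r_000000" TASK_TYPE="REDUCE" FINISH_TIME="9000" TASK_STATUS="SUCCESS" .
'''

# Matches each KEY="value" attribute on a history line.
KV_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_history(text):
    """Merge the KEY="value" pairs of every line that names a TASKID,
    returning {taskid: {key: value}}."""
    tasks = defaultdict(dict)
    for line in text.splitlines():
        kvs = dict(KV_RE.findall(line))
        tid = kvs.get("TASKID")
        if tid:
            tasks[tid].update(kvs)
    return dict(tasks)

def task_runtimes_ms(tasks):
    """Runtime (finish - start, in ms) for tasks with both timestamps."""
    return {
        tid: int(kv["FINISH_TIME"]) - int(kv["START_TIME"])
        for tid, kv in tasks.items()
        if "START_TIME" in kv and "FINISH_TIME" in kv
    }

tasks = parse_history(SAMPLE_LOG)
print(task_runtimes_ms(tasks))
```

Feeding real history files through `parse_history` instead of `SAMPLE_LOG` would give you the per-task execution times mentioned above; counters and statuses end up in the same per-task dict.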
> On Tue, Oct 30, 2012 at 9:48 AM, bharath vissapragada <
> [EMAIL PROTECTED]> wrote:
>> Hi list,
>> Are there any tools for parsing and extracting data from Hadoop's job logs?
>> I want to do stuff like ..
>> 1. Getting run time of each map/reduce task
>> 2. Total map/reduce tasks ran on a particular node in that job and some
>> similar stuff
>> Any suggestions?
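
For question 2, a per-node tally can be sketched from the attempt-level history lines, assuming (as in Hadoop 1.x) that `MapAttempt`/`ReduceAttempt` events carry a `HOSTNAME` attribute; the attribute name and sample data here are assumptions, so verify them against your own logs:

```python
import re
from collections import Counter

# Illustrative attempt-level lines; real HOSTNAME values may include a
# rack prefix such as "/default-rack/node1".
SAMPLE_LOG = '''\
MapAttempt TASKID="task_201210300001_0001_m_000000" TASK_ATTEMPT_ID="attempt_201210300001_0001_m_000000_0" HOSTNAME="node1" .
MapAttempt TASKID="task_201210300001_0001_m_000001" TASK_ATTEMPT_ID="attempt_201210300001_0001_m_000001_0" HOSTNAME="node2" .
ReduceAttempt TASKID="task_201210300001_0001_r_000000" TASK_ATTEMPT_ID="attempt_201210300001_0001_r_000000_0" HOSTNAME="node1" .
'''

# Captures the attempt type (Map/Reduce) and the host it ran on.
HOST_RE = re.compile(r'(Map|Reduce)Attempt\b.*?HOSTNAME="([^"]*)"')

def tasks_per_node(text):
    """Count attempts per (hostname, task type) pair."""
    counts = Counter()
    for line in text.splitlines():
        m = HOST_RE.search(line)
        if m:
            counts[(m.group(2), m.group(1).lower())] += 1
    return counts

print(tasks_per_node(SAMPLE_LOG))
```

Counting attempts rather than tasks also surfaces speculative or retried executions, which is usually what you want when investigating a slow node.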