Re: Error while reading from task log url
Bejoy KS 2012-07-20, 20:40
To see the failed task logs in Hadoop, the easiest approach is
drilling down into the JobTracker web UI.
Go to the job URL (printed on your console at the start of the job, labeled Tracking URL).
Browse into the failed tasks.
Go to a failed attempt; there you'll see the actual stdout and stderr logs.
These logs should give you the root cause of why the task failed.
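For anyone who would rather script this than click through the UI: in Hadoop 1.x the same stdout/stderr logs are served by each TaskTracker's TaskLogServlet (port 50060 by default). A minimal sketch; the host and attempt ID below are placeholders, not values from this thread:

```python
def tasklog_url(tracker_host, attempt_id, port=50060):
    """Build the TaskLogServlet URL that serves a task attempt's
    stdout/stderr logs on a Hadoop 1.x TaskTracker."""
    return ("http://%s:%d/tasklog?attemptid=%s&all=true"
            % (tracker_host, port, attempt_id))

# Hypothetical host and attempt ID, for illustration only.
url = tasklog_url("tracker01.example.com",
                  "attempt_201207172005_14407_r_000000_0")
print(url)
# Fetching it would then be e.g. urllib.request.urlopen(url).read()
```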
Sent from handheld, please excuse typos.
From: Raihan Jamal <[EMAIL PROTECTED]>
Date: Fri, 20 Jul 2012 13:00:21
To: <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Error while reading from task log url
I tried opening the URL below, and nothing opened; I got "page cannot be
displayed". Why is that?
On Fri, Jul 20, 2012 at 12:39 PM, Sriram Krishnan <[EMAIL PROTECTED]> wrote:
> What version of Hadoop and Hive are you using? We have seen errors like
> this in the past – and you can actually replace taskid with attemptid to
> fetch your logs.
> So try this:
> But yes, that is not the reason the job failed – you actually have to
> look at the task logs to figure it out.
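The substitution Sriram describes (the URL he pasted did not survive in this archive) is just a rename of one query parameter: older Hive builds emitted `taskid=` where the servlet expects `attemptid=`. Sketched in Python with placeholder host and IDs:

```python
# Hypothetical URL of the shape Hive printed; host and IDs are placeholders.
broken = ("http://tracker01.example.com:50060/tasklog"
          "?taskid=attempt_201207172005_14407_r_000000_0&start=-8193")

# Rename the query parameter the servlet actually expects.
fixed = broken.replace("taskid=", "attemptid=", 1)
print(fixed)
```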
> From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Reply-To: <[EMAIL PROTECTED]>
> Date: Fri, 20 Jul 2012 14:28:48 -0500
> To: <[EMAIL PROTECTED]>
> Subject: Re: Error while reading from task log url
> First of all, this exception is not what is causing your job to fail. When
> a job fails, Hive attempts to automatically retrieve the task logs from the
> TaskTracker's TaskLogServlet. The exception indicates something wrong with
> your Hadoop setup; maybe the JobTracker is down?
> You can suppress this exception by doing:
> hive> SET hive.exec.show.job.failure.debug.info=false;
> Look into your task logs to see why your job actually failed.
> On Fri, Jul 20, 2012 at 2:12 PM, Raihan Jamal <[EMAIL PROTECTED]> wrote:
>> Whenever I run the query below:
>> SELECT buyer_id, item_id, ranknew(buyer_id, item_id), created_time
>> FROM (
>>   SELECT buyer_id, item_id, created_time
>>   FROM testingtable1
>>   DISTRIBUTE BY buyer_id, item_id
>>   SORT BY buyer_id, item_id, created_time DESC
>> ) a
>> WHERE ranknew(buyer_id, item_id) % 2 == 0;
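For context on the query: ranknew appears to be a user-defined rank UDF. The usual pattern is that DISTRIBUTE BY routes all rows of a (buyer_id, item_id) group to one reducer, SORT BY orders them within it, and the UDF keeps a counter that resets whenever the grouping key changes. A rough sketch of that counter logic in Python (an assumption about how ranknew behaves, not its actual code):

```python
def make_rank():
    """Stateful rank counter: restarts at 1 each time the grouping key
    changes, mimicking a rank-style Hive UDF evaluated on rows arriving
    in DISTRIBUTE BY / SORT BY order."""
    state = {"key": None, "rank": 0}
    def rank(*key):
        if key != state["key"]:
            state["key"], state["rank"] = key, 0
        state["rank"] += 1
        return state["rank"]
    return rank

rank = make_rank()
rows = [("b1", "i1"), ("b1", "i1"), ("b1", "i1"), ("b2", "i9")]
print([rank(b, i) for b, i in rows])  # ranks within each group: [1, 2, 3, 1]
```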
>> I always get the error below; I have no clue what this error
>> means. Is there a problem with my query, or something wrong with the
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks not specified. Estimated from input data size: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> Starting Job = job_201207172005_14407, Tracking URL = http://ares-jt.vip.ebay.com:50030/jobdetails.jsp?jobid=job_201207172005_14407
>> Kill Command = /home/hadoop/latest/bin/../bin/hadoop job -Dmapred.job.tracker=ares-jt:8021 -kill job_201207172005_14407
>> 2012-07-21 02:07:15,917 Stage-1 map = 0%, reduce = 0%
>> 2012-07-21 02:07:27,211 Stage-1 map = 100%, reduce = 0%
>> 2012-07-21 02:07:38,700 Stage-1 map = 100%, reduce = 33%
>> 2012-07-21 02:07:48,517 Stage-1 map = 100%, reduce = 0%
>> 2012-07-21 02:08:49,566 Stage-1 map = 100%, reduce = 0%
>> 2012-07-21 02:09:08,640 Stage-1 map = 100%, reduce = 100%
>> Ended Job = job_201207172005_14407 with errors
>> java.lang.RuntimeException: Error while reading from task log url
>>   at
>>   at