Hive user mailing list: Error while reading from task log url


Re: Error while reading from task log url
Yup, thanks, it worked.

Raihan Jamal

On Fri, Jul 20, 2012 at 1:40 PM, Bejoy KS <[EMAIL PROTECTED]> wrote:

> Raihan
>
> To see the failed task logs in Hadoop, the easiest approach is
> drilling down into the JobTracker web UI.
>
> Go to the job URL (which you'll see at the beginning of a job on your
> console, labeled Tracking URL):
>
> http://ares-jt.vip.ebay.com:50030/jobdetails.jsp?jobid=job_201207172005_14407
>
> Browse into the failed tasks.
> Go to a failed attempt; there you'll see the actual stdout and stderr
> logs.
>
> These logs will give you the root cause of why a task failed.
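
(As a rough command-line sketch, you can also confirm the job's state from the console before drilling into the web UI; the JobTracker address and job id below are copied from the kill command and console output quoted later in this thread, so substitute your own:)

  # print the job's current status and counters for this job id
  $ hadoop job -Dmapred.job.tracker=ares-jt:8021 -status job_201207172005_14407
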
> Regards
> Bejoy KS
>
> Sent from handheld, please excuse typos.
> ------------------------------
> From: Raihan Jamal <[EMAIL PROTECTED]>
> Date: Fri, 20 Jul 2012 13:00:21 -0700
> To: <[EMAIL PROTECTED]>
> Reply-To: [EMAIL PROTECTED]
> Subject: Re: Error while reading from task log url
>
> I tried opening the URL below, but nothing opened; I just got "Page cannot
> be displayed." Why is that?
>
>
>
> Raihan Jamal
>
>
>
> On Fri, Jul 20, 2012 at 12:39 PM, Sriram Krishnan <[EMAIL PROTECTED]> wrote:
>
>>  What version of Hadoop and Hive are you using? We have seen errors like
>> this in the past – and you can actually replace taskid with attemptid to
>> fetch your logs.
>>
>>  So try this:
>> http://lvsaishdc3dn0857.lvs.ebay.com:50060/tasklog?attemptid=attempt_201207172005_14407_r_000000_1&all=true
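
(As a rough sketch, the same log can also be pulled from the command line, assuming the machine you run this on can reach that TaskTracker host and port; the URL is exactly the one suggested above:)

  # fetch the full log for the failed reduce attempt
  $ curl "http://lvsaishdc3dn0857.lvs.ebay.com:50060/tasklog?attemptid=attempt_201207172005_14407_r_000000_1&all=true"
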
>>
>>
>>  But yes, that is not the reason the job failed – you actually have to
>> look at the task logs to figure it out.
>>
>>  Sriram
>>
>>   From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
>> Reply-To: <[EMAIL PROTECTED]>
>> Date: Fri, 20 Jul 2012 14:28:48 -0500
>> To: <[EMAIL PROTECTED]>
>> Subject: Re: Error while reading from task log url
>>
>>  First of all, this exception is not what is causing your job to fail. When
>> a job fails, Hive attempts to automatically retrieve the task logs from the
>> JobTracker's TaskLogServlet. This indicates something is wrong with your
>> Hadoop setup; is the JobTracker down, maybe?
>>
>>  You can suppress this exception by doing:
>>
>>  hive> SET hive.exec.show.job.failure.debug.info=false;
>>
>>  Look into your task logs to see why your job actually failed.
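
(A minimal sketch of applying the same setting for a whole non-interactive run; --hiveconf sets the property for that session only, and my_query.hql is just a placeholder name for a file containing the failing query:)

  # run the failing query with the debug-info fetch suppressed for this session
  $ hive --hiveconf hive.exec.show.job.failure.debug.info=false -f my_query.hql
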
>>
>> On Fri, Jul 20, 2012 at 2:12 PM, Raihan Jamal <[EMAIL PROTECTED]> wrote:
>>
>>> Whenever I run the query below:
>>>
>>> SELECT buyer_id, item_id, ranknew(buyer_id, item_id), created_time
>>> FROM (
>>>     SELECT buyer_id, item_id, created_time
>>>     FROM testingtable1
>>>     DISTRIBUTE BY buyer_id, item_id
>>>     SORT BY buyer_id, item_id, created_time DESC
>>> ) a
>>> WHERE ranknew(buyer_id, item_id) % 2 == 0;
>>>
>>> I always get the error below, and I have no clue what this error
>>> means. Is there a problem with my query, or is something wrong with the
>>> system?
>>>
>>> Total MapReduce jobs = 1
>>> Launching Job 1 out of 1
>>> Number of reduce tasks not specified. Estimated from input data size: 1
>>> In order to change the average load for a reducer (in bytes):
>>>   set hive.exec.reducers.bytes.per.reducer=<number>
>>> In order to limit the maximum number of reducers:
>>>   set hive.exec.reducers.max=<number>
>>> In order to set a constant number of reducers:
>>>   set mapred.reduce.tasks=<number>
>>> Starting Job = job_201207172005_14407, Tracking URL = http://ares-jt.vip.ebay.com:50030/jobdetails.jsp?jobid=job_201207172005_14407
>>> Kill Command = /home/hadoop/latest/bin/../bin/hadoop job -Dmapred.job.tracker=ares-jt:8021 -kill job_201207172005_14407
>>> 2012-07-21 02:07:15,917 Stage-1 map = 0%,  reduce = 0%
>>> 2012-07-21 02:07:27,211 Stage-1 map = 100%,  reduce = 0%
>>> 2012-07-21 02:07:38,700 Stage-1 map = 100%,  reduce = 33%
>>> 2012-07-21 02:07:48,517 Stage-1 map = 100%,  reduce = 0%
>>> 2012-07-21 02:08:49,566 Stage-1 map = 100%,  reduce = 0%
>>> 2012-07-21 02:09:08,640 Stage-1 map = 100%,  reduce = 100%