Re: Error while reading from task log url
You are running into: https://issues.apache.org/jira/browse/HIVE-1579

I've been meaning to submit a patch for this. I emailed the dev list
about it but got no reply...

Hive is crashing because it can't pull the debug logs for the failed
task: it builds the task log URL with a taskid parameter, but the value
it passes is actually an attempt ID, so the TaskTracker rejects the
request with HTTP 400. If you change taskid to attemptid in the URL,
you'll get the error logs you need to debug the root cause.
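
For example, taking the URL from the stack trace below, changing

http://localhost:50060/tasklog?taskid=attempt_201203231956_0010_m_000000_3&start=-8193

to

http://localhost:50060/tasklog?attemptid=attempt_201203231956_0010_m_000000_3&start=-8193

should get you the log instead of the HTTP 400. If you'd rather fetch it
from code, something like this (a rough, untested sketch using plain
java.net, with the host, port and attempt ID copied from your trace -
adjust for your cluster) should work:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FetchTaskLog {
    public static void main(String[] args) throws Exception {
        // HIVE-1579 workaround: the TaskTracker's /tasklog servlet expects
        // "attemptid", not "taskid"; the value below is the attempt ID
        // taken from the stack trace in this thread.
        URL url = new URL("http://localhost:50060/tasklog"
                + "?attemptid=attempt_201203231956_0010_m_000000_3&start=-8193");
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}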

I'll try and submit a patch at some point.

Phil.

On 29 March 2012 17:36, Stephen Boesch <[EMAIL PROTECTED]> wrote:
> When I go to that url here is the result:
>
> HTTP ERROR 400
>
> Problem accessing /tasklog. Reason:
>
>     Argument attemptid is required
>
> ________________________________
> Powered by Jetty://
>
>
>
> 2012/3/29 Stephen Boesch <[EMAIL PROTECTED]>
>>
>> Hi,
>>   I am able to run certain Hive commands, e.g. create table and select,
>> but not others. Also, my Hadoop pseudo-distributed cluster is working
>> fine - I can run the examples.
>>
>> Examples of commands that fail:
>>
>> insert overwrite table demographics select * from demographics_local;
>> Control-C (killing a task ends up with the same "Error while reading
>> from task log url" error)
>>
>>
>> Hadoop job information for Stage-0: number of mappers: 1; number of
>> reducers: 0
>> 2012-03-29 08:05:40,699 Stage-0 map = 0%,  reduce = 0%
>> 2012-03-29 08:06:10,868 Stage-0 map = 100%,  reduce = 100%
>> Ended Job = job_201203231956_0010 with errors
>> Error during job, obtaining debugging information...
>> Examining task ID: task_201203231956_0010_m_000002 (and more) from job
>> job_201203231956_0010
>> Exception in thread "Thread-160" java.lang.RuntimeException: Error while
>> reading from task log url
>> at
>> org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:130)
>> at
>> org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:211)
>> at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:81)
>> at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.io.IOException: Server returned HTTP response code: 400
>> for URL:
>> http://localhost:50060/tasklog?taskid=attempt_201203231956_0010_m_000000_3&start=-8193
>> at
>> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
>> at java.net.URL.openStream(URL.java:1010)
>> at
>> org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:120)
>> ... 3 more
>> FAILED: Execution Error, return code 2 from
>> org.apache.hadoop.hive.ql.exec.MapRedTask
>>
>>
>>
>> I am running hive-0.8.1 against hadoop-1.0.0
>
>