Hive >> mail # user >> Re: Configure Hive in Cluster


venkatramanan 2013-01-17, 06:54
Nitin Pawar 2013-01-17, 06:59
nagarjuna kanamarlapudi 2013-01-17, 07:02
venkatramanan 2013-01-17, 07:17
Nitin Pawar 2013-01-17, 07:26
venkatramanan 2013-01-17, 11:53
venkatramanan 2013-01-17, 06:42
Nitin Pawar 2013-01-23, 07:37
venkatramanan 2013-01-23, 07:58
Re: Configure Hive in Cluster
This is the error from the Hadoop job log:

2013-01-23 12:15:44,884 INFO org.apache.hadoop.mapred.ReduceTask:
Failed to fetch map-output from attempt_201301231151_0002_m_000001_0
even after MAX_FETCH_RETRIES_PER_MAP retries...  or it is a read
error,  reporting to the JobTracker
2013-01-23 12:15:44,885 FATAL org.apache.hadoop.mapred.ReduceTask:
Shuffle failed with too many fetch failures and insufficient
progress!Killing task attempt_201301231151_0002_r_000000_0.

2013-01-23 12:15:45,220 FATAL org.apache.hadoop.mapred.Task: Failed to
contact the tasktracker
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
JvmValidate Failed. Ignoring request from task:
attempt_201301231151_0002_r_000000_0, with JvmId:
jvm_201301231151_0002_r_1079250852
So something is wrong: either your network went down or a node went down.

Hive tries to fetch the task log from the host (savitha-VirtualBox)
and cannot figure out what that host is.
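One quick way to confirm this diagnosis is to check whether each node's hostname actually resolves from the machine running the Hive CLI. A minimal sketch, assuming a placeholder list of cluster hostnames (replace them with the hosts from your own masters/slaves files; "savitha-VirtualBox" is the name the stack trace below fails to resolve):

```python
import socket

# Placeholder hostnames for illustration; substitute your cluster's
# actual node names. "savitha-VirtualBox" is the host that the
# UnknownHostException in the Hive CLI trace points at.
hosts = ["localhost", "savitha-VirtualBox"]

def resolvable(host):
    """Return True if the hostname resolves to an IP address."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

for h in hosts:
    status = "ok" if resolvable(h) else "UNRESOLVED (add it to /etc/hosts or DNS)"
    print(h, "->", status)
```

Every node, and the client box running Hive, needs a consistent /etc/hosts entry or DNS record for every other node; otherwise the JobTracker hands back hostnames the client cannot reach, which is exactly the UnknownHostException shown in the trace.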

On Wed, Jan 23, 2013 at 1:28 PM, venkatramanan <[EMAIL PROTECTED]
> wrote:

>  No, all the nodes are up and running. I don't know; I guess the error
> comes when Hive takes the other node's "HOST NAME".
>
> Correct me if I'm wrong.
>
>
> On Wednesday 23 January 2013 01:07 PM, Nitin Pawar wrote:
>
> When you ran the query, did the VM shut down?
>
>
> On Wed, Jan 23, 2013 at 12:57 PM, venkatramanan <
> [EMAIL PROTECTED]> wrote:
>
>>  Hi,
>>
>> I got the following error while executing "select count(1) from
>> tweettrend;"
>>
>> Below are the exact log messages from the JobTracker web interface.
>>
>> *Hive CLI Error:*
>>
>> Exception in thread "Thread-21" java.lang.RuntimeException: Error while
>> reading from task log url
>>     at
>> org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:240)
>>     at
>> org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:227)
>>     at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:92)
>>     at java.lang.Thread.run(Thread.java:722)
>> Caused by: java.net.UnknownHostException: savitha-VirtualBox
>>     at
>> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
>>     at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
>>     at java.net.Socket.connect(Socket.java:579)
>>     at java.net.Socket.connect(Socket.java:528)
>>     at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
>>     at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
>>     at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
>>     at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)
>>     at sun.net.www.http.HttpClient.New(HttpClient.java:290)
>>     at sun.net.www.http.HttpClient.New(HttpClient.java:306)
>>     at
>> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995)
>>     at
>> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
>>     at
>> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
>>     at
>> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1299)
>>     at java.net.URL.openStream(URL.java:1037)
>>     at
>> org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:192)
>>     ... 3 more
>> FAILED: Execution Error, return code 2 from
>> org.apache.hadoop.hive.ql.exec.MapRedTask
>> MapReduce Jobs Launched:
>> Job 0: Map: 2  Reduce: 1   Cumulative CPU: 9.0 sec   HDFS Read: 408671053
>> HDFS Write: 0 FAIL
>> Total MapReduce CPU Time Spent: 9 seconds 0 msec
>>
>> *syslog logs*
>>
>> utCopier.copyOutput(ReduceTask.java:1394)
>> at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:1326)
>>
>> 2013-01-23 12:15:44,884 INFO org.apache.hadoop.mapred.ReduceTask: Task attempt_201301231151_0002_r_000000_0: Failed fetch #10 from attempt_201301231151_0002_m_000001_0
>> 2013-01-23 12:15:44,884 INFO org.apache.hadoop.mapred.ReduceTask: Failed to fetch map-output from attempt_201301231151_0002_m_000001_0 even after MAX_FETCH_RETRIES_PER_MAP retries...  or it is a read error,  reporting to the JobTracker
Nitin Pawar