Hive, mail # user - hi all

Re: hi all
Nitin Pawar 2012-07-06, 12:57
Can you tell us:
1) how many nodes are there in the cluster?
2) are there any connectivity problems when the number of nodes > 3?
3) if you have just one slave, is the replication factor set higher than 1?
4) what compression are you using for the tables?
5) if you are on a DHCP-based network, did your slave machines change their IPs?
(Quick ways to check most of these are sketched below.)
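
A few commands that can answer most of these on a Hadoop 1.x install (the
warehouse path below assumes the default hive.metastore.warehouse.dir, and
the table name is taken from this thread):

  # live/dead datanodes and per-node capacity (questions 1 and 2)
  hadoop dfsadmin -report

  # replication factor of the table's files, shown in the second column
  # of the listing (question 3)
  hadoop fs -ls /user/hive/warehouse/vender_sample

  # compression settings currently in effect (question 4)
  hive -e 'set hive.exec.compress.output; set hive.exec.compress.intermediate;'

  # hostname-to-IP mappings the daemons resolve against (question 5)
  cat /etc/hosts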

Thanks,
Nitin

On Fri, Jul 6, 2012 at 6:17 PM, shaik ahamed <[EMAIL PROTECTED]> wrote:

> Hi,
>
>     Below is the error I found in the JobTracker log file:
>
>
> Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out
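>
> (This error typically means the reduce task repeatedly failed to fetch map
> output from the node that produced it, and gave up. On small clusters the
> usual culprit is hostname resolution: every node must be able to reach
> every other node by the hostname it registered with. A minimal /etc/hosts
> sketch, using the master host named in this thread plus a hypothetical
> slave name and addresses, would look like:
>
>   192.168.1.10  md-trngpoc1     # master: NameNode / JobTracker
>   192.168.1.11  md-trngslave1   # hypothetical slave: DataNode / TaskTracker
>
> and should be identical on every node, with no 127.0.1.1 entry mapped to
> the machine's own hostname.)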
>
> Please help me with this ...
>
> Thanks in advance,
>
> Shaik.
>
>
> On Fri, Jul 6, 2012 at 5:22 PM, Bejoy KS <[EMAIL PROTECTED]> wrote:
>
>> Hi Shaik
>>
>> Some error is occurring while the MR job runs. To get to the root cause,
>> please post the error log from the failed task.
>>
>> You can browse the JobTracker web UI, choose the right job ID, and
>> drill down to the failed tasks to get the error logs.
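>> (On a stock Hadoop 1.x install the JobTracker UI typically listens on
>> port 50030, so for the master host named in this thread that would be
>> http://md-trngpoc1:50030/jobtracker.jsp. From there, click the job ID,
>> then the failed reduce task, then the "All" link under Task Logs.)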
>> Regards
>> Bejoy KS
>>
>> Sent from handheld, please excuse typos.
>> ------------------------------
>> From: shaik ahamed <[EMAIL PROTECTED]>
>> Date: Fri, 6 Jul 2012 17:09:26 +0530
>> To: <[EMAIL PROTECTED]>
>> Reply-To: [EMAIL PROTECTED]
>> Subject: hi all
>>
>> Hi users,
>>
>> As I'm selecting the distinct supplier column from the vender Hive
>> table, I'm getting the below error. Please help me with this.
>>
>> hive> select distinct supplier from vender_sample;
>>
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks not specified. Estimated from input data size: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> Kill Command = /usr/local/hadoop/bin/../bin/hadoop job
>> -Dmapred.job.tracker=md-trngpoc1:54311 -kill job_201207061535_0005
>> Hadoop job information for Stage-1: number of mappers: 1; number of
>> reducers: 1
>> 2012-07-06 17:03:13,978 Stage-1 map = 0%,  reduce = 0%
>> 2012-07-06 17:03:20,001 Stage-1 map = 100%,  reduce = 0%
>> 2012-07-06 17:04:20,248 Stage-1 map = 100%,  reduce = 0%
>> 2012-07-06 17:04:23,262 Stage-1 map = 100%,  reduce = 100%
>> Ended Job = job_201207061535_0005 with errors
>> Error during job, obtaining debugging information...
>> Examining task ID: task_201207061535_0005_m_000002 (and more) from job
>> job_201207061535_0005
>>
>> Task with the most failures(4):
>> -----
>> Task ID:
>>   task_201207061535_0005_r_000000
>> FAILED: Execution Error, return code 2 from
>> org.apache.hadoop.hive.ql.exec.MapRedTask
>> MapReduce Jobs Launched:
>> Job 0: Map: 1  Reduce: 1   HDFS Read: 99143041 HDFS Write: 0 FAIL
>> Total MapReduce CPU Time Spent: 0 msec
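>>
>> (Two details in this output are worth noting: Hive estimated a single
>> reducer because the 99143041 bytes read fall below the default
>> hive.exec.reducers.bytes.per.reducer of roughly 1 GB, and the map phase
>> completed while the reduce made no progress for a minute and wrote
>> nothing to HDFS, which points at the shuffle rather than at the data.)
>>
>> A minimal sketch for pulling the failed reducer's log straight off a
>> slave node, assuming the /usr/local/hadoop install path from the kill
>> command above and the default log directory (the exact userlogs layout
>> varies by Hadoop version):
>>
>>   find /usr/local/hadoop/logs/userlogs -path '*_r_000000_*' -name syslog \
>>     -exec tail -n 50 {} \;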
>>
>> Regards
>> shaik.
>>
>
>
--
Nitin Pawar