Re: hive mapred problem
Nitin Pawar 2012-10-09, 09:15
I did not get the question about distributed mode for Hadoop and Hive. Can
you explain what exactly you want to achieve?

Thanks,
Nitin

On Tue, Oct 9, 2012 at 2:42 PM, Ajit Kumar Shreevastava
<[EMAIL PROTECTED]> wrote:
> Hi Nitin,
>
>
>
> Thanks for your reply...
>
>
>
> Now my query is running, but the output looks like this:
>
> hive> select count(1) from pokes;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201210091435_0001, Tracking URL = http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210091435_0001
> Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill job_201210091435_0001
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 2
> 2012-10-09 14:37:14,587 Stage-1 map = 0%,  reduce = 0%
> 2012-10-09 14:37:20,609 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:21,613 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:22,620 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:23,625 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:24,630 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:25,634 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:26,638 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:27,642 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:28,650 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:29,654 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:30,658 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:31,662 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:32,667 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 1.66 sec
> 2012-10-09 14:37:33,672 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 1.66 sec
> 2012-10-09 14:37:34,678 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:35,682 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:36,686 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:37,690 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:38,694 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:39,698 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:40,702 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> MapReduce Total cumulative CPU time: 3 seconds 0 msec
> Ended Job = job_201210091435_0001
> MapReduce Jobs Launched:
> Job 0: Map: 1  Reduce: 2   Cumulative CPU: 3.0 sec   HDFS Read: 6034 HDFS Write: 6 SUCCESS
> Total MapReduce CPU Time Spent: 3 seconds 0 msec
> OK
> 500
> 0
> Time taken: 35.161 seconds
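
The two result rows above (500 and 0) line up with the two reducers reported
for Stage-1, even though Hive determined one reduce task at compile time; a
global count normally comes back as a single row from one reducer. A minimal
sketch of checking and clearing the override in the Hive session before
re-running, assuming mapred.reduce.tasks has been pinned to 2 somewhere in
the configuration (-1 tells Hive to fall back to its own estimate):

  hive> set mapred.reduce.tasks;
  hive> set mapred.reduce.tasks=-1;
  hive> select count(1) from pokes;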
>
>
>
> Can you do me a favor? I would like a configuration file template for
> distributed mode, for both Hadoop and Hive.
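
There is no single canonical template, but a minimal sketch of the files
usually edited for a distributed Hadoop 1.0.3 plus Hive setup is below;
namenode-host and jobtracker-host are placeholder hostnames, not values
taken from this thread, and only the most commonly changed properties are
shown:

  core-site.xml:
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://namenode-host:9000</value>
      </property>
    </configuration>

  mapred-site.xml:
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>jobtracker-host:9001</value>
      </property>
    </configuration>

  hive-site.xml:
    <configuration>
      <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
      </property>
    </configuration>

The full annotated templates ship under conf/ in the Hadoop and Hive
distributions, so those are a safer starting point than anything copied from
a mail thread.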
>
>
>
> Regards
>
> Ajit
>
>
>
>
>
> -----Original Message-----
> From: Nitin Pawar [mailto:[EMAIL PROTECTED]]
> Sent: Monday, October 08, 2012 5:52 PM
> To: [EMAIL PROTECTED]
> Subject: Re: hive mapred problem
>
>
>
> From the error, it looks like you have some incorrect Hive settings which
> are failing the job initialization.
>
>
>
> This is the error:
>
>> java.io.IOException: Number of maps in JobConf doesn't match number of
>> recieved splits for job job_201210051717_0015! numMapTasks=10
>
>
>
> Can you tell us if you are setting any Hive variables before firing up
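
One way to answer that question is to dump the effective settings from the
Hive CLI before the query runs: set <property>; prints a single value, and
set -v; prints everything, including the Hadoop-level properties. A short
sketch, using the stock property names rather than anything confirmed from
the failing session:

  hive> set mapred.map.tasks;
  hive> set mapred.reduce.tasks;
  hive> set -v;

If one of these is being forced in the session or in hive-site.xml, that
would be the first thing to compare against the numMapTasks=10 in the
exception.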

Nitin Pawar