Hive >> mail # user >> hive mapred problem


Thread:
  Ajit Kumar Shreevastava 2012-10-08, 10:43
  Nitin Pawar 2012-10-08, 10:49
  Ajit Kumar Shreevastava 2012-10-08, 12:00
  Nitin Pawar 2012-10-08, 12:21
  Ajit Kumar Shreevastava 2012-10-09, 09:12
  Nitin Pawar 2012-10-09, 09:15

RE: hive mapred problem
Hi Nitin,

Sorry, I actually meant fully distributed mode (Hadoop on multiple nodes).
I want configuration file templates for both Hadoop and Hive.
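
The thread as shown never posts these templates, so here is a minimal editor's sketch of what a fully distributed (multi-node) Hadoop 1.x setup typically looks like; the hostnames, ports, and replication value are placeholders, not taken from the thread:

```xml
<!-- conf/core-site.xml: where the HDFS namenode lives
     (namenode-host and the ports are placeholders) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml: where the JobTracker lives -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>namenode-host:9001</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml: replicate blocks across the cluster -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

Slave hostnames go one per line in conf/slaves. Hive itself has no separate "distributed mode": it picks up whatever Hadoop configuration HADOOP_HOME points at, so configuring Hadoop for the cluster is usually the whole job.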

-----Original Message-----
From: Nitin Pawar [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 09, 2012 2:46 PM
To: [EMAIL PROTECTED]
Subject: Re: hive mapred problem

I did not understand the distributed-mode question for Hadoop and Hive. Can
you explain what exactly you want to achieve?

Thanks,
Nitin

On Tue, Oct 9, 2012 at 2:42 PM, Ajit Kumar Shreevastava
<[EMAIL PROTECTED]> wrote:
> Hi Nitin,
>
> Thanks for your reply...
>
> Now my query runs, but the output looks like this:
>
> hive> select count(1) from pokes;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201210091435_0001, Tracking URL = http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210091435_0001
> Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill job_201210091435_0001
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 2
> 2012-10-09 14:37:14,587 Stage-1 map = 0%,  reduce = 0%
> 2012-10-09 14:37:20,609 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:21,613 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:22,620 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:23,625 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:24,630 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:25,634 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:26,638 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:27,642 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:28,650 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:29,654 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:30,658 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:31,662 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
> 2012-10-09 14:37:32,667 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 1.66 sec
> 2012-10-09 14:37:33,672 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 1.66 sec
> 2012-10-09 14:37:34,678 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:35,682 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:36,686 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:37,690 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:38,694 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:39,698 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> 2012-10-09 14:37:40,702 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
> MapReduce Total cumulative CPU time: 3 seconds 0 msec
> Ended Job = job_201210091435_0001
> MapReduce Jobs Launched:
> Job 0: Map: 1  Reduce: 2   Cumulative CPU: 3.0 sec   HDFS Read: 6034 HDFS Write: 6 SUCCESS
> Total MapReduce CPU Time Spent: 3 seconds 0 msec
> OK
> 500
> 0
> Time taken: 35.161 seconds
>
> Could you do me a favor? I would like configuration file templates for
> distributed mode for both Hadoop and Hive.
>
> Regards,
> Ajit
>
> -----Original Message-----
> From: Nitin Pawar [mailto:[EMAIL PROTECTED]]
> Sent: Monday, October 08, 2012 5:52 PM
> To: [EMAIL PROTECTED]
> Subject: Re: hive mapred problem
>

Nitin Pawar
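
The two-row result in the quoted output (500, then 0) is the symptom the rest of the thread chases. One plausible reading, offered as an editor's note rather than the thread's verdict: a global count needs a single reducer, but the job ran with two ("number of reducers: 2"), so each reducer wrote its own output file and the client printed both. If mapred.reduce.tasks has been pinned to 2 somewhere (the Hive session or mapred-site.xml), letting Hive choose the reducer count again should produce a single row:

```sql
-- Sketch of the workaround; the table name pokes is from the thread,
-- but the diagnosis itself is an assumption, not stated in the messages shown.
set mapred.reduce.tasks=-1;   -- -1 lets Hive determine the reducer count
select count(1) from pokes;   -- a single reducer should now emit one row
```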
  Nitin Pawar 2012-10-09, 10:32
  Ajit Kumar Shreevastava 2012-10-09, 11:29
  Bejoy KS 2012-10-09, 12:27