Hive >> mail # user >> Hive 0.11 with Cloudera CDH 4.3 MR v1


Re: Hive 0.11 with Cloudera CDH 4.3 MR v1
This error is not the actual reason your job failed. Please look in your
JobTracker logs for the real cause. This error simply means that Hive
attempted to connect to the JobTracker to gather debugging info for your
failed job, but could not due to a classpath error.
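A quick way to confirm the classpath theory is to check whether any jar on the node actually contains the missing class. The helper below is a hypothetical sketch, not something from this thread; the CDH-style jar paths in the usage comment are assumptions to adjust for your install. HostUtil ships with the MR2/YARN jars, so on an MR1-only CDH4 node this scan may find nothing, which would itself confirm the problem.

```shell
# Sketch: report which of the given jars (if any) contains a given class,
# so the right jar can be appended to HADOOP_CLASSPATH for the Hive CLI.
find_class_jar() {
  # $1 is the class file path inside the jar; remaining args are jars to scan
  local class_file="$1"; shift
  local jar
  for jar in "$@"; do
    if unzip -l "$jar" 2>/dev/null | grep -q "$class_file"; then
      echo "$jar"
    fi
  done
}

# Usage (paths are an assumption about a CDH layout; adjust as needed):
# find_class_jar 'org/apache/hadoop/mapreduce/util/HostUtil.class' \
#     /usr/lib/hadoop/*.jar /usr/lib/hadoop/lib/*.jar
```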
On Tue, Jul 16, 2013 at 4:50 PM, Sunita Arvind <[EMAIL PROTECTED]> wrote:

> Hi Jim,
>
> I am new to Hive too, so I cannot suggest much on that front. However, I'm
> pretty sure this error indicates that a particular class is missing from
> your classpath: your Hive runtime is not able to locate the class
> org.apache.hadoop.mapreduce.util.HostUtil. Double-check your HADOOP_HOME
> and any other variables that configure paths and classpaths for Hive.
>
> Hope this helps.
>
> All the best!
> Sunita
>
>
> On Mon, Jul 15, 2013 at 9:32 PM, Jim Colestock <[EMAIL PROTECTED]> wrote:
>
>> Hello All,
>>
>> Has anyone been successful at running hive 0.11 with Cloudera CDH 4.3?
>>
>> I've been able to get hive to connect to my metadb (which is in
>> Postgres).  Verified by doing a show tables..  I can run explain and
>> describes on tables, but when I try to run anything that fires off an M/R
>> job, I get the following error:
>>
>> hive> select count(*) from tableA;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> Starting Job = job_201307112247_13816, Tracking URL =
>> http://master:50030/jobdetails.jsp?jobid=job_201307112247_13816
>> Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill
>> job_201307112247_13816
>> Hadoop job information for Stage-1: number of mappers: 1; number of
>> reducers: 1
>> 2013-07-12 02:11:42,829 Stage-1 map = 0%,  reduce = 0%
>> 2013-07-12 02:12:08,173 Stage-1 map = 100%,  reduce = 100%
>> Ended Job = job_201307112247_13816 with errors
>> Error during job, obtaining debugging information...
>> Job Tracking URL:
>> http://master:50030/jobdetails.jsp?jobid=job_201307112247_13816
>> Examining task ID: task_201307112247_13816_m_000002 (and more) from job
>> job_201307112247_13816
>> Exception in thread "Thread-19" java.lang.NoClassDefFoundError:
>> org/apache/hadoop/mapreduce/util/HostUtil
>>     at org.apache.hadoop.hive.shims.Hadoop23Shims.getTaskAttemptLogUrl(Hadoop23Shims.java:61)
>>     at org.apache.hadoop.hive.ql.exec.JobDebugger$TaskInfoGrabber.getTaskInfos(JobDebugger.java:186)
>>     at org.apache.hadoop.hive.ql.exec.JobDebugger$TaskInfoGrabber.run(JobDebugger.java:142)
>>     at java.lang.Thread.run(Thread.java:619)
>> Caused by: java.lang.ClassNotFoundException:
>> org.apache.hadoop.mapreduce.util.HostUtil
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>     at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>     at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>>     ... 4 more
>> FAILED: Execution Error, return code 2 from
>> org.apache.hadoop.hive.ql.exec.MapRedTask
>> MapReduce Jobs Launched:
>> Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
>> Total MapReduce CPU Time Spent: 0 msec
>>
>>
>> I'm using my configs from Hive 0.10, which works with no issues; this was
>> pretty much a drop-in replacement on the machine that Hive 0.10 was
>> running on.
>>
>> I've done a bunch of googling around and have found a bunch of other
>> folks who have had the same issue, but no solid answers.
>>
>> Thanks in advance for any help..
>>
>> JC
>>
>>
>>
>
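Since the HostUtil exception only fires while Hive is gathering post-failure debug info, one hedged way to surface the underlying job error is to switch that gathering off and re-run. This is a sketch, not a fix confirmed in this thread: hive.exec.show.job.failure.debug.info is a standard Hive configuration property, and the query is the one from the original message.

```shell
# Sketch: skip Hive's failed-job debug-info step (the code path that needs
# HostUtil) so the secondary NoClassDefFoundError no longer masks the real
# failure, then read the JobTracker logs for the actual error.
hive -e "
set hive.exec.show.job.failure.debug.info=false;
select count(*) from tableA;
"
```

The missing class is only needed for building task-log URLs, so disabling this is a diagnostic aid, not a cure for the classpath mismatch itself.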
--
Swarnim