Hive >> mail # user >> HADOOP_HOME requirement


Re: HADOOP_HOME requirement
Thanks for your reply nitin.

Ok. So you mean we always need to set HADOOP_HOME, irrespective of whether
"hadoop" is on the path or not. Correct?

I'm a little confused, because that contradicts what's mentioned here [1].

[1]
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-RunningHive
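In the meantime, this is roughly what I am doing as a workaround (just a sketch; the /usr/lib/hadoop path below is a placeholder, not something from this thread -- substitute wherever your Hadoop actually lives):

```shell
# Placeholder install path -- replace with your actual Hadoop location.
export HADOOP_HOME=/usr/lib/hadoop
# Hive will then resolve the hadoop binary as $HADOOP_HOME/bin/hadoop:
echo "$HADOOP_HOME/bin/hadoop"
```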
Thanks,

On Wed, Jul 18, 2012 at 11:59 AM, Nitin Pawar <[EMAIL PROTECTED]>wrote:

> This is not a bug.
>
> Even if hadoop is on the path, Hive does not use it.
> Hive internally uses HADOOP_HOME in its code base, so you will always need
> to set it for Hive.
> Whereas for Hadoop clusters HADOOP_HOME is deprecated, Hive still
> needs it.
>
> I don't know if that answers your question.
>
> Thanks,
> Nitin
>
>
> On Wed, Jul 18, 2012 at 10:01 PM, [EMAIL PROTECTED] <
> [EMAIL PROTECTED]> wrote:
>
>> Hello,
>>
>> The Hive documentation states that either HADOOP_HOME should be set or
>> hadoop should be on the path. However, in some cases where HADOOP_HOME was
>> not set but hadoop was on the path, I have seen this error pop up:
>>
>> java.io.IOException: Cannot run program "null/bin/hadoop" (in
>> directory "/root/swarnim/hive-0.9.0-cern1-SNAPSHOT"): java.io.IOException:
>> error=2, No such file or directory
>>   at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>   at java.lang.Runtime.exec(Runtime.java:593)
>>   at java.lang.Runtime.exec(Runtime.java:431)
>>   at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:268)
>>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
>>   at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1326)
>>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1118)
>>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:951)
>>   at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
>>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
>>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
>>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
>>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
>>
>> Digging into the code in MapRedTask.java, I found the following
>> (simplified):
>>
>> String hadoopExec = System.getenv("HADOOP_HOME") + "/bin/hadoop";
>> ...
>> Runtime.getRuntime().exec(hadoopExec, env, new File(workDir));
>>
>> Clearly, if HADOOP_HOME is not set, the command that it would try to
>> execute is "null/bin/hadoop", which is exactly the exception I am getting.
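To spell out where the "null" comes from, here is a minimal standalone sketch (not Hive's actual code; NullPathDemo is a made-up class): System.getenv returns null for an unset variable, and Java string concatenation renders a null reference as the literal text "null".

```java
// Sketch of how the "null/bin/hadoop" path arises when HADOOP_HOME is unset.
public class NullPathDemo {
    public static void main(String[] args) {
        // Stand-in for System.getenv("HADOOP_HOME") when the variable is unset:
        String hadoopHome = null;
        // String concatenation converts the null reference to the text "null".
        String hadoopExec = hadoopHome + "/bin/hadoop";
        System.out.println(hadoopExec); // prints "null/bin/hadoop"
    }
}
```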
>>
>> Has anyone else run into this before? Is this a bug?
>>
>> Thanks,
>> --
>> Swarnim
>>
>
>
>
> --
> Nitin Pawar
>
>
--
Swarnim