Re: An issue with Hive on hadoop cluster
AFAIK, fs.default.name should be set in both the client-side and server-side .xml files, and the values should be consistent (same URI scheme, hostname, and port number). The server-side setting (also named fs.default.name) is read by the namenode, and the client-side setting is read by any HDFS client (Hive is one of them).

For example, the setting we have is:

server-side core-site-custom.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://hostname:9000</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

client-side core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://hostname:9000</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

From the stack trace it seems Hive is trying to connect to port 54310; you should check whether that port matches your server-side HDFS config.
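One quick way to check (a sketch; namenode-host below is a placeholder, and it assumes you can log in to the namenode machine):

# on the namenode host, confirm which port the namenode is actually listening on
netstat -tlnp | grep 54310

# from the Hive client host, verify HDFS is reachable at the URI you expect
hadoop fs -ls hdfs://namenode-host:54310/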
On May 23, 2011, at 4:00 AM, MIS wrote:

I have already tried your suggestion; I mentioned the same in my mail.
I have also given the required permissions on the directory (hive.metastore.warehouse.dir).

If you look closely at the stack trace, the port number that I specified in the config files for the namenode and jobtracker is reflected, but not the hostname. I have also gone through the code base to verify the issue, but nothing looks fishy there.
The stand-alone Hadoop cluster is working fine, but when I try to run a simple query, a select to fetch a few rows, Hive throws the exception.

I was able to get this to work with a few hacks, though, like adding localhost as an alias for the server running the namenode in the /etc/hosts file. But I can't go ahead with this solution, as it will break other things.
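(For reference, the hosts-file hack looked roughly like this on the namenode machine, with namenode-host standing in for the real hostname:

127.0.0.1   localhost namenode-host

It works around the symptom but masks the underlying config problem.)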

Thanks.
On Mon, May 23, 2011 at 4:14 PM, jinhang du <[EMAIL PROTECTED]> wrote:
Set the following properties in hive-site.xml:
fs.default.name = hdfs://<your namenode>:<port>
mapred.job.tracker = <your jobtracker>:<port>
hive.metastore.warehouse.dir = <hdfs path>
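Spelled out as hive-site.xml entries, that would look something like this (the hostname, ports, and warehouse path below are placeholders):

<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-host:54310</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker-host:54311</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>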
Make sure you have the authority to write into this directory (hive.metastore.warehouse.dir).
Try it.
2011/5/23 MIS <[EMAIL PROTECTED]>
I'm running into an issue when trying to run Hive over the Hadoop cluster.

The Hadoop cluster is working fine on its own.
I'm using Hadoop 0.20.2 and Hive 0.7.0.

The problem is that Hive is not picking up the fs.default.name property that I set in core-site.xml, or the mapred.job.tracker property in mapred-site.xml.
It always assumes the namenode can be accessed at localhost (see the stack trace below).
So I have specified these properties in the hive-site.xml file as well. I tried marking them as final in hive-site.xml, but didn't get the intended result.
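For clarity, marking a property final in hive-site.xml looks like this (namenode-host is a placeholder; 54310 is the port from my config):

<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-host:54310</value>
  <final>true</final>
</property>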
Further, I set the above properties on the command line as well. Again, no success.
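That attempt was along these lines (a sketch; the hostnames and the jobtracker port are placeholders):

hive -hiveconf fs.default.name=hdfs://namenode-host:54310 -hiveconf mapred.job.tracker=jobtracker-host:54311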

I looked at the Hive code for the 0.7.0 branch to debug the issue, to see whether it is getting the fs.default.name property from hive-site.xml, which it does, through a clone of the JobConf. So no issues there.

Further, in hive-site.xml, if I mark any of the properties as final, then Hive gives me a WARNING log like the one below:

WARN  conf.Configuration (Configuration.java:loadResource(1154)) - file:/usr/local/hive-0.7.0/conf/hive-site.xml:a attempt to override final parameter: hive.metastore.warehouse.dir;  Ignoring.
Below is the stack trace I'm getting in the log file:
2011-05-23 15:11:00,793 ERROR CliDriver (SessionState.java:printError(343)) - Failed with exception java.io.IOException:java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
java.io.IOException: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:341)
    at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:133)
    at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1114)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
    at org.apache.hadoop.ipc.Client.call(Client.java:743)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy4.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClien