HBase >> mail # user >> Important "Undefined Error"

Re: Important "Undefined Error"

I have tried to make both ports the same, but the problem is that HBase cannot connect to port 8020.
When I run nmap against the hostname, port 8020 is not in the list of open ports.
I tried what Harsh told me about and used the same port he used, but the same error occurred.
Another point: the Cloudera docs say I have to use a canonical (fully qualified) name for the host, e.g. namenode.example.com, as the hostname, but I didn't find that in any tutorial; no one does it.
Note that I am deploying my cluster in fully distributed mode, i.e. I am using 4 machines.

So, any ideas?
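Since the thread turns on hostname resolution (the `host` lookup below returns NXDOMAIN) and on the canonical-name question, here is a minimal sketch of the /etc/hosts entries a small fully distributed cluster typically needs so that node names resolve without DNS. The IP addresses and the datanode names are placeholders, not values from this thread:

```shell
# Example /etc/hosts entries for a 4-machine cluster; each line maps an IP
# to the node's fully qualified name and its short alias. All values below
# are placeholders.
cat > /tmp/hosts.example <<'EOF'
192.168.1.10  namenode.example.com   namenode
192.168.1.11  datanode1.example.com  datanode1
192.168.1.12  datanode2.example.com  datanode2
192.168.1.13  datanode3.example.com  datanode3
EOF

# Simulate the lookup that `host -v -t A namenode` needs to succeed:
# find the line whose alias is exactly "namenode" and print its IP.
entry=$(grep -w 'namenode' /tmp/hosts.example | awk '{print $1}')
echo "namenode -> $entry"
```

With entries like these on every machine, the short name and the canonical name both resolve consistently across the cluster.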

Sent from my iPhone

On 2012-05-14, at 4:07 PM, "N Keywal" <[EMAIL PROTECTED]> wrote:

> Hi,
> There could be multiple issues, but it's strange to have in hbase-site.xml
>  <value>hdfs://namenode:9000/hbase</value>
> while the core-site.xml says:
> <value>hdfs://namenode:54310/</value>
> The two entries should match.
> I would recommend that you:
> - use netstat to check the ports (netstat -l)
> - do the check recommended by Harsh J previously.
> N.
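The mismatch N Keywal points out can be checked mechanically. A sketch in shell that extracts the port from each of the two URIs quoted in this thread and compares them; the sed pattern is an assumption about the `hdfs://host:port/path` URI shape, not anything prescribed by Hadoop:

```shell
# The two URIs from the thread's config files; the ports disagree, which is
# the inconsistency being discussed.
core_site_uri="hdfs://namenode:54310/"      # fs.default.name in core-site.xml
hbase_rootdir="hdfs://namenode:9000/hbase"  # hbase.rootdir in hbase-site.xml

# Pull the port number out of an hdfs:// URI.
port_of() { printf '%s\n' "$1" | sed -E 's|hdfs://[^:/]+:([0-9]+).*|\1|'; }

core_port=$(port_of "$core_site_uri")
hbase_port=$(port_of "$hbase_rootdir")

if [ "$core_port" = "$hbase_port" ]; then
  echo "ports match"
else
  echo "port mismatch: core-site=$core_port hbase-site=$hbase_port"
fi
```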
> On Mon, May 14, 2012 at 3:21 PM, Dalia Sobhy <[EMAIL PROTECTED]> wrote:
>> pleaseeeeeeeeeeee helpppppppppppppppppppp
>>> Subject: RE: Important "Undefined Error"
>>> Date: Mon, 14 May 2012 12:20:18 +0200
>>> Hi,
>>> I tried what you told me, but nothing worked:(((
>>> First, when I run this command:
>>> dalia@namenode:~$ host -v -t A `hostname`
>>> Output:
>>> Trying "namenode"
>>> Host namenode not found: 3(NXDOMAIN)
>>> Received 101 bytes from in 13 ms
>>>
>>> My core-site.xml:
>>> <configuration>
>>>   <property>
>>>     <name>fs.default.name</name>
>>>     <!--<value>hdfs://namenode:8020</value>-->
>>>     <value>hdfs://namenode:54310/</value>
>>>   </property>
>>> </configuration>
>>> My hdfs-site.xml:
>>> <configuration>
>>>   <property>
>>>     <name>dfs.name.dir</name>
>>>     <value>/data/1/dfs/nn,/nfsmount/dfs/nn</value>
>>>   </property>
>>>   <!--<property>
>>>     <name>dfs.data.dir</name>
>>>     <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
>>>   </property>-->
>>>   <property>
>>>     <name>dfs.datanode.max.xcievers</name>
>>>     <value>4096</value>
>>>   </property>
>>>   <property>
>>>     <name>dfs.replication</name>
>>>     <value>3</value>
>>>   </property>
>>>   <property>
>>>     <name>dfs.permissions.superusergroup</name>
>>>     <value>hadoop</value>
>>>   </property>
>>> My mapred-site.xml:
>>> <configuration>
>>>   <name>mapred.local.dir</name>
>>>   <value>/data/1/mapred/local,/data/2/mapred/local,/data/3/mapred/local</value>
>>> </configuration>
>>> My hbase-site.xml:
>>> <configuration>
>>>   <property>
>>>     <name>hbase.cluster.distributed</name>
>>>     <value>true</value>
>>>   </property>
>>>   <property>
>>>     <name>hbase.rootdir</name>
>>>     <value>hdfs://namenode:9000/hbase</value>
>>>   </property>
>>>   <property>
>>>     <name>hbase.zookeeper.quorun</name>
>>>     <value>namenode</value>
>>>   </property>
>>>   <property>
>>>     <name>hbase.regionserver.port</name>
>>>     <value>60020</value>
>>>     <description>The host and port that the HBase master runs at.</description>
>>>   </property>
>>>   <property>
>>>     <name>dfs.replication</name>
>>>     <value>1</value>
>>>   </property>
>>>   <property>
>>>     <name>hbase.zookeeper.property.clientPort</name>
>>>     <value>2181</value>
>>>     <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
>>>   </property>
>>> </configuration>
>>> Please help, I am really disappointed; I have been going through this for two weeks!
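Two things stand out in the config quoted above: hbase.rootdir points at port 9000 while fs.default.name uses port 54310, and the property name hbase.zookeeper.quorun is misspelled (the correct name is hbase.zookeeper.quorum, so the value as written is silently ignored). A sketch of a consistent pair of entries, assuming the NameNode really does listen on 54310:

```xml
<!-- core-site.xml -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode:54310/</value>
</property>

<!-- hbase-site.xml: hbase.rootdir must use the same host:port -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode:54310/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>namenode</value>
</property>
```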
>>>> Subject: RE: Important "Undefined Error"
>>>> Date: Sat, 12 May 2012 23:31:49 +0530
>>>> The problem is that your HBase is not able to connect to Hadoop. Can you post your
>>>> hbase-site.xml content here? Have you specified localhost somewhere? If so,
>>>> remove localhost from everywhere and put your HDFS namenode address instead.
>>>> Suppose your namenode is running on master:9000; then set your HBase file
>>>> system setting to hdfs://master:9000/hbase. Here I am sending you the
>>>> configuration which I am using with HBase and which is working.
>>>> My hbase-site.xml content is
>>>> <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>> <!--