RE: Important "Undefined Error"

/etc/hosts:
127.0.0.1 localhost
10.0.2.3  namenode.dalia.com
10.0.2.5  datanode3.dalia.com
10.0.2.6  datanode1.dalia.com
10.0.2.42 datanode2.dalia.com

/etc/hostname:
namenode.dalia.com
And I am always receiving this error:
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 0 time(s).
Note that I have already disabled the firewall and opened the port with: ufw allow 8020
But when I run: telnet 10.0.2.3 8020 => connection refused...
So the problem is that I cannot open the port... :(
Note that I have tried other ports, such as 54310 and 9000, but the same error occurs.
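
A connection refused from telnet usually means nothing is listening on that port on that host at all (a firewall drop normally shows up as a timeout instead), so before fighting ufw it is worth confirming on the namenode machine that the NameNode daemon is actually up and what address it bound to. A minimal check, assuming a packaged install whose logs sit under /var/log/hadoop (the log path is a guess; adjust it to your install):

  # run these on namenode.dalia.com
  ps aux | grep [N]ameNode                    # is a NameNode JVM running at all?
  sudo netstat -ltnp | grep java              # which ports the Hadoop daemons listen on, and on which address
  tail -n 50 /var/log/hadoop/*namenode*.log   # bind failures and fs.default.name errors show up here

If netstat shows the NameNode bound to 127.0.0.1:8020 instead of 10.0.2.3:8020 or 0.0.0.0:8020, remote telnet and HBase will both be refused no matter what the firewall allows.
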
> Date: Fri, 18 May 2012 01:52:48 +0530
> Subject: Re: Important "Undefined Error"
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
> Please send the content of /etc/hosts and /etc/hostname file
>
> try this link
>
> http://helpmetocode.blogspot.in/2012/05/hadoop-fully-distributed-cluster.html
>
>
> for hadoop configuration
>
> On Mon, May 14, 2012 at 10:15 PM, Dalia Sobhy <[EMAIL PROTECTED]> wrote:
>
> > Yeasss
> >
> > Sent from my iPhone
> >
> > On 2012-05-14, at 5:28 PM, "N Keywal" <[EMAIL PROTECTED]> wrote:
> >
> > > In core-site.xml, do you have this?
> > >
> > > <configuration>
> > >   <property>
> > >     <name>fs.default.name</name>
> > >     <value>hdfs://namenode:8020/hbase</value>
> > >   </property>
> > > </configuration>
> > >
> > > If you want HBase to connect to 8020, you must have HDFS listening on
> > > 8020 as well.
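
A quick way to eyeball the mismatch being described here, as a sketch (the /etc/hadoop/conf and /etc/hbase/conf paths are just the usual packaged-install locations and may differ on your machines):

  grep -A 1 'fs.default.name' /etc/hadoop/conf/core-site.xml   # the host:port HDFS actually serves
  grep -A 1 'hbase.rootdir'   /etc/hbase/conf/hbase-site.xml   # the host:port HBase tries to reach

The host:port part of hbase.rootdir has to match fs.default.name exactly; only the trailing path (e.g. /hbase) differs between the two.
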
> > >
> > >
> > > On Mon, May 14, 2012 at 5:17 PM, Dalia Sobhy <[EMAIL PROTECTED]>
> > wrote:
> > >> Hiiii
> > >>
> > >> I have tried to make both ports the same.
> > >> But the problem is that HBase cannot connect to port 8020.
> > >> When I run nmap against the hostname, port 8020 is not in the list of open ports.
> > >> I have tried what Harsh told me about.
> > >> I used the same port he used, but the same error occurred.
> > >> Another point: the Cloudera doc says I have to use a canonical name for
> > >> the host (e.g. namenode.example.com) as the hostname, but I didn't find
> > >> that in any tutorial; no one seems to do it.
> > >> Note that I am deploying my cluster in fully distributed mode, i.e. I am
> > >> using 4 machines.
> > >>
> > >> So any ideas??!!
> > >>
> > >> Sent from my iPhone
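
On the canonical-hostname point above, a rough way to see what the machine and its resolver actually report (a sketch only; the dalia.com names are taken from this thread, and getent follows the same /etc/hosts-then-DNS order that the daemons' resolver normally uses):

  hostname -f                        # should print the FQDN, e.g. namenode.dalia.com
  getent hosts namenode.dalia.com    # should print 10.0.2.3, not a 127.x.x.x address
  getent hosts namenode              # the short name used in fs.default.name has to resolve too

If the name the daemons use resolves to a loopback address (the 127.0.1.1 line Ubuntu adds to /etc/hosts is a common culprit), the NameNode binds to loopback only, which would also explain why nmap from another machine never lists 8020.
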
> > >>
> > >> On 2012-05-14, at 4:07 PM, "N Keywal" <[EMAIL PROTECTED]> wrote:
> > >>
> > >>> Hi,
> > >>>
> > >>> There could be multiple issues, but it's strange to have in
> > hbase-site.xml
> > >>>
> > >>>  <value>hdfs://namenode:9000/hbase</value>
> > >>>
> > >>> while the core-site.xml says:
> > >>>
> > >>> <value>hdfs://namenode:54310/</value>
> > >>>
> > >>> The two entries should match.
> > >>>
> > >>> I would recommend:
> > >>> - use netstat to check the ports (netstat -l)
> > >>> - do the check recommended by Harsh J previously.
> > >>>
> > >>> N.
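
To narrow that netstat check down to the ports in play in this thread (flags as on a typical Linux box; -p needs root):

  sudo netstat -ltnp | grep -E ':(8020|9000|54310)[[:space:]]'

An empty result means nothing is bound to any of those ports, i.e. the NameNode never came up there; a 127.0.0.1 local address means it came up, but only on loopback.
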
> > >>>
> > >>>
> > >>> On Mon, May 14, 2012 at 3:21 PM, Dalia Sobhy <
> > [EMAIL PROTECTED]> wrote:
> > >>>>
> > >>>>
> > >>>> pleaseeeeeeeeeeee helpppppppppppppppppppp
> > >>>>
> > >>>>> From: [EMAIL PROTECTED]
> > >>>>> To: [EMAIL PROTECTED]
> > >>>>> Subject: RE: Important "Undefined Error"
> > >>>>> Date: Mon, 14 May 2012 12:20:18 +0200
> > >>>>>
> > >>>>>
> > >>>>>
> > >>>>> Hi,
> > >>>>> I tried what you told me, but nothing worked:(((
> > >>>>> First when I run this command:
> > >>>>>
> > >>>>> dalia@namenode:~$ host -v -t A `hostname`
> > >>>>>
> > >>>>> Output:
> > >>>>> Trying "namenode"
> > >>>>> Host namenode not found: 3(NXDOMAIN)
> > >>>>> Received 101 bytes from 10.0.2.1#53 in 13 ms
> > >>>>>
> > >>>>> My core-site.xml:
> > >>>>>
> > >>>>> <configuration>
> > >>>>>   <property>
> > >>>>>     <name>fs.default.name</name>
> > >>>>>     <!--<value>hdfs://namenode:8020</value>-->
> > >>>>>     <value>hdfs://namenode:54310/</value>
> > >>>>>   </property>
> > >>>>> </configuration>
> > >>>>>
> > >>>>> My hdfs-site.xml:
> > >>>>>
> > >>>>> <configuration>
> > >>>>>   <property>
> > >>>>>     <name>dfs.name.dir</name>
> > >>>>>     <value>/data/1/dfs/nn,/nfsmount/dfs/nn</value>
> > >>>>>   </property>
> > >>>>>   <!--
> > >>>>>   <property>
> > >>>>>     <name>dfs.data.dir</name>
> > >>>>>     <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
> > >>>>>   </property>
> > >>>>>   -->
> > >>>>>   <property>
> > >>>>>     <name>dfs.datanode.max.xcievers</name>
> > >>>>>     <value>4096</value>
> > >>>>>   </property>
> > >>>>>   <property>
> > >>>>>     <name>dfs.replication</name>
> > >>>>>     <value>3</value>
> > >>>>>   </property>
> > >>>>>   <property>
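
One caveat about that host(1) test, for anyone hitting the same NXDOMAIN: host queries the DNS server directly (10.0.2.1 here) and never consults /etc/hosts, so this output only shows that DNS does not know the name, not that the hosts file is wrong. A check that goes through the normal resolver order instead, as a sketch:

  getent hosts namenode.dalia.com    # should return 10.0.2.3 from the /etc/hosts posted earlier
  getent hosts namenode              # fails unless the short name is listed as an alias

If only the long name resolves, adding the short name as an alias on the same /etc/hosts line (10.0.2.3 namenode.dalia.com namenode) keeps fs.default.name=hdfs://namenode:... usable without DNS.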