RE: EXT :Re: HBase Issues (perhaps related to

Yes I do.

With this /etc/hosts HBase works but NX and VNC do not:

    hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
    localhost
    hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1


With this /etc/hosts NX and VNC work but HBase does not:

    hadoop1 localhost.localdomain localhost
    hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
    hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1


I assume from your question that I should try replacing

    hadoop1 localhost.localdomain localhost

with simply:

    localhost
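For reference, a layout that often resolves this kind of conflict on Ubuntu-style systems keeps localhost on 127.0.0.1 by itself and puts the cluster names only on each machine's real LAN IP; HBase commonly breaks when the machine's own hostname maps to a loopback address (e.g. Ubuntu's default 127.0.1.1 line). This is only a sketch; the 192.168.1.x addresses are hypothetical placeholders, not values from this thread:

    127.0.0.1     localhost localhost.localdomain
    192.168.1.10  hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
    192.168.1.11  hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1

Whether NX/VNC still work with this depends on which name they key on when connecting.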


-----Original Message-----
From: Michael Segel [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 21, 2012 7:40 PM
Subject: EXT :Re: HBase Issues (perhaps related to


Quick question...

Do you have set to anything other than localhost?

If not, then it should be fine and you may want to revert to hard coded IP addresses on your other configuration files.

If you have Hadoop up and working, then you should be able to stand up HBase on top of that.

Just from a quick look, it seems that the hostname for your Hadoop machine is resolving to your localhost.
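One quick way to see what HBase will see is to resolve the hostnames yourself. A minimal Python sketch (the hostnames you'd pass in, such as hadoop1, come from the thread and will differ on your machine):

```python
import socket

def resolves_to_loopback(name: str) -> bool:
    """Return True if `name` resolves to a loopback address (127.x.x.x)."""
    try:
        addr = socket.gethostbyname(name)
    except socket.gaierror:
        # Name does not resolve at all.
        return False
    return addr.startswith("127.")

# 'localhost' should always be loopback. If your cluster hostname
# (e.g. hadoop1) also comes back as loopback, region servers will try
# to reach the master at 127.x and fail.
print(resolves_to_loopback("localhost"))  # True on a sane setup
```

Running it against both `localhost` and your cluster hostname makes the /etc/hosts conflict visible without starting HBase at all.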

What does your /etc/hosts file look like?

How many machines in your cluster?

Have you thought about pulling down a 'free' copy of Cloudera, MapR, or Hortonworks (if they have one) ...

If you're thinking about using HBase as a standalone instance and don't care about Map/Reduce, maybe going with something else would make sense.



On Nov 21, 2012, at 3:02 PM, "Ratner, Alan S (IS)" <[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>> wrote:

> Thanks Mohammad.  I set the clientPort but as I was already using the default value of 2181 it made no difference.


> I cannot remove the line from my hosts file.  I connect to my servers via VPN from a Windows laptop using either NX or VNC and both apparently rely on the IP address.  This was not a problem with older versions of HBase (I used to use 0.20.x) so it seems to be something relatively new.


> It seems I have a choice: access my servers remotely or run HBase, and the two are mutually incompatible.  I think my options are either:
> a) revert to an old version of HBase,
> b) switch to Accumulo, or
> c) switch to Cassandra.


> Alan



> -----Original Message-----

> From: Mohammad Tariq [mailto:[EMAIL PROTECTED]]

> Sent: Wednesday, November 21, 2012 3:11 PM


> Subject: EXT :Re: HBase Issues (perhaps related to


> Hello Alan,


>    It's better to keep out of your /etc/hosts and make sure you have proper DNS resolution, as it plays an important role in proper HBase functioning. Also add the "hbase.zookeeper.property.clientPort" property in your hbase-site.xml file and see if it works for you.
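For reference, the property Mohammad mentions goes inside the `<configuration>` element of hbase-site.xml; 2181 is ZooKeeper's default client port, the value Alan says he is already using:

```xml
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```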


> Regards,

>    Mohammad Tariq




> On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>>wrote:


>> I'd appreciate any suggestions as to how to get HBase up and running.
>> Right now it dies after a few seconds on all servers.  I am using Hadoop
>> 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.


>> History: Yesterday I managed to get HBase 0.94.2 working but only after
>> removing the line from my /etc/hosts file (and synchronizing my clocks).
>> All was fine until this morning when I realized I could not initiate
>> remote log-ins to my servers (using VNC or NX) until I restored the line
>> in /etc/hosts.  With that restored I am back to a non-working HBase.


>> With HBase managing ZK I see the following in the HBase Master and ZK
>> logs, respectively:
>> 2012-11-21 13:40:22,236 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient