Re: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
Hi,

I am also using Ubuntu 12.04, ZooKeeper 3.4.4, HBase 0.94.2 and Hadoop 1.0.4 (64-bit nodes). I finally managed to get my HBase cluster up and running; below are the relevant lines in my /etc/hosts for your reference:

#127.0.0.1      localhost
127.0.0.1       localhost.localdomain localhost
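
A quick way to double-check the loopback mapping on a node (a minimal sketch, assuming the standard getent and ping tools that ship with Ubuntu):

  getent hosts localhost    # should print 127.0.0.1 with the localhost names
  ping -c 1 localhost       # the loopback interface should answer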

Based on my setup experience, here is my advice:
1) /etc/hosts: do not comment out the 127.0.0.1 entry in /etc/hosts
2) ZooKeeper: do not sync its "data" and "datalog" folders to the other ZooKeeper servers in your deployment
3) check your start-up procedure (a quick check sketch follows below):
   - check your firewall policies and make sure each server can use the required TCP/IP ports, especially port 2181 in your case
   - start ZooKeeper first; make sure all other servers can reach the ZooKeeper servers, and use "/bin/zkCli.sh -server XXXX" or "echo ruok | nc XXXX 2181" to test every ZooKeeper node from each HBase server
   - start Hadoop; use jps to make sure the NameNode, SecondaryNameNode and DataNodes are up and running, and check the log files on each server
   - start MapReduce if you need it
   - start HBase; use jps to check HBase's HMaster and HRegionServers, then wait a while and check them again with jps. If all the HBase processes are gone but Hadoop is still up and running, it is most likely an HBase configuration issue in hbase-site.xml related to the ZooKeeper settings, or a ZooKeeper configuration/data issue
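
As a rough sketch of the checks above (assuming the standard nc and jps tools, and placeholder ZooKeeper hostnames zk1 zk2 zk3 on port 2181; substitute your own quorum), something like this can be run from each HBase server:

  #!/bin/bash
  # 1) ZooKeeper reachability: every node in the quorum should answer "imok"
  for zk in zk1 zk2 zk3; do          # placeholder hostnames, replace with your own
      echo -n "$zk: "
      echo ruok | nc "$zk" 2181
      echo
  done

  # 2) Hadoop daemons: expect NameNode/SecondaryNameNode on the master and
  #    DataNode on each slave
  jps

  # 3) HBase daemons: expect HMaster on the master and HRegionServer on the
  #    region servers; run jps again after a minute to see if they stayed up
  sleep 60
  jps

If the HMaster/HRegionServer processes disappear between the two jps runs, the HBase logs and the ZooKeeper-related settings in hbase-site.xml (in particular hbase.zookeeper.quorum) are the first places to look.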
Hope this helps, and good luck.
ac

Originally I had 7 nodes: 5 of them are 64-bit and 2 of them are 32-bit; all 64-bit servers are connected to network A and the two
On 24 Nov 2012, at 10:51 AM, Michael Segel wrote:

> Hi Alan,
>
> Yes. I am suggesting that.
>
> Your 127.0.0.1 entry should be localhost only, and then your other entries.
> It looks like 10.64.155.52 is the external interface (eth0) for the machine hadoop1.
>
> Adding it to 127.0.0.1 confuses HBase, since it will use the first entry it sees (going from memory), so it will always resolve to localhost.
>
> I think that should fix your problem.
>
> HTH
>
> -Mike
>
> On Nov 23, 2012, at 10:11 AM, "Ratner, Alan S (IS)" <[EMAIL PROTECTED]> wrote:
>
>> Mike,
>>
>>
>>
>>           Yes I do.
>>
>>
>>
>> With this /etc/hosts HBase works but NX and VNC do not.
>>
>> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver localhost
>>
>> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
>>
>> ...
>>
>>
>>
>> With this /etc/hosts NX and VNC work but HBase does not.
>>
>> 127.0.0.1 hadoop1 localhost.localdomain localhost
>>
>> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
>>
>> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
>>
>> ...
>>
>>
>>
>> I assume from your question that I should try replacing
>>
>> 127.0.0.1 hadoop1 localhost.localdomain localhost
>>
>> with simply:
>>
>> 127.0.0.1 localhost
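
A minimal way to verify the effect of that change on hadoop1 (a sketch, assuming the hosts files quoted above and the standard getent and hostname tools):

  getent hosts localhost    # should map to 127.0.0.1 only
  getent hosts hadoop1      # should print 10.64.155.52, not 127.0.0.1
  hostname -f               # should print hadoop1.aj.c2fse.northgrum.com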
>>
>>
>>
>>
>>
>>
>>
>> Alan
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: Michael Segel [mailto:[EMAIL PROTECTED]]
>> Sent: Wednesday, November 21, 2012 7:40 PM
>> To: [EMAIL PROTECTED]
>> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
>>
>>
>>
>> Hi,
>>
>>
>>
>> Quick question...
>>
>>
>>
>> Do you have 127.0.0.1 set to anything other than localhost?
>>
>>
>>
>> If not, then it should be fine, and you may want to revert to hard-coded IP addresses in your other configuration files.
>>
>>
>>
>> If you have Hadoop up and working, then you should be able to stand up HBase on top of that.
>>
>>
>>
>> Just taking a quick look, it seems that the hostname for your Hadoop node is resolving to localhost.
>>
>> What does your /etc/hosts file look like?
>>
>>
>>
>> How many machines in your cluster?
>>
>>
>>
>> Have you thought about pulling down a 'free' copy of Cloudera, MapR, or Hortonworks if they have one ...
>>
>>
>>
>> If you're thinking about using HBase as a standalone instance and don't care about Map/Reduce, maybe going with something else would make sense.
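
For the standalone case, a minimal smoke test looks something like this (a sketch, assuming a default HBase download unpacked into the current directory; standalone mode uses the local filesystem and a built-in ZooKeeper, so no Hadoop is required):

  # start a standalone HBase instance
  bin/start-hbase.sh
  # quick smoke test: print cluster status from the HBase shell
  echo "status" | bin/hbase shell
  # shut it down again
  bin/stop-hbase.sh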