HBase, mail # user - HBase dies shortly after starting.


Re: HBase dies shortly after starting.
Amandeep Khurana 2012-06-30, 22:07
To run HBase (or, for that matter, any distributed system) you need your networking setup to function properly. "No route to host" is caused by issues in the underlying network. I have seen top-of-rack switches (ToRs) losing packets and causing these exceptions, and several other problems can cause them too. This certainly doesn't look like an HBase-specific issue; it is likely something broken in your network.
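One quick way to rule the network in or out is to test raw TCP reachability to each quorum peer from the node that logged the exception. A minimal sketch, assuming bash is available; the `ZK_PEERS` variable and port 2181 are taken from the log below, and the peer list itself is something you would fill in:

```shell
#!/usr/bin/env bash
# Sketch: attempt a plain TCP connect to each ZooKeeper quorum peer.
# ZK_PEERS is an assumption -- set it to your actual peer hostnames,
# e.g. ZK_PEERS="devrackA-01 devrackA-02 devrackA-03".

check_peer() {
  local host=$1 port=$2
  # Opening bash's /dev/tcp pseudo-path attempts a TCP connection.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OK:   ${host}:${port} reachable"
  else
    echo "FAIL: ${host}:${port} unreachable (route, firewall, or daemon down?)"
    return 1
  fi
}

for peer in ${ZK_PEERS:-}; do
  check_peer "$peer" 2181
done
```

A "No route to host" here (as opposed to "connection refused") points at routing, ARP, or a firewall REJECT rather than at ZooKeeper itself.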
On Friday, June 29, 2012 at 3:42 PM, Jay Wilson wrote:

> I "somewhat" have HBase up and running in a distributed mode. It starts
> fine, I can use "hbase shell" to create, disable, and drop tables;
> however, after a short period of time HMaster and the HRegionalservers
> terminate. Decoding the error messages is a bit bewildering and the
> O'Reilly HBase book hasn't helped much with message decoding.
>
>
> Here is a snippet of the messages from a regionalserver log:
>
> ~~~
>
> 2012-06-27 12:36:47,103 DEBUG
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=6.68
> MB, free=807.12 MB, max=813.8 MB, blocks=2, accesses=19, hits=17,
> hitRatio=89.47%, cachingAccesses=17, cachingHits=15,
> cachingHitsRatio=88.23%, evictions=0, evicted=0, evictedPerRun=NaN
>
> 2012-06-27 12:40:02,106 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x382f6861690003, likely
> server has closed socket, closing socket connection and attempting
> reconnect
>
> 2012-06-27 12:40:02,112 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x382f6861690004, likely
> server has closed socket, closing socket connection and attempting
> reconnect
>
> 2012-06-27 12:40:02,245 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server devrackA-01/172.18.0.2:2181
>
> 2012-06-27 12:40:02,247 WARN org.apache.zookeeper.ClientCnxn: Session
> 0x382f6861690003 for server null, unexpected error, closing socket
> connection and attempting reconnect
>
> java.net.NoRouteToHostException: No route to host
>
> ~~~
>
> "No route to host" would imply it can't reach one of my HQuorumPeer
> nodes, but it talks to them when I first run start-hbase.sh. Also,
> there is no DNS involved, the /etc/hosts files are identical on all
> nodes, and it's currently a closed cluster. All nodes are on the same
> subnet, 172.18/16.
>
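The "identical /etc/hosts files" claim is worth verifying rather than assuming, since a stale copy on one node would produce exactly these symptoms. A minimal sketch; the `CLUSTER_NODES` list is hypothetical and would be your actual node names:

```shell
# Sketch: confirm every node has byte-identical /etc/hosts.
# CLUSTER_NODES is an assumption -- list your real nodes,
# e.g. CLUSTER_NODES="devrackA-00 devrackA-01 devrackA-02".

md5sum /etc/hosts                 # checksum on the local node
for node in ${CLUSTER_NODES:-}; do
  ssh "$node" md5sum /etc/hosts   # compare against each remote node
done
```

Any node whose checksum differs from the rest is the first place to look.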
>
> Do I have something wrong in one of my XML files?
>
>
> core-site.xml:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <!-- Put site-specific property overrides in this file. -->
> <configuration>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/var/hbase-hadoop/tmp</value>
> </property>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://devrackA-00:8020</value>
> <final>true</final>
> </property>
> </configuration>
>
>
> hdfs-site.xml:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <!-- Put site-specific property overrides in this file. -->
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>3</value>
> </property>
> <property>
> <name>dfs.name.dir</name>
> <value>/var/hbase-hadoop/name</value>
> </property>
> <property>
> <name>dfs.data.dir</name>
> <value>/var/hbase-hadoop/data</value>
> </property>
> <property>
> <name>fs.checkpoint.dir</name>
> <value>/var/hbase-hadoop/namesecondary</value>
> </property>
> <property>
> <name>dfs.datanode.max.xcievers</name>
> <value>4096</value>
> </property>
> </configuration>
>
>
> hbase-site.xml:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <!--
> /**
> * Copyright 2010 The Apache Software Foundation
> *
> * Licensed to the Apache Software Foundation (ASF) under one
> * or more contributor license agreements. See the NOTICE file