HBase >> mail # user >> java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: java.io.EOFException


Re: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: java.io.EOFException
You're using a very old version of HBase that is no longer supported
and can be a pain to run today. Try using the latest 0.94 release at
the very least.

The problem you face, in any case, is that 0.90.x shipped with a
bundled version of Apache Hadoop older than the 1.x you've set up,
which makes the two incompatible and your errors expected, unless you
follow the steps we've documented for this at
http://hbase.apache.org/book.html#trouble.versions and
http://hbase.apache.org/book.html#hadoop. Essentially, replace the
hadoop-core jar under $HBASE_HOME/lib/ with the one from your actual
$HADOOP_PREFIX (or $HADOOP_HOME).
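The jar swap above can be sketched as follows. The install paths are
assumptions for a typical tarball layout, so adjust HBASE_HOME and
HADOOP_HOME to match your environment:

```shell
# Sketch of the hadoop-core jar swap described in the HBase book.
# /opt/... paths are assumptions -- point these at your real installs.
HBASE_HOME=${HBASE_HOME:-/opt/hbase-0.90.6}
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-1.2.1}

# Drop the 0.20.x-era hadoop-core jar that HBase 0.90 bundles ...
rm "$HBASE_HOME"/lib/hadoop-core-*.jar

# ... and copy in the jar from the Hadoop install actually serving
# HDFS, so the HBase client and the NameNode speak the same RPC version.
cp "$HADOOP_HOME"/hadoop-core-*.jar "$HBASE_HOME"/lib/
```

After the swap, restart HBase (bin/stop-hbase.sh, then
bin/start-hbase.sh) so the replacement jar is actually loaded.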

On Wed, Jan 1, 2014 at 11:00 PM, Law-Firms-In.com
<[EMAIL PROTECTED]> wrote:
> I have trouble getting hbase 0.90.6 to work together with Hadoop
> 1.2.1. Hadoop is 100% working (tested with the wordcount MapReduce
> job) and hbase had been working for several months in standalone mode.
>
> But due to performance problems I am now switching to pseudo-distributed
> mode with hbase, and I am stuck. I have followed almost all the tutorials
> I could find for my problem but still have no luck (example tutorial:
> http://cloudfront.blogspot.in/2012/06/how-to-configure-habse-in-pseudo.html#.UsROKKHNRkr).
>
> My mapred-site.xml file:
>
> <property>
> <name>hbase.rootdir</name>
> <value>hdfs://localhost:9000/hbase</value>
> </property>
>
> <property>
> <name>hbase.cluster.distributed</name>
> <value>true</value>
> </property>
>
> <property>
> <name>hbase.zookeeper.quorum</name>
> <value>localhost</value>
> </property>
>
> <property>
> <name>dfs.replication</name>
> <value>1</value>
> </property>
>
> <property>
> <name>hbase.zookeeper.property.clientPort</name>
> <value>2180</value>
> <description>was 2181, but since my zoo.cfg file has
> clientPort=2180 (# the port at which the clients will connect)
> I adjusted this; neither value brings my HMaster alive
> </description>
> </property>
>
> <property>
> <name>hbase.zookeeper.property.dataDir</name>
> <value>/var/lib/zookeeper</value>
> </property>
>
>
> My hbase-env.sh file:
>
> export JAVA_HOME=/usr/java/jdk1.7.0_40
> export
> HBASE_REGIONSERVERS=/srv/vhosts/search.sh/htdocs/nutch/hbase-0.90.6/conf$
> export HBASE_MANAGES_ZK=true
> export HBASE_OPTS="-ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
>
>
> My Hadoop core-site.xml:
>
>      <property>
>          <name>fs.default.name</name>
>          <value>hdfs://localhost:9000</value>
>      </property>
>
>
> Master Log:
> 2014-01-01 18:16:06,093 INFO
> org.apache.hadoop.hbase.master.ActiveMasterManager: Master=localhost:60000
> 2014-01-01 18:16:06,453 FATAL org.apache.hadoop.hbase.master.HMaster:
> Unhandled exception. Starting shutdown.
> java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local
> exception: java.io.EOFException
>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>         at org.apache.hadoop.ipc.Client.call(Client.java:743)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at com.sun.proxy.$Proxy5.getProtocolVersion(Unknown Source)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>         at
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:113)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:215)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>         at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363)
>         at
> org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
>         at

Harsh J