HBase user mailing list: Help - can't start master server for HBase (pseudo-distributed mode).


Re: Help - can't start master server for HBase (pseudo-distributed mode).
Your HDFS NameNode is listening on a different port than the one you
configured for hbase.rootdir in hbase-site.xml (9000 != 8020).
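
The simplest fix is to make the two agree. For example, if your NameNode
stays on 9000, point hbase.rootdir there and restart HBase (just a sketch
based on the config you posted; adjust the host/port if your NameNode
actually runs elsewhere):

hbase-site.xml
  <property>
    <!-- must match the host:port the HDFS NameNode listens on -->
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>

Alternatively, keep hbase.rootdir on 8020 and change fs.default.name to
hdfs://localhost:8020, then restart HDFS before restarting HBase. Either
way, the port in hbase.rootdir and the NameNode's port must be identical.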

On Tue, Sep 11, 2012 at 11:44 AM, Jason Huang <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
> I've installed Hadoop 1.0.3 in pseudo-distributed mode and was able to
> start all of its daemons successfully:
> $ bin/start-all.sh
> $ jps
> 1002 NameNode
> 1246 JobTracker
> 1453 Jps
> 1181 SecondaryNameNode
> 1335 TaskTracker
> 1091 DataNode
>
> Then I installed HBase 0.94 and configured it in pseudo-distributed mode.
> $ ./start-hbase.sh
> $ jps
> 1684 Jps
> 1002 NameNode
> 1647 HRegionServer
> 1246 JobTracker
> 1553 HQuorumPeer
> 1181 SecondaryNameNode
> 1335 TaskTracker
> 1091 DataNode
>
> I couldn't find the HMaster process running, so I looked at the master log file:
> 2012-09-11 14:35:05,892 INFO
> org.apache.hadoop.hbase.master.ActiveMasterManager:
> Master=192.168.10.23,60000,1347388500668
> 2012-09-11 14:35:06,996 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
> 2012-09-11 14:35:07,998 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s).
> 2012-09-11 14:35:08,999 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s).
> 2012-09-11 14:35:10,000 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s).
> 2012-09-11 14:35:11,001 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s).
> 2012-09-11 14:35:12,002 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s).
> 2012-09-11 14:35:13,004 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s).
> 2012-09-11 14:35:14,005 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s).
> 2012-09-11 14:35:15,006 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s).
> 2012-09-11 14:35:16,008 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s).
> 2012-09-11 14:35:16,012 FATAL org.apache.hadoop.hbase.master.HMaster:
> Unhandled exception. Starting shutdown.
> java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on
> connection exception: java.net.ConnectException: Connection refused
>
> Could anyone help me figure out what I am missing? I tried
> some Google searches, but none of the "answers" there helped me.
> Here are my config files:
>
> hbase-site.xml
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:8020/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>      <name>hbase.master</name>
>      <value>localhost:60000</value>
>   </property>
> </configuration>
>
> hdfs-site.xml
> <configuration>
>   <property>
>      <name>fs.default.name</name>
>      <value>localhost:9000</value>
>   </property>
>   <property>
>      <name>mapred.job.tracker</name>
>      <value>localhost:9001</value>
>   </property>
>   <property>
>      <name>dfs.replication</name>
>      <value>1</value>
>   </property>
> </configuration>
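
(By the way, in Hadoop 1.x fs.default.name normally lives in core-site.xml,
usually with the hdfs:// scheme, and mapred.job.tracker belongs in
mapred-site.xml rather than hdfs-site.xml. A minimal core-site.xml for this
setup might look like the sketch below, assuming you keep the NameNode on
port 9000.)

core-site.xml
  <configuration>
    <property>
      <!-- default filesystem URI used by HDFS clients -->
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
    </property>
  </configuration>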
>
> mapred-site.xml
> <configuration>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>localhost:9001</value>
>     </property>
>     <property>
>         <name>mapred.child.java.opts</name>
>         <value>-Xmx512m</value>
>     </property>
>     <property>
>         <name>mapred.job.tracker</name>