avoid assigning hbase-access-operation map/reduce slot to the tasktracker? (MapReduce user mailing list)


Jameson Li 2011-03-21, 11:30
Harsh J 2011-03-21, 12:08
Re: avoid assigning hbase-access-operation map/reduce slot to the tasktracker?
Hello,

To Harsh J <[EMAIL PROTECTED]>:
I am sorry, my earlier description was wrong.
I have not set "HBASE_MANAGES_ZK=true" in hbase-env.sh; the actual setting is "#HBASE_MANAGES_ZK=true" (commented out).
I just assumed its default value is "HBASE_MANAGES_ZK=true".

My Hadoop version is based on the hadoop-0.20.2 release, with the patches HADOOP-4675, HADOOP-5745, MAPREDUCE-1070, MAPREDUCE-551, and MAPREDUCE-1089 applied.
My HBase version is hbase-0.20.6.

Below is the full configuration.

Nine of the nodes have the following HBase configuration:
hbase-env.sh:
export HBASE_HOME=/opt/hbase
export JAVA_HOME=/opt/jdk
export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
export HBASE_PID_DIR=/hadoop/pids
# export HBASE_MANAGES_ZK=true

hbase-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node01:54310</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node01:54310/hbase</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>node01:60010</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node01,node02,node03,node04,node05,node06,node07,node08,node09</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
...
</configuration>

regionservers:
node01
node02
node03
node04
node05
node06
node07
node08
node09

All of the nodes (10 nodes) have the following Hadoop (HDFS and MapReduce) configuration:
hadoop-env.sh:
export JAVA_HOME=/opt/jdk
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
export HADOOP_PID_DIR=/hadoop/pids

mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>node07:9001</value>
  </property>
  ...
</configuration>

core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node01:54310</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop</value>
  </property>
</configuration>

slaves:
node01
node02
node03
node04
node05
node06
node07
node08
node09
node10

masters:
node08

Every node is both a datanode and a tasktracker, but I only configured 9 of them as region servers and ZooKeeper nodes.
When I run a map/reduce job that accesses HBase, the node that has no regionserver and ZooKeeper configuration returns the error below.
Should I avoid assigning HBase-accessing map/reduce slots to that tasktracker?
Or do I have to make every datanode (or tasktracker?) a ZooKeeper node as well?
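
For illustration, here is a minimal sketch (the class name is a placeholder, not code from my actual job) of why a node without a local hbase-site.xml falls back to localhost, and how the quorum could instead be set explicitly on the configuration the job is built from:

import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuorumSketch {
  public static void main(String[] args) {
    // new HBaseConfiguration() loads hbase-default.xml plus hbase-site.xml
    // when the latter is on the classpath. On a node with no hbase-site.xml,
    // hbase.zookeeper.quorum keeps its default value "localhost", which is
    // the localhost:2181 connection attempt seen in the log below.
    HBaseConfiguration conf = new HBaseConfiguration();
    System.out.println("resolved quorum = " + conf.get("hbase.zookeeper.quorum"));

    // Setting the quorum explicitly (values copied from the hbase-site.xml
    // above) on the configuration the job is created from would send it to
    // every task, so a tasktracker without a local HBase configuration can
    // still reach the real quorum.
    conf.set("hbase.zookeeper.quorum",
        "node01,node02,node03,node04,node05,node06,node07,node08,node09");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
  }
}

Run on one of the nine configured nodes this should print the node01..node09 list; on the node without the HBase configuration it prints only localhost.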

Node10 got the error below (the other nodes are normal):
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.1-942149, built on 05/07/2010 17:14 GMT
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=aaa
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_20
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=....
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=...
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=...
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=...
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.18-128.el5
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=hadoop
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=...
2011-03-21 17:17:39,883 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=..
2011-03-21 17:17:39,885 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.hadoop.hbase.client.HConnectionManager$ClientZKWatcher@6d372656
2011-03-21 17:17:39,901 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
......
2011-03-21 17:18:44,364 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
2011-03-21 17:18:44,465 WARN org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Failed to create /hbase
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:809)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:837)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:405)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:432)
    at org.apache.hadoop.hbase.zooke