MapReduce >> mail # user >> No live nodes on hdfs cluster


No live nodes on hdfs cluster
Hi!

I've followed the Hadoop cluster tutorial on the Hadoop site (Hadoop 1.1.1 on
64-bit machines with OpenJDK 1.6). I've set up 1 namenode, 1 jobtracker,
and 3 slaves acting as datanode and tasktracker.

I have a problem setting up HDFS on the cluster: the DFS daemons start fine
on the namenode and datanodes, but when I go to http://namenode:50070/
I see this:

Configured Capacity : 0 KB
DFS Used : 0 KB
Non DFS Used : 0 KB
DFS Remaining : 0 KB
DFS Used% : 100 %
DFS Remaining% : 0 %
Live Nodes : 0
Dead Nodes : 0
Decommissioning Nodes : 0
Number of Under-Replicated Blocks : 0

I've read that disk space could be a problem, but I've checked: there are
10 GB of free space on each datanode.
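For reference, the datanodes find the namenode through fs.default.name in core-site.xml. Mine should look roughly like this, following the tutorial (ncepspa119 is the namenode host from the logs below; port 9000 is the tutorial's default, so the actual port may differ):

```xml
<!-- core-site.xml on every node: datanodes use fs.default.name to locate
     the namenode. ncepspa119 is the namenode host from the startup logs;
     port 9000 is the tutorial default and may need adjusting. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ncepspa119:9000</value>
  </property>
</configuration>
```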

Here are the logs from one of the datanodes:

/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ncepspa117/172.16.140.117
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.1.1
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r
1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
2013-02-08 16:36:49,881 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2013-02-08 16:36:49,892 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2013-02-08 16:36:49,892 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2013-02-08 16:36:49,893 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
started
2013-02-08 16:36:49,985 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2013-02-08 16:36:50,328 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
FSDatasetStatusMBean
2013-02-08 16:36:50,340 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened data transfer
server at 50010
2013-02-08 16:36:50,343 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2013-02-08 16:36:50,388 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2013-02-08 16:36:50,448 INFO org.apache.hadoop.http.HttpServer: Added
global filtersafety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-08 16:36:50,459 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled =
false
2013-02-08 16:36:50,460 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is
-1. Opening the listener on 50075
2013-02-08 16:36:50,460 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50075
webServer.getConnectors()[0].getLocalPort() returned 50075
2013-02-08 16:36:50,460 INFO org.apache.hadoop.http.HttpServer: Jetty
bound to port 50075
2013-02-08 16:36:50,460 INFO org.mortbay.log: jetty-6.1.26
2013-02-08 16:36:50,729 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50075
2013-02-08 16:36:50,735 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
registered.
2013-02-08 16:36:50,735 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
DataNode registered.
2013-02-08 16:36:50,756 INFO org.apache.hadoop.ipc.Server: Starting
SocketReader
2013-02-08 16:36:50,758 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
RpcDetailedActivityForPort50020 registered.
2013-02-08 16:36:50,758 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
RpcActivityForPort50020 registered.
2013-02-08 16:36:50,760 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
DatanodeRegistration(ncepspa117.nce.amadeus.net:50010, storageID=,
infoPort=50075, ipcPort=50020)
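The datanode log stops at dnRegistration with an empty storageID, i.e. it is still waiting to register with the namenode. One sanity check I can run on each datanode is whether its own hostname resolves to a real address rather than loopback (a generic Linux check, not Hadoop-specific; if /etc/hosts maps the hostname to 127.0.0.1, the datanode registers with an address the namenode cannot reach):

```shell
# Check that this host's name resolves to a routable address.
# A loopback mapping (e.g. in /etc/hosts) is a common cause of
# datanodes starting cleanly but never appearing as live nodes.
check_hostname() {
  name=$(hostname)
  addr=$(getent hosts "$name" | awk '{print $1; exit}')
  case "$addr" in
    "")        echo "WARN: $name does not resolve" ;;
    127.*|::1) echo "WARN: $name resolves to loopback ($addr)" ;;
    *)         echo "OK: $name resolves to $addr" ;;
  esac
}
check_hostname
```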

Here are the logs from the namenode:

/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ncepspa119/172.16.140.119
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.1.1
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r
1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
2013-02-08 16:36:48,124 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2013-02-08 16:36:48,136 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2013-02-08 16:36:48,137 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2013-02-08 16:36:48,137 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
started
2013-02-08 16:36:48,287 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2013-02-08 16:36:48,297 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
registered.
2013-02-08 16:36:48,298 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
NameNode registered.
2013-02-08 16:36:48,323 INFO org.apache.hadoop.hdfs.util.GSet: VM type   =
64-bit
2013-02-08 16:36:48,323 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
memory = 17.77875 MB
2013-02-08 16:36:48,323 INFO org.apache.hadoop.hdfs.util.GSet: capacity  =
2^21 = 2097152 entries
2013-02-08 16:36:48,323 INFO org.apache.hadoop.hdfs.util.GSet:
recommended=2097152, actual=2097152
2013-02-08 16:36:48,359 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=psporacle
2013-02-08 16:36:48,360 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-08 16:36:48,360 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2013-02-08 16:36:48,366 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.bl