Subject: datanode daemon


Guys,

I finally solved all the errors in ...datanode*.log that appeared after trying to start the node with "service datanode start".
The errors were:
- conflicting NN/DN IDs - solved by reformatting the NN.
- could not connect to 127.0.0.1:8020 - Connection refused - solved by correcting a typo in hdfs-site.xml under dfs.namenode.http-address; it somehow had the default value instead of localhost. (Running in pseudo-distributed mode.)
- conf was pointing to the wrong symlink - solved by running alternatives --set hadoop-conf <conf.myconf> (see the snippets after this list).
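
For reference, here is roughly what the corrected property looks like in my hdfs-site.xml (50070 is just the usual NameNode HTTP default, so treat the port as an assumption for your setup):

    <property>
      <name>dfs.namenode.http-address</name>
      <value>localhost:50070</value>
    </property>

And to confirm the alternatives symlink now points at the right conf directory:

    alternatives --display hadoop-conf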

However, when I run "service --status-all", I still see the datanode [FAILED] message. All the others (NN, SNN, JT, TT) are running [OK].
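
To cross-check whether the DataNode JVM is actually up, independently of what the init script reports, I look at the Java processes directly (jps ships with the JDK; the ps line is just a fallback, with the bracket trick so grep does not match itself):

    sudo jps | grep -i datanode
    ps -ef | grep [D]ataNode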
1. Starting the daemons, all seems OK:
Starting Hadoop datanode:                                  [  OK  ]
starting datanode, logging to /home/hadoop/logs/hadoop-root-datanode-ip-10-204-47-138.out
Starting Hadoop namenode:                                  [  OK  ]
starting namenode, logging to /home/hadoop/logs/hadoop-hdfs-namenode-ip-10-204-47-138.out
Starting Hadoop secondarynamenode:                         [  OK  ]
starting secondarynamenode, logging to /home/hadoop/logs/hadoop-hdfs-secondarynamenode-ip-10-204-47-138.out
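
Since stderr from a failed start often ends up only in the .out file rather than the .log, I also tail it (path taken from the startup output above):

    tail -n 50 /home/hadoop/logs/hadoop-root-datanode-ip-10-204-47-138.out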

2. Running "service --status-all", I get:
Hadoop datanode is not running                             [FAILED]
Hadoop namenode is running                                 [  OK  ]
Hadoop secondarynamenode is running                        [  OK  ]

3. Here is the log file on the DN:
2012-10-25 15:33:37,554 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ip-10-204-47-138.ec2.internal/10.204.47.138
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.0.0-cdh4.1.1
STARTUP_MSG:   classpath = /etc/ha..........
...............................
..............................
2012-10-25 15:33:38,098 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/hadoop/dfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2012-10-25 15:33:41,589 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-10-25 15:33:42,125 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-10-25 15:33:42,125 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2012-10-25 15:33:42,204 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is ip-10-204-47-138.ec2.internal
2012-10-25 15:33:42,319 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2012-10-25 15:33:42,323 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2012-10-25 15:33:42,412 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-10-25 15:33:42,603 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2012-10-25 15:33:42,607 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2012-10-25 15:33:42,607 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2012-10-25 15:33:42,607 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2012-10-25 15:33:42,682 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2012-10-25 15:33:42,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2012-10-25 15:33:42,690 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2012-10-25 15:33:42,690 INFO org.mortbay.log: jetty-6.1.26.cloudera.2
2012-10-25 15:33:43,601 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2012-10-25 15:33:43,787 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2012-10-25 15:33:43,905 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2012-10-25 15:33:43,917 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2012-10-25 15:33:43,943 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2012-10-25 15:33:43,950 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/hadoop/dfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2012-10-25 15:33:43,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:8020 starting to offer service
2012-10-25 15:33:44,297 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2012-10-25 15:33:44,304 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2012-10-25 15:33:45,551 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2012-10-25 15:33:46,605 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2012-10-25 15:33:47,865 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2012-10-25 15:33:48,945 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximu
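
Given the retries above, a quick way to confirm whether the NameNode RPC port is actually open on this box (assuming a Linux machine with netstat available) is:

    netstat -tlnp | grep 8020

If nothing is listening on 8020, the DataNode will keep retrying exactly like this until the retry count (maxRetries=10 per the policy in the log) runs out.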