RE: datanode daemon SOLVED
Gents,

I need to share my embarrassment with you... I solved this issue. How?

Well, while following the installation instructions I thought I had installed all the daemons, but after checking the init.d folder I could not find the hadoop-hdfs-datanode script, so (thinking I had accidentally deleted it) I merely scp'ed the script over from another node.

I tried in vain to start that node for at least 13 hours until, while installing Hadoop on a new node, I realised that I had missed the datanode installation altogether.
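
(A quick sanity check for anyone hitting the same wall: before blaming the init script, ask the package manager whether the daemon is installed at all. This assumes CDH-style RPM packaging like mine:

rpm -q hadoop-hdfs-datanode     # reports "not installed" if the package is missing
ls /etc/init.d/hadoop-hdfs-*    # the datanode script only exists once the package is in

If rpm says the package is not installed, copying the init script from another node will not help, because the daemon's jars and default configuration are missing too.)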

I was supposed to run:
sudo yum install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode

but I ran only:
sudo yum install hadoop-0.20-mapreduce-tasktracker

After installing the datanode package and reformatting the NN, the datanode started like a new engine.
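
For the record, the full fix on my box was roughly the following (service and package names are the CDH4 defaults, so adjust to taste; note that reformatting wipes any existing HDFS data, which was fine here since the cluster was empty):

sudo yum install hadoop-hdfs-datanode        # the step I had skipped
sudo -u hdfs hdfs namenode -format           # gives NN and DN a matching namespace ID
sudo service hadoop-hdfs-namenode restart
sudo service hadoop-hdfs-datanode start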

Silly me. Oh well. :) Calm seas do not make good sailors.

AK47

From: Kartashov, Andy
Sent: Thursday, October 25, 2012 3:40 PM
To: [EMAIL PROTECTED]
Subject: datanode daemon

Guys,

I finally solved ALL the errors in ...datanode*.log after trying to start the node with "service datanode start".
The errors were:
- conflicting NN/DN namespace IDs - solved by reformatting the NN.
- could not connect to 127.0.0.1:8020 - Connection refused - solved by correcting a typo in hdfs-site.xml under dfs.namenode.http-address, which somehow had the default value instead of localhost (I am running pseudo-distributed mode); see the sketch after this list.
- conf was pointing to the wrong symlink - solved by running alternatives --set hadoop-conf <conf.myconf>
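
In case it helps the next person, the corrected property in my pseudo-distributed hdfs-site.xml looks roughly like this (50070 is the stock HTTP port for this property; adjust if yours differs):

<property>
  <name>dfs.namenode.http-address</name>
  <value>localhost:50070</value>
</property>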

However, when I run "service --status-all", I still see the datanode [FAILED] message. All the others (NN, SNN, JT, TT) are running [OK].
1. Starting daemons, all seems OK:
Starting Hadoop datanode:                                  [  OK  ]
starting datanode, logging to /home/hadoop/logs/hadoop-root-datanode-ip-10-204-47-138.out
Starting Hadoop namenode:                                  [  OK  ]
starting namenode, logging to /home/hadoop/logs/hadoop-hdfs-namenode-ip-10-204-47-138.out
Starting Hadoop secondarynamenode:                         [  OK  ]
starting secondarynamenode, logging to /home/hadoop/logs/hadoop-hdfs-secondarynamenode-ip-10-204-47-138.out

2. Running "service --status-all", I get:
Hadoop datanode is not running                             [FAILED]
Hadoop namenode is running                                 [  OK  ]
Hadoop secondarynamenode is running                        [  OK  ]
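
(Side note: when the start script and the status check disagree like this, one way to see the truth, assuming the JDK's jps tool is on the PATH, is to list the running Java processes directly:

sudo jps    # a healthy node shows a DataNode entry alongside NameNode etc.

If no DataNode process appears, the [ OK ] printed at startup was cosmetic and the daemon never actually stayed up.)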

3. Here is the log file on the DN:
2012-10-25 15:33:37,554 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ip-10-204-47-138.ec2.internal/10.204.47.138
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.0.0-cdh4.1.1
STARTUP_MSG:   classpath = /etc/ha..........
...............................
..............................
2012-10-25 15:33:38,098 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/hadoop/dfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2012-10-25 15:33:41,589 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-10-25 15:33:42,125 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-10-25 15:33:42,125 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2012-10-25 15:33:42,204 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is ip-10-204-47-138.ec2.internal
2012-10-25 15:33:42,319 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2012-10-25 15:33:42,323 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2012-10-25 15:33:42,412 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-10-25 15:33:42,603 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2012-10-25 15:33:42,607 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2012-10-25 15:33:42,607 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2012-10-25 15:33:42,607 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2012-10-25 15:33:42,682 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2012-10-25 15:33:42,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2012-10-25 15:33:42,690 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2012-10-25 15:33:42,690 INFO org.mortbay.log: jetty-6.1.26.cloudera.2
2012-10-25 15:33:43,601 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2012-10-25 15:33:43,787 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2012-10-25 15:33:43,905 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2012-10-25 15:33:43,917 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2012-10-25 15:33:43,943 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2012-10-25 15:33:43,950 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/hadoop/dfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2012-10-25 15:33:43,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:8020 starting to offer service
2012-10-25 15:33:44,297 INFO org.apac