MapReduce, mail # user - Namenode shutting down while creating cluster


Re: Namenode shutting down while creating cluster
Balaji Narayanan 2012-10-20, 06:12
This looks like an issue with hostname resolution of sk.r252.0. Can you
verify that it resolves on the NameNode host?
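
A quick way to check is something like the following sketch (it assumes the
hostname sk.r252.0 and the address 10.0.2.15 shown in your startup log; adjust
to your actual values):

```shell
#!/bin/sh
# Check whether the NameNode hostname from the log resolves on this host.
# "getent hosts" consults the same resolver path (/etc/hosts, then DNS)
# that the JVM typically uses.
if getent hosts sk.r252.0; then
    echo "sk.r252.0 resolves"
else
    echo "sk.r252.0 does NOT resolve"
    # One common fix is a static /etc/hosts entry matching the IP in the log:
    #   echo "10.0.2.15  sk.r252.0" | sudo tee -a /etc/hosts
fi
```

If it does not resolve, add the mapping (for example in /etc/hosts as above, or
in your DNS) on the NameNode and all DataNodes, then restart the NameNode.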

On Friday, October 19, 2012, Sundeep Kambhmapati wrote:

> Hi Users,
> My name node is shutting down soon after it is started.
> Here is the log. Can someone please help me?
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0
> loaded in 0 seconds.
> 2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 saved in 0 seconds.
> 2012-10-19 23:20:43,612 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 758 msecs
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
> = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> over-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer:

Thanks
-balaji

http://balajin.net/blog/
http://flic.kr/balajijegan