MapReduce, mail # user - Getting error unrecognized option -jvm on starting nodemanager


Re: Getting error unrecognized option -jvm on starting nodemanager
Sitaraman Vilayannur 2013-12-25, 04:18
Hi Manoj,
The directory is empty.
[root@localhost logs]# cd /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/
[root@localhost namenode]# pwd
/usr/local/Software/hadoop-2.2.0/data/hdfs/namenode
[root@localhost namenode]# ls
[root@localhost namenode]#

But I still see the lock "acquired" message in the logs:
2013-12-25 09:44:42,415 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2013-12-25 09:44:42,417 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2013-12-25 09:44:42,417 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2013-12-25 09:44:42,417 INFO org.apache.hadoop.util.GSet: 2.0% max memory = 889 MB
2013-12-25 09:44:42,417 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2013-12-25 09:44:42,421 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2013-12-25 09:44:42,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
2013-12-25 09:44:42,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2013-12-25 09:44:42,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2013-12-25 09:44:42,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2013-12-25 09:44:42,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2013-12-25 09:44:42,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2013-12-25 09:44:42,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2013-12-25 09:44:42,426 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = sitaraman (auth:SIMPLE)
2013-12-25 09:44:42,426 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2013-12-25 09:44:42,426 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-25 09:44:42,426 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-25 09:44:42,427 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-25 09:44:42,547 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2013-12-25 09:44:42,547 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2013-12-25 09:44:42,547 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 889 MB
2013-12-25 09:44:42,547 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2013-12-25 09:44:42,548 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-25 09:44:42,550 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-25 09:44:42,550 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-25 09:44:42,550 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2013-12-25 09:44:42,551 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2013-12-25 09:44:42,551 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2013-12-25 09:44:42,553 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2013-12-25 09:44:42,553 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2013-12-25 09:44:42,553 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 889 MB
2013-12-25 09:44:42,553 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2013-12-25 09:44:42,562 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock acquired by nodename [EMAIL PROTECTED]ldomain
2013-12-25 09:44:42,564 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
2013-12-25 09:44:42,564 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-12-25 09:44:42,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-12-25 09:44:42,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-12-25 09:44:42,565 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2013-12-25 09:44:42,567 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-12-25 09:44:42,568 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
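
For what it's worth, the empty listing above is consistent with the final exception: after formatting, a NameNode storage directory contains a current/VERSION file, and an empty directory means formatting never happened. A minimal sketch of that check (the temp directory here is just a stand-in for /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode):

```shell
# An unformatted (empty) NameNode storage dir has no current/VERSION file.
# NN_DIR is a throwaway stand-in for the real dfs.namenode.name.dir path.
NN_DIR=$(mktemp -d)
if [ ! -f "$NN_DIR/current/VERSION" ]; then
    echo "not formatted: run 'hdfs namenode -format' before starting the NameNode"
fi
```

Note that `hdfs namenode -format` initializes that directory and erases any existing HDFS metadata, so it should only be run on a fresh install.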
On 12/25/13, Manoj Babu <[EMAIL PROTECTED]> wrote: