ERROR raised during backup node startup
Hi all,

 I am testing the backup node (CDH4.1.2). After applying the necessary
settings, I started the backup node.
 Everything went well, except that an ERROR was raised [1].
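
 For context, a minimal backup-node configuration sketch for hdfs-site.xml is
below. This is only an illustration, not my exact file: the bind addresses and
the directory path are placeholders (the 50100 port matches the Socket Reader
line in the log, and the WARN lines suggest the storage directory should be
given in file:// URI form).

```xml
<!-- hdfs-site.xml on the backup node: sketch only; addresses/paths are placeholders -->
<configuration>
  <!-- RPC address the backup node listens on (default 0.0.0.0:50100) -->
  <property>
    <name>dfs.namenode.backup.address</name>
    <value>0.0.0.0:50100</value>
  </property>
  <!-- HTTP address of the backup node (default 0.0.0.0:50105) -->
  <property>
    <name>dfs.namenode.backup.http-address</name>
    <value>0.0.0.0:50105</value>
  </property>
  <!-- Name directory; the WARN in the log asks for a file:// URI here -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/namedir</value>
  </property>
</configuration>
```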

 What is the problem, and what is lib.MethodMetric used for?
 I also noticed "Bad state: UNINITIALIZED". I wonder under what conditions
the UNINITIALIZED state can occur.

 Any help would be appreciated.
[1]
13/02/18 18:11:28 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/02/18 18:11:28 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
13/02/18 18:11:28 INFO impl.MetricsSystemImpl: BackupNode metrics system started
13/02/18 18:11:29 WARN common.Util: Path /home/hadoop/namedir should be specified as a URI in configuration files. Please update hdfs configuration.
13/02/18 18:11:29 WARN common.Util: Path /home/hadoop/namedir should be specified as a URI in configuration files. Please update hdfs configuration.
13/02/18 18:11:29 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
13/02/18 18:11:29 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/02/18 18:11:29 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/02/18 18:11:29 INFO blockmanagement.BlockManager: defaultReplication         = 3
13/02/18 18:11:29 INFO blockmanagement.BlockManager: maxReplication             = 512
13/02/18 18:11:29 INFO blockmanagement.BlockManager: minReplication             = 1
13/02/18 18:11:29 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/02/18 18:11:29 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/02/18 18:11:29 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/02/18 18:11:29 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
13/02/18 18:11:29 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
13/02/18 18:11:29 INFO namenode.FSNamesystem: supergroup          = supergroup
13/02/18 18:11:29 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/02/18 18:11:29 INFO namenode.FSNamesystem: HA Enabled: false
13/02/18 18:11:29 INFO namenode.FSNamesystem: Append Enabled: true
13/02/18 18:11:29 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/02/18 18:11:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/02/18 18:11:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/02/18 18:11:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
13/02/18 18:11:29 INFO common.Storage: Lock on /home/hadoop/namedir/in_use.lock acquired by nodename 19850@Hadoop-database
13/02/18 18:11:29 INFO ipc.Server: Starting Socket Reader #1 for port 50100
13/02/18 18:11:29 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
13/02/18 18:11:29 WARN common.Util: Path /home/hadoop/namedir should be specified as a URI in configuration files. Please update hdfs configuration.
13/02/18 18:11:29 INFO namenode.FSNamesystem: Number of blocks under construction: 0
13/02/18 18:11:29 INFO namenode.FSNamesystem: initializing replication queues
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Total number of blocks            = 0
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Number of invalid blocks          = 0
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Number of over-replicated blocks  = 0
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Number of blocks being written    = 0
13/02/18 18:11:29 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 8 msec
13/02/18 18:11:29 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
13/02/18 18:11:29 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
13/02/18 18:11:29 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
13/02/18 18:11:29 ERROR lib.MethodMetric: Error invoking method getTransactionsSinceLastLogRoll
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
	at org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
	at org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387)
	at org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:78)
	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194)
	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:171)
	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:150)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:321)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:307)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
	at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57)
	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:220)
	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:95)
	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:244)
	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:222)
	at org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:601)
	at org.apache.hadoop.hdfs.se
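
One observation about the trace: java.lang.reflect.InvocationTargetException is
only a reflection wrapper, so the real failure is whatever
getTransactionsSinceLastLogRoll itself threw (MethodMetric invokes the metric
getter reflectively). A small standalone example, unrelated to Hadoop's
internals, shows this wrapping; the class and the "edit log not open" message
are made up for illustration:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class ReflectDemo {
    // Hypothetical getter standing in for getTransactionsSinceLastLogRoll;
    // it fails the way a metric method might before the node is fully up.
    public long getTransactionsSinceLastLogRoll() {
        throw new IllegalStateException("edit log not open");
    }

    public static void main(String[] args) throws Exception {
        Method m = ReflectDemo.class.getMethod("getTransactionsSinceLastLogRoll");
        try {
            m.invoke(new ReflectDemo());
        } catch (InvocationTargetException e) {
            // The interesting error is the wrapped cause, not the
            // InvocationTargetException printed at the top of the trace.
            System.out.println("cause: " + e.getCause().getMessage());
        }
    }
}
```

So the cause of the ERROR is presumably further down in the wrapped exception,
which the excerpt above cuts off.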