unable to restart namenode on hadoop 1.0.4


Ravi Shetye 2013-09-30, 07:33
Manoj Sah 2013-09-30, 08:01
Re: unable to restart namenode on hadoop 1.0.4
I do not think these are the same issue; please correct me if I am wrong.
The SO link is about the SNN being unable to establish communication with the NN.
In my case I am unable to launch the NN itself.

The NPE occurs at the line marked below, but I am not sure how to go about
resolving it.

  /** Add a node child to the inodes at index pos.
   * Its ancestors are stored at [0, pos-1].
   * QuotaExceededException is thrown if it violates quota limit */
  private <T extends INode> T addChild(INode[] pathComponents, int pos,
      T child, long childDiskspace, boolean inheritPermission,
      boolean checkQuota) throws QuotaExceededException {
    INode.DirCounts counts = new INode.DirCounts();
    child.spaceConsumedInTree(counts);
    if (childDiskspace < 0) {
      childDiskspace = counts.getDsCount();
    }
    updateCount(pathComponents, pos, counts.getNsCount(), childDiskspace,
        checkQuota);
    T addedNode = ((INodeDirectory)pathComponents[pos-1]).addChild(  // <-- marked line (FSDirectory.java:1099)
        child, inheritPermission);
    if (addedNode == null) {
      updateCount(pathComponents, pos, -counts.getNsCount(),
          -childDiskspace, true);
    }
    return addedNode;
  }
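
The stack trace below points at FSDirectory.java:1099, which is the marked addChild(...) call. One plausible way for that line to throw an NPE is if pathComponents[pos-1] is null, i.e. the parent directory inode was never resolved while the image/edits were being loaded. A minimal standalone sketch of that failure mode (the INode/INodeDirectory classes here are simplified stand-ins for illustration only, not the real Hadoop types):

  // Standalone sketch (not Hadoop code): reproduces the suspected failure mode
  // at the marked line, where the parent slot in pathComponents is null.
  public class AddChildNpeSketch {

    static class INode {}

    static class INodeDirectory extends INode {
      INode addChild(INode child) {
        return child;
      }
    }

    // Mirrors the shape of FSDirectory.addChild: the parent inode is expected
    // at pathComponents[pos-1]; if that slot is null, the call below throws NPE.
    static INode addChild(INode[] pathComponents, int pos, INode child) {
      return ((INodeDirectory) pathComponents[pos - 1]).addChild(child);
    }

    public static void main(String[] args) {
      // Parent directory inode missing (null) at index pos-1, e.g. because the
      // path's ancestors were not present when the image/edits were replayed.
      INode[] components = new INode[] { new INodeDirectory(), null };
      addChild(components, 2, new INode());  // throws java.lang.NullPointerException
    }
  }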
On Mon, Sep 30, 2013 at 1:31 PM, Manoj Sah <[EMAIL PROTECTED]> wrote:

> Hi,
> http://stackoverflow.com/questions/5490805/hadoop-nullpointerexcep
>
> try this link
>
> Thanks
> Manoj
>
>
> On Mon, Sep 30, 2013 at 1:03 PM, Ravi Shetye <[EMAIL PROTECTED]> wrote:
>
>> Can someone please help me figure out how to go about debugging this issue?
>> The NN log has the following error stack:
>>
>> 2013-09-30 07:28:42,768 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
>> 2013-09-30 07:28:42,967 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
>> 2013-09-30 07:28:42,972 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
>> 2013-09-30 07:28:42,978 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
>> 2013-09-30 07:28:42,980 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
>> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
>> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 27.3075 MB
>> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
>> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
>> 2013-09-30 07:28:43,084 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
>> 2013-09-30 07:28:43,084 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2013-09-30 07:28:43,084 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
>> 2013-09-30 07:28:43,119 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
>> 2013-09-30 07:28:43,119 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> 2013-09-30 07:28:43,183 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
>> 2013-09-30 07:28:43,207 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
>> 2013-09-30 07:28:43,221 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 528665
>> 2013-09-30 07:28:49,109 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 7
>> 2013-09-30 07:28:49,111 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 79872266 loaded in 5 seconds.
>> 2013-09-30 07:28:49,113 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1099)

RAVI SHETYE