MapReduce >> mail # user >> namenode instantiation error


Re: namenode instantiation error
Format the filesystem first:

bin/hadoop namenode -format

then try to start the namenode :)
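One caveat on that suggestion: formatting re-initializes dfs.name.dir and erases any existing HDFS metadata, so it is only safe on a fresh install. A minimal sketch of the sequence, assuming a CDH3-style pseudo-distributed layout:

```shell
# WARNING: formatting wipes any existing HDFS metadata under dfs.name.dir;
# only do this on a fresh install. Run both commands as the user that owns
# the namenode storage directory, or the lock file cannot be created.
bin/hadoop namenode -format
bin/hadoop namenode
```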

On Thu, Aug 9, 2012 at 3:51 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:

> Hello Anand,
>
>     Is there any specific reason behind not using ssh??
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <[EMAIL PROTECTED]>
> wrote:
> > Hi, I am just learning Hadoop and I am setting up the development
> > environment with CDH3 in pseudo-distributed mode, without any ssh
> > configuration, on CentOS 6.2. I can run the sample programs as usual,
> > but when I try to run the namenode this is the error it logs...
> >
> > [hive@localhost ~]$ hadoop namenode
> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting NameNode
> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on
> > Mon May 7 14:01:59 PDT 2012
> > ************************************************************/
> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> > processName=NameNode, sessionId=null
> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> > NameNodeMeterics using context
> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> > FSNamesystemMetrics using context
> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> > at java.io.RandomAccessFile.open(Native Method)
> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException:
> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> > at java.io.RandomAccessFile.open(Native Method)
> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)


Shashwat Shriparv
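Note that the stack trace quoted above points at a different cause than a missing format: the namenode, started as the hive user, gets "Permission denied" creating /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock. A hedged sketch of the usual fix; the hdfs:hadoop owner is an assumption based on CDH3 packaging defaults, so adjust to whatever user owns HDFS on your box:

```shell
# The in_use.lock file lives under dfs.name.dir; whichever user starts the
# namenode must be able to write that directory.
# Assumption: CDH3 default layout with an 'hdfs' service user.
sudo chown -R hdfs:hadoop /var/lib/hadoop-0.20/cache/hadoop/dfs
sudo -u hdfs hadoop namenode   # start the namenode as the owning user
```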