namenode instantiation error


+
anand sharma 2012-08-09, 10:16
+
Mohammad Tariq 2012-08-09, 10:21
+
shashwat shriparv 2012-08-09, 10:28
+
rahul p 2012-08-09, 11:50
+
Mohammad Tariq 2012-08-09, 11:59
+
rahul p 2012-08-09, 12:57
+
rahul p 2012-08-09, 14:29
+
Mohammad Tariq 2012-08-09, 12:05
+
anand sharma 2012-08-09, 12:41
+
Abhishek 2012-08-09, 12:59
+
anand sharma 2012-08-10, 04:06
+
Owen Duan 2012-08-09, 12:53
+
Nitin Pawar 2012-08-09, 12:58
+
anand sharma 2012-08-10, 04:07
RE: namenode instantiation error
Hi Anand,

It's clearly telling you that the NameNode is not able to access the lock
file inside the name directory:

/var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)

Did you format the NameNode as one user and then start it as another?

Try formatting and starting it from the same user's console.
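For example, something like this will show who owns the name directory and
hand it back to the user that actually runs the NameNode (a sketch; 'hive'
is the user from your log, and the 'hadoop' group is an assumption, adjust
both to your setup):

ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name              # who owns the name dir
ls -l /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock   # who holds the lock
sudo chown -R hive:hadoop /var/lib/hadoop-0.20/cache/hadoop/dfs/name   # assumed owner/group: hive:hadoop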

 

From: anand sharma [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 10, 2012 9:37 AM
To: [EMAIL PROTECTED]
Subject: Re: namenode instantiation error

 

Yes Owen, I did.

On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan <[EMAIL PROTECTED]> wrote:

Have you tried hadoop namenode -format?
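For example (a sketch; run it as the same user that will later start the
daemon, which is 'hive' in your log, and note that formatting wipes any
existing HDFS metadata in the name dir):

sudo -u hive hadoop namenode -format   # format as the NameNode user; 'hive' assumed from the log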

2012/8/9 anand sharma <[EMAIL PROTECTED]>

Yeah Tariq! It's a fresh installation; I'm doing it for the first time. I
hope someone will recognize the error code and the reason for the error.

 

On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:

Hi Anand,

      Have you tried any other Hadoop distribution or version as well? In
that case, first remove the older one and start fresh.

Regards,
    Mohammad Tariq

On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
> Hello Rahul,
>
>    That's great. That's the best way to learn (I am doing the same :) ).
> Since the installation part is over, I would suggest getting yourself
> familiar with HDFS and MapReduce first. Try some basic filesystem
> operations using the HDFS API and run the wordcount program, if you
> haven't done so yet; a quick sketch of those steps follows. Then move
> ahead.
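> For example, the same ground can be covered from the shell before touching
> the Java API (a sketch; the examples jar path is a typical CDH3 location
> and may differ on your install):
>
>   hadoop fs -mkdir /user/hive/input            # create an input dir in HDFS
>   hadoop fs -put /etc/hosts /user/hive/input   # copy a local file into HDFS
>   hadoop fs -ls /user/hive/input               # verify the upload
>   hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount \
>       /user/hive/input /user/hive/output       # run the bundled wordcount job
>   hadoop fs -cat '/user/hive/output/part-*'    # inspect the word counts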
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <[EMAIL PROTECTED]> wrote:
>> Hi Tariq,
>>
>> I am also new to Hadoop and trying to teach myself; can anyone help me
>> with the same? I have installed CDH3.
>>
>>
>>
>> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>
>>> Hello Anand,
>>>
>>>     Is there any specific reason behind not using ssh?
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <[EMAIL PROTECTED]>
>>> wrote:
>>> > Hi, I am just learning Hadoop and I am setting up the development
>>> > environment with CDH3 in pseudo-distributed mode, without any ssh
>>> > configuration, on CentOS 6.2. I can run the sample programs as usual,
>>> > but when I try to run the namenode, this is the error it logs...
>>> >
>>> > [hive@localhost ~]$ hadoop namenode
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> > /************************************************************
>>> > STARTUP_MSG: Starting NameNode
>>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> > STARTUP_MSG:   args = []
>>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
>>> > May 7 14:01:59 PDT 2012
>>> > ************************************************************/
>>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>>> > processName=NameNode, sessionId=null
>>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>>> > NameNodeMeterics using context
>>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>>> > FSNamesystemMetrics using context
>>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>>> > java.io.IOException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java)
>>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java)
>>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.IOException:
>>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java)
>>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java)
>>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> > /************************************************************
>>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>> > ************************************************************/
Nitin Pawar 2012-08-09, 10:20
anand sharma 2012-08-10, 11:05
Harsh J 2012-08-10, 12:04
Mohammad Tariq 2012-08-10, 14:21
anand sharma 2012-08-11, 12:43
Mohamed Trad 2012-08-11, 15:55
shashwat shriparv 2012-08-10, 07:36
anand sharma 2012-08-10, 10:59