MapReduce >> mail # user >> namenode instantiation error


+ anand sharma 2012-08-09, 10:16
+ Mohammad Tariq 2012-08-09, 10:21
+ shashwat shriparv 2012-08-09, 10:28
+ rahul p 2012-08-09, 11:50
+ Mohammad Tariq 2012-08-09, 11:59
+ rahul p 2012-08-09, 12:57
+ rahul p 2012-08-09, 14:29
+ Mohammad Tariq 2012-08-09, 12:05
+ anand sharma 2012-08-09, 12:41
+ Abhishek 2012-08-09, 12:59
+ anand sharma 2012-08-10, 04:06
+ Owen Duan 2012-08-09, 12:53
+ Nitin Pawar 2012-08-09, 12:58
+ anand sharma 2012-08-10, 04:07
+ Vinayakumar B 2012-08-10, 04:44
+ Nitin Pawar 2012-08-09, 10:20
+ anand sharma 2012-08-10, 11:05
- Harsh J 2012-08-10, 12:04
Re: namenode instantiation error
Hello Anand,

   Sorry for being unresponsive. You have in any case received proper
comments from the experts. I would just like to add one thing: since you
want to reduce complexity, I would suggest that you configure SSH. It is
a one-time effort, but it saves a lot of time and work later; otherwise
you have to log in to each node even for the smallest task. SSH
configuration is quite straightforward, and if you need some help with
it you can go here:
http://cloudfront.blogspot.in/2012/07/how-to-setup-and-configure-ssh-on-ubuntu.html

Regards,
    Mohammad Tariq
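
[Aside: passwordless-SSH setups like the one the link above covers generally come down to generating a keypair and appending the public key to the target user's ~/.ssh/authorized_keys with tight permissions. A minimal sketch follows; the real commands are left as comments because they need a reachable node, and /tmp/ssh_demo is a hypothetical stand-in for a home directory.]

```shell
# Real commands (assume an OpenSSH client; shown as comments since they
# need a live remote node):
#   ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a passphrase-less keypair
#   ssh-copy-id user@node                      # append the public key remotely
# What ssh-copy-id effectively does, demonstrated on a stand-in path:
demo=/tmp/ssh_demo
mkdir -p "$demo/.ssh"
echo "ssh-rsa AAAAB3...fakekey... user@host" >> "$demo/.ssh/authorized_keys"
chmod 700 "$demo/.ssh"                  # sshd rejects keys if these are too open
chmod 600 "$demo/.ssh/authorized_keys"
ls -l "$demo/.ssh/authorized_keys"
```

[Note that sshd checks those modes strictly: a group- or world-writable ~/.ssh or authorized_keys makes it fall back to password prompts.]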
On Fri, Aug 10, 2012 at 5:34 PM, Harsh J <[EMAIL PROTECTED]> wrote:
> You do not need SSH generally. See
> http://wiki.apache.org/hadoop/FAQ#Does_Hadoop_require_SSH.3F
>
> 1. Your original issue is that you are starting the NameNode as the
> completely wrong user. Start it as the "hdfs" user, in a packaged
> environment. Run "sudo -u hdfs hadoop namenode" to start it in
> foreground, or simply run "sudo service hadoop-0.20-namenode start" to
> start it in the background. This will fix it up for you.
>
> 2. Your format was aborted because, in 0.20.x/1.x, the input required was
> case-sensitive, while from 2.x onwards the input is case-insensitive.
> So if you had typed "Y" instead of "y", it would have succeeded.
>
> HTH!
>
> On Fri, Aug 10, 2012 at 4:35 PM, anand sharma <[EMAIL PROTECTED]> wrote:
>> And here are the permissions on the file that is causing the problem:
>>
>> [root@localhost hive]# ls -l
>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
>> -rwxrwxrwx. 1 hdfs hdfs 0 Aug 10 21:23
>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
>>
>>
>>
>>
>>
>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <[EMAIL PROTECTED]> wrote:
>>>
>>> Hi, I am just learning Hadoop, and I am setting up a development
>>> environment with CDH3 in pseudo-distributed mode, without any ssh
>>> configuration, on CentOS 6.2. I can run the sample programs as usual,
>>> but when I try to run the namenode, this is the error it logs...
>>>
>>> [hive@localhost ~]$ hadoop namenode
>>> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting NameNode
>>> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
>>> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
>>> 14:01:59 PDT 2012
>>> ************************************************************/
>>> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>>> processName=NameNode, sessionId=null
>>> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>>> NameNodeMeterics using context
>>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> dfs.block.invalidate.limit=1000
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>>> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>>> FSNamesystemMetrics using context
>>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization
>>> failed.
>>> java.io.FileNotFoundException:
>>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> at java.io.RandomAccessFile.open(Native Method)
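
[Harsh's second point above can be sketched in plain shell. This is a toy re-implementation of the -format confirmation check, not Hadoop's actual code: 0.20.x/1.x accepts only an upper-case "Y" at the re-format prompt, while 2.x onwards accepts either case.]

```shell
# Toy sketch of the -format confirmation check (not Hadoop's actual code).
format_ok_1x() { [ "$1" = "Y" ]; }                      # 1.x: upper-case only
format_ok_2x() { [ "$1" = "Y" ] || [ "$1" = "y" ]; }    # 2.x: either case

for answer in Y y; do
  format_ok_1x "$answer" && echo "1.x with '$answer': proceeds" \
                         || echo "1.x with '$answer': aborts"
  format_ok_2x "$answer" && echo "2.x with '$answer': proceeds" \
                         || echo "2.x with '$answer': aborts"
done
```

[As for the Permission denied error itself, Harsh's first point is the cure: in the packaged CDH3 install the storage directories belong to the hdfs user, so the daemon must run as hdfs ("sudo -u hdfs hadoop namenode"), not as hive.]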
+ anand sharma 2012-08-11, 12:43
+ Mohamed Trad 2012-08-11, 15:55
+ shashwat shriparv 2012-08-10, 07:36
+ anand sharma 2012-08-10, 10:59