namenode not starting (MapReduce user mailing list)


Thread:
Abhay Ratnaparkhi       2012-08-24, 07:28
Bejoy KS                2012-08-24, 07:31
vivek                   2012-08-24, 07:31
Nitin Pawar             2012-08-24, 07:30
Håvard Wahl Kongsgård   2012-08-24, 12:37
Harsh J                 2012-08-25, 14:15
Abhay Ratnaparkhi       2012-08-27, 05:49
Harsh J                 2012-08-27, 07:30
Leo Leung               2012-08-27, 17:34
Re: namenode not starting
Hello Abhay,

    Along with dfs.name.dir, also include dfs.data.dir in hdfs-site.xml.
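
For reference, a minimal hdfs-site.xml sketch with both properties pointed at persistent locations might look like the following (the /data/hadoop paths are only placeholders, not a recommendation for any particular layout):

<property>
  <name>dfs.name.dir</name>
  <!-- placeholder path; the point is to keep it off /tmp-backed storage -->
  <value>/data/hadoop/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <!-- placeholder path for DataNode block storage -->
  <value>/data/hadoop/data</value>
</property>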

On Monday, August 27, 2012, Abhay Ratnaparkhi <[EMAIL PROTECTED]>
wrote:
> Thank you Harsh,
> I have set "dfs.name.dir" explicitly. I still don't know why the data
> loss happened.
> <property>
>   <name>dfs.name.dir</name>
>   <value>/wsadfs/${host.name}/name</value>
>   <description>Determines where on the local filesystem the DFS name node
>       should store the name table.  If this is a comma-delimited list
>       of directories then the name table is replicated in all of the
>       directories, for redundancy. </description>
> </property>
> The secondary namenode was on the same machine as the namenode. Does
> this affect anything, since the path of "dfs.name.dir" was the same?
> I have now configured another machine as secondary namenode.
> I have now formatted the filesystem since I did not see any way of
> recovering it.
> I have some questions.
> 1. Apart from setting up a secondary namenode, what other techniques
> are used for backing up the namenode directory?
> 2. Are there any ways or tools to recover some of the data if the
> namenode crashes?
> Regards,
> Abhay
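
The description quoted above already points at one answer to question 1: dfs.name.dir accepts a comma-delimited list of directories, and the name table is written to every entry in that list. Listing a second, independent location (for example an NFS mount, which is already in use here) keeps an extra copy of the namenode metadata. A sketch with placeholder paths only:

<property>
  <name>dfs.name.dir</name>
  <!-- placeholder paths: one local directory plus one NFS-backed directory;
       the name table is replicated into both -->
  <value>/data/hadoop/name,/mnt/nfs/hadoop/name</value>
</property>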
>
>
>
>
> On Sat, Aug 25, 2012 at 7:45 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>
> Abhay,
>
> I suspect that if you haven't set your dfs.name.dir explicitly, then
> you haven't set fs.checkpoint.dir either, and since both use
> hadoop.tmp.dir paths, you may have lost your data completely and there
> is no recovery possible now.
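
As Harsh describes, both dfs.name.dir and fs.checkpoint.dir fall back to locations under hadoop.tmp.dir when left unset, so if hadoop.tmp.dir lives under /tmp the image and the checkpoint can disappear together. An explicit dfs.name.dir is shown earlier in the thread; a matching sketch for the checkpoint directory, with a placeholder path:

<property>
  <name>fs.checkpoint.dir</name>
  <!-- placeholder; if unset, this also resolves to a directory under hadoop.tmp.dir -->
  <value>/data/hadoop/namesecondary</value>
</property>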
>
> On Fri, Aug 24, 2012 at 1:10 PM, Abhay Ratnaparkhi
> <[EMAIL PROTECTED]> wrote:
>> Hello,
>>
>> I was using the cluster for a long time and had not formatted the namenode.
>> I ran bin/stop-all.sh and bin/start-all.sh scripts only.
>>
>> I am using NFS for dfs.name.dir.
>> hadoop.tmp.dir is a /tmp directory. I've not restarted the OS. Any way
>> to recover the data?
>>
>> Thanks,
>> Abhay
>>
>>
>> On Fri, Aug 24, 2012 at 1:01 PM, Bejoy KS <[EMAIL PROTECTED]> wrote:
>>>
>>> Hi Abhay
>>>
>>> What is the value of hadoop.tmp.dir or dfs.name.dir? If it was set to
>>> /tmp, the contents would be deleted on an OS restart. You need to
>>> change this location before you start your NN.
>>> Regards
>>> Bejoy KS
>>>
>>> Sent from handheld, please excuse typos.
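
Bejoy's suggestion amounts to pointing hadoop.tmp.dir at a directory that survives reboots instead of a /tmp-based location. A minimal core-site.xml sketch, with a placeholder path:

<property>
  <name>hadoop.tmp.dir</name>
  <!-- placeholder; any location that is not cleaned on OS restart -->
  <value>/data/hadoop/tmp</value>
</property>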
>>> ________________________________
>>> From: Abhay Ratnaparkhi <[EMAIL PROTECTED]>
>>> Date: Fri, 24 Aug 2012 12:58:41 +0530
>>> To: <[EMAIL PROTECTED]>
>>> ReplyTo: [EMAIL PROTECTED]
>>> Subject: namenode not starting
>>>
>>> Hello,
>>>
>>> I had a running hadoop cluster.
>>> I restarted it, and after that the namenode is unable to start. I am
>>> getting an error saying that it is not formatted. :(
>>> Is it possible to recover the data on HDFS?
>>>
>>> 2012-08-24 03:17:55,378 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
>>> java.io.IOException: NameNode is not formatted.
>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:434)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:291)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:270)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:271)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:303)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
>>> 2012-08-24 03:17:55,380 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:434)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:291)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:270)

Regards,
    Mohammad Tariq