Re: NameNode fails
Hi Yogesh,

      Follow the link specified by Bejoy. It shows all the necessary steps.

Regards,
    Mohammad Tariq
On Fri, Jul 20, 2012 at 4:14 PM, Bejoy Ks <[EMAIL PROTECTED]> wrote:
> Hi Yogesh
>
> Just treat the Secondary NameNode as a checkpointing node. NameNode
> recovery is mostly handled by writing the fsimage and edit log to a
> remote NFS mount in addition to the local fs, so that if the copy on
> the local disk gets corrupted or lost in a disk or machine failure,
> the one on the remote mount can be used.
>
> You can read more here
> http://wiki.apache.org/hadoop/NameNodeFailover
>
> Regards
> Bejoy KS
>
>
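A minimal sketch of the redundant dfs.name.dir setup described above, assuming Hadoop 0.20.x property names and a purely illustrative NFS mount at /mnt/nn-backup; dfs.name.dir takes a comma-separated list, and the NameNode writes its fsimage and edit log to every listed directory:

    <!-- hdfs-site.xml: keep one metadata copy on local disk and one on an
         NFS mount, so the remote copy survives a local disk/machine failure.
         The NFS path is only an example. -->
    <property>
        <name>dfs.name.dir</name>
        <value>/HADOOP/hadoop-0.20.2/hadoop_name_dirr,/mnt/nn-backup/name</value>
    </property>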
> On Fri, Jul 20, 2012 at 3:58 PM,  <[EMAIL PROTECTED]> wrote:
>> Thanks Mohammad :-),
>>
>> I just read about this concept of the secondary NameNode. Thank you for your reply.
>> Mohammad, I am not finding a way to implement this. Would you please explain how to recover the namenode? I am getting confused.
>>
>> Thanks & Regards
>> Yogesh Kumar Dhari
>>
>> ________________________________________
>> From: Mohammad Tariq [[EMAIL PROTECTED]]
>> Sent: Friday, July 20, 2012 3:17 PM
>> To: [EMAIL PROTECTED]
>> Subject: Re: NameNode fails
>>
>> Hi Yogesh,
>>
>>        First of all, we should always keep in mind that the Secondary
>> Namenode is not a backup for the Namenode. Its name suggests that it is
>> a backup for the Namenode, but in reality it is not.
>> The Namenode stores its metadata in 2 files:
>> 1- fsimage - a snapshot of the filesystem when the namenode started
>> 2- edit logs - the sequence of changes made to the filesystem
>> after the namenode started.
>>
>> The sole purpose of the Secondary Namenode is to checkpoint HDFS,
>> and it acts as a helper to the Namenode. It basically:
>> 1- fetches the edit logs from the namenode at regular intervals and
>> applies them to its copy of the fsimage
>> 2- once it has a new fsimage, it copies it back to the namenode
>>
>> The Namenode uses this fsimage on its next restart.
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
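Tying this to the recovery question: a hedged sketch of using the Secondary Namenode's checkpoint when the primary's metadata is lost, assuming Hadoop 0.20.x behavior; fs.checkpoint.dir is where the secondary keeps its merged image, and the path below is only an example:

    # fs.checkpoint.dir (core-site.xml on the secondary) holds the latest
    # merged fsimage, e.g. /HADOOP/checkpoint  (example path only).
    #
    # On a namenode whose dfs.name.dir is empty, make that checkpoint
    # directory reachable (copy it over, or point fs.checkpoint.dir at it),
    # then start the namenode so it imports the image and saves it into
    # dfs.name.dir:
    bin/hadoop namenode -importCheckpoint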
>> On Fri, Jul 20, 2012 at 1:59 PM,  <[EMAIL PROTECTED]> wrote:
>>> Hi Bejoy,
>>>
>>> It's done now. The error log was showing that the namenode was not formatted,
>>> so I closed all the previous terminals and restarted it after formatting.
>>>
>>> It's running now.
>>>
>>> Please suggest: if it crashes at some point, how do I recover it
>>> from the Secondary Namenode? How should I proceed?
>>>
>>>
>>> Thanks & regards
>>> Yogesh Kumar Dhari
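For reference, a minimal sketch of the format-and-restart sequence mentioned above, assuming a Hadoop 0.20.2 layout with commands run from the install directory; note that formatting wipes the namespace stored in dfs.name.dir:

    # Format the namespace in dfs.name.dir (destroys any existing metadata)
    bin/hadoop namenode -format

    # Start the HDFS daemons: namenode, datanodes, secondary namenode
    bin/start-dfs.sh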
>>> ________________________________
>>> From: Bejoy KS [[EMAIL PROTECTED]]
>>> Sent: Friday, July 20, 2012 12:56 PM
>>> To: Yogesh Kumar (WT01 - Communication and Media);
>>> [EMAIL PROTECTED]
>>> Subject: Re: NameNode fails
>>>
>>> Hi Yogesh
>>>
>>> Please post the error logs/messages if you find any.
>>>
>>> Regards
>>> Bejoy KS
>>>
>>> Sent from handheld, please excuse typos.
>>> ________________________________
>>> From: <[EMAIL PROTECTED]>
>>> Date: Fri, 20 Jul 2012 07:21:24 +0000
>>> To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
>>> Subject: RE: NameNode fails
>>>
>>> Thanks Bejoy, Mohammad and Vignesh :-).
>>>
>>> I have done what you suggested and made these changes, formatted the
>>> namenode, and tried to start the cluster,
>>> but now the namenode is not starting :-(
>>>
>>>
>>> hdfs-site.xml
>>>
>>> **********************************************************
>>> <configuration>
>>>     <property>
>>>         <name>dfs.replication</name>
>>>         <value>1</value>
>>>     </property>
>>>
>>>     <property>
>>>         <name>dfs.name.dir</name>
>>>         <value>/HADOOP/hadoop-0.20.2/hadoop_name_dirr</value>
>>>     </property>
>>>
>>>     <property>
>>>         <name>dfs.data.dir</name>
>>>         <value>/HADOOP/hadoop-0.20.2/hadoop_data_dirr</value>
>>>     </property>
>>>
>>> </configuration>
>>>
>>> **************************************************************
>>>
>>>
>>>
>>> hdfs-core.xml
>>>
>>> **************************************************************
>>>
>>> <configuration>
>>>     <property>