Hadoop >> mail # user >> backup namenode setting issue: namenode failed to start


Re: backup namenode setting issue: namenode failed to start
You do not even need to copy manually.
Say you initially have one active image in dfs.name.dir = dir1, and dir2 is empty.
You add dir2 to the config: dfs.name.dir = dir1,dir2
HDFS will recognize the active image, start the name-node from it, and create an identical copy of the image in dir2.
Now both dir1 and dir2 contain synchronized active images.
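A minimal sketch of the resulting configuration entry, using hypothetical paths /data/dir1 and /data/dir2 (the property name dfs.name.dir matches the 0.20-era Hadoop configuration used elsewhere in this thread):

```xml
<!-- Sketch only: /data/dir1 holds the existing image; /data/dir2 starts empty
     and is populated by the name-node on its next start. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/dir1,/data/dir2</value>
  <final>true</final>
</property>
```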

Thanks,
--Konstantin

On 6/23/2010 12:20 PM, jiang licht wrote:
> This is what I did and it works. For a cluster already with data, we can stop hdfs and then copy everything from the root folder of meta data on master namenode (e.g. /home/hadoop in my example) to the backup folder and then start hdfs. In this way, the two folders will be in sync ...
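The stop-copy-start procedure described above can be sketched generically. This is an illustration with hypothetical local paths, not Hadoop tooling; the real step is simply copying the contents of the dfs.name.dir folder while the namenode is stopped, then verifying the copy:

```python
import filecmp
import shutil
from pathlib import Path

def mirror_name_dir(src: str, dst: str) -> bool:
    """Copy the namenode metadata tree from src to dst and verify the mirror.

    Run this only while the namenode is stopped, so the image is not
    being written mid-copy.
    """
    src_path, dst_path = Path(src), Path(dst)
    if dst_path.exists():
        shutil.rmtree(dst_path)          # start from an empty backup folder
    shutil.copytree(src_path, dst_path)  # recursive copy, preserves layout
    # Shallow top-level comparison is enough to confirm the trees match.
    cmp = filecmp.dircmp(src_path, dst_path)
    return not (cmp.left_only or cmp.right_only or cmp.diff_files)
```

After the copy, both folders can be listed in dfs.name.dir and the restarted namenode keeps them in sync from then on.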
>
>
> --Michael
>
> --- On Tue, 6/22/10, jiang licht<[EMAIL PROTECTED]>  wrote:
>
> From: jiang licht<[EMAIL PROTECTED]>
> Subject: Re: backup namenode setting issue: namenode failed to start
> To: [EMAIL PROTECTED]
> Date: Tuesday, June 22, 2010, 3:06 PM
>
> Thanks, Konstantin. I will look at options for mounting the folder. Is there any guide to a successful deployment of this method?
>
> Still have one question: does this backup method only work for a fresh cluster? My guess is that the namenode only writes metadata for new data into the folders specified in dfs.name.dir, so the method would only work for a fresh cluster; for a cluster that already has data, the metadata of the old data is never saved to the mounted folder. Is this correct?
>
> --Michael
>
> --- On Mon, 6/21/10, Konstantin Shvachko<[EMAIL PROTECTED]>  wrote:
>
> From: Konstantin Shvachko<[EMAIL PROTECTED]>
> Subject: Re: backup namenode setting issue: namenode failed to start
> To: [EMAIL PROTECTED]
> Date: Monday, June 21, 2010, 1:58 PM
>
> Looks like the mounted file system /mnt/namenode-backup does not support locking.
> It should, because otherwise HDFS cannot guarantee that only one name-node updates the directory.
> You might want to check with your sysadmins; maybe the mount point is misconfigured.
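Whether a mount supports the advisory locking the name-node relies on can be probed with a quick sketch. Here `fcntl.flock` is an assumption standing in for the JVM's file locking, and the lock-file name `in_use.lock` mirrors the file the namenode creates in each dfs.name.dir:

```python
import fcntl
import os

def supports_locking(directory: str) -> bool:
    """Try to take an exclusive advisory lock on a file inside `directory`.

    NFS mounts without a working lock manager typically fail here, which
    is the same failure mode that prevents the namenode from starting.
    """
    lock_file = os.path.join(directory, "in_use.lock")
    fd = os.open(lock_file, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking lock
        fcntl.flock(fd, fcntl.LOCK_UN)                  # release immediately
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.remove(lock_file)
```

Running this against the mount point before editing dfs.name.dir would have surfaced the misconfiguration without a namenode restart.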
>
> Thanks,
> --Konstantin
>
> On 6/21/2010 10:43 AM, jiang licht wrote:
>> According to the Hadoop tutorial on the Yahoo developer network and the Hadoop documentation on Apache, a simple way to achieve namenode backup and recovery from a single-point namenode failure is to have the namenode save its dfs metadata to an additional folder that is mounted on the namenode machine but actually lives on a different machine, in addition to the local folder on the namenode, as follows:
>>
>> <property>
>>   <name>dfs.name.dir</name>
>>   <value>/home/hadoop/dfs/name,/mnt/namenode-backup</value>
>>   <final>true</final>
>> </property>
>>
>> where /mnt/namenode-backup is mounted on the namenode machine.
>>
>> I followed this approach. However, we did not apply it to a fresh cluster; we had been running the cluster for a while, so it already had data in hdfs.
>>
>> But this method, or my deployment of it, failed and the namenode simply failed to start. I did almost the same thing as described above, except that instead of mounting namenode-backup under /mnt, I mounted it under "/". The folder "/namenode-backup" belongs to the account "hadoop", under which the cluster is running, so there is no access-restriction issue.
>>
>> I got the following errors in the namenode log on the namenode machine:
>>
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = namenodedomainname/#.#.#.#
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2+228
>> STARTUP_MSG:   build =  -r cfc3233ece0769b11af9add328261295aaf4d1ad; compiled by 'root' on Mon Mar 22 03:11:39 EDT 2010
>> ************************************************************/
>> 2010-06-14 16:46:53,879 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=50001
>> 2010-06-14 16:46:53,886 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: namenodedomainname/#.#.#.#:50001
>> 2010-06-14 16:46:53,888 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null