Re: hadoop namenode recovery
Hi Panshul,

Usually, for reliability, multiple directories are configured in dfs.name.dir, one of which is a remote location such as an NFS mount.
That way, even if the NN machine crashes entirely, you still have the fsimage and edit log on the NFS mount, and these can be used to reconstruct the NN.
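
For illustration, a minimal hdfs-site.xml sketch of such a setup (the property name dfs.name.dir is the Hadoop 1.x form; the local path and NFS mount point below are hypothetical):

  <!-- hdfs-site.xml: keep NameNode metadata on a local disk and on an NFS mount -->
  <property>
    <name>dfs.name.dir</name>
    <!-- comma-separated list; each directory holds a full copy of the fsimage and edit log -->
    <value>/data/dfs/nn,/mnt/nfs/dfs/nn</value>
  </property>

If the NameNode host is lost, the copy on the NFS mount can be used to bring up a replacement NameNode with the same configuration.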

Regards
Bejoy KS

Sent from a remote device; please excuse typos.

-----Original Message-----
From: Panshul Whisper <[EMAIL PROTECTED]>
Date: Mon, 14 Jan 2013 17:25:08
To: <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: hadoop namenode recovery

Hello,

Is there a standard way to guard against a NameNode crash in a
Hadoop cluster?
In other words, what is the standard or best practice for overcoming
Hadoop's single point of failure?

I am not ready to take chances on a production server with the Hadoop 2.0 Alpha
release, which claims to have solved this problem. Is there anything else
I can do to either prevent the failure or recover from it
in a very short time?

Thanking You,

--
Regards,
Ouch Whisper
010101010101
