If the good copy on NFS survives the NN crash, use that: you'll see
less (or zero) loss than restoring from the SNN, whose checkpoint can
by default be up to an hour old (the checkpoint period). That's the
whole point of running the NFS disk mount (make sure it's soft-mounted,
by the way; you don't want your NN to hang if the NFS server hangs).
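As a minimal sketch of that setup (the paths and hostname below are placeholders, not taken from this thread): in Hadoop 1.x the NameNode is pointed at both a local directory and the NFS mount via dfs.name.dir in hdfs-site.xml, and it writes its metadata synchronously to every directory listed:

```xml
<!-- hdfs-site.xml: the NameNode writes fsimage/edits to every directory
     in this comma-separated list, so one local disk plus one NFS mount
     yields two live copies. Both paths are illustrative placeholders. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/dfs/name,/mnt/nfs/namenode</value>
</property>
```

The matching /etc/fstab entry would use the `soft` (and typically `intr`) NFS options, e.g. `nfs-server:/export/namenode /mnt/nfs/namenode nfs soft,intr 0 0`, so a hung NFS server returns I/O errors instead of blocking the NameNode indefinitely.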
On Wed, Mar 27, 2013 at 8:58 AM, David Parks <[EMAIL PROTECTED]> wrote:
> Thanks for the update, I understand now that I'll be installing a "secondary
> name node" which performs checkpoints on the primary name node and keeps a
> working backup copy of the fsimage file.
> The primary name node should write its fsimage file to at least 2 different
> physical mediums for improved safety as well (example: locally and an NFS
> mount).
> One point of query: were the primary name node to be lost, we would be best
> off re-building it and copying the fsimage files into place, either from the
> nfs share, or from the secondary name node, as the situation dictates.
> There's no mechanism to "fail over" to the "secondary name node" per se.
> Am I on track here?
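For illustration only, the copy-into-place recovery step described above might look like the following sketch. The paths are placeholders (the temp directories stand in for the NFS mount and the rebuilt NameNode's dfs.name.dir), and the metadata files are simulated here so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch of restoring NameNode metadata from a surviving NFS copy.
# Substitute your real NFS mount point and dfs.name.dir for the
# placeholder temp directories below.
NFS_COPY="$(mktemp -d)"   # stands in for e.g. /mnt/nfs/namenode
NAME_DIR="$(mktemp -d)"   # stands in for the rebuilt node's dfs.name.dir

# Simulate the metadata files a live NameNode would have written to NFS.
mkdir -p "$NFS_COPY/current"
touch "$NFS_COPY/current/fsimage" "$NFS_COPY/current/edits" \
      "$NFS_COPY/current/VERSION"

# Copy the whole metadata directory into place on the rebuilt NameNode.
cp -a "$NFS_COPY/current" "$NAME_DIR/"

ls "$NAME_DIR/current"
# Only after the copy is in place would you start the NameNode
# (e.g. bin/hadoop-daemon.sh start namenode).
```

The same copy works from the secondary name node's checkpoint directory instead of NFS, with the caveat from earlier in the thread that the checkpoint may be up to an hour stale.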
> -----Original Message-----
> From: Konstantin Shvachko [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, March 27, 2013 5:07 AM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: For a new installation: use the BackupNode or the
> CheckpointNode?
> There is no BackupNode in Hadoop 1.
> That was a bug in documentation.
> Here is the updated link:
> On Sat, Mar 23, 2013 at 12:04 AM, varun kumar <[EMAIL PROTECTED]> wrote:
>> Hope the link below will be useful.
>> On Sat, Mar 23, 2013 at 12:29 PM, David Parks <[EMAIL PROTECTED]> wrote:
>>> For a new installation of the current stable build (1.1.2), is there
>>> any reason to use the CheckPointNode over the BackupNode?
>>> It seems that we need to choose one or the other, and from the docs
>>> it seems like the BackupNode is more efficient in its processes.
>> Varun Kumar.P