It's always better to have both 1 and 2 together. One common
misconception is that the SNN is a backup of the NN, which is wrong. The SNN
is a helper node to the NN. In case of a NN failure, the SNN is not going to
take over the NN's role.
Yes, we can't guarantee that the SNN's fsimage replica will always be up to
date. And when you write the metadata to a filer or NFS mount, you are just
creating an additional copy of the metadata. Don't confuse that with the SNN.
When you specify the value of your "dfs.name.dir" property as a comma-separated
list, i.e. localFS+NFS, you are just making sure that even if something
goes wrong with the localFS, your metadata is still intact on the NFS mount.
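For example, in Hadoop 1.x that property goes in hdfs-site.xml; the directory paths below are just placeholders for illustration:

```xml
<!-- hdfs-site.xml: the NN writes its fsimage and edit log to every
     directory in this list, so the NFS mount holds a live second copy. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/mnt/nfs/dfs/nn</value>
</property>
```

If any one directory in the list becomes unavailable, the NN still has the other copy to recover from.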
But it is still better to run the SNN on a separate machine. You can
never rely 100% on the SNN, for the reason you have already mentioned:
its copy of the metadata will not be 100% in sync with the NN.
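As a side note, how far behind the SNN lags is bounded by the checkpoint settings. In Hadoop 1.x these live in core-site.xml; the values shown are the stock defaults:

```xml
<!-- core-site.xml: the SNN pulls the edit log and merges it into the
     fsimage every hour, or sooner once the edit log passes ~64 MB,
     whichever comes first. -->
<property>
  <name>fs.checkpoint.period</name>
  <value>3600</value>
</property>
<property>
  <name>fs.checkpoint.size</name>
  <value>67108864</value>
</property>
```

Lowering fs.checkpoint.period narrows the window of edits that could be lost if you ever had to recover from the SNN's copy, at the cost of more frequent checkpoint traffic.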
On Wed, Apr 3, 2013 at 8:12 PM, Rahul Bhattacharjee <[EMAIL PROTECTED]> wrote:
> Or both the options are used together. NFS + SNN ?
> On Wed, Apr 3, 2013 at 8:10 PM, Rahul Bhattacharjee <
> [EMAIL PROTECTED]> wrote:
>> Hi all,
>> I was reading about Hadoop and got to know that there are two ways to
>> protect against the name node failures.
>> 1) To write to a nfs mount along with the usual local disk.
>> 2) Use a secondary name node. In case of failure of the NN, the SNN can
>> take charge.
>> My questions :-
>> 1) The SNN is always lagging, so when the SNN becomes primary in the event
>> of a NN failure, the edits which have not yet been merged into the image
>> file would be lost, so the SNN's state would not be consistent with the
>> NN's before its failure.
>> 2) Also, I have read that another purpose of the SNN is to periodically
>> merge the edit logs with the image file. If a setup goes with option #1
>> (writing to NFS, no SNN), then who does this merging?