Rahul Bhattacharjee 2013-04-03, 14:40
If you are not in a position to go for HA, just keep your checkpoint
period shorter so that recent data is recoverable from the SNN.

You also always have the option of:

hadoop namenode -recover

Try this on a testing cluster first and get well versed in it, and
keep a backup of the image on some solid-state storage.
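For reference, the checkpoint interval mentioned above is controlled by
`fs.checkpoint.period` (the Hadoop 1.x property name; in 2.x it became
`dfs.namenode.checkpoint.period`). A minimal sketch for core-site.xml;
the default is 3600 seconds, and the 600 shown here is only illustrative:

```xml
<!-- core-site.xml: shorten the checkpoint period so the SNN's merged
     image stays close to the NN's live state. Value is in seconds;
     3600 is the default, 600 is an illustrative shorter value. -->
<property>
  <name>fs.checkpoint.period</name>
  <value>600</value>
</property>
```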
On Wed, Apr 3, 2013 at 9:56 PM, Harsh J <[EMAIL PROTECTED]> wrote:
> There is a 3rd, most excellent way: Use HDFS's own HA, see
> On Wed, Apr 3, 2013 at 8:10 PM, Rahul Bhattacharjee
> <[EMAIL PROTECTED]> wrote:
> > Hi all,
> > I was reading about Hadoop and got to know that there are two ways to
> > protect against name node failures.
> > 1) Write to an NFS mount along with the usual local disk.
> > -or-
> > 2) Use a secondary name node. In case of failure of the NN, the SNN can
> > take charge.
> > My questions :-
> > 1) The SNN is always lagging, so when the SNN becomes primary in the
> > event of an NN failure, the edits that have not yet been merged into the
> > image file would be lost, so the SNN's state would not be consistent
> > with the NN's state before its failure.
> > 2) Also, I have read that the other purpose of the SNN is to
> > periodically merge the edit logs with the image file. In case a setup
> > goes with option #1 (write to NFS, no SNN), then who does this merging?
> > Thanks,
> > Rahul
> Harsh J
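As a side note on option #1 in the quoted question: writing the image and
edits to an NFS mount alongside the local disk is done by listing multiple
directories in `dfs.name.dir` (the Hadoop 1.x property name; in 2.x it is
`dfs.namenode.name.dir`). A minimal sketch for hdfs-site.xml; the paths
are illustrative:

```xml
<!-- hdfs-site.xml: the NN writes its metadata to every directory in this
     comma-separated list, so one local disk plus one NFS mount gives an
     off-machine copy of the image and edits (paths are illustrative). -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/mnt/nfs/dfs/nn</value>
</property>
```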