On Wed, May 25, 2011 at 10:48PM, Thanh Do wrote:
> You can simulate disk failure by some fault injection techniques.
> Applying AspectJ is one of them.
Fault injection is there, so you can just check src/test/aop and
src/test/system for references, etc.
> On Wed, May 25, 2011 at 3:07 AM, ccxixicc <[EMAIL PROTECTED]> wrote:
> I'm using 0.20.2.
> I did some tests. I don't know how to simulate a disk failure, so I just
> chmod 000 dir1, and the namenode shut down immediately. And the NN will
> hang if the NFS server goes down.
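As an aside on the chmod approach above: revoking permissions only approximates a failed disk. It produces permission errors (EACCES) rather than real I/O errors (EIO), and it has no effect at all when the process runs as root. A minimal sketch of that approximation, using a throwaway temp directory rather than a real dfs.name.dir:

```shell
# Sketch: approximate an unwritable storage directory by revoking permissions.
# Caveat: this raises a permission error, not an I/O error, and does nothing
# when run as root.
dir=$(mktemp -d)
chmod 000 "$dir"
if touch "$dir/edits" 2>/dev/null; then
  echo "write succeeded (running as root?)"
else
  echo "write failed as expected"
fi
# clean up
chmod 700 "$dir"
rm -rf "$dir"
```

A remount of the underlying filesystem read-only, or a small loopback device filled to capacity, gets closer to real disk-failure semantics than chmod.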
> ------------------ Original ------------------
> From: "Harsh J" <[EMAIL PROTECTED]>
> Date: Wed, May 25, 2011 03:49 PM
> To: "hdfs-user" <[EMAIL PROTECTED]>
> Subject: Re: What if one of the directory(dfs.name.dir) rw error ?
> Yes. But depending on the version you're using, you may have to
> manually restart the NN after fixing the mount points, to get the
> directories in action again.
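For reference, the redundant storage directories discussed in this thread are configured as a comma-separated list in hdfs-site.xml (the property is dfs.name.dir in 0.20.x). The mount points below are assumed examples, not paths taken from the thread:

```xml
<!-- hdfs-site.xml: one local directory per disk, plus one on NFS.
     Paths are illustrative assumptions. -->
<property>
  <name>dfs.name.dir</name>
  <value>/mnt/sdb1/name,/mnt/sdb2/name,/mnt/nfs/name</value>
</property>
```

The NameNode writes its image and edit log to every listed directory, which is why a single hung NFS mount can stall it.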
> 2011/5/25 ccxixicc <[EMAIL PROTECTED]>:
> > Hi,all
> > I set dfs.name.dir to a comma-delimited list of directories: dir1 is on
> > /dev/sdb1, dir2 is on /dev/sdb2, and dir3 is an NFS directory.
> > What happens if the /dev/sdb1 disk has an error, so dir1 cannot be read
> > or written? What happens if the NFS server goes down, so dir3 cannot be
> > read or written?
> > Will Hadoop ignore the bad directory and keep using the good directories
> > and server?
> > Thanks.
> Harsh J