Re: Using NFS mounted volume for Hadoop installation/configuration
Nan Zhu 2013-02-18, 18:14
I'm also maintaining an experimental Hadoop cluster, and I need to modify the Hadoop source code and test it,
so I just use NFS to deploy the latest version of the code. No problems found yet.
School of Computer Science,
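A minimal sketch of this kind of shared deployment, assuming a single NFS export holds both the Hadoop install and its configuration (the server name, export path, and mount point below are hypothetical, not from the thread):

```shell
# /etc/fstab entry on every node (nfs-server and paths are hypothetical):
#   nfs-server:/export/hadoop  /opt/hadoop  nfs  hard,intr,rsize=65536,wsize=65536  0 0

# With the same mount on every node, each one sees the identical install
# and config, so editing a file once updates the whole cluster:
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
```

The `hard` option makes clients block and retry on server hiccups rather than return I/O errors, which is usually what you want for files the daemons are executing from.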
On Monday, 18 February, 2013 at 1:09 PM, Chris Embree wrote:
> I'm doing that currently. No problems to report so far.
> The only pitfall I've found is around NFS stability. If your NAS is 100% solid, no problems. I've seen mtab get messed up and refuse to remount if NFS has any hiccups.
> If you want to get really crazy, consider NFS for your datanode root fs. See the oneSIS project for details. http://onesis.sourceforge.net
> On Mon, Feb 18, 2013 at 1:00 PM, Mehmet Belgin <[EMAIL PROTECTED] (mailto:[EMAIL PROTECTED])> wrote:
> > Hi Everyone,
> > Will it be a problem if I put the Hadoop executables and configuration on an NFS volume shared by all masters and slaves? This way configuration changes are immediately visible on all nodes, without the need to sync any files. While this looks almost like a no-brainer, I am wondering if there are any pitfalls I need to be aware of.
> > On a related note, is there a best-practices (do's and don'ts) document you can suggest, other than the regular Apache documentation?
> > Thanks!
> > -Mehmet
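One practical check for the stale-mount pitfall mentioned above is to compare checksums of a shared config file as seen from each node; if a node's copy differs or the command hangs, its NFS mount is likely broken. A hedged sketch (in practice you would run `md5sum` over ssh on each host; the helper below just shows the comparison, and all paths are illustrative):

```shell
#!/bin/sh
# Compare two views of what should be the same NFS-shared config file.
# On a real cluster the second argument would come from
#   ssh <node> md5sum /opt/hadoop/etc/hadoop/core-site.xml
# (host and path are hypothetical examples, not from the thread).
check_same() {
    a=$(md5sum "$1" | cut -d' ' -f1)
    b=$(md5sum "$2" | cut -d' ' -f1)
    if [ "$a" = "$b" ]; then
        echo "in sync"
    else
        echo "OUT OF SYNC"
    fi
}
```

Running this against every node after a config change gives quick confidence that no node is serving a stale view of the shared volume.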