Re: Using NFS mounted volume for Hadoop installation/configuration
Chris Embree 2013-02-18, 19:31
Just for clarification, we only use NFS for binaries and config files.
HDFS and MapReduce write to local disk. We just don't install an OS there.
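A minimal sketch of that split, in case it helps (server name, mount point,
and data paths are illustrative, not our actual layout):

    # /etc/fstab on every node: binaries and conf come from NFS
    nas01:/export/hadoop  /opt/hadoop  nfs  ro,hard,intr  0 0

    <!-- hdfs-site.xml: DataNode blocks stay on local disks -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data/1/dfs,/data/2/dfs</value>
    </property>

    <!-- mapred-site.xml: MapReduce spill space is local too -->
    <property>
      <name>mapred.local.dir</name>
      <value>/data/1/mapred,/data/2/mapred</value>
    </property>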
On Mon, Feb 18, 2013 at 1:44 PM, Paul Wilkinson wrote:
> That requirement for 100% availability is the issue. If NFS goes down, you
> lose all sorts of things that are critical. This will work for a dev
> cluster, but it is strongly discouraged for production.
> As a first step, consider rsync - that way everything is local, so fewer
> external dependencies. After that, consider not managing boxes by hand :)
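> For example, a rough sketch along these lines (paths hypothetical;
> conf/slaves is the usual Hadoop slaves file):
>
>     # push the conf directory from the master to every slave
>     for h in $(cat /opt/hadoop/conf/slaves); do
>       rsync -a /opt/hadoop/conf/ "$h":/opt/hadoop/conf/
>     done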
> On 18 Feb 2013, at 18:09, Chris Embree <[EMAIL PROTECTED]> wrote:
> I'm doing that currently. No problems to report so far.
> The only pitfall I've found is around NFS stability. If your NAS is 100%
> solid, no problems. I've seen mtab get messed up and the share refuse to
> remount if NFS has any hiccups.
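> One partial mitigation (untested here, option values illustrative) is
> mounting with hard retries so a hiccup hangs I/O instead of erroring out:
>
>     nas01:/export/hadoop  /opt/hadoop  nfs  ro,hard,intr,timeo=600,retrans=3  0 0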
> If you want to get really crazy, consider NFS for your datanode root fs.
> See the oneSIS project for details: http://onesis.sourceforge.net
> On Mon, Feb 18, 2013 at 1:00 PM, Mehmet Belgin <
> [EMAIL PROTECTED]> wrote:
>> Hi Everyone,
>> Will it be a problem if I put the Hadoop executables and configuration
>> on an NFS volume that is shared by all masters and slaves? This way any
>> configuration change becomes available to all nodes without the need to
>> sync any files. While this looks almost like a no-brainer, I am
>> wondering if there are any pitfalls I need to be aware of.
>> On a related question, is there a best-practices (do's and don'ts)
>> document you can suggest other than the regular documentation by