RE: Running Hadoop v2 clustered mode MR on an NFS mounted filesystem
java8964 2014-01-10, 15:42
When you said that the mappers seem to be accessing files sequentially, why do you think so?
NFS may change something, but the mappers shouldn't access files sequentially. NFS could make the files unsplittable, but you need to run more tests to verify that.
The class you want to check out is org.apache.hadoop.mapred.FileInputFormat, especially the getSplits() method.
That code is the key to how the split list is generated. If it doesn't perform well for your underlying storage system, you can always write your own InputFormat tailored to that storage.
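For example, here is a minimal sketch of the usual starting point: subclass an existing input format and make files unsplittable, so each mapper reads one whole file. This is not from the Hadoop source; the class name is hypothetical, and it uses the newer org.apache.hadoop.mapreduce API rather than org.apache.hadoop.mapred:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Hypothetical example: a TextInputFormat whose files are never split.
public class WholeFileTextInputFormat extends TextInputFormat {

    // FileInputFormat.getSplits() consults isSplitable() for each file;
    // returning false yields exactly one split (and one mapper) per file.
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }
}

From there you can go further and override getSplits() itself if your storage system needs a different split layout entirely.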
From: [EMAIL PROTECTED]
Date: Wed, 8 Jan 2014 15:48:12 +0530
Subject: Re: Running Hadoop v2 clustered mode MR on an NFS mounted filesystem
To: [EMAIL PROTECTED]
Figured out 1: the output of the reduce was going to the slave node, while I was looking for it on the master node. That is perfectly fine. Need guidance on 2, though!
On Wed, Jan 8, 2014 at 3:30 PM, Atish Kathpal <[EMAIL PROTECTED]> wrote:
Once I gave the complete URIs, the MR jobs worked across both nodes. Thanks a lot for the advice.
Two issues though:
1. On completion of the MR job, I see only the "_SUCCESS" file in the output directory, but no part-r file containing the actual results of the wordcount job. However, I am seeing the correct output when running MR over HDFS. What is going wrong? Is there any place I can find logs for the MR job? I see no errors on the console.
Command used: hadoop jar /home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount file:///home/hduser/testmount/ file:///home/hduser/testresults/
2. I am observing that the mappers seem to access files sequentially: a file is split across mappers, its data is read in parallel, and only then does the job move on to the next file. What I want instead is for the files themselves to be accessed in parallel; that is, if there are 10 files to be MRed, MR should request all of these files in parallel in one go, and then work on their splits in parallel.
Why do I need this? Some of the data behind the NFS mount point comes from offline media (which take ~5-10 seconds before the first bytes are received). So I would like all required files to be requested from the NFS mount point at the outset. That way several offline media will be spun up in parallel, and as the data from these media becomes available, MR can process it.
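One possible way to get this behaviour, as a purely untested sketch (all names below are hypothetical), would be an InputFormat whose getSplits() first touches the first byte of every input file from a thread pool, forcing the offline media to start spinning up while the splits are computed and the mappers are scheduled:

import java.io.IOException;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Hypothetical sketch: kick off a best-effort read of every input file
// before the normal splits are returned, so offline media spin up early.
public class PrefetchingTextInputFormat extends TextInputFormat {

    @Override
    public List<InputSplit> getSplits(final JobContext job) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (final FileStatus status : listStatus(job)) {
            pool.submit(new Runnable() {
                @Override
                public void run() {
                    try {
                        FileSystem fs =
                            status.getPath().getFileSystem(job.getConfiguration());
                        try (FSDataInputStream in = fs.open(status.getPath())) {
                            in.read(); // one byte is enough to trigger spin-up
                        }
                    } catch (IOException ignored) {
                        // Prefetch is best effort; the mapper does the real read.
                    }
                }
            });
        }
        pool.shutdown(); // don't wait: the goal is only to start the media early
        return super.getSplits(job);
    }
}

Note that getSplits() runs where the job is submitted, not on the task nodes, so the prefetch traffic all comes from one machine.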
Would be glad to get inputs on these points!
Tip for those who are trying similar stuff: in my case, after a while the jobs would fail, complaining of "java.lang.OutOfMemoryError: Java heap space", but I was able to rectify this with help from: http://stackoverflow.com/questions/13674190/cdh-4-1-error-running-child-java-lang-outofmemoryerror-java-heap-space
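For reference, the fix from that link can also be applied in driver code rather than in mapred-site.xml. A minimal sketch, assuming the Hadoop 2.x property names (the class is hypothetical and 1 GB is only an example value):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical driver fragment: raise the map/reduce child JVM heaps
// before submitting the job, instead of editing mapred-site.xml.
public class HeapTunedWordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.map.java.opts", "-Xmx1024m");    // example value
        conf.set("mapreduce.reduce.java.opts", "-Xmx1024m"); // example value
        Job job = Job.getInstance(conf, "wordcount");
        // ... configure mapper, reducer, and input/output paths as usual,
        // then submit with job.waitForCompletion(true).
    }
}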
On Sun, Dec 22, 2013 at 2:47 PM, Atish Kathpal <[EMAIL PROTECTED]> wrote:
Thanks Devin, Yong, and Chris for your replies and suggestions. I will test the suggestions made by Yong and Devin and get back to you guys.
As for the bottlenecking issue, I agree, but I am trying to run a few MR jobs on a traditional NAS server. I can live with a few bottlenecks, as long as I don't have to move the data to a dedicated HDFS cluster.
On Sat, Dec 21, 2013 at 8:06 AM, Chris Mawata <[EMAIL PROTECTED]> wrote:
Yong raises an important issue: you have thrown out the I/O advantages of HDFS and also the advantages of data locality. It would be interesting to know why you are taking this approach.
On 12/20/2013 9:28 AM, java8964 wrote:
I believe the "-fs local" should be removed too.
The reason is that even if you have a dedicated JobTracker after removing "-jt local", with "-fs local" I believe all the mappers will run sequentially.
"-fs local" forces MapReduce to run in "local" mode, which is really a test mode.
What you can do is remove both "-fs local" and "-jt local", but give the FULL URI of the input and output paths, to tell Hadoop that they are on the local filesystem instead of HDFS.
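In driver code the same idea would look roughly like this sketch (the paths are hypothetical; the command-line form appears earlier in the thread):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical sketch: full file:// URIs on the input and output paths
// tell Hadoop to use the local (NFS-mounted) filesystem instead of HDFS,
// without forcing the whole job into local mode.
public class LocalFsPaths {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        FileInputFormat.addInputPath(job, new Path("file:///mnt/nfs/input/"));
        FileOutputFormat.setOutputPath(job, new Path("file:///mnt/nfs/output/"));
        // ... rest of the job setup as usual.
    }
}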
Keep the following in mind:
1) The NFS mount needs to be available on all your Task Nodes, and mounted in the same way.
2) Even if you can do that, your shared storage will be your bottleneck. NFS won't work well for scalability.
Date: Fri, 20 Dec 2013 09:01:32 -0500
Subject: Re: Running Hadoop v2 clustered mode MR on an NFS mounted filesystem
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
I think most of your problem is coming from the options you are setting:
wordcount -fs local -jt local
You appear to be directing your namenode to run jobs in the LOCAL job runner and directing it to read from the LOCAL filesystem. Drop the -jt argument and it should run in distributed mode if your cluster is set up right. You don't need to do anything special to point Hadoop towards an NFS location, other than set up the NFS location properly and make sure, if you are directing to it by name, that it will resolve.