MapReduce >> mail # user >> Fair scheduler.

Re: Fair scheduler.
You have a different issue (in addition to MAPREDUCE-4398). Setting
mapreduce.jobtracker.staging.root.dir to /user will solve the first
problem. The "magic" number of 4 is the hard-coded default number of
job init threads (mapred.jobinit.threads). You have to submit 4 or
more jobs as the jobtracker user at the same time to make sure the job
init threads are initialized as the system user, so they can access
mapred.system.dir (for security reasons, it must be 700). Otherwise,
some of the job init threads will be initialized as whichever user
first submits a job. This can lead to seemingly even more bizarre
behavior: sometimes it works (the job is initialized by one of the
system threads) and sometimes it doesn't (the job is initialized by
one of the user threads). Once you know the root cause, it's fairly
trivial to come up with a patch. The default FIFO scheduler and the
capacity scheduler do not have this bug.
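For reference, the two configuration changes discussed in this thread might look roughly like the fragment below. This is only a sketch based on the property names mentioned in the thread (MR1-era names); the right values depend on your cluster layout, and mapred.jobinit.threads is shown at its default purely to document where the "magic" 4 comes from:

```xml
<!-- mapred-site.xml (sketch, not a tested configuration) -->

<!-- Point per-user staging dirs under /user, so each user's staging
     area lands in his own HDFS home directory, as suggested above -->
<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>/user</value>
</property>

<!-- Default number of job init threads; this is the "magic" number 4
     from MAPREDUCE-4398 discussed in this thread -->
<property>
  <name>mapred.jobinit.threads</name>
  <value>4</value>
</property>
```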

On Tue, Oct 16, 2012 at 4:52 PM, Patai Sangbutsarakum
> Thanks everyone. Seems like I hit a dead end.
> It's kind of funny when I read that jira: run it 4 times and everything
> will work.. where does that magic number come from.. lol
> respects
> On Tue, Oct 16, 2012 at 4:12 PM, Arpit Gupta <[EMAIL PROTECTED]> wrote:
>> https://issues.apache.org/jira/browse/MAPREDUCE-4398
>> is the bug that Robin is referring to.
>> --
>> Arpit Gupta
>> Hortonworks Inc.
>> http://hortonworks.com/
>> On Oct 16, 2012, at 3:51 PM, "Goldstone, Robin J." <[EMAIL PROTECTED]>
>> wrote:
>> This is similar to issues I ran into with permissions/ownership of
>> mapred.system.dir when using the fair scheduler.  We are instructed to set
>> the ownership of mapred.system.dir to mapred:hadoop and then when the job
>> tracker starts up (running as user mapred) it explicitly sets the
>> permissions on this directory to 700.  Meanwhile when I go to run a job as
>> a regular user, it is trying to write stuff into mapred.system.dir but it
>> can't due to the ownership/permissions that have been established.
>> Per discussion with Arpit Gupta, this is a bug with the fair scheduler and
>> it appears from your experience that there are similar issues with
>> hadoop.tmp.dir.  The whole idea of the fair scheduler is to run jobs under
>> the user's identity rather than as user mapred.  This is good from a
>> security perspective yet it seems no one bothered to account for this in
>> terms of the permissions that need to be set in the various directories to
>> enable this.
>> Until this is sorted out by the Hadoop developers, I've put my attempts to
>> use the fair scheduler on hold…
>> Regards,
>> Robin Goldstone, LLNL
>> On 10/16/12 3:32 PM, "Patai Sangbutsarakum" <[EMAIL PROTECTED]>
>> wrote:
>> Hi Harsh,
>> Thanks for breaking it down clearly. I would say I am 98% successful
>> following the instructions.
>> The remaining 2% is about hadoop.tmp.dir.
>> Let's say I have 2 users:
>> userA is the user that starts hdfs and mapred
>> userB is a regular user
>> If I use the default value of hadoop.tmp.dir,
>> /tmp/hadoop-${user.name},
>> I can submit jobs as userA but not as userB:
>> user=userB, access=WRITE, inode="/tmp/hadoop-userA/mapred/staging"
>> :userA:supergroup:drwxr-xr-x
>> I googled around; someone recommended changing hadoop.tmp.dir to
>> /tmp/hadoop.
>> This way it almost works; the thing is,
>> if I submit as userA it will create /tmp/hadoop on the local machine,
>> owned by userA:userA,
>> and once I try to submit a job from the same machine as userB I will
>> get "Error creating temp dir in hadoop.tmp.dir /tmp/hadoop due to
>> Permission denied"
>> (because /tmp/hadoop is owned by userA:userA). Vice versa, if I delete
>> /tmp/hadoop and let the directory be created by userB, userA will not
>> be able to submit jobs.
>> Which approach should I work with?
>> Please suggest
>> Patai
>> On Mon, Oct 15, 2012 at 3:18 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>> Hi Patai,