We also hit this issue before.
1) We want to run a Hadoop cluster with permissions disabled.
2) The job history server, YARN, and HDFS daemons run under a special service user account, e.g. 'hadoop'.
3) Users submit jobs to the cluster under their own accounts.
In this scenario, job submission fails in Hadoop 2.0 but succeeds in Hadoop 1.0.
In our investigation, the regression happened in the job client and the job history server, not on the HDFS side.
The root cause is that the job client copies jar files to the staging area configured by "yarn.app.mapreduce.am.staging-dir".
The client also sets the permissions on the directory and jar files to pre-configured values, i.e. JobSubmissionFiles.JOB_DIR_PERMISSION and JobSubmissionFiles.JOB_FILE_PERMISSION.
On the HDFS side, even if 'dfs.permissions.enabled' is set to false, changing the permissions of a file you do not own is still not allowed.
(This is the same in both Hadoop v1 and v2.)
The JobHistoryServer also plays a part in this, as its staging directory happens to be at the same location as "yarn.app.mapreduce.am.staging-dir".
It creates directories recursively with permissions set to HISTORY_STAGING_DIR_PERMISSIONS.
The JobHistoryServer runs under the special service user account, while the JobClient runs as the user submitting the job.
This leads to a failure in setPermission() during job submission.
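To make the failure mode concrete, here is a toy Python model of the namenode-side check described above. This is NOT Hadoop source code; the class, method names, and the staging path are illustrative stand-ins (the path shown is the usual default for yarn.app.mapreduce.am.staging-dir, and 'alice' is a hypothetical submitting user):

```python
# Toy model: even with permissions disabled, HDFS still requires the caller of
# setPermission() to be the file's owner (or the superuser).

class AccessControlException(Exception):
    pass

class MiniNamesystem:
    """Stand-in for the namenode's permission logic (illustrative only)."""
    def __init__(self, permissions_enabled=False, superuser="hadoop"):
        self.permissions_enabled = permissions_enabled
        self.superuser = superuser
        self.owners = {}  # path -> owning user

    def mkdirs(self, path, user):
        self.owners[path] = user

    def set_permission(self, path, caller):
        # Ownership is checked regardless of permissions_enabled.
        owner = self.owners[path]
        if caller != owner and caller != self.superuser:
            raise AccessControlException(
                f"Non-super user cannot change owner/permission: "
                f"{caller} is not the owner of {path}")

ns = MiniNamesystem(permissions_enabled=False)
# The JobHistoryServer, running as 'hadoop', pre-creates the staging directory...
ns.mkdirs("/tmp/hadoop-yarn/staging", "hadoop")
# ...then the job client, running as the submitting user, tries to chmod it.
try:
    ns.set_permission("/tmp/hadoop-yarn/staging", "alice")
except AccessControlException as e:
    print("job submission fails:", e)
```

The model reproduces the symptom: the chmod fails for the submitting user even though permission checking is nominally off, because ownership checks on setPermission() are a separate code path.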
There are multiple possible mitigations. Here are two examples.
1) Configure all users who submit jobs as members of the HDFS supergroup.
2) During setup, pre-create the staging directory and chown it to the correct user.
In our case, we took approach 1) because the security check on HDFS was not very important for our scenarios (part of the reason why we can disable HDFS permission in the first place).
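For anyone taking approach 2), a minimal deploy-time sketch is below. The staging path, user, and group are placeholders, and the 700 mode mirrors JOB_DIR_PERMISSION; the commands themselves are the standard `hdfs dfs` CLI, to be run once as the HDFS superuser:

```python
# Sketch of mitigation 2): build the `hdfs dfs` commands that pre-create a
# user's staging directory and chown it to that user. Placeholders throughout;
# execute the printed commands (e.g. via a deploy script) as the superuser.

def staging_setup_cmds(staging_dir, user, group):
    return [
        ["hdfs", "dfs", "-mkdir", "-p", staging_dir],
        ["hdfs", "dfs", "-chown", "-R", f"{user}:{group}", staging_dir],
        ["hdfs", "dfs", "-chmod", "-R", "700", staging_dir],
    ]

for cmd in staging_setup_cmds("/tmp/hadoop-yarn/staging/alice", "alice", "hadoop"):
    print(" ".join(cmd))
```

Because the submitting user then owns the directory, the job client's setPermission() call succeeds even with permission checking disabled.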
Hope this can help you solve your problem!
From: Prashant Kommireddi [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 19, 2013 1:32 PM
To: [EMAIL PROTECTED]
Subject: Re: DFS Permissions on Hadoop 2.x
How can we resolve the issue in the case I have mentioned? File an MR JIRA so that it does not try to check permissions when dfs.permissions.enabled is set to false?
The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense w.r.t HDFS behavior (thanks for that). But I am still unsure how we can get around the fact that certain permissions are set on shared directories by a certain user that disallow any other users from using them. Or am I missing something entirely?
On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>> wrote:
Just in case anyone is curious who didn't look at HDFS-4918, we established that this is actually expected behavior, and it's mentioned in the documentation. However, I filed HDFS-4919 to make the information clearer in the documentation, since this caused some confusion.
On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>> wrote:
Thanks guys, I will follow the discussion there.
On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>> wrote:
Yes, and I think this was introduced by the Snapshot feature.
I've filed a JIRA here:
On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>> wrote:
This is an HDFS bug. Like all other methods that check for permissions
being enabled, the client call of setPermission should check it as
well. It does not do that currently and I believe it should be a NOP
in such a case. Please do file a JIRA (and reference the ID here to
close the loop)!
On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
<[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>> wrote:
> Looks like the jobs fail only on the first attempt and pass thereafter.
> Failure occurs while setting perms on "intermediate done directory". Here is
> what I think is happening:
> 1. Intermediate done dir is (ideally) created as part of deployment (for eg,