Hadoop MapReduce fails with permission management enabled
I enabled permission management in my Hadoop cluster, but I'm facing a
problem submitting jobs with Pig. This is the scenario:

1 - I have a hadoop/hadoop user

2 - I have a myuserapp/myuserapp user that runs the Pig script.

3 - We set up the path /myapp to be owned by myuserapp (commands sketched after this list)

4 - We set pig.temp.dir to /myapp/pig/tmp
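
For reference, a minimal sketch of how steps 3 and 4 would look on the command line (the chmod mode and the pig.properties line are assumptions for illustration, not the exact commands we ran):

# run as the HDFS superuser (hadoop)
hadoop fs -mkdir /myapp/pig/tmp
hadoop fs -chown -R myuserapp:myuserapp /myapp
hadoop fs -chmod -R 755 /myapp

# in pig.properties
pig.temp.dir=/myapp/pig/tmp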

But when Pig tries to run the jobs we get the following error:

job_201303221059_0009    all_actions,filtered,raw_data    DISTINCT
Message: Job failed! Error - Job initialization failed:
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=realtime, access=EXECUTE,
inode="system":hadoop:supergroup:rwx------

The Hadoop JobTracker requires this permission to start up its server.
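
The inode in the message appears to be the JobTracker's system directory (mapred.system.dir, which defaults to ${hadoop.tmp.dir}/mapred/system). One quick way to check its ownership and mode; the path below assumes the defaults, so adjust it to whatever mapred.system.dir resolves to on your cluster:

hadoop fs -ls /tmp/hadoop-hadoop/mapred
# expect a line like: drwx------ - hadoop supergroup ... /tmp/hadoop-hadoop/mapred/system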

My hadoop-policy.xml looks like:

<property>
<name>security.client.datanode.protocol.acl</name>
<value>hadoop,myuserapp supergroup,myuserapp</value>
</property>
<property>
<name>security.inter.tracker.protocol.acl</name>
<value>hadoop,myuserapp supergroup,myuserapp</value>
</property>
<property>
<name>security.job.submission.protocol.acl</name>
<value>hadoop,myuserapp supergroup,myuserapp</value>
</property>
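
As I understand the service-level authorization format, each *.acl value is a comma-separated list of users, then a space, then a comma-separated list of groups, and the special value * grants access to everyone. So one way to rule the ACLs out while debugging would be to temporarily open a protocol up, e.g.:

<property>
<name>security.job.submission.protocol.acl</name>
<value>*</value>
</property>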

My hdfs-site.xml:

<property>
<name>dfs.permissions</name>
<value>true</value>
</property>

<property>
 <name>dfs.datanode.data.dir.perm</name>
 <value>755</value>
</property>

<property>
 <name>dfs.web.ugi</name>
 <value>hadoop,supergroup</value>
</property>

My core-site.xml:

...
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
...

And finally my mapred-site.xml:

...
<property>
 <name>mapred.local.dir</name>
 <value>/tmp/mapred</value>
</property>

<property>
 <name>mapreduce.jobtracker.jobhistory.location</name>
 <value>/opt/logs/hadoop/history</value>
</property>
<property>
<name>mapreduce.jobtracker.staging.root.dir</name>
<value>/user</value>
</property>
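
Note that with mapreduce.jobtracker.staging.root.dir set to /user, my understanding is that each submitting user needs a home directory there that they own. For the realtime user from the error above, that would be something like this (the 700 mode is a common choice, not a verified requirement):

hadoop fs -mkdir /user/realtime
hadoop fs -chown realtime:realtime /user/realtime
hadoop fs -chmod 700 /user/realtime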

Is there a missing configuration? How can I deal with multiple users
running jobs in a restricted HDFS cluster?