Re: Hadoop Mapreduce fails with permission management enabled
Permission denied: user=realtime, access=EXECUTE,
inode="system":hadoop:supergroup:rwx------
It seems you tried to run a job as the user 'realtime', but that user has
no access to the 'system' directory, which, given the permissions
'hadoop:supergroup:rwx------', is quite logical: the directory belongs to
someone else (user 'hadoop', group 'supergroup'), and '------' means it
shares nothing with anyone else.
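
For illustration, a listing of such a directory would look roughly like
this (the path and timestamp here are made up):

  $ sudo -u hadoop hadoop fs -ls /
  drwx------   - hadoop supergroup          0 2013-03-27 21:00 /system

The first triplet 'rwx' applies to the owner 'hadoop', the second to the
group 'supergroup', and the third to everyone else; 'realtime' falls into
that last bucket, so even EXECUTE (directory traversal) is denied.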

I would guess that the 'system' directory is the last path component of
your mapred.system.dir setting.
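
To confirm, you can check what it resolves to; a quick sketch, assuming a
Hadoop 1.x layout with the configuration under $HADOOP_HOME/conf:

  # print the property name and the following line (its <value>):
  grep -A 1 'mapred.system.dir' $HADOOP_HOME/conf/mapred-site.xml

If it is not set explicitly, it defaults to ${hadoop.tmp.dir}/mapred/system.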

That setting, or the directory's permissions, should be adjusted to fit
your environment.
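
A minimal sketch of the usual remedies, assuming the directory really is
/system (substitute your actual mapred.system.dir value):

  # as the HDFS superuser, let other users traverse the directory:
  sudo -u hadoop hadoop fs -chmod 755 /system

Alternatively, point mapred.system.dir at a location that job-submitting
users can reach, and restart the JobTracker so it recreates the directory.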

Regards

Bertrand

On Wed, Mar 27, 2013 at 9:02 PM, Marcos Sousa <[EMAIL PROTECTED]> wrote:

> I enabled permission management in my Hadoop cluster, but I'm facing a
> problem submitting jobs with Pig. This is the scenario:
>
> 1 - I have the hadoop/hadoop user
>
> 2 - I have the myuserapp/myuserapp user that runs the Pig script.
>
> 3 - We set up the path /myapp to be owned by myuserapp
>
> 4 - We set pig.temp.dir to /myapp/pig/tmp
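>
> For reference, steps 3 and 4 amount to something like this (the exact
> commands are assumed, run as the HDFS superuser):
>
>   sudo -u hadoop hadoop fs -mkdir /myapp/pig/tmp
>   sudo -u hadoop hadoop fs -chown -R myuserapp:myuserapp /myapp
>
> and in pig.properties (or with -D on the pig command line):
>
>   pig.temp.dir=/myapp/pig/tmp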
>
> But when Pig tries to run the jobs, we get the following error:
>
> job_201303221059_0009    all_actions,filtered,raw_data    DISTINCT    Message: Job failed! Error - Job initialization failed: org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=realtime, access=EXECUTE, inode="system":hadoop:supergroup:rwx------
>
> The Hadoop JobTracker requires this permission to start up its server.
>
> My hadoop-policy.xml looks like:
>
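> <!-- value format: comma-separated users, then a space, then comma-separated groups -->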
> <property>
> <name>security.client.datanode.protocol.acl</name>
> <value>hadoop,myuserapp supergroup,myuserapp</value>
> </property>
> <property>
> <name>security.inter.tracker.protocol.acl</name>
> <value>hadoop,myuserapp supergroup,myuserapp</value>
> </property>
> <property>
> <name>security.job.submission.protocol.acl</name>
> <value>hadoop,myuserapp supergroup,myuserapp</value>
> </property>
>
> My hdfs-site.xml:
>
> <property>
> <name>dfs.permissions</name>
> <value>true</value>
> </property>
>
> <property>
>  <name>dfs.datanode.data.dir.perm</name>
>  <value>755</value>
> </property>
>
> <property>
>  <name>dfs.web.ugi</name>
>  <value>hadoop,supergroup</value>
> </property>
>
> My core-site.xml:
>
> ...
> <property>
> <name>hadoop.security.authorization</name>
> <value>true</value>
> </property>
> ...
>
> And finally, my mapred-site.xml:
>
> ...
> <property>
>  <name>mapred.local.dir</name>
>  <value>/tmp/mapred</value>
> </property>
>
> <property>
>  <name>mapreduce.jobtracker.jobhistory.location</name>
>  <value>/opt/logs/hadoop/history</value>
> </property>
> <property>
>  <name>mapreduce.jobtracker.staging.root.dir</name>
>  <value>/user</value>
> </property>
>
> Is there a missing configuration? How can I deal with multiple users
> running jobs in a restricted HDFS cluster?
>
>
Marcos Sousa 2013-03-28, 14:20