MapReduce >> mail # user >> File permissions in HDFS


File permissions in HDFS
 I am running Hadoop jobs on a cluster whose jobtracker and
file system are on a machine in my network called mycluster.

 conf.set("fs.default.name", "hdfs://mycluster:9000");
 conf.set("mapred.job.tracker", "mycluster:9001");

If I set these values in the configuration as shown, pass that configuration
to a Tool, and invoke the Tool's run method, the job runs on the cluster.
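For context, a minimal driver along those lines might look like the sketch below. The class and job names are hypothetical, and the configuration keys are the old pre-2.x names used in this message (the sketch assumes a matching Hadoop version on the classpath):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver: points the client at the remote jobtracker and
// namenode, then submits the job from the local machine.
public class RemoteJobDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        // Old (pre-2.x) configuration keys, as in the message above.
        conf.set("fs.default.name", "hdfs://mycluster:9000");
        conf.set("mapred.job.tracker", "mycluster:9001");

        Job job = new Job(conf, "remote-job");
        job.setJarByClass(RemoteJobDriver.class);
        // ... set mapper/reducer classes and input/output paths here ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new RemoteJobDriver(), args));
    }
}
```

Going through ToolRunner also gives you the standard -D, -conf, and -fs generic options for free, so the cluster addresses could be supplied on the command line instead of hard-coded.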

My problem is that even when I set

conf.set("user.name", "MyDesiredUser");

in the configuration, the job still runs as the local user.

On my cluster the files and directories are created as the local user
StevesPc\Steve.
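As far as I understand it, setting "user.name" in the Configuration is ignored because, with simple (non-Kerberos) authentication, Hadoop takes the identity from the client's OS account, not from that property. On later Hadoop versions the supported way to act as another user is UserGroupInformation.doAs; a hedged sketch (the user name and path are illustrative, and simple auth never checks a password):

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: perform HDFS operations as a different (remote) user so that
// files land owned by that user rather than the local OS account.
public class RunAsUser {
    public static void main(String[] args) throws Exception {
        UserGroupInformation ugi =
                UserGroupInformation.createRemoteUser("MyDesiredUser");
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://mycluster:9000");
            FileSystem fs = FileSystem.get(conf);
            // Created as MyDesiredUser, not as the local user.
            fs.mkdirs(new Path("/user/MyDesiredUser/output"));
            return null;
        });
    }
}
```

Anything done inside the doAs block, including job submission, runs with the wrapped identity.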
In my cluster hdfs-site.xml has the property

<property>
   <name>dfs.permissions</name>
   <value>false</value>
   <final>true</final>
</property>

and in hadoop-policy.xml
all properties are set to * as shown below
 <property>
    <name>security.client.protocol.acl</name>
    <value>*</value>
  </property>

On my cluster things work, but on other clusters they do not. The
documentation is not very clear on permissions, and I would like a good
discussion of this issue. Ideally I would want to set the user (and provide
a password) for a Hadoop job while still running it from the local box.
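One low-tech workaround, assuming a later Hadoop release with simple authentication (which never verifies a password), is to set the HADOOP_USER_NAME environment variable before submitting; the jar and class names here are hypothetical:

```shell
# With simple (non-Kerberos) auth, later Hadoop clients take the effective
# user from this variable instead of the local OS account.
export HADOOP_USER_NAME=MyDesiredUser
hadoop jar myjob.jar com.example.MyJob
```

This only changes the claimed identity; it provides no actual authentication, which is exactly why dfs.permissions=false plus simple auth offers no real security.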

--
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com