Hadoop, mail # user - File permissions on S3FileSystem
File permissions on S3FileSystem
Danny Leshem 2010-04-22, 11:16
Hello,

I'm running a Hadoop cluster on three small Amazon EC2 machines with the
S3FileSystem.
Until recently I was using 0.20.2 and everything worked fine.

Now I'm using the latest trunk (0.22.0-SNAPSHOT) and the following
exception is thrown:

Exception in thread "main" java.io.IOException: The ownership/permissions on the staging directory s3://my-s3-bucket/mnt/hadoop.tmp.dir/mapred/staging/root/.staging is not as expected. It is owned by  and permissions are rwxrwxrwx. The directory must be owned by the submitter root or by root and permissions must be rwx------
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:107)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:312)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:961)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:977)
    at com.mycompany.MyJob.runJob(MyJob.java:153)
    at com.mycompany.MyJob.run(MyJob.java:177)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at com.mycompany.MyOtherJob.runJob(MyOtherJob.java:62)
    at com.mycompany.MyOtherJob.run(MyOtherJob.java:112)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at com.mycompany.MyOtherJob.main(MyOtherJob.java:117)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:187)

(The gap in "It is owned by  and permissions are ..." is not a typo on my
part; the owner seems to be printed as the empty string.)

My configuration is as follows:

core-site:
fs.default.name=s3://my-s3-bucket
fs.s3.awsAccessKeyId=[key id omitted]
fs.s3.awsSecretAccessKey=[secret key omitted]
hadoop.tmp.dir=/mnt/hadoop.tmp.dir

hdfs-site: empty

mapred-site:
mapred.job.tracker=[domU-XX-XX-XX-XX-XX-XX.compute-1.internal:9001]
mapred.map.tasks=6
mapred.reduce.tasks=6
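
For reference, this is what the non-empty settings look like in the actual
config files, in standard Hadoop XML property form (a sketch of my setup;
the bucket name is as above and the credentials are still omitted):

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>s3://my-s3-bucket</value>
  </property>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>[key id omitted]</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>[secret key omitted]</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/mnt/hadoop.tmp.dir</value>
  </property>
</configuration>

mapred-site.xml carries mapred.job.tracker, mapred.map.tasks, and
mapred.reduce.tasks in the same <property> form.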

Any help would be appreciated...

Best,
Danny