Re: Accessing Hadoop DFS for Data Storage and Retrieval Using Java
Add a UGI configuration entry like this:

conf.set("hadoop.job.ugi", your_hadoop_user_name + "," + your_hadoop_group_name);
2010/3/9 Miguel Ángel Álvarez de la Concepción <[EMAIL PROTECTED]>

>  Hi,
>
>
>
> I tried to run the Java code and it doesn't work.
>
>
>
> I pasted the code below:
>
>
>
> public class testHadoop {
>
>     public static final String DIR_HADOOP = "hdfs://my.machine.com";
>
>     public static final String PORT_HADOOP = "9000";
>
>
>
>     public static void main(String[] args) {
>
>         Configuration config = new Configuration();
>
>         config.set("fs.default.name", DIR_HADOOP + ":" + PORT_HADOOP);
>
>
>
>         try {
>
>             FileSystem haddopFileSystem = FileSystem.get(config);
>
>
>
>             String directory = "test";
>
>             Path hadoopDirectory = new
> Path(haddopFileSystem.getWorkingDirectory() + "/" + directory);
>
>
>
>             haddopFileSystem.mkdirs(hadoopDirectory);
>
>
>
>             Path sourceDirectory = new
> Path("C://Windows/media/ringout.wav");
>
>
>
>             haddopFileSystem.copyFromLocalFile(sourceDirectory,
> hadoopDirectory);
>
>
>
>             Path sourceFile = new
> Path(haddopFileSystem.getWorkingDirectory() + "/test/ringout.wav");
>
>             Path targetDirectory = new Path("C://");
>
>
>
>             haddopFileSystem.copyToLocalFile(sourceFile, targetDirectory);
>
>
>
>             haddopFileSystem.delete(hadoopDirectory, true);
>
>         } catch(IOException ex) {
>
>             Logger.getLogger(testHadoop.class.getName()).log(Level.SEVERE,
> null, ex);
>
>         }
>
>     }
>
> }
>
>
>
> The result of this code is an exception:
>
>
>
> org.apache.hadoop.security.AccessControlException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=varadero\miguelangel, access=WRITE,
> inode="tmp":root:supergroup:rwxr-xr-x
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>         at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>
>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>
>         at
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
>
>         at
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
>
>         at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:914)
>
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:262)
>
>         at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1120)
>
>         at hadoop.testHadoop.main(testHadoop.java:37)
>
> Caused by: org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=varadero\miguelangel, access=WRITE,
> inode="tmp":root:supergroup:rwxr-xr-x
>
>         at
> org.apache.hadoop.hdfs.server.namenode.PermissionChecker.check(PermissionChecker.java:176)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.PermissionChecker.check(PermissionChecker.java:157)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.PermissionChecker.checkPermission(PermissionChecker.java:105)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4514)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4484)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:1766)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:1735)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:542)
>
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>

Best Regards

Jeff Zhang