Re: How to Create file in HDFS using java Client with Permission
As per the error below, the user running the client (hadoop) does not have
write permission on the target directory /user/dasmohap/samir_tmp, which is
owned by dasmohap with mode drwxr-xr-x (no write access for group or others).

File not found org.apache.hadoop.security.AccessControlException:
Permission denied: user=hadoop, access=WRITE,
inode="/user/dasmohap/samir_tmp":dasmohap:dasmohap:drwxr-xr-x
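
For reference, here is a minimal sketch of one way around it, assuming the
cluster uses simple (non-Kerberos) authentication: run the HDFS calls as the
user that owns the directory via UserGroupInformation.doAs, and pass an
explicit FsPermission when creating the file. The NameNode URI and file name
below are placeholders, not values from this thread.

import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.UserGroupInformation;

public class WriteAsDirOwner {
    public static void main(String[] args) throws Exception {
        // With simple auth the client can act as the directory owner
        // ("dasmohap" in the error above); with Kerberos you would need
        // that user's credentials or a proxy-user setup instead.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("dasmohap");

        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder URI

                FileSystem fs = FileSystem.get(conf);
                Path file = new Path("/user/dasmohap/samir_tmp/sample.txt");

                // Create the file with explicit rw-r--r-- permission.
                FsPermission perm = new FsPermission((short) 0644);
                FSDataOutputStream out = fs.create(file, perm, true,
                        conf.getInt("io.file.buffer.size", 4096),
                        fs.getDefaultReplication(file),
                        fs.getDefaultBlockSize(file),
                        null);
                try {
                    out.writeUTF("Writing data into HDFS...");
                } finally {
                    out.close();
                }
                return null;
            }
        });
    }
}

Alternatively, keep running the client as the hadoop user and have the
directory owner (or an HDFS superuser) open up the directory with
hadoop fs -chmod / hadoop fs -chown, or simply write under /user/hadoop.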

HTH,
Anil

On Fri, Mar 1, 2013 at 3:20 AM, samir das mohapatra <[EMAIL PROTECTED]> wrote:

> Hi All,
>     I wanted to know how to create a file in HDFS using a Java program.
>
>   I wrote some code; it works fine on the dev cluster, but I am getting an
> error on another cluster.
>
> Error:
> Writing data into HDFS...................
> Creating file
> File not found org.apache.hadoop.security.AccessControlException:
> Permission denied: user=hadoop, access=WRITE,
> inode="/user/dasmohap/samir_tmp":dasmohap:dasmohap:drwxr-xr-x
>     at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4518)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1755)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:1690)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1669)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:409)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:205)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44068)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
>
>
> Regards,
> samir.
>
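
To the original question above (creating a file in HDFS from the Java
client): a minimal sketch that writes under the calling user's own HDFS home
directory (e.g. /user/hadoop), so the WRITE check passes with default
settings. The fs.defaultFS value and file name are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateHdfsFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder NameNode URI

        FileSystem fs = FileSystem.get(conf);

        // Write under the caller's own home directory (e.g. /user/hadoop);
        // missing parent directories are created automatically.
        Path file = new Path(fs.getHomeDirectory(), "samir_tmp/sample.txt");

        FSDataOutputStream out = fs.create(file, true); // overwrite if present
        try {
            out.writeUTF("Writing data into HDFS...");
        } finally {
            out.close();
        }
        fs.close();
    }
}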

--
Thanks & Regards,
Anil Gupta