Re: Urgent Requirement: How to copy a file from one cluster to another cluster using a Java client (through a Java program)
Marco Shaw 2013-03-01, 19:59
Hi Samir,

I may be alone here, but I would prefer you not use "urgent" when
asking for free help from a mailing list.

My recommendation is that if this is really urgent and you need
instant support for your Hadoop installation, you consider getting a
proper support contract so you have help when you get stuck and need
it right away.

Again, it might just be me, but free support is... free and usually
volunteer-based.

Marco

On Fri, Mar 1, 2013 at 3:43 PM, samir das mohapatra
<[EMAIL PROTECTED]> wrote:
> Hi All,
>     Has anyone gone through the scenario of copying a file from one
> cluster to another using a Java application (using the Hadoop
> FileSystem API)?
>
>   I have written a Java application that works fine within a single
> cluster, but when I copy a file from one cluster to another cluster
> I get the error below.
>
> File not found org.apache.hadoop.security.AccessControlException: Permission denied:
> user=hadoop, access=WRITE, inode="/user/dasmohap/samir_tmp":dasmohap:dasmohap:drwxr-xr-x
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4518)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1755)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:1690)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1669)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:409)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:205)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44068)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
>
>
> Regards,
> samir.
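
A minimal sketch of the cross-cluster copy samir describes, assuming
plain HDFS on both sides. The namenode URIs, paths, and class name
below are placeholders rather than values from this thread;
FileUtil.copy simply streams the bytes through the client that runs it.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class CrossClusterCopy {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Open a FileSystem handle against each cluster's namenode
            // (placeholder host:port values).
            FileSystem srcFs = FileSystem.get(URI.create("hdfs://source-nn:8020"), conf);
            FileSystem dstFs = FileSystem.get(URI.create("hdfs://dest-nn:8020"), conf);

            // Hypothetical paths; the destination directory must be
            // writable by the user this client connects as.
            Path src = new Path("/user/hadoop/input/data.txt");
            Path dst = new Path("/user/hadoop/data.txt");

            // deleteSource=false, overwrite=true; the file is read from
            // srcFs and written to dstFs through this JVM.
            FileUtil.copy(srcFs, src, dstFs, dst, false, true, conf);
        }
    }

For large volumes the usual tool is DistCp, which performs the same
copy as a MapReduce job instead of streaming through a single client.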
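
The AccessControlException in the trace is a destination-side
permission problem rather than a copy problem: the client connects as
user "hadoop", while the target inode "/user/dasmohap/samir_tmp" is
owned by dasmohap with mode drwxr-xr-x, so "hadoop" has no WRITE
access. The direct fixes are to write under a directory the connecting
user owns, or to have the owner chown/chmod the target. On a cluster
with simple (non-Kerberos) authentication, the client can also present
itself as the owning user; a sketch under that assumption, again with
a placeholder namenode URI:

    import java.net.URI;
    import java.security.PrivilegedExceptionAction;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;

    public class ConnectAsOwner {
        public static void main(String[] args) throws Exception {
            // With simple authentication the username is not verified,
            // so the client can identify as the directory owner.
            UserGroupInformation ugi =
                    UserGroupInformation.createRemoteUser("dasmohap");

            FileSystem dstFs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
                public FileSystem run() throws Exception {
                    // Handle opened as "dasmohap"; writes under
                    // /user/dasmohap now pass the permission check.
                    return FileSystem.get(URI.create("hdfs://dest-nn:8020"),
                                          new Configuration());
                }
            });

            System.out.println("Connected as " + ugi.getUserName()
                    + " to " + dstFs.getUri());
        }
    }

On a Kerberos-secured cluster createRemoteUser carries no credentials,
so there the realistic options are a keytab login for the right
principal or fixing ownership on the target directory.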