
Hadoop, mail # dev - plz help! HDFS to S3 copy issues


Re: plz help! HDFS to S3 copy issues
Steve Loughran 2012-07-12, 03:02
This is a hadoop-user question, not a development one - please use the right
list, as user questions get ignored on the dev ones.

also:
http://wiki.apache.org/hadoop/ConnectionRefused
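Following the wiki page's advice, a first check is whether anything is listening on the address distcp is dialling. The sketch below is hedged: the host, port, and URI are taken from the log later in this thread, and the `nc` probe assumes that tool is installed on the node.

```shell
# Sketch of a quick diagnosis, using the URI from the failing command.
# "Connection refused" usually means nothing is listening on that host:port;
# 9001 is the classic JobTracker port, while the NameNode RPC port is
# whatever fs.default.name in core-site.xml says (often 8020 or 9000).
NN_URI="hdfs://10.240.113.162:9001"

# Split the URI into host and port so the socket can be probed directly.
NN_HOST=$(echo "$NN_URI" | sed -e 's|hdfs://||' -e 's|:.*$||')
NN_PORT=$(echo "$NN_URI" | sed -e 's|^.*:||')
echo "probing $NN_HOST on port $NN_PORT"

# On the machine itself, check that a NameNode is actually listening there,
# e.g. with nc (uncomment to run against a live cluster):
#   nc -z "$NN_HOST" "$NN_PORT" && echo "port open" || echo "port closed"
# and compare the port against the value of fs.default.name in core-site.xml.
```

If the probe fails even for localhost, the NameNode is most likely listening on a different port than the one in the distcp URI.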

On 11 July 2012 19:23, Momina Khan <[EMAIL PROTECTED]> wrote:

> i use the following command to try to copy data from hdfs to my s3 bucket
>
> ubuntu@domU-12-31-39-04-6E-58:/state/partition1/hadoop-1.0.1$ bin/hadoop distcp hdfs://10.240.113.162:9001/data/ s3://ID:SECRET@momina
>
> java throws a connection refused exception ... i am running just this one
> instance, and the same URI works fine for other hdfs commands ... even
> localhost gives the same error ...
> please help, i have also tried hftp, but i am guessing connection refused
> is not a distcp error ... i have tried all i can ... i have the
> authentication certificate and private key in place ... could it be an
> authentication failure? ... but the connection refused is raised on the
> hdfs URI.
>
> please, anyone, help ... i have tried google extensively!
>
> Find the call trace attached below:
>
> ubuntu@domU-12-31-39-04-6E-58:/state/partition1/hadoop-1.0.1$ bin/hadoop distcp hdfs://10.240.113.162:9001/data/ s3://ID:SECRET@momina
>
> 12/07/05 12:48:37 INFO tools.DistCp: srcPaths=[hdfs://10.240.113.162:9001/data]
> 12/07/05 12:48:37 INFO tools.DistCp: destPath=s3://ID:SECRET@momina
>
> 12/07/05 12:48:38 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 0 time(s).
> 12/07/05 12:48:39 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 1 time(s).
> 12/07/05 12:48:40 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 2 time(s).
> 12/07/05 12:48:41 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 3 time(s).
> 12/07/05 12:48:42 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 4 time(s).
> 12/07/05 12:48:43 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 5 time(s).
> 12/07/05 12:48:44 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 6 time(s).
> 12/07/05 12:48:45 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 7 time(s).
> 12/07/05 12:48:46 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 8 time(s).
> 12/07/05 12:48:47 INFO ipc.Client: Retrying connect to server: domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001. Already tried 9 time(s).
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: java.net.ConnectException: Call to domU-12-31-39-04-6E-58.compute-1.internal/10.240.113.162:9001 failed on connection exception: java.net.ConnectException: Connection refused
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1071)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>     at $Proxy1.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)