writing to hdfs via java api



+
Jay Vyas 2011-10-28, 05:52
+
Harsh J 2011-10-28, 06:03
+
Jay Vyas 2011-10-28, 23:04
+
Tom Melendez 2011-10-29, 00:24
+
Jay Vyas 2011-10-29, 02:57
+
Tom Melendez 2011-10-29, 03:41
+
Alex Gauthier 2011-10-29, 03:43
+
Alex Gauthier 2011-10-29, 03:43
+
JAX 2011-10-29, 04:16
+
Alex Gauthier 2011-10-29, 04:17
+
JAX 2011-10-29, 04:19
Re: writing to hdfs via java api
The hdfs scheme should work, but you will have to change the port. To find
the correct port number, look for the fs.default.name property in
core-site.xml; the namenode web UI should also state the port.
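
For example, something like this should work (just a sketch; it assumes the
NameNode RPC address is 172.16.xxx.xxx:8020, so substitute whatever
fs.default.name says on your cluster):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWrite {
        public static void main(String[] args) throws Exception {
            // Point at the NameNode RPC address from fs.default.name,
            // not the 50070 web UI / hftp port. The 8020 here is only a guess.
            String uri = "hdfs://172.16.xxx.xxx:8020/";

            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create(uri), conf);

            // Write a small test file over hdfs://, then check it exists.
            Path p = new Path("/tmp/hello.txt");
            FSDataOutputStream out = fs.create(p);
            out.writeUTF("hello hdfs");
            out.close();

            System.out.println("exists: " + fs.exists(p));
            fs.close();
        }
    }

No ssh settings are involved for this; the Java client talks to the NameNode
and DataNodes directly over RPC.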

--
Arpit

On Oct 27, 2011, at 10:52 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:

> I found a way to connect to Hadoop via hftp, and it works fine (read only):
>
>    String uri = "hftp://172.16.xxx.xxx:50070/";
>
>    System.out.println( "uri: " + uri );
>    Configuration conf = new Configuration();
>
>    FileSystem fs = FileSystem.get( URI.create( uri ), conf );
>    fs.printStatistics();
>
> However, it appears that hftp is read only, and I want to read/write as well
> as copy files; that is, I want to connect over hdfs. How can I enable hdfs
> connections so that I can edit the actual, remote filesystem using the
> file/path APIs? Are there ssh settings that have to be set before I can do
> this?
>
> I tried to change the protocol above from "hftp" -> "hdfs", but I got the
> following exception ...
>
> Exception in thread "main" java.io.IOException: Call to /172.16.112.131:50070 failed on local exception: java.io.EOFException
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>     at $Proxy0.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
>     at sb.HadoopRemote.main(HadoopRemote.java:24)