

Re: Connect to HDFS running on a different Hadoop-Version
Did you try using hftp:// instead of hdfs://? This would work across different RPC versions as long as the code bases are not from significantly different branches.
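
For example, a minimal read-only listing over hftp might look like the sketch below (namenode-host is a placeholder; hftp goes through the NameNode's HTTP port, 50070 by default, rather than the RPC port):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HftpListing {
        public static void main(String[] args) throws Exception {
            // hftp is read-only and goes over the NameNode's HTTP port
            // (dfs.http.address, 50070 by default), so it does not depend on
            // matching RPC versions between client and cluster.
            // "namenode-host" is a placeholder for the real NameNode hostname.
            FileSystem fs = FileSystem.get(
                    URI.create("hftp://namenode-host:50070/"), new Configuration());
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }

The same URI also works from the shell, e.g. hadoop fs -ls hftp://namenode-host:50070/.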

The EOFException might also be related to an RPC version mismatch. If the release of Hadoop is based off the 0.20.2xx line (Hadoop with security), you would see this.

-rajive

On Jan 25, 2012, at 4:37, Romeo Kienzler <[EMAIL PROTECTED]> wrote:

> Dear List,
>
> we're trying to use a central HDFS instance that can be accessed from various other Hadoop distributions.
>
> Do you think this is possible? We're having trouble, but it doesn't seem to be related to different RPC versions.
>
> When trying to access a Cloudera CDH3 Update 2 (cdh3u2) HDFS from BigInsights 1.3, we're getting this error:
>
> Bad connection to FS. Command aborted. Exception: Call to localhost.localdomain/127.0.0.1:50070 failed on local exception: java.io.EOFException
> java.io.IOException: Call to localhost.localdomain/127.0.0.1:50070 failed on local exception: java.io.EOFException
>        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
>        at org.apache.hadoop.ipc.Client.call(Client.java:1110)
>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>        at $Proxy0.getProtocolVersion(Unknown Source)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
>        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>        at com.ibm.biginsights.hadoop.patch.PatchedDistributedFileSystem.initialize(PatchedDistributedFileSystem.java:19)
>        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
>        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
>        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:111)
>        at org.apache.hadoop.fs.FsShell.init(FsShell.java:82)
>        at org.apache.hadoop.fs.FsShell.run(FsShell.java:1785)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>        at org.apache.hadoop.fs.FsShell.main(FsShell.java:1939)
> Caused by: java.io.EOFException
>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:815)
>        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:724)
>
>
> But we've already replaced the client hadoop-common.jar files with the Cloudera ones.
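>
> A quick sanity check (just a sketch, assuming the swapped JARs are on the client classpath) is to print the build information compiled into the Hadoop libraries the client actually loads:
>
>    import org.apache.hadoop.util.VersionInfo;
>
>    public class PrintClientVersion {
>        public static void main(String[] args) {
>            // Reports the version baked into the Hadoop JARs on the classpath;
>            // this should match the CDH3u2 NameNode if the JAR swap took effect.
>            System.out.println("version:  " + VersionInfo.getVersion());
>            System.out.println("revision: " + VersionInfo.getRevision());
>        }
>    }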
>
> Please note also that we're getting an EOFException and not an RPC.VersionMismatch.
>
> FsShell.java:
>
>        try {
>            init();
>        } catch (RPC.VersionMismatch v) {
>            System.err.println("Version Mismatch between client and server"
>                    + "... command aborted.");
>            return exitCode;
>        } catch (IOException e) {
>            System.err.println("Bad connection to FS. command aborted.");
>            System.err
>                    .println("Bad connection to FS. Command aborted. Exception: "
>                            + e.getLocalizedMessage());
>            e.printStackTrace();
>            return exitCode;
>        }
>
> Any ideas?