There can be two reasons here: (a) your client libs and the server
libs have a version mismatch that includes incompatible RPC protocol
changes, making communication impossible, or (b) the port you are
connecting to in your app is not really the JobTracker's port.
For (a), aligning the dependencies in the client runtime/project with
the version of Hadoop deployed on the server usually fixes it.
For (b), inspect the server's core-site.xml (for the fs.default.name
port, which is the NameNode's port) and mapred-site.xml (for the
mapred.job.tracker port, which is the JobTracker's port). That will
show you what the deployment looks like, so you can fix the port
configs in your code to connect to the right one.
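For reference, the relevant entries in those two files usually look
something like the fragment below. The hosts and ports here are just
examples matching the defaults mentioned in this thread, not
necessarily what your deployment uses:

```xml
<!-- core-site.xml: fs.default.name points at the NameNode -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- mapred-site.xml: mapred.job.tracker points at the JobTracker -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
```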
Does either of these help?
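Independent of Hadoop, a quick sanity check that something is actually
listening on the configured port (similar to the telnet test mentioned
further down) can be sketched like this; the class name and defaults
are just illustrative:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            // Connection refused, timed out, or host unreachable.
            return false;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 9001;
        System.out.println(host + ":" + port + " open? " + isOpen(host, port, 2000));
    }
}
```

Note that a successful connect only proves some process is listening
there; it does not tell you whether that process is the JobTracker.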
On Fri, Feb 15, 2013 at 6:22 AM, Alex Thieme <[EMAIL PROTECTED]> wrote:
> Any thoughts on why my connection to the Hadoop server fails? Any help
> provided would be greatly appreciated.
> Alex Thieme
> [EMAIL PROTECTED]
> On Feb 13, 2013, at 1:41 PM, Alex Thieme <[EMAIL PROTECTED]> wrote:
> It appears this is the full extent of the stack trace. Anything prior to the
> org.apache.hadoop calls is from my container, from which Hadoop is called.
> Caused by: java.io.IOException: Call to /127.0.0.1:9001 failed on local
> exception: java.io.EOFException
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
> at org.apache.hadoop.ipc.Client.call(Client.java:743)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> at org.apache.hadoop.mapred.$Proxy55.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:429)
> at org.apache.hadoop.mapred.JobClient.init(JobClient.java:423)
> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:410)
> at org.apache.hadoop.mapreduce.Job.<init>(Job.java:50)
> at com.allenabi.sherlock.graph.OfflineDataTool.run(OfflineDataTool.java:25)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> ... 64 more
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:375)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
> Alex Thieme
> [EMAIL PROTECTED]
> On Feb 12, 2013, at 8:16 PM, Hemanth Yamijala <[EMAIL PROTECTED]> wrote:
> Can you please include the complete stack trace and not just the root?
> Also, have you set fs.default.name to an HDFS location like
> hdfs://localhost:9000?
> On Wednesday, February 13, 2013, Alex Thieme wrote:
>> Thanks for the prompt reply and I'm sorry I forgot to include the
>> exception. My bad. I've included it below. There certainly appears to be a
>> server running on localhost:9001. At least, I was able to telnet to that
>> address. While in development, I'm treating the server on localhost as the
>> remote server. Moving to production, there'd obviously be a different remote
>> server address configured.
>> Root Exception stack trace:
>> at java.io.DataInputStream.readInt(DataInputStream.java:375)
>> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>> + 3 more (set debug level logging or '-Dmule.verbose.exceptions=true'
>> for everything)
>> On Feb 12, 2013, at 4:22 PM, Nitin Pawar <[EMAIL PROTECTED]> wrote:
>> conf.set("mapred.job.tracker", "localhost:9001");
>> This means your JobTracker is expected to be on port 9001 on localhost.
>> If you change it to the remote host, and that's the port it's running
>> on there, then it should work as expected.
>> What's the exception you are getting?