Re: Java submit job to remote server
Can you please include the complete stack trace, and not just the root
exception? Also, have you set fs.default.name to an HDFS location like
hdfs://localhost:9000?

Thanks
Hemanth

On Wednesday, February 13, 2013, Alex Thieme wrote:

> Thanks for the prompt reply and I'm sorry I forgot to include the
> exception. My bad. I've included it below. There certainly appears to be a
> server running on localhost:9001. At least, I was able to telnet to that
> address. While in development, I'm treating the server on localhost as the
> remote server. Moving to production, there'd obviously be a different
> remote server address configured.
>
> Root Exception stack trace:
> java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:375)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>     + 3 more (set debug level logging or '-Dmule.verbose.exceptions=true'
> for everything)
>
> ********************************************************************************
>
> On Feb 12, 2013, at 4:22 PM, Nitin Pawar <[EMAIL PROTECTED]> wrote:
>
> conf.set("mapred.job.tracker", "localhost:9001");
>
> This means your JobTracker is listening on port 9001 on localhost.
>
> If you change it to the remote host, and that is the port the JobTracker
> is running on there, then it should work as expected.
>
> What's the exception you are getting?
>
>
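A sketch of the change Nitin is suggesting, with "remote.example.com" as a
placeholder for the real JobTracker host:

    // Point the client at the remote JobTracker instead of localhost.
    // "remote.example.com" is a placeholder hostname, not from the thread.
    conf.set("mapred.job.tracker", "remote.example.com:9001");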
> On Wed, Feb 13, 2013 at 2:41 AM, Alex Thieme <[EMAIL PROTECTED]> wrote:
>
> I apologize for asking what seems to be such a basic question, but I could
> use some help with submitting a job to a remote server.
>
> I have downloaded and installed hadoop locally in pseudo-distributed mode.
> I have written some Java code to submit a job.
>
> Here's the org.apache.hadoop.util.Tool
> and org.apache.hadoop.mapreduce.Mapper I've written.
>
> If I enable the conf.set("mapred.job.tracker", "localhost:9001") line,
> then I get the exception included below.
>
> If that line is disabled, then the job completes. However, reviewing the
> Hadoop server administration page (http://localhost:50030/jobtracker.jsp),
> I don't see the job as processed by the server. Instead, I wonder if my
> Java code is simply running the mapper in-process, bypassing the locally
> installed server (see the note after the code below).
>
> Thanks in advance.
>
> Alex
>
> public class OfflineDataTool extends Configured implements Tool {
>
>     public int run(final String[] args) throws Exception {
>         final Configuration conf = getConf();
>         //conf.set("mapred.job.tracker", "localhost:9001");
>
>         final Job job = new Job(conf);
>         job.setJarByClass(getClass());
>         job.setJobName(getClass().getName());
>
>         job.setMapperClass(OfflineDataMapper.class);
>
>         job.setInputFormatClass(TextInputFormat.class);
>
>         job.setMapOutputKeyClass(Text.class);
>         job.setMapOutputValueClass(Text.class);
>
>         job.setOutputKeyClass(Text.class);
>         job.setOutputValueClass(Text.class);
>
>         FileInputFormat.addInputPath(job, new org.apache.hadoop.fs.Path(args[0]));
>
>         final org.apache.hadoop.fs.Path output = new org.a
>
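The archived message is truncated at this point. A sketch of how a Tool's
run() method like this one typically finishes, assuming args[1] is the
output path (the argument index and the closing lines are assumptions, not
Alex's original code):

    // Hypothetical completion of the truncated run() method above;
    // args[1] as the output path is an assumption.
    final org.apache.hadoop.fs.Path output = new org.apache.hadoop.fs.Path(args[1]);
    FileOutputFormat.setOutputPath(job, output);

    // Submit the job and wait for it to finish.
    return job.waitForCompletion(true) ? 0 : 1;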
>
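On Alex's suspicion that the job bypasses the server: in Hadoop 1.x,
mapred.job.tracker defaults to "local", in which case the client runs the
whole job in-process via LocalJobRunner and nothing is ever submitted to the
JobTracker, which would explain why the job never shows up at
http://localhost:50030/jobtracker.jsp. A quick check (a sketch):

    // If this prints "local", the job runs in-process (LocalJobRunner)
    // and will never appear in the JobTracker web UI.
    System.out.println(conf.get("mapred.job.tracker", "local"));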