Re: Java submit job to remote server
conf.set("mapred.job.tracker", "localhost:9001");

This means your JobTracker is running on port 9001 on localhost.

If you change it to the remote host, and that's the port the JobTracker is
running on there, then it should work as expected.
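
For example, a minimal sketch of pointing the client at a remote cluster
("remotehost" and both port numbers are placeholders for whatever your
cluster actually uses):

Configuration conf = new Configuration();
// remote NameNode, so input/output paths resolve against the remote HDFS
conf.set("fs.default.name", "hdfs://remotehost:9000");
// remote JobTracker that should actually run the job
conf.set("mapred.job.tracker", "remotehost:9001");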

What's the exception you are getting?
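
Also, on why the job doesn't show up on http://localhost:50030/jobtracker.jsp
when that line is commented out: mapred.job.tracker defaults to "local", so
Hadoop runs the whole job in-process with LocalJobRunner and it never reaches
the JobTracker at all. You can check which mode the client will use before
submitting, e.g.:

// prints "local" when the job would run in-process rather than on a JobTracker
System.out.println(conf.get("mapred.job.tracker", "local"));
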
On Wed, Feb 13, 2013 at 2:41 AM, Alex Thieme <[EMAIL PROTECTED]> wrote:

> I apologize for asking what seems to be such a basic question, but I could
> use some help with submitting a job to a remote server.
>
> I have downloaded and installed Hadoop locally in pseudo-distributed mode.
> I have written some Java code to submit a job.
>
> Here's the org.apache.hadoop.util.Tool
> and org.apache.hadoop.mapreduce.Mapper I've written.
>
> If I enable the conf.set("mapred.job.tracker", "localhost:9001") line,
> then I get the exception included below.
>
> If that line is disabled, the job completes. However, when I review the
> Hadoop server administration page (http://localhost:50030/jobtracker.jsp)
> I don't see the job as having been processed by the server. Instead, I
> wonder whether my Java code is simply running the mapper code directly,
> bypassing the locally installed server.
>
> Thanks in advance.
>
> Alex
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.conf.Configured;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
> import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
> import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
> import org.apache.hadoop.util.Tool;
> import org.apache.hadoop.util.ToolRunner;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public class OfflineDataTool extends Configured implements Tool {
>
>     // SLF4J-style logger, assumed from the {} placeholders used below
>     private static final Logger log = LoggerFactory.getLogger(OfflineDataTool.class);
>
>     public int run(final String[] args) throws Exception {
>         final Configuration conf = getConf();
>         //conf.set("mapred.job.tracker", "localhost:9001");
>
>         final Job job = new Job(conf);
>         job.setJarByClass(getClass());
>         job.setJobName(getClass().getName());
>
>         job.setMapperClass(OfflineDataMapper.class);
>
>         job.setInputFormatClass(TextInputFormat.class);
>
>         job.setMapOutputKeyClass(Text.class);
>         job.setMapOutputValueClass(Text.class);
>
>         job.setOutputKeyClass(Text.class);
>         job.setOutputValueClass(Text.class);
>
>         FileInputFormat.addInputPath(job, new org.apache.hadoop.fs.Path(args[0]));
>
>         // delete any previous output so the job can create the directory fresh
>         final org.apache.hadoop.fs.Path output = new org.apache.hadoop.fs.Path(args[1]);
>         FileSystem.get(conf).delete(output, true);
>         FileOutputFormat.setOutputPath(job, output);
>
>         return job.waitForCompletion(true) ? 0 : 1;
>     }
>
>     public static void main(final String[] args) {
>         try {
>             final int result = ToolRunner.run(new Configuration(), new OfflineDataTool(),
>                     new String[]{"offline/input", "offline/output"});
>             log.error("result = {}", result);
>         } catch (final Exception e) {
>             throw new RuntimeException(e);
>         }
>     }
> }
>
> import java.io.IOException;
>
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Mapper;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public class OfflineDataMapper extends Mapper<LongWritable, Text, Text, Text> {
>
>     // SLF4J-style logger, assumed from the {} placeholder used below
>     private static final Logger log = LoggerFactory.getLogger(OfflineDataMapper.class);
>
>     public OfflineDataMapper() {
>         super();
>     }
>
>     @Override
>     protected void map(final LongWritable key, final Text value, final Context context)
>             throws IOException, InterruptedException {
>         // for now, just log each input line; no key/value pairs are emitted
>         final String inputString = value.toString();
>         OfflineDataMapper.log.error("inputString = {}", inputString);
>     }
> }
>
>
--
Nitin Pawar