Reply: cannot submit a job via java client in hadoop-2.0.5-alpha


Actually, I do have mapreduce.framework.name configured in mapred-site.xml; see
below:

 

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>Execution framework set to Hadoop YARN.</description>
</property>
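
As a quick sanity check (just a sketch; CheckFrameworkName is a made-up class name, and the resources/mapred-site.xml path simply mirrors the one the TestJob client below loads), the client-side Configuration can be asked what it actually resolves for mapreduce.framework.name:

import org.apache.hadoop.conf.Configuration;

public class CheckFrameworkName {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Same relative resource name the TestJob client uses; if it cannot be
        // found on the classpath, the property below may simply never be set.
        conf.addResource("resources/mapred-site.xml");
        // Expected to print "yarn" only when mapred-site.xml is actually loaded.
        System.out.println(conf.get("mapreduce.framework.name", "<not set>"));
    }
}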

 

 

From: hadoop hive [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 10, 2013 18:39
To: [EMAIL PROTECTED]
Subject: Re: cannot submit a job via java client in hadoop-2.0.5-alpha

 

From the error it looks like you are not using mapreduce.framework.name set to yarn.
Please resend the configuration; we are unable to see it.

 

On Wed, Jul 10, 2013 at 1:33 AM, Francis.Hu <[EMAIL PROTECTED]>
wrote:

Hi, All

 

I have a hadoop-2.0.5-alpha cluster with 3 data nodes. The Resource Manager and
all data nodes are started, and I can access the Resource Manager web UI.

I wrote a Java client to submit a job, the TestJob class below, but the job is
never submitted successfully; it throws an exception every time.

My configurations are attached.  Can anyone help me? Thanks.

 

---------my java client

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class TestJob {

    public void execute() {

        // Load the cluster configuration files into the client-side Configuration.
        Configuration conf1 = new Configuration();
        conf1.addResource("resources/core-site.xml");
        conf1.addResource("resources/hdfs-site.xml");
        conf1.addResource("resources/yarn-site.xml");
        conf1.addResource("resources/mapred-site.xml");
        JobConf conf = new JobConf(conf1);

        // Jar that contains the mapper/reducer classes, and the job name.
        conf.setJar("/home/francis/hadoop-jobs/MapReduceJob.jar");
        conf.setJobName("Test");

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // DisplayRequestMapper and DisplayRequestReducer are packaged in MapReduceJob.jar.
        conf.setMapperClass(DisplayRequestMapper.class);
        conf.setReducerClass(DisplayRequestReducer.class);

        FileInputFormat.setInputPaths(conf, new Path("/home/francis/hadoop-jobs/2013070907.FNODE.2.txt"));
        FileOutputFormat.setOutputPath(conf, new Path("/home/francis/hadoop-jobs/result/"));

        try {
            // Submit the job and wait for it to finish.
            JobClient client = new JobClient(conf);
            RunningJob job = client.submitJob(conf);
            job.waitForCompletion();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

 

----------Exception

 

jvm 1    | java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
jvm 1    |      at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:119)
jvm 1    |      at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:81)
jvm 1    |      at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:74)
jvm 1    |      at org.apache.hadoop.mapred.JobClient.init(JobClient.java:482)
jvm 1    |      at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:461)
jvm 1    |      at com.rh.elastic.hadoop.job.TestJob.execute(TestJob.java:59)
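
Given that exception, if the resources/*.xml files are not actually on the client's classpath, one thing worth trying (a sketch only; the host names and ports are placeholders, not values taken from this thread) is setting the key properties directly on the Configuration at the top of execute(), before the JobClient is created:

        Configuration conf1 = new Configuration();
        // Placeholders: substitute the real NameNode and ResourceManager addresses.
        conf1.set("fs.defaultFS", "hdfs://namenode-host:9000");
        conf1.set("mapreduce.framework.name", "yarn");
        conf1.set("yarn.resourcemanager.address", "rm-host:8032");
        JobConf conf = new JobConf(conf1);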

 

 

Thanks,

Francis.Hu

 

 
