MapReduce >> mail # user >> RE: Can't initialize cluster


RE: Can't initialize cluster
To be clear, when this code is run with 'java -jar' it runs without
exception. The exception occurs when I run it with 'hadoop jar'.

 

From: Kevin Burton [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 30, 2013 11:36 AM
To: [EMAIL PROTECTED]
Subject: Can't initialize cluster

 

I have a simple MapReduce job that I am trying to get to run on my cluster.
When I run it I get:

 

13/04/30 11:27:45 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "devubuntu05:9001"

13/04/30 11:27:45 ERROR security.UserGroupInformation: PriviledgedActionException as:kevin (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

 

My core-site.xml looks like:

 

<property>
  <name>fs.default.name</name>
  <value>hdfs://devubuntu05:9000</value>
  <description>The name of the default file system. A URI whose scheme and
  authority determine the FileSystem implementation.</description>
</property>
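(As an aside, assuming Hadoop 2.x: fs.default.name is the deprecated spelling of fs.defaultFS. Both are still honored, but newer configs usually read:)

```xml
<!-- core-site.xml; same value, current property name (Hadoop 2.x) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://devubuntu05:9000</value>
</property>
```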

 

So I am unclear as to why it is looking at devubuntu05:9001.
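(For context: mapreduce.jobtracker.address normally lives in mapred-site.xml rather than core-site.xml, so that file is the likely source of the devubuntu05:9001 value. The LocalJobRunner provider only accepts "local" for that property, which is what the "Invalid ... for LocalJobRunner" line in the log is complaining about. A minimal mapred-site.xml that selects YARN instead of the local runner — a sketch, assuming Hadoop 2.x with YARN, which this thread does not confirm — would be:)

```xml
<!-- mapred-site.xml (sketch; assumes a Hadoop 2.x cluster running YARN) -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```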

 

Here is the code:

 

    public static void WordCount( String[] args ) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

 

Ideas?

Replies:
  Mohammad Tariq 2013-04-30, 17:32
  Kevin Burton 2013-04-30, 18:06
  Harsh J 2013-05-01, 06:02
  Kevin Burton 2013-04-30, 16:36
  rkevinburton@... 2013-05-01, 13:42