Re: Can't initialize cluster
Set "HADOOP_MAPRED_HOME" in your hadoop-env.sh file and re-run the job. See
if it helps.
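For example (the install path below is just an assumption; point it at
wherever your MapReduce libraries actually live):

  export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce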

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 10:10 PM, Kevin Burton <[EMAIL PROTECTED]> wrote:

> To be clear, when this code is run with 'java -jar' it runs without
> exception. The exception occurs when I run it with 'hadoop jar'.
>
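> For reference, the two invocations (the jar name is just a placeholder
> for whatever I actually build):
>
>   java -jar wordcount.jar <in> <out>      <- runs fine
>   hadoop jar wordcount.jar <in> <out>     <- throws the exception below
>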
> From: Kevin Burton [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2013 11:36 AM
> To: [EMAIL PROTECTED]
> Subject: Can't initialize cluster
>
> I have a simple MapReduce job that I am trying to run on my cluster.
> When I run it I get:
>
> 13/04/30 11:27:45 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "devubuntu05:9001"
>
> 13/04/30 11:27:45 ERROR security.UserGroupInformation: PriviledgedActionException as:kevin (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>
> Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>
> My core-site.xml looks like:
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://devubuntu05:9000</value>
>   <description>The name of the default file system. A URI whose scheme and
>     authority determine the FileSystem implementation.</description>
> </property>
>
> So I am unclear as to why it is looking at devubuntu05:9001.
>
> Here is the code:
>
>     public static void WordCount(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
>         if (otherArgs.length != 2) {
>             System.err.println("Usage: wordcount <in> <out>");
>             System.exit(2);
>         }
>         Job job = new Job(conf, "word count");
>         job.setJarByClass(WordCount.class);
>         job.setMapperClass(WordCount.TokenizerMapper.class);
>         job.setCombinerClass(WordCount.IntSumReducer.class);
>         job.setReducerClass(WordCount.IntSumReducer.class);
>         job.setOutputKeyClass(Text.class);
>         job.setOutputValueClass(IntWritable.class);
>         org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
>         org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
>         System.exit(job.waitForCompletion(true) ? 0 : 1);
>     }
>
> Ideas?
>
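Also worth checking: the first INFO line shows the job picking up
"devubuntu05:9001" for mapreduce.jobtracker.address while
mapreduce.framework.name is still resolving to the local runner. A minimal
sketch of what mapred-site.xml could look like for a classic JobTracker
setup (host and port copied from your log; use "yarn" instead of "classic"
if you are on YARN):

<property>
  <name>mapreduce.framework.name</name>
  <value>classic</value>
</property>
<property>
  <name>mapreduce.jobtracker.address</name>
  <value>devubuntu05:9001</value>
</property>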