MapReduce, mail # user - ISSUE while configuring ECLIPSE with MAP-REDUCE


Re: ISSUE while configuring ECLIPSE with MAP-REDUCE
Mohammad Tariq 2012-11-21, 06:22
You are facing this problem because of the clash between the old and the
new API. Do as Mr. Bharath has specified and use only the new API, i.e.
org.apache.hadoop.mapreduce.
This should solve your problem.

Regards,
    Mohammad Tariq
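
The driver below is a minimal sketch of what "switch to the new API" means for the code quoted further down: every import comes from org.apache.hadoop.mapreduce (never org.apache.hadoop.mapred), and Job replaces the JobConf/JobClient pair. The key/value types and the "In"/"Out" paths simply mirror the original skeleton; the base Mapper and Reducer classes are identity implementations in the new API, so they stand in for the old IdentityMapper/IdentityReducer. This is an untested sketch against the Hadoop 0.20.2 API, not a drop-in replacement.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class TestDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Job replaces JobConf + JobClient from the old API
        Job job = new Job(conf, "test-driver");
        job.setJarByClass(TestDriver.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Both format classes now come from o.a.h.mapreduce.lib.*,
        // so they match the Job-based setters and compile cleanly
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path("In"));
        FileOutputFormat.setOutputPath(job, new Path("Out"));

        // The new-API base classes pass records through unchanged,
        // replacing the old IdentityMapper/IdentityReducer
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);

        // waitForCompletion replaces JobClient.runJob
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```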

On Wed, Nov 21, 2012 at 11:41 AM, bharath vissapragada <
[EMAIL PROTECTED]> wrote:

> Switch to the new api and use 'Job' class ! That should solve the problem!
>
>
> On Wed, Nov 21, 2012 at 11:24 AM, yogesh dhari <[EMAIL PROTECTED]>wrote:
>
>>  I am using Apache Hadoop-0.20.2
>>
>> Regards
>> Yogesh Kumar
>>
>> ------------------------------
>> From: [EMAIL PROTECTED]
>> To: [EMAIL PROTECTED]
>> Subject: ISSUE while configuring ECLIPSE with MAP-REDUCE
>> Date: Wed, 21 Nov 2012 11:17:42 +0530
>>
>>
>>  Hi Hadoop Champs,
>>
>> I am facing this issue while trying to configure Eclipse with
>> Map-Reduce.
>>
>> Exception in thread "main" java.lang.Error: Unresolved compilation
>> problems:
>>     The method setInputFormat(Class<? extends InputFormat>) in the type
>> JobConf is not applicable for the arguments (Class<TextInputFormat>)
>>     The method setOutputFormat(Class<? extends OutputFormat>) in the type
>> JobConf is not applicable for the arguments (Class<TextOutputFormat>)
>>     The method setInputPaths(Job, String) in the type FileInputFormat is
>> not applicable for the arguments (JobConf, Path)
>>     The method setOutputPath(Job, Path) in the type FileOutputFormat is
>> not applicable for the arguments (JobConf, Path)
>>
>>     at TestDriver.main(TestDriver.java:30)
>>
>>
>>
>>
>> I have these classes and flow pattern.
>>
>>
>> import org.apache.hadoop.fs.Path;
>> import org.apache.hadoop.io.IntWritable;
>> import org.apache.hadoop.io.Text;
>> import org.apache.hadoop.mapred.JobClient;
>> import org.apache.hadoop.mapred.JobConf;
>> import org.apache.hadoop.mapred.Mapper;
>> import org.apache.hadoop.mapred.Reducer;
>> import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
>> import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
>> import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
>> import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
>> import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
>>
>>
>>
>> public class TestDriver {
>>
>>     public static void main(String[] args) {
>>         JobClient client = new JobClient();
>>         JobConf conf = new JobConf(TestDriver.class);
>>
>>         // TODO: specify output types
>>         conf.setOutputKeyClass(Text.class);
>>         conf.setOutputValueClass(IntWritable.class);
>>
>>         // TODO: specify input and output DIRECTORIES (not files)
>>         //conf.setInputPath(new Path("src"));
>>         //conf.setOutputPath(new Path("out"));
>>
>>         conf.setInputFormat(TextInputFormat.class);  /* ERROR
>> shown is :: The method setInputFormat(Class<? extends InputFormat>) in
>> the type JobConf is not applicable for the arguments
>> (Class<TextInputFormat>) */
>>
>>         conf.setOutputFormat(TextOutputFormat.class);  /* ERROR
>> shown is :: The method setOutputFormat(Class<? extends OutputFormat>) in
>> the type JobConf is not applicable for the arguments
>> (Class<TextOutputFormat>) */
>>
>>         FileInputFormat.setInputPaths(conf, new Path("In"));  /*
>> ERROR shown is :: The method setInputPaths(Job, String) in the type
>> FileInputFormat is not applicable for the arguments (JobConf, Path) */
>>
>>         FileOutputFormat.setOutputPath(conf, new Path("Out"));  /*
>> ERROR shown is :: The method setOutputPath(Job, Path) in the type
>> FileOutputFormat is not applicable for the arguments (JobConf, Path) */
>>
>>
>>         // TODO: specify a mapper
>>
>> conf.setMapperClass(org.apache.hadoop.mapred.lib.IdentityMapper.class);
>>
>>         // TODO: specify a reducer
>>
>> conf.setReducerClass(org.apache.hadoop.mapred.lib.IdentityReducer.class);
>>
>>         client.setConf(conf);
>>         try {
>>             JobClient.runJob(conf);