Hadoop user mailing list: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.LongWritable, recieved org.apache.hadoop.io.Text


Re: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.LongWritable, recieved org.apache.hadoop.io.Text
This was answered at http://search-hadoop.com/m/j1M3R1Mjjx31
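
In short: the mapper emits Text keys and LongWritable values, but the driver only declares the map output value class, so the map output key class falls back to the job default (LongWritable) and the collector rejects the Text key at write time. Below is a minimal driver sketch assuming the mapper and reducer signatures quoted in the message; it is an illustration of the fix, not the code from the linked answer.

    public int run(String[] args) throws Exception {
        // Surrounding class and imports as in the quoted TopKRecord program.
        Job job = new Job();
        job.setJarByClass(TopKRecord.class);
        job.setJobName("TopKRecord");

        job.setMapperClass(MapClass.class);
        job.setReducerClass(Reduce.class);

        // The mapper is Mapper<LongWritable, Text, Text, LongWritable>, so both
        // intermediate classes must be declared; setting only the value class
        // leaves the key class at the default and causes the reported IOException.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // The reducer is Reducer<Text, LongWritable, Text, Text>, so the final
        // output classes should be Text as well.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }

The general rule is that the Writable classes declared on the Job have to match what map() and reduce() actually emit; the defaults (LongWritable key, Text value) only happen to match jobs that pass TextInputFormat records straight through.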

On Fri, Aug 3, 2012 at 3:52 AM, Harit Himanshu <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am facing another issue now. Here is my program:
>
>
>     public static class MapClass extends Mapper<LongWritable, Text, Text, LongWritable> {
>
>         public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
>             // your map code goes here
>             String[] fields = value.toString().split(",");
>             Text yearInText = new Text();
>             LongWritable out = new LongWritable();
>             String year = fields[1];
>             String claims = fields[8];
>
>             if (claims.length() > 0 && (!claims.startsWith("\""))) {
>                 yearInText.set(year.toString());
>                 out.set(Long.parseLong(claims));
>                 context.write(yearInText, out);
>             }
>         }
>     }
>
>
>     public static class Reduce extends Reducer<Text, LongWritable, Text, Text> {
>
>         public void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
>             // your reduce function goes here
>             Text value = new Text();
>             value.set(values.toString());
>             context.write(key, value);
>         }
>     }
>
>     public int run(String args[]) throws Exception {
>         Job job = new Job();
>         job.setJarByClass(TopKRecord.class);
>
>         job.setMapperClass(MapClass.class);
>         job.setReducerClass(Reduce.class);
>
>         FileInputFormat.setInputPaths(job, new Path(args[0]));
>         FileOutputFormat.setOutputPath(job, new Path(args[1]));
>
>         job.setMapOutputValueClass(LongWritable.class);
>         job.setJobName("TopKRecord");
>
> //        job.setNumReduceTasks(0);
>         boolean success = job.waitForCompletion(true);
>         return success ? 0 : 1;
>     }
>
>     public static void main(String args[]) throws Exception {
>         int ret = ToolRunner.run(new TopKRecord(), args);
>         System.exit(ret);
>     }
> }
>
> When I run this in hadoop, I see the following error
>
> 12/08/02 15:12:59 INFO mapred.JobClient: Task Id : attempt_201208021025_0011_m_000001_0, Status : FAILED
> java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.LongWritable, recieved org.apache.hadoop.io.Text
> at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1014)
> at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
> at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
> at com.hadoop.programs.TopKRecord$MapClass.map(TopKRecord.java:39)
> at com.hadoop.programs.TopKRecord$MapClass.map(TopKRecord.java:26)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> at org.apache.hadoop.mapred.Child.main(Child.java:249)
>
> I asked this question on SO, and the response was that I need to set the
> map output value class. I tried that too (see the code above), following
> http://hadoop.apache.org/common/docs/r0.20.2/api/org/apache/hadoop/mapreduce/Job.html#setMapOutputKeyClass%28java.lang.Class%29
>
> How do I fix this?
>
> Thank you
> + Harit
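
Two smaller things stand out in the quoted code, noted here as suggestions rather than a definitive fix. First, ToolRunner.run(new TopKRecord(), args) implies TopKRecord implements Tool; in that case the Job is usually built from getConf() (for example, new Job(getConf())) so that command-line -D options actually reach the job. Second, reduce() calls values.toString() on the Iterable, which prints the iterator object rather than the claim counts. Assuming the intent is simply to emit the claim values collected for each year, the reduce would need to iterate explicitly:

    public void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        // Concatenate the claim counts for this year; toString() on the Iterable
        // does not produce the numbers themselves.
        StringBuilder sb = new StringBuilder();
        for (LongWritable v : values) {
            if (sb.length() > 0) {
                sb.append(",");
            }
            sb.append(v.get());
        }
        context.write(key, new Text(sb.toString()));
    }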

--
Harsh J