Harsh J <harsh@...> writes:
> Ah, sorry I didn't read the exact problem.
> Yes that static call you make to addInputPath goes all the way up to
> (inheritance!) FileInputFormat.addInputPath, which just adds input
> paths and doesn't automatically imprint itself as the input format
> class at the same time.
> On Fri, May 31, 2013 at 9:35 PM, Jens Scheidtmann
> <jens.scheidtmann@...> wrote:
> > Dear Harsh,
> > thanks for your answer. Your post talks about the intermediate and final
> > result types.
> > These are already configured in my job as:
> > job.setOutputKeyClass(IntWritable.class);
> > job.setOutputValueClass(IntWritable.class);
> > My problem was input key and value types, though.
> > Your post let me look in the right direction. I added
> > job.setInputFormatClass(SequenceFileInputFormat.class);
> > which did the trick. I thought this would be done by
> > SequenceFileAsBinaryInputFormat.addInputPath(jobConf, new Path(args[i]));
> > Best regards,
> > Jens
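Putting the two halves of the thread together, the driver wiring Jens describes might look like the sketch below. This is an illustrative fragment, not Jens's actual code: the class name, job name, and use of `args[0]` are assumptions. The point is that `FileInputFormat.addInputPath` only registers a path, so the input format class has to be set separately.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

// Illustrative driver class; names are placeholders, not from the thread.
public class SeqFileDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "seqfile-job");
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);
        // addInputPath only records where to read from...
        FileInputFormat.addInputPath(job, new Path(args[0]));
        // ...so the input format must be set explicitly, as Jens found:
        job.setInputFormatClass(SequenceFileInputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```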
Hey, I have a similar problem. I am trying to read a sequence file
(compressed with Snappy).
I declared my Mapper class as:
public static class Map extends Mapper<LongWritable, BytesWritable, Text,
and my map function as:
public void map(LongWritable key, Text value, Context
When I try to run it, it says 'Type mismatch in key from map: expected
org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable'
Can anyone please help?
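This error usually means the custom map() never ran: the Mapper above is declared with BytesWritable values, but the map() signature takes Text, so it overloads rather than overrides Mapper.map(), and the inherited identity mapper emits the LongWritable input key straight through. A minimal plain-Java sketch of the overload-vs-override trap (the class names here are stand-ins, not Hadoop's real API):

```java
// Stand-in for Hadoop's Mapper: the base class provides an identity map().
class Mapper<KIN, VIN> {
    String map(KIN key, VIN value) { return "identity:" + key; }
}

// Declared with Integer values...
class MyMapper extends Mapper<Long, Integer> {
    // ...but map() written for String values: this OVERLOADS the base
    // method instead of overriding it, so the framework-style call
    // below still reaches the inherited identity version.
    String map(Long key, String value) { return "custom:" + value; }
}

public class OverrideDemo {
    public static void main(String[] args) {
        Mapper<Long, Integer> m = new MyMapper();
        // Dispatches to the inherited identity map, not the custom one:
        System.out.println(m.map(1L, 42)); // prints identity:1
    }
}
```

Annotating map() with @Override makes the compiler reject the mismatched signature, which is the easiest way to catch this. The fix in the original question would be to make the map() value parameter match the Mapper declaration (BytesWritable), or change the declaration to match map().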