Hadoop >> mail # user >> different input/output formats


Re: different input/output formats
Hi Mark,

  public void map(LongWritable offset, Text val,
          OutputCollector<FloatWritable,Text> output, Reporter reporter)
          throws IOException {
      output.collect(new FloatWritable(1.0f), val); // change 1 to 1.0f, then it will work
  }

Let me know the status after the change.
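For reference, the quoted settings assembled into a single driver might look like the sketch below. It assumes the old `org.apache.hadoop.mapred` API used throughout this thread; the class name `FloatKeyJob` is illustrative, and the `1.0f` literal follows the suggestion above. This is a configuration sketch, not code verified against a live cluster:

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class FloatKeyJob {

    // Mapper from the thread, with the suggested float literal.
    public static class MyMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, FloatWritable, Text> {
        public void map(LongWritable offset, Text val,
                OutputCollector<FloatWritable, Text> output, Reporter reporter)
                throws IOException {
            output.collect(new FloatWritable(1.0f), val);
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(FloatKeyJob.class); // ties the job to the jar containing this class
        conf.setMapperClass(MyMapper.class);
        conf.setMapOutputKeyClass(FloatWritable.class);
        conf.setMapOutputValueClass(Text.class);
        conf.setNumReduceTasks(0); // map-only: map output goes straight to the OutputFormat
        conf.setOutputKeyClass(FloatWritable.class);
        conf.setOutputValueClass(Text.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(SequenceFileOutputFormat.class);

        TextInputFormat.addInputPath(conf, new Path(args[0]));
        SequenceFileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
```

Passing `FloatKeyJob.class` to the `JobConf` constructor makes Hadoop ship the jar that actually contains this mapper, which matters when a stale jar may be on the cluster.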
On Wed, May 30, 2012 at 1:27 AM, Mark question <[EMAIL PROTECTED]> wrote:

> Hi guys, this is a very simple program, trying to use TextInputFormat and
> SequenceFileOutputFormat. It should be easy, but I keep getting the same error.
>
> Here is my configuration:
>
>        conf.setMapperClass(myMapper.class);
>        conf.setMapOutputKeyClass(FloatWritable.class);
>        conf.setMapOutputValueClass(Text.class);
>        conf.setNumReduceTasks(0);
>        conf.setOutputKeyClass(FloatWritable.class);
>        conf.setOutputValueClass(Text.class);
>
>        conf.setInputFormat(TextInputFormat.class);
>        conf.setOutputFormat(SequenceFileOutputFormat.class);
>
>        TextInputFormat.addInputPath(conf, new Path(args[0]));
>        SequenceFileOutputFormat.setOutputPath(conf, new Path(args[1]));
>
>
> myMapper class is:
>
> public class myMapper extends MapReduceBase implements
> Mapper<LongWritable,Text,FloatWritable,Text> {
>
>    public void map(LongWritable offset, Text val,
>            OutputCollector<FloatWritable,Text> output, Reporter reporter)
>            throws IOException {
>        output.collect(new FloatWritable(1), val);
>    }
> }
>
> But I get the following error:
>
> 12/05/29 12:54:31 INFO mapreduce.Job: Task Id :
> attempt_201205260045_0032_m_000000_0, Status : FAILED
> java.io.IOException: wrong key class: org.apache.hadoop.io.LongWritable is
> not class org.apache.hadoop.io.FloatWritable
>    at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:998)
>    at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:75)
>    at org.apache.hadoop.mapred.MapTask$DirectMapOutputCollector.collect(MapTask.java:705)
>    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:508)
>    at filter.stat.cosine.preprocess.SortByNorm1$Norm1Mapper.map(SortByNorm1.java:59)
>    at filter.stat.cosine.preprocess.SortByNorm1$Norm1Mapper.map(SortByNorm1.java:1)
>    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:397)
>    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
>    at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.Use
>
> Where is the LongWritable key coming from??
>
> Thank you,
> Mark
>
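The "wrong key class" message in the trace comes from a strict class-identity check inside `SequenceFile.Writer.append`: the writer compares each appended key's runtime class against the key class the job configured. Since TextInputFormat's keys are LongWritable byte offsets, the error indicates the map *input* key reached the writer rather than the FloatWritable the mapper emits. A hypothetical, simplified sketch of that check (illustrative only, not the actual Hadoop source):

```java
import java.io.IOException;

// Hypothetical simplified stand-in (not the real Hadoop implementation) for the
// class-identity check that SequenceFile.Writer.append performs on each key.
class KeyClassCheck {
    private final Class<?> keyClass; // the key class the writer was created with

    KeyClassCheck(Class<?> keyClass) {
        this.keyClass = keyClass;
    }

    // Throws exactly when the runtime class of the key differs from the
    // configured key class; even one stray record fails the whole task.
    void append(Object key) throws IOException {
        if (key.getClass() != keyClass) {
            throw new IOException("wrong key class: " + key.getClass().getName()
                    + " is not class " + keyClass.getName());
        }
    }
}
```

Because the comparison is on exact runtime class, every record's key must be the configured type; a single pass-through of the input offset key is enough to fail the attempt.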