Hadoop >> mail # user >> Re: Error on createDatumWriter?


Re: Error on createDatumWriter?
Mon!

Can you see that task attempt's log? What error does it contain?

On Wednesday, October 23, 2013 6:38 PM, Sandgorgon <[EMAIL PROTECTED]> wrote:
 
Hello Guys,

Could I ask for some help with an error I am getting, please? An
excerpt of the errors is below; I have no clue what the error points
to or what I should do to fix it:

Error:
org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
13/10/23 17:07:42 INFO mapreduce.Job: Task Id :
attempt_1379090674214_0017_m_000002_1, Status : FAILED
Error:
org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
Container killed by the ApplicationMaster.
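
For reference, a line of this shape is the message of a java.lang.NoSuchMethodError: the JVM names the missing method with an internal descriptor, where L<class-with-slashes>; is an object type. Read that way, the error says GenericData.createDatumWriter(Schema) returning DatumWriter was not found at runtime, which typically points at a version mismatch on the task classpath. A small sketch (class and helper names are made up for illustration, not part of the job) that turns such a descriptor back into readable Java types:

```java
// Hypothetical helper: decode JVM type descriptors like the one in the
// NoSuchMethodError message above (L<class-with-slashes>; -> dotted name).
public class DescriptorDemo {
    public static String decode(String descriptor) {
        // Strip the L...; wrapper, then turn slashes into package dots.
        return descriptor.replaceAll("L([^;]+);", "$1").replace('/', '.');
    }

    public static void main(String[] args) {
        String err = "org.apache.avro.generic.GenericData.createDatumWriter"
                + "(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;";
        // Prints the signature with ordinary dotted class names.
        System.out.println(decode(err));
    }
}
```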

### The following is just additional information that hopefully will be
useful.

Here is the environment I am working in: CDH 4.4.0 (YARN) with Avro
v1.7.5; I pass the avro/jackson jars to my MR job via -libjars.
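
Since a NoSuchMethodError like the one above usually means an older Avro won the classpath race against the 1.7.5 jars passed via -libjars, it can help to check which jar GenericData was actually loaded from inside the task JVM. A sketch (the class name JarCheck is made up; call something like this from Mapper.setup() or a standalone main):

```java
// Hypothetical diagnostic: report which jar a class was loaded from, to
// verify that the -libjars Avro 1.7.5 really wins over any bundled Avro.
public class JarCheck {
    public static String locationOf(Class<?> c) {
        // Classes from the bootstrap classpath have no CodeSource.
        return c.getProtectionDomain().getCodeSource() == null
                ? "bootstrap classpath"
                : c.getProtectionDomain().getCodeSource().getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // GenericData is the class named in the error; loaded via forName so
        // this sketch also compiles without Avro on the classpath.
        String name = args.length > 0 ? args[0]
                : "org.apache.avro.generic.GenericData";
        System.out.println(name + " loaded from "
                + locationOf(Class.forName(name)));
    }
}
```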

### My Mapper:

public class AttributeFilterMapper extends
        Mapper<AvroKey<GenericData.Record>, NullWritable,
               AvroKey<GenericData.Record>, NullWritable> {

    @Override
    public void map(AvroKey<GenericData.Record> key, NullWritable nil,
            Context context) throws IOException, InterruptedException {
        GenericData.Record record = key.datum();

        // record.get returns a Utf8 by default, so compare via toString()
        if (record.get("delta_filter").toString().equals("neumann")) {
            context.write(new AvroKey<GenericData.Record>(record), nil);
        }
    }
}
### My Reducer

public class DefaultFormatReducer extends
        Reducer<AvroKey<GenericData.Record>, NullWritable, Text, Text> {

    @Override
    protected void reduce(AvroKey<GenericData.Record> key,
            Iterable<NullWritable> values, Context context)
            throws IOException, InterruptedException {
        GenericData.Record record = key.datum();

        context.write(new Text("nd"),
                new Text(record.get("delta_filter").toString()));
    }
}

### My Main

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(super.getConf());
        job.setJobName("BigDataHeader");
        job.setJar("bdh.jar");

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        Schema headerSchema = new Parser()
                .parse(getClass().getResourceAsStream(
                        "/com/micron/cesoft/hadoop/tte/Header.avsc"));
        AvroJob.setInputKeySchema(job, headerSchema);
        AvroJob.setMapOutputKeySchema(job, headerSchema);

        job.setMapperClass(AttributeFilterMapper.class);
        job.setReducerClass(DefaultFormatReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }
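
One hedged note, not from the thread itself: GenericData.createDatumWriter(Schema) only exists from Avro 1.7.5 on, so if the cluster puts its own older Avro ahead of the -libjars jars, map tasks fail with exactly this NoSuchMethodError. MR2 has a switch to prefer user jars over the framework's; the property name below is the stock MR2 one and should be verified against the CDH 4.4.0 documentation (the wrapper class here is made up so the sketch runs standalone):

```java
// Sketch: the MR2 property that asks the framework to put user-supplied
// jars (e.g. the Avro 1.7.5 shipped via -libjars) first on the task
// classpath. In run() this would be set as:
//   job.getConfiguration().setBoolean("mapreduce.job.user.classpath.first", true);
// or passed as a generic option on the command line, as printed below.
public class ClasspathFirst {
    public static String genericOption() {
        return "-D mapreduce.job.user.classpath.first=true";
    }

    public static void main(String[] args) {
        // e.g.  hadoop jar bdh.jar your.Main -D mapreduce.job.user.classpath.first=true in/ out/
        System.out.println(genericOption());
    }
}
```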

Any help would be appreciated, as this is my first run-in with Avro.

Best Regards,
Mon