RE: map reduce job for avro 1.5 broken

The signature of our map() is:
public void map(Utf8 input, AvroCollector<Pair<Utf8, GenericRecord>> collector, Reporter reporter) throws IOException;
and the corresponding reduce() is:
public void reduce(Utf8 key, Iterable<GenericRecord> values, AvroCollector<GenericRecord> collector, Reporter reporter) throws IOException;
The schemas for GenericRecord are the same.

For this map/reduce job, we have 23 reducers.  Four of them succeeded and the rest failed because of this exception.
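These signatures follow Avro's org.apache.avro.mapred API (AvroMapper, AvroReducer, AvroCollector, Pair). For readers less familiar with that API, a minimal sketch of how such a job is usually laid out is shown below; the class names, the pass-through reduce logic, and the buildRecord() helper are illustrative assumptions, not the actual job code from this thread.

// Minimal sketch of an Avro 1.5 mapred job with the signatures quoted above.
// SketchMapper/SketchReducer and buildRecord() are hypothetical; only the
// method signatures mirror the ones posted in this thread.
import java.io.IOException;

import org.apache.avro.generic.GenericRecord;
import org.apache.avro.mapred.AvroCollector;
import org.apache.avro.mapred.AvroMapper;
import org.apache.avro.mapred.AvroReducer;
import org.apache.avro.mapred.Pair;
import org.apache.avro.util.Utf8;
import org.apache.hadoop.mapred.Reporter;

public class SketchJob {

  // Map side: takes a Utf8 input and emits (Utf8, GenericRecord) pairs.
  public static class SketchMapper
      extends AvroMapper<Utf8, Pair<Utf8, GenericRecord>> {
    @Override
    public void map(Utf8 input, AvroCollector<Pair<Utf8, GenericRecord>> collector,
                    Reporter reporter) throws IOException {
      GenericRecord record = buildRecord(input);  // hypothetical helper
      collector.collect(new Pair<Utf8, GenericRecord>(input, record));
    }
  }

  // Reduce side: input and output GenericRecord use the same schema.
  public static class SketchReducer
      extends AvroReducer<Utf8, GenericRecord, GenericRecord> {
    @Override
    public void reduce(Utf8 key, Iterable<GenericRecord> values,
                       AvroCollector<GenericRecord> collector,
                       Reporter reporter) throws IOException {
      for (GenericRecord value : values) {
        collector.collect(value);  // pass-through, purely for illustration
      }
    }
  }

  // Job wiring would normally set the schemas on the JobConf, e.g.:
  //   AvroJob.setInputSchema(conf, Schema.create(Schema.Type.STRING));
  //   AvroJob.setMapOutputSchema(conf,
  //       Pair.getPairSchema(Schema.create(Schema.Type.STRING), recordSchema));
  //   AvroJob.setOutputSchema(conf, recordSchema);
  //   AvroJob.setMapperClass(conf, SketchMapper.class);
  //   AvroJob.setReducerClass(conf, SketchReducer.class);

  private static GenericRecord buildRecord(Utf8 input) {
    throw new UnsupportedOperationException("placeholder for illustration");
  }
}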
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date: Mon, 28 Mar 2011 17:10:36 -0700
Subject: Re: map reduce job for avro 1.5 broken
I was able to serialize and read back the following schema with 1.5 using GenericDatumWriter and GenericDatumReader:
{"name":"foo", "type":"record", "fields":[  {"name":"mymap", "type":[    {"type":"map", "values":["int","long","float","string"]},    "null"]  }]}
Your traces below look like they are in the resolver.  Are your writer and reader schemas the same?  If it is related to schema resolution we need both versions of the schema — as it was written ('writer' schema) and what it is being resolved to ('reader' schema).
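For reference, a round trip along the lines of that test could look like the sketch below (the schema string is the one above; the record contents and the class name are illustrative assumptions). The write path uses the writer schema, and the read path resolves it against a reader schema, which is the code path the ResolvingDecoder in the trace below is exercising.

// Illustrative round trip: write with a writer schema, read back with a
// reader schema. Here both schemas are identical, so resolution is trivial;
// in the failing job the two may differ.
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class MapUnionRoundTrip {
  static final String SCHEMA_JSON =
      "{\"name\":\"foo\",\"type\":\"record\",\"fields\":["
      + "{\"name\":\"mymap\",\"type\":["
      + "{\"type\":\"map\",\"values\":[\"int\",\"long\",\"float\",\"string\"]},"
      + "\"null\"]}]}";

  public static void main(String[] args) throws Exception {
    Schema writerSchema = new Schema.Parser().parse(SCHEMA_JSON);
    Schema readerSchema = new Schema.Parser().parse(SCHEMA_JSON);

    // Build a record whose map mixes the union's branches.
    Map<String, Object> values = new HashMap<String, Object>();
    values.put("count", 1);
    values.put("label", "abc");
    GenericRecord record = new GenericData.Record(writerSchema);
    record.put("mymap", values);

    // Serialize with the writer schema.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    GenericDatumWriter<GenericRecord> writer =
        new GenericDatumWriter<GenericRecord>(writerSchema);
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    writer.write(record, encoder);
    encoder.flush();

    // Deserialize, resolving the writer schema against the reader schema.
    GenericDatumReader<GenericRecord> reader =
        new GenericDatumReader<GenericRecord>(writerSchema, readerSchema);
    Decoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
    GenericRecord result = reader.read(null, decoder);
    System.out.println(result);
  }
}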
On 3/28/11 2:45 PM, "ey-chih chow" <[EMAIL PROTECTED]> wrote:

Hi,
We have an Avro map/reduce job that used to work with Avro 1.4 but is broken with Avro 1.5: the reducer fails when it tries to do deserialization.  From the stack trace, it looks like the reducer broke while trying to resolve a 'union' of a 'map' definition in our avdl schema.  We have three fields in our schema relating to this.  These are:
union {map <union {int,long,float,string}>, null} evpl;
union {map <union {int,long,float,string}>, null} plst;
union {map <union {int,long,float,string}>, null} change;
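Assuming the standard IDL-to-JSON mapping, each of these fields should compile to a union whose first branch is a map with union values and whose second branch is null. A quick, hypothetical way to print the resolved JSON form of that field type:

// Hypothetical snippet (not from the thread): parse the JSON equivalent of
// the avdl field type 'union {map <union {int,long,float,string}>, null}'.
import org.apache.avro.Schema;

public class FieldTypeCheck {
  public static void main(String[] args) {
    Schema fieldType = new Schema.Parser().parse(
        "[{\"type\":\"map\",\"values\":[\"int\",\"long\",\"float\",\"string\"]},\"null\"]");
    System.out.println(fieldType.toString(true));  // pretty-printed union schema
  }
}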

Can anybody let me know if this is a 1.5 bug?  The stack trace was as follows:
java.lang.ArrayIndexOutOfBoundsException: -1576799025
    at org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:364)
    at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:229)
    at org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
    at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:206)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142)
    at org.apache.avro.generic.GenericDatumReader.readMap(GenericDatumReader.java:232)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:141)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:166)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:138)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:129)
    at org.apache.avro.mapred.AvroSerialization$AvroWrapperDeserializer.deserialize(AvroSerialization.java:86)
    at org.apache.avro.mapred.AvroSerialization$AvroWrapperDeserializer.deserialize(AvroSerialization.java:68)
    at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:1136)
    at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1076)
    at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.moveToNext(ReduceTask.java:246)
    at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:242)
    at org.apache.avro.mapred.HadoopReducerBase$ReduceIterable.next(HadoopReducerBase.java:47)
    at com.ngmoco.ngpipes.etl.NgEventETLReducer.reduce(NgEventETLReducer.java:46)
    at com.ngmoco.ngpipes.etl.NgEventETLReducer.reduce(NgEventETLReducer.java:1)
    at org.apache.avro.mapred.HadoopReducerBase.reduce(HadoopReducerBase.java:60)
    at org.apache.avro.mapred.HadoopReducerBase.reduce(HadoopReducerBase.java:30)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:468)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:416)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:240)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.hadoop.mapred.Child.main(Child.java:234)
Thanks.
Ey-Chih Chow        