Avro, mail # user - is this a bug?


RE: is this a bug?
ey-chih chow 2011-03-10, 22:28

I changed the Games__ field of DeviceRow to
union {null, array<DynamicColumn4Games>} Games__;
and the system no longer complains.  Is this the right fix?  Thanks.
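For what it's worth, the following minimal sketch (simplified, hypothetical schema with string items instead of DynamicColumn4Games, and a made-up class name; assumes Avro 1.5.0 on the classpath) serializes a record whose Games__ is null without error once "null" is a declared branch of the union:

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class NullUnionCheck {
  // Trimmed-down stand-in for DeviceRow: only the Games__ field, with the
  // union declared as ["null", array] and string items in place of the real
  // DynamicColumn4Games record.
  private static final String SCHEMA_JSON =
      "{\"type\":\"record\",\"name\":\"DeviceRow\",\"fields\":["
      + "{\"name\":\"Games__\",\"type\":[\"null\","
      + "{\"type\":\"array\",\"items\":\"string\"}]}]}";

  public static void main(String[] args) throws Exception {
    Schema schema = new Schema.Parser().parse(SCHEMA_JSON);

    GenericData.Record row = new GenericData.Record(schema);
    row.put("Games__", null);  // null matches the declared "null" branch

    GenericDatumWriter<GenericData.Record> writer =
        new GenericDatumWriter<GenericData.Record>(schema);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    writer.write(row, encoder);
    encoder.flush();
    System.out.println("serialized " + out.size() + " bytes");
  }
}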
Ey-Chih Chow

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: RE: is this a bug?
Date: Thu, 10 Mar 2011 11:33:13 -0800
Thanks.  I tried to migrate from 1.4.0 to 1.5.0 and got some error messages that never showed up with 1.4.0.  Could you tell me what we should change?  Our avdl record, DeviceRow, has a field defined as follows:

union {array<DynamicColumn4Games>, null} Games__;

The error messages are as follows:

11/03/10 11:31:02 INFO mapred.TaskInProgress: Error from attempt_20110310113041953_0001_m_000000_0: java.lang.NullPointerException: in com.ngmoco.hbase.DeviceRow in union null of union in field Games__ of com.ngmoco.hbase.DeviceRow
    at org.apache.avro.reflect.ReflectDatumWriter.write(ReflectDatumWriter.java:104)
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:57)
    at org.apache.avro.mapred.AvroSerialization$AvroWrapperSerializer.serialize(AvroSerialization.java:131)
    at org.apache.avro.mapred.AvroSerialization$AvroWrapperSerializer.serialize(AvroSerialization.java:114)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:900)
    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:466)
    at org.apache.avro.mapred.HadoopMapper$MapCollector.collect(HadoopMapper.java:69)
    at com.ngmoco.ngpipes.sourcing.NgActivityGatheringMapper.map(NgActivityGatheringMapper.java:91)
    at com.ngmoco.ngpipes.sourcing.NgActivityGatheringMapper.map(NgActivityGatheringMapper.java:1)
    at org.apache.avro.mapred.HadoopMapper.map(HadoopMapper.java:80)
    at org.apache.avro.mapred.HadoopMapper.map(HadoopMapper.java:34)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.NullPointerException: in union null of union in field Games__ of com.ngmoco.hbase.DeviceRow
    at org.apache.avro.generic.GenericDatumWriter.npe(GenericDatumWriter.java:92)
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:86)
    at org.apache.avro.reflect.ReflectDatumWriter.write(ReflectDatumWriter.java:102)
    ... 14 more

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date: Tue, 8 Mar 2011 15:06:20 -0800
Subject: Re: is this a bug?
I haven't gone through your messages completely enough to understand your problem, but there were a couple of fixes in 1.5.0 that could be related.
What happens if you use the 1.5.0 release candidate?
Staged maven repository for release candidate: https://repository.apache.org/content/repositories/orgapacheavro-001/
Release candidate: http://people.apache.org/~cutting/avro-1.5.0-rc3/
Note there are some API changes that may affect you a little, see CHANGES.txt
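One example of the kind of API change referred to above (recalled from memory, not quoted from CHANGES.txt): in 1.5.0 the low-level binary encoders are obtained from EncoderFactory rather than constructed directly.  A minimal sketch, assuming Avro 1.5.0:

import java.io.ByteArrayOutputStream;

import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class EncoderFactorySketch {
  public static void main(String[] args) throws Exception {
    ByteArrayOutputStream out = new ByteArrayOutputStream();

    // 1.5.0 style: obtain the encoder from the factory instead of calling
    // a BinaryEncoder constructor directly as in 1.4.x.
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    encoder.writeString("hello");

    // Factory-produced encoders may buffer internally, so flush before
    // looking at the output.
    encoder.flush();
    System.out.println("wrote " + out.size() + " bytes");
  }
}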
-Scott
On 3/8/11 2:35 PM, "ey-chih chow" <[EMAIL PROTECTED]> wrote:

Can anybody tell me if this is a bug?  We use the Avro map/reduce API v1.4 in all of our code.  Some of the jobs show weird behavior.  We want to know if this is fixable.  Otherwise, we will have to take out all the Avro APIs and use the conventional MR APIs instead.
Ey-Chih Chow

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: RE: is this a bug?
Date: Fri, 4 Mar 2011 16:57:02 -0800
I did some more investigation.  I found weird behavior in the readString() method of BinaryDecoder.java in the Avro source code if we have the statement record.put("rowkey", key) in the reduce() method.  Does this mean that there is a bug in BinaryDecoder.java?  Thanks.
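One possible explanation (my guess, not something confirmed in this thread): the Utf8 object backing the deserialized key can be reused between calls, so keeping a reference to the key inside the output record may later show a different key.  A hypothetical sketch of a defensive copy (made-up class, schema, and field names; assumes Avro 1.5.0):

import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.mapred.AvroCollector;
import org.apache.avro.mapred.AvroReducer;
import org.apache.avro.util.Utf8;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical reducer; the output schema is reduced to a single rowkey field.
public class RowKeyReducer extends AvroReducer<Utf8, GenericRecord, GenericRecord> {
  private static final Schema OUT_SCHEMA = new Schema.Parser().parse(
      "{\"type\":\"record\",\"name\":\"Out\","
      + "\"fields\":[{\"name\":\"rowkey\",\"type\":\"string\"}]}");

  @Override
  public void reduce(Utf8 key, Iterable<GenericRecord> values,
                     AvroCollector<GenericRecord> collector, Reporter reporter)
      throws IOException {
    GenericRecord out = new GenericData.Record(OUT_SCHEMA);
    // Copy the key: the Utf8 instance backing 'key' may be reused by the
    // framework when it deserializes the next key, so storing the reference
    // itself can later show a different value.
    out.put("rowkey", new Utf8(key.toString()));
    collector.collect(out);
  }
}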
Ey-Chih Chow

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: RE: is this a bug?
Date: Fri, 4 Mar 2011 00:48:55 -0800
What follows are fragments of the trace logs of our MR jobs, corresponding respectively to runs with and without the statement 'record.put("rowkey", key)' mentioned in the previous messages.  The last line of each fragment, logged at the entry of the reduce() method, shows the difference: in the first fragment the log is 'working on 0000000200000000000000000000000000002 whose rowKey is 0000000300000000000000000000000000003', while in the second it is 'working on 0000000200000000000000000000000000002 whose rowKey is 0000000200000000000000000000000000002'.  The second is what we expected, i.e. the correct key/value pair passed to the reduce() method.  Note that these two fragments were generated by adding some extra log statements to the Hadoop and Avro source code.

Can anybody help to see if this is a bug in Avro or Hadoop code?

=============================================================================================================
log fragment with the statement 'record.put("rowkey", key)'

2011-03-03 18:00:00,180 INFO org.apache.hadoop.mapred.ReduceTask: trace bug isSkipping():false
2011-03-03 18:00:00,190 INFO org.apache.avro.mapred.AvroSerialization: trace bug deserialize() reader org.apache.avro.specific.SpecificDatumReader@1a001ff
2011-03-03 18:00:00,198 INFO org.apache.avro.generic.GenericDatumReader: trace bug type of expected STRING
2011-03-03 18:00:00,199 INFO org.apache.avro.mapred.AvroSerialization: trace bug deserialized datum 0000000000000000000000000000000000000
2011-03-03 18:00:00,199 INFO org.apache.hadoop.mapred.TaskRunner: trace bug1 deserializer is org.apache.avro.mapred.AvroSerialization$AvroWrapperDeserializer@1abcc03
2011-03-03 18:00:00,199 INFO org.apache.hadoop.mapred.TaskRunner: trace bug1 key is 0000000000000000000000000000000000000
2011-03-03 18:00:00,199 INFO org.apache.hadoop.mapred.ReduceTask: trace bug done with set values
2011-03-03 18:00:00,199 INFO org.apache.hadoop.mapred.ReduceTask: trace bug key is 0000000000000000000000000000000000000 val