Re: Backwards compatible - Optional fields
A reader must always have the schema of the written data to decode it.

When creating your DatumReader, you must pass both the reader's schema and the
schema the data was written with.

Given this pair, Avro knows to skip written data that the reader does not need,
and to fill in default values for fields the reader expects but the writer did
not provide.

The flaw in your code is here, where only the reader's schema is provided:

new SpecificDatumReader<A>(a.getSchema());
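
For example, here is a minimal (untested) sketch of the corrected construction,
using the generated classes and field names from your protocol below, and
assuming the bytes were written with Metadata's schema and you want to read them
back as Metadatav2:

    // Writer's schema = schema of the data as written (Metadata).
    // Reader's schema = schema you want the result resolved to (Metadatav2).
    ByteArrayInputStream bais = new ByteArrayInputStream(bs);
    DatumReader<Metadatav2> dr =
        new SpecificDatumReader<Metadatav2>(Metadata.SCHEMA$, Metadatav2.SCHEMA$);
    Decoder d = DecoderFactory.get().binaryDecoder(bais, null);
    Metadatav2 m2 = dr.read(null, d);

In a generic helper like your parseAvroObject, the writer's schema would have to
be passed in (or carried alongside the data), since it cannot be derived from
the reader's class alone.  Also note that for the resolver to fill in S2 when
the writer did not write it, the reader's schema must give S2 a default, e.g.
{"name": "S2", "type": ["null", "string"], "default": null}.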
On 10/2/12 2:04 PM, "Gabriel Ki" <[EMAIL PROTECTED]> wrote:

> Hi all,
>
> I was under the impression that a reader works with an older-version object as
> long as the new fields are optional.  Is that true?  If not, what would you
> recommend?  Thanks a lot in advance.
>
> For example:
>
> {
>   "namespace": "org.apache.avro.examples",
>   "protocol": "MyProtocol",
>
>   "types": [
>     { "name": "Metadata", "type": "record", "fields": [
>       {"name": "S1", "type": "string"}
>     ]},
>
>     { "name": "Metadatav2", "type": "record", "fields": [
>       {"name": "S1", "type": "string"},
>       {"name": "S2", "type": ["string", "null"]}   // optional field in the new version
>     ]}
>   ]
> }
>
>     public static <A extends SpecificRecordBase> A parseAvroObject(final A a, final byte[] bb)
>             throws IOException {
>         if (bb == null) {
>             return null;
>         }
>         ByteArrayInputStream bais = new ByteArrayInputStream(bb);
>         DatumReader<A> dr = new SpecificDatumReader<A>(a.getSchema());
>         Decoder d = DecoderFactory.get().binaryDecoder(bais, null);
>         return dr.read(a, d);
>     }
>
>
> public static void main(String[] args) throws IOException {
>
>         Metadata.Builder mb = Metadata.newBuilder();
>         mb.setS1("S1 value");
>         byte[] bs = toBytes(mb.build());
>
>         Metadata m = parseAvroObject(new Metadata(), bs);
>         System.out.println("parse as Metadata " + m);
>
>         // I thought this would work with the older Avro record
>         Metadatav2 m2 = parseAvroObject(new Metadatav2(), bs);
>         System.out.println("parse as Metadatav2 " + m2);
>      }
>
>
> Exception in thread "main" java.io.EOFException
>     at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:145)
>     at org.apache.avro.io.BinaryDecoder.readIndex(BinaryDecoder.java:405)
>     at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:229)
>     at org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
>     at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:206)
>     at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142)
>     at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:166)
>     at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:138)
>     at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:129)
>
>
> Thanks
> -gabe