Re: More questions on avro serialization
Neha Narkhede 2013-08-22, 16:37
The point of the magic byte is to indicate the current version of the
message format. One part of the format is the fact that it is Avro encoded.
I'm not sure how Camus gets a 4 byte id, but at LinkedIn we use the 16 byte
MD5 hash of the schema. Since AVRO-1124 is not resolved yet, I'm not sure
if I can comment on the compatibility just yet.
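The framing Neha describes — a magic byte indicating the message-format version, followed by a 16-byte MD5 fingerprint of the schema, followed by the Avro-encoded payload — can be sketched roughly as below. This is a minimal illustration, not LinkedIn's actual code: the magic-byte value, the registry dict standing in for the schema repository, and the use of the schema's JSON text as MD5 input are all assumptions, and the payload here is opaque bytes rather than a real Avro encoding.

```python
import hashlib

MAGIC_BYTE = 0x00  # hypothetical version value; real deployments may differ


def schema_md5(schema_json: str) -> bytes:
    """16-byte MD5 fingerprint of the schema's JSON text (assumed input)."""
    return hashlib.md5(schema_json.encode("utf-8")).digest()


def frame(schema_json: str, avro_payload: bytes) -> bytes:
    """Prepend magic byte + schema MD5 to an already Avro-encoded payload."""
    return bytes([MAGIC_BYTE]) + schema_md5(schema_json) + avro_payload


def unframe(message: bytes, registry: dict) -> tuple:
    """Split a framed message and resolve its schema from a registry.

    The registry dict plays the role of the schema repository the Hadoop
    consumer queries by MD5.
    """
    if message[0] != MAGIC_BYTE:
        raise ValueError("unknown message-format version")
    md5 = message[1:17]          # 16-byte fingerprint after the magic byte
    schema = registry[md5]       # look up the writer's schema
    payload = message[17:]       # rest of the message is the Avro payload
    return schema, payload


# Round-trip one message through the framing.
schema = '{"type": "record", "name": "Event", "fields": []}'
registry = {schema_md5(schema): schema}
msg = frame(schema, b"\x02payload")
resolved_schema, payload = unframe(msg, registry)
```

The point of the fingerprint-in-header design is that the payload itself never carries the schema; the consumer pays only 17 bytes of overhead per message and fetches each schema from the repository once.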
On Wed, Aug 21, 2013 at 9:00 PM, Mark <[EMAIL PROTECTED]> wrote:
> Neha, thanks for the response.
> So the only point of the magic byte is to indicate that the rest of the
> message is Avro encoded? I noticed that in Camus a 4 byte int id of the
> schema is written instead of the 16 byte SHA. Is this the new preferred
> way? Which is compatible with AVRO-1124?
> Thanks again
> On Aug 21, 2013, at 8:38 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
> > We define the LinkedIn Kafka message to have a magic byte (indicating
> > serialization), MD5 header followed by the payload. The Hadoop consumer
> > reads the MD5, looks up the schema in the repository and deserializes the
> > message.
> > Thanks,
> > Neha
> > On Wed, Aug 21, 2013 at 8:15 PM, Mark <[EMAIL PROTECTED]> wrote:
> >> Does LinkedIn include the SHA of the schema in the header of each Avro
> >> message they write, or do they wrap the Avro message and prepend the SHA?
> >> In either case, how does the Hadoop consumer know what schema to read?