Avro >> mail # user >> Make a copy of an avro record


Re: Make a copy of an avro record
Thanks James and Doug. I was able to simply cast the output of
SpecificData...deepCopy to my type, and that seems to bypass the problematic
methods annotated with @Override.

What about the potential incompatibility with earlier versions of Java due
to the change in the semantics of @Override? If this is really an issue, it
seems like it would affect a lot of users, particularly people using Avro
MapReduce on a cluster where upgrading Java is not a trivial proposition.
In my particular case, the reduce processing requires loading all values
associated with the key into memory, which necessitates a deep copy,
because the iterable passed to the reducer appears to reuse the same
instance for every value.
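The instance-reuse behavior described above can be illustrated without Avro or Hadoop at all. The following is a minimal, self-contained Java sketch (the `Record` class and `reusingIterable` helper are hypothetical stand-ins, not Hadoop or Avro APIs): an iterator that mutates and returns the same object on every `next()` call, which is essentially what the reducer's value iterable does, and why buffering values requires copying each one first.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ReusePitfall {
    // Hypothetical mutable record, standing in for a generated Avro record.
    static class Record {
        int value;
        Record copy() { Record r = new Record(); r.value = this.value; return r; }
    }

    // Like a reducer's value iterable: one shared instance is mutated and
    // handed back on every call to next().
    static Iterable<Record> reusingIterable(int[] values) {
        return () -> new Iterator<Record>() {
            private final Record shared = new Record(); // single reused instance
            private int i = 0;
            public boolean hasNext() { return i < values.length; }
            public Record next() { shared.value = values[i++]; return shared; }
        };
    }

    public static void main(String[] args) {
        // Buffering the references directly: every element ends up pointing
        // at the same object, which holds only the last value seen.
        List<Record> wrong = new ArrayList<>();
        for (Record r : reusingIterable(new int[]{1, 2, 3})) wrong.add(r);
        System.out.println(wrong.get(0).value + " " + wrong.get(1).value); // 3 3

        // Buffering a copy of each value preserves them all, which is why a
        // deep copy is needed before loading values into memory.
        List<Record> right = new ArrayList<>();
        for (Record r : reusingIterable(new int[]{1, 2, 3})) right.add(r.copy());
        System.out.println(right.get(0).value + " " + right.get(1).value); // 1 2
    }
}
```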

Using SpecificData.get().deepCopy(record) seems like a viable workaround.
Nonetheless, it does seem a bit problematic if the compiler is generating
code that is incompatible with earlier versions of Java.

J

On Mon, Mar 12, 2012 at 9:05 AM, Doug Cutting <[EMAIL PROTECTED]> wrote:

> On 03/11/2012 10:22 PM, James Baldassari wrote:
> > If you want to make a deep copy of a specific record, the easiest way is
> > probably to use the Builder API,
> > e.g. GraphNodeData.newBuilder(recordToCopy).build().
>
> SpecificData.get().deepCopy(record) should work too.
>
> Doug
>
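For reference, the two copy approaches suggested in the thread can be sketched as follows. This is a non-runnable sketch, not a tested example: it assumes an Avro-generated SpecificRecord class named GraphNodeData (the name comes from James's example) and an existing instance recordToCopy, and the exact deepCopy signature may vary by Avro version.

```java
// 1. Builder API, as James suggested: copy via the generated builder.
GraphNodeData builderCopy = GraphNodeData.newBuilder(recordToCopy).build();

// 2. SpecificData.deepCopy, as Doug suggested; the result is cast back to
// the concrete type, matching the workaround described earlier in the thread.
GraphNodeData deepCopied =
    (GraphNodeData) SpecificData.get().deepCopy(recordToCopy.getSchema(), recordToCopy);
```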