Avro, mail # user - Possible bug: byteBuffer limit not respected when copying


Re: Possible bug: byteBuffer limit not respected when copying
Jeremy Lewi 2012-03-14, 17:30
I filed a bug and attached a patch:
https://issues.apache.org/jira/browse/AVRO-1045

J

On Sun, Mar 11, 2012 at 3:38 PM, Jeremy Lewi <[EMAIL PROTECTED]> wrote:

> Hi,
>
> In org.apache.avro.generic.GenericData.deepCopy - the code for copying a
> ByteBuffer is
>         ByteBuffer byteBufferValue = (ByteBuffer) value;
>         byte[] bytesCopy = new byte[byteBufferValue.capacity()];
>         byteBufferValue.rewind();
>         byteBufferValue.get(bytesCopy);
>         byteBufferValue.rewind();
>         return ByteBuffer.wrap(bytesCopy);
>
> I think this is problematic: after the rewind only limit() bytes remain,
> so the get(bytesCopy) call throws a BufferUnderflowException whenever the
> ByteBuffer's limit is less than its capacity.
>
> My use case is as follows. I have ByteBuffers backed by large arrays so I
> can avoid resizing the array every time I write data, which means
> limit < capacity. I think Avro should respect the limit when data is
> written or copied: when data is serialized, Avro should automatically use
> the minimum number of bytes, and when an object is copied, it makes sense
> to preserve the capacity of the underlying buffer rather than compacting it.
>
> So I think the code could be fixed by replacing the get call with
> byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
> (see the sketch after the quoted message below).
>
> Before I file a bug is there anything I'm missing?
>
> J
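
Below is a minimal, self-contained sketch of the failure mode and of the
one-line fix proposed above. It assumes a standard heap ByteBuffer; the class
name, buffer sizes, and printed messages are illustrative only and are not
taken from the AVRO-1045 patch.

    import java.nio.BufferUnderflowException;
    import java.nio.ByteBuffer;

    public class DeepCopyLimitSketch {
      public static void main(String[] args) {
        // A buffer backed by a large array: capacity 1024 but only 10 bytes
        // of data, so limit (10) < capacity (1024) -- the use case above.
        ByteBuffer value = ByteBuffer.allocate(1024);
        value.put(new byte[10]);
        value.flip(); // position = 0, limit = 10

        // Current deepCopy logic: allocate capacity() bytes and try to read
        // them all, which underflows because only limit() bytes remain.
        byte[] bytesCopy = new byte[value.capacity()];
        value.rewind();
        try {
          value.get(bytesCopy); // asks for 1024 bytes, only 10 remain
        } catch (BufferUnderflowException e) {
          System.out.println("current code underflows: " + e);
        }

        // Proposed fix: read only up to limit(), while the copy's backing
        // array keeps the original capacity.
        value.rewind();
        value.get(bytesCopy, 0, value.limit());
        value.rewind();
        ByteBuffer copy = ByteBuffer.wrap(bytesCopy);
        System.out.println("copied " + value.limit()
            + " bytes into a buffer of capacity " + copy.capacity());
      }
    }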