Re: Possible bug: byteBuffer limit not respected when copying
Jeremy Lewi 2012-03-14, 17:30
I filed a bug and attached a patch
On Sun, Mar 11, 2012 at 3:38 PM, Jeremy Lewi <[EMAIL PROTECTED]> wrote:
> In org.apache.avro.generic.GenericData.deepCopy - the code for copying a
> ByteBuffer is
> ByteBuffer byteBufferValue = (ByteBuffer) value;
> byte[] bytesCopy = new byte[byteBufferValue.capacity()];
> byteBufferValue.get(bytesCopy);
> return ByteBuffer.wrap(bytesCopy);
> I think this is problematic because it will cause a BufferUnderflowException
> to be thrown if the ByteBuffer's limit is less than the capacity of the
> backing byte array.
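A minimal standalone repro of the failure mode described above (not Avro's actual code; the class name and sizes are made up for illustration): a capacity-sized bulk get() on a buffer whose limit is below its capacity underflows.

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class UnderflowDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64); // capacity 64
        buf.put(new byte[16]);                    // 16 bytes of actual data
        buf.flip();                               // position 0, limit 16

        // A bulk get sized to capacity() asks for 64 bytes, but only
        // remaining() == 16 are readable, so get() underflows.
        byte[] bytesCopy = new byte[buf.capacity()];
        try {
            buf.get(bytesCopy);
            System.out.println("no exception");
        } catch (BufferUnderflowException e) {
            System.out.println("BufferUnderflowException");
        }
    }
}
```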
> My use case is as follows: I have ByteBuffers backed by large arrays so I
> can avoid resizing the array every time I write data. So limit < capacity
> when the data is written or copied.
> I think Avro should respect this. When data is serialized, Avro should
> automatically use the minimum number of bytes.
> When an object is copied, I think it makes sense to preserve the capacity
> of the underlying buffer as opposed to compacting it.
> So I think the code could be fixed by replacing the get call with
> byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
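A sketch of how the fixed copy could look (this is my reading of the suggestion, not the patch attached to the bug; the `copy` helper and class name are hypothetical): copy only the bytes up to limit(), but allocate the full capacity so the copy keeps the original buffer's growth headroom.

```java
import java.nio.ByteBuffer;

public class DeepCopySketch {
    // Copies value without underflowing: read limit() bytes, keep capacity().
    static ByteBuffer copy(ByteBuffer value) {
        // duplicate() so the caller's position/limit are left untouched
        ByteBuffer byteBufferValue = value.duplicate();
        byte[] bytesCopy = new byte[byteBufferValue.capacity()];
        byteBufferValue.position(0);
        // Only limit() bytes are readable, so this never underflows.
        byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
        ByteBuffer result = ByteBuffer.wrap(bytesCopy); // capacity preserved
        result.limit(value.limit());                    // same readable range
        return result;
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.allocate(64);
        src.put(new byte[] {1, 2, 3});
        src.flip(); // limit 3, capacity 64
        ByteBuffer dst = copy(src);
        System.out.println(dst.capacity() + " " + dst.limit()); // 64 3
    }
}
```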
> Before I file a bug is there anything I'm missing?