It's true that HDFS (and Hadoop generally) doesn't currently have a
ByteBuffer-based pread API. There is a JIRA open for this issue.
I don't know whether implementing a ByteBuffer API for pread would be
as big a performance gain as implementing it for regular read. One
issue is that each pread destroys the old BlockReader object and
creates a new one. That per-call overhead may make the cost of a
single buffer copy less significant relative to the total. It partly
depends on the size of the buffer being copied: a very large pread
would certainly benefit from avoiding the copy into a byte array.
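To make the copy-avoidance point concrete without an HDFS cluster, here is a
small sketch using java.nio's FileChannel, which (unlike HDFS today) already
offers a positional read directly into a ByteBuffer. The class name
PreadCopyDemo and the helper readBoth are illustrative, not part of any HDFS
API; the first path mimics what a caller of HDFS's byte[]-based pread must do
today (read into a byte[], then copy into the ByteBuffer it actually wants),
while the second shows the extra copy disappearing when the positional read
accepts a ByteBuffer directly:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PreadCopyDemo {

    // Performs the same positional read two ways and returns both results:
    // {via byte[] plus an extra copy, via a direct ByteBuffer read}.
    static String[] readBoth() throws IOException {
        Path tmp = Files.createTempFile("pread-demo", ".bin");
        Files.write(tmp, "0123456789abcdef".getBytes(StandardCharsets.US_ASCII));
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            // Path 1: byte[]-based pread (what HDFS offers today). The data
            // lands in a byte[]; a caller who ultimately wants the bytes in a
            // (possibly direct) ByteBuffer must copy them over afterwards.
            byte[] arr = new byte[4];
            ch.read(ByteBuffer.wrap(arr), 8);   // positional read at offset 8
            ByteBuffer dest = ByteBuffer.allocateDirect(4);
            dest.put(arr);                      // the extra buffer copy
            dest.flip();

            // Path 2: ByteBuffer-based pread (the missing HDFS API). The data
            // lands directly in the caller's buffer; no intermediate byte[].
            ByteBuffer direct = ByteBuffer.allocateDirect(4);
            ch.read(direct, 8);                 // positional read, no extra copy
            direct.flip();

            byte[] a = new byte[4];
            byte[] b = new byte[4];
            dest.get(a);
            direct.get(b);
            return new String[] {
                new String(a, StandardCharsets.US_ASCII),
                new String(b, StandardCharsets.US_ASCII)
            };
        } finally {
            Files.delete(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        for (String s : readBoth()) {
            System.out.println(s);
        }
    }
}
```

Both paths return identical bytes; the difference is only the intermediate
byte[] and the copy out of it, which is exactly the cost being weighed here
against the BlockReader setup/teardown that every pread already pays.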
On Tue, Dec 31, 2013 at 1:01 AM, lei liu <[EMAIL PROTECTED]> wrote:
> There is a ByteBuffer read API for sequential read in CDH4.3.1, for
> example: public synchronized int read(final ByteBuffer buf) throws
> IOException. But there is no ByteBuffer read API for pread.
> Why isn't a ByteBuffer read API for pread supported in CDH4.3.1?