HDFS >> mail # dev >> ByteBuffer-based read API for pread


Re: ByteBuffer-based read API for pread
It's true that HDFS (and Hadoop generally) doesn't currently have a
ByteBuffer-based pread API.  There is a JIRA open for this issue,
HDFS-3246.

I do not know if implementing a ByteBuffer API for pread would be as
big of a performance gain as implementing it for regular read.  One
issue is that when you do a pread, you always destroy the old
BlockReader object and create a new one.  That setup cost may make
the overhead of a single buffer copy less significant as a fraction
of the total cost.  I suppose it partly depends on how big the
buffer is that is being copied... a really large pread would certainly
benefit from avoiding the copy into a byte array.
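The extra copy described above can be sketched with plain java.nio, where
FileChannel already exposes a ByteBuffer-based positioned read of the shape
HDFS-3246 proposes.  The class and method names below are illustrative, not
HDFS code: the first helper emulates the current byte[]-based pread path
plus the copy into the caller's buffer, and the second reads directly into
the caller's ByteBuffer.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative only: PreadCopyDemo and its helpers are not HDFS code.
public class PreadCopyDemo {

    // Emulates today's pread path: a positioned read lands in an
    // intermediate byte[], which is then copied into the caller's
    // ByteBuffer.  The put() is the avoidable copy.
    static int preadViaByteArray(FileChannel ch, long pos, ByteBuffer dst)
            throws IOException {
        byte[] tmp = new byte[dst.remaining()];
        int n = ch.read(ByteBuffer.wrap(tmp), pos);   // positioned read
        if (n > 0) {
            dst.put(tmp, 0, n);                       // extra buffer copy
        }
        return n;
    }

    // The shape HDFS-3246 asks for: read straight into the caller's
    // ByteBuffer, with no intermediate byte[].
    static int preadDirect(FileChannel ch, long pos, ByteBuffer dst)
            throws IOException {
        return ch.read(dst, pos);                     // no intermediate copy
    }
}
```

For small preads the BlockReader churn likely dominates either path; the
direct variant matters most when the caller passes a large (possibly
direct) ByteBuffer, which is exactly the large-pread case mentioned above.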

cheers,
Colin

On Tue, Dec 31, 2013 at 1:01 AM, lei liu <[EMAIL PROTECTED]> wrote:
> There is a ByteBuffer read API for sequential reads in CDH4.3.1, for
> example the public synchronized int read(final ByteBuffer buf) throws
> IOException API.  But there is no ByteBuffer read API for pread.
>
> Why doesn't CDH4.3.1 support a ByteBuffer read API for pread?
>
> Thanks,
>
> LiuLei