On Jan 8, 2009, at 10:13 AM, George Porter wrote:
> Hi Jun,
> The earlier responses to your email reference the JIRA that I opened
> about this issue. Short-circuiting the primary HDFS datapath does
> improve throughput, and the amount depends on your workload (random
> reads especially). Some initial experimental results are posted to
> JIRA. A second advantage is that since the JVM hosting the HDFS
> client is doing the reading, the O/S will satisfy future disk requests
> from its cache, which isn't really possible when you read over the
> network to another JVM on the same host.
> There are several real disadvantages, the largest of which are that
> it 1) adds a new datapath, and 2) bypasses various security and
> auditing features of HDFS.
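The trade-off George describes boils down to picking one of two datapaths per block. A minimal sketch, with hypothetical names (this is not actual HDFS code): the client short-circuits only when the NameNode reports a replica on its own host, and otherwise uses the normal DataNode socket protocol.

```java
import java.util.Arrays;
import java.util.List;

public class ShortCircuitSketch {
    // Replica locations for a block, as the NameNode would report them.
    static boolean hasLocalReplica(List<String> replicaHosts, String localHost) {
        return replicaHosts.contains(localHost);
    }

    // Short-circuit only when a replica lives on this host; otherwise
    // fall back to the usual DataNode socket datapath.
    static String chooseDatapath(List<String> replicaHosts, String localHost) {
        return hasLocalReplica(replicaHosts, localHost)
                ? "local-fs"
                : "datanode-socket";
    }

    public static void main(String[] args) {
        List<String> replicas = Arrays.asList("host-a", "host-b", "host-c");
        System.out.println(chooseDatapath(replicas, "host-b")); // local-fs
        System.out.println(chooseDatapath(replicas, "host-z")); // datanode-socket
    }
}
```

The page-cache benefit follows directly: on the "local-fs" path the reading process is the client's own JVM, so repeated reads of the same block hit the OS cache instead of crossing a socket to the DataNode JVM.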
We are in the middle of adding security to HDFS.
Having the client read the blocks directly would violate security.
Security is an especially thorny problem to solve in this case.
Further, the internal structure of HDFS, and hence the path name of the
block file, are not visible outside.
One could consider hacking this (ignoring security), but even that gets
tricky, as the directory in which a block is saved may change if
someone starts to write to the file (which can happen with the
recent append work).
An interesting optimization, but tricky to do in a clean way (at least
not obvious to me).
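The hazard above means a direct reader cannot assume the block file stays put. One defensive sketch, with illustrative names (not real HDFS code): try the local path, and on failure fall back to the ordinary socket protocol.

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;

public class FallbackReader {
    // Stand-in for the normal DataNode socket read path.
    interface SocketReader { InputStream open(); }

    // Try the fast local path first; if the block file has moved
    // (e.g. re-opened for append) or was never local, fall back.
    static InputStream openBlock(String localPath, SocketReader fallback) {
        try {
            return new FileInputStream(localPath);
        } catch (FileNotFoundException e) {
            return fallback.open();
        }
    }
}
```

This only papers over the race, of course; it does nothing for the security and auditing concerns, which are the harder part.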
> I would certainly like to think through a cleaner
> interface for achieving this goal, especially since reading local data
> should be the common case. Any thoughts you might have would be appreciated.
> Jun Rao wrote:
> > Hi,
> > Today, HDFS always reads through a socket, even when the data is
> > local to the client. This adds a lot of overhead, especially for
> > warm reads. It should be possible for a dfs client to test whether a
> > block to be read is local, and if so, bypass the socket and read
> > through the local FS api directly. This should improve random access
> > performance significantly (e.g., for
> > Has this been considered in HDFS? Thanks,
> > Jun
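Jun's suggestion, once the client somehow knows a local path for the block, amounts to a plain positional file read. A minimal sketch under that assumption (the path is internal to the DataNode in real HDFS, so this is hypothetical):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

public class LocalBlockReader {
    // Read len bytes at offset from a block file via the local FS api,
    // assuming the block's path is already known to the client.
    static byte[] readLocalBlock(File blockPath, long offset, int len) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(blockPath, "r")) {
            raf.seek(offset);
            byte[] buf = new byte[len];
            raf.readFully(buf);
            return buf;
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a block file on the local DataNode's disk.
        File tmp = File.createTempFile("blk_", ".data");
        tmp.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write("hello block data".getBytes("UTF-8"));
        }
        // A random read: 5 bytes starting at offset 6.
        System.out.println(new String(readLocalBlock(tmp, 6, 5), "UTF-8")); // block
    }
}
```

Warm repeats of the same read are then served from the OS page cache in the client's own process, which is the win Jun is after for random-access workloads.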