The cache "layer" is written in Go and wraps around the DataNode so that
all traffic between the DataNode and NameNode, as well as between the
DataNode and clients, flows through the cache layer. The layer currently
employs a simple LRU cache: files that are requested are placed in the
cache and loaded into RAM. Requests for data first pass through the cache
layer, which responds to them immediately if the given file is found in the
cache and forwards them to the DataNode otherwise.
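To make the policy concrete, here is a minimal sketch of an LRU file cache of the kind described above, in Go. All names (`lruCache`, `Get`, `Put`) are hypothetical and the DataNode proxying itself is omitted; this only illustrates the eviction behavior, not the actual implementation.

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache is a hypothetical sketch of the LRU policy described above.
type lruCache struct {
	capacity int
	order    *list.List               // front = most recently used
	entries  map[string]*list.Element // file path -> list element
}

type entry struct {
	path string
	data []byte
}

func newLRUCache(capacity int) *lruCache {
	return &lruCache{
		capacity: capacity,
		order:    list.New(),
		entries:  make(map[string]*list.Element),
	}
}

// Get returns the cached bytes for path, marking it most recently used.
// On a miss, the real layer would forward the request to the DataNode.
func (c *lruCache) Get(path string) ([]byte, bool) {
	el, ok := c.entries[path]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).data, true
}

// Put inserts a file's bytes, evicting the least recently used entry if full.
func (c *lruCache) Put(path string, data []byte) {
	if el, ok := c.entries[path]; ok {
		c.order.MoveToFront(el)
		el.Value.(*entry).data = data
		return
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.entries, oldest.Value.(*entry).path)
	}
	c.entries[path] = c.order.PushFront(&entry{path: path, data: data})
}

func main() {
	c := newLRUCache(2)
	c.Put("/a", []byte("aaa"))
	c.Put("/b", []byte("bbb"))
	c.Get("/a")                // touch /a so /b becomes least recently used
	c.Put("/c", []byte("ccc")) // evicts /b
	_, hitB := c.Get("/b")
	fmt.Println(hitB) // false: /b was evicted
}
```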
One of the goals is to add caching while leaving the rest of the stack
(including Hadoop/HDFS) completely untouched; this is something where
HDFS-4949 and my cache layer differ (although it isn't really that big of a
deal).
Another portion of the cache layer runs on the client machine (e.g. a
worker node) and proxies all communication between the client and the
NameNode in order to do metadata caching (e.g. getFileListing calls are
cached and updated on a regular basis).
I'm not sure what level of implementation detail you were looking for;
please let me know if anything is missing.
This is something I'm still trying to figure out, because I'm still
measuring how much latency is saved with the caching. Purely based on
speculation, I have a feeling that machine learning algorithms that run on
Hadoop, make multiple passes over the same data, and consist of multi-step
procedures will be able to extract significant gains from the cache layer.
On Mon, Dec 30, 2013 at 2:48 PM, Andrew Wang <[EMAIL PROTECTED]> wrote:
> Hi Dhaivat,
> I did a good chunk of the design and implementation of HDFS-4949, so if you
> could post a longer writeup of your envisioned use cases and
> implementation, I'd definitely be interested in taking a look.
> It's also good to note that HDFS-4949 is only the foundation for a whole
> slew of potential enhancements. We're planning to add some form of
> automatic cache replacement, which as a first step could just be an
> external policy that manages your static caching directives. It should also
> already be possible to integrate a job scheduler with HDFS-4949, since it
> both exposes the cache state of the cluster and allows a scheduler to
> prefetch data into RAM. Finally, we're also thinking about caching at finer
> granularities, e.g. block or sub-block rather than file-level caching,
> which is nice for apps that only read regions of a file.
> On Mon, Dec 23, 2013 at 9:57 PM, Dhaivat Pandya <[EMAIL PROTECTED]> wrote:
> > Hi Harsh,
> > Thanks a lot for the response. As it turns out, I figured out the
> > registration mechanism this evening and how the sourceId is relayed to
> > NN.
> > As for your question about the cache layer: it is similar in basic concept
> > to the plan mentioned, but the technical details differ significantly. First
> > of all, instead of having the user tell the namenode to perform caching (as
> > it seems from the proposal on JIRA), there is a distributed caching
> > algorithm that decides what files will be cached. Secondly, I am
> > implementing a hook-in with the job scheduler that arranges jobs according
> > to what files are cached at a given point in time (and also allows files to
> > be cached based on what jobs are to be run).
> > Also, the cache layer does a bit of metadata caching; the numbers on it are
> > not all in, but thus far, some of the *metadata* caching surprisingly yields
> > a pretty nice reduction in response time.
> > Any thoughts on the cache layer would be greatly appreciated.
> > Thanks,
> > Dhaivat
> > On Mon, Dec 23, 2013 at 11:46 PM, Harsh J <[EMAIL PROTECTED]> wrote:
> > > Hi,
> > >
> > > On Mon, Dec 23, 2013 at 9:41 AM, Dhaivat Pandya <[EMAIL PROTECTED]> wrote:
> > > > Hi,
> > > >
> > > > I'm currently trying to build a cache layer that should sit "on top" of
> > > > the datanode. Essentially, the namenode should know the port number of