Hadoop, mail # user - Memory mapped resources


Re: Memory mapped resources
Ted Dunning 2011-04-12, 04:09
Yes.  But only one such block. That is what I meant by chunk.

That is fine if you want that chunk, but if you want to mmap the entire
file, it isn't really useful.

On Mon, Apr 11, 2011 at 6:48 PM, Jason Rutherglen <
[EMAIL PROTECTED]> wrote:

> What do you mean by local chunk?  I think it's providing access to the
> underlying file block?
>
> On Mon, Apr 11, 2011 at 6:30 PM, Ted Dunning <[EMAIL PROTECTED]>
> wrote:
> > Also, it only provides access to a local chunk of a file which isn't very
> > useful.
> >
> > On Mon, Apr 11, 2011 at 5:32 PM, Edward Capriolo <[EMAIL PROTECTED]>
> > wrote:
> >>
> >> On Mon, Apr 11, 2011 at 7:05 PM, Jason Rutherglen
> >> <[EMAIL PROTECTED]> wrote:
> >> > Yes you can however it will require customization of HDFS.  Take a
> >> > look at HDFS-347 specifically the HDFS-347-branch-20-append.txt patch.
> >> >  I have been altering it for use with HBASE-3529.  Note that the patch
> >> > noted is for the -append branch which is mainly for HBase.
> >> >
> >> > On Mon, Apr 11, 2011 at 3:57 PM, Benson Margulies
> >> > <[EMAIL PROTECTED]> wrote:
> >> >> We have some very large files that we access via memory mapping in
> >> >> Java. Someone's asked us about how to make this conveniently
> >> >> deployable in Hadoop. If we tell them to put the files into hdfs, can
> >> >> we obtain a File for the underlying file on any given node?
> >> >>
> >> >
> >>
> >> This feature is not yet part of Hadoop, so doing this is not
> >> "convenient".
> >
> >
>
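For context on what Benson's question assumes: memory-mapping a plain local file in Java is done through `FileChannel.map`, and the thread's point is that HDFS does not expose its blocks as local files this way (short of the HDFS-347 customization Jason mentions). A minimal local-file sketch, with a hypothetical temp file standing in for a datanode block, might look like:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    public static void main(String[] args) throws IOException {
        // A small throwaway file standing in for one local block of data.
        Path p = Files.createTempFile("mmap-demo", ".bin");
        Files.write(p, new byte[] {1, 2, 3, 4});

        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            // Map the whole file read-only. With HDFS you could at best do
            // this per locally-stored block ("one such chunk", as Ted says),
            // not across the entire distributed file.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            System.out.println("first byte = " + buf.get(0));
        } finally {
            Files.deleteIfExists(p);
        }
    }
}
```

Note this only works because the path refers to an ordinary local file; obtaining such a `File`/`Path` for HDFS-resident data is exactly what the thread says was not yet supported out of the box.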