search-hadoop.com: Hadoop >> mail # user >> Memory mapped resources


Earlier messages in this thread:
Benson Margulies 2011-04-11, 22:57
Jason Rutherglen 2011-04-11, 23:05
Edward Capriolo 2011-04-12, 00:32
Ted Dunning 2011-04-12, 01:30
Jason Rutherglen 2011-04-12, 01:48
Ted Dunning 2011-04-12, 04:09
Kevin.Leach@... 2011-04-12, 12:51
Ted Dunning 2011-04-12, 15:07
Jason Rutherglen 2011-04-12, 13:32
Re: Memory mapped resources
Well, no.

You could mmap all the blocks that are local to the node your program is on.
 The others you will have to read more conventionally.  If all blocks are
guaranteed local, this would work.  I don't think that guarantee is possible
on a non-trivial cluster.
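The "map only what is local" idea above can be sketched in plain Java. This is a hedged illustration, not Hadoop API code: the block-file path is a stand-in for whatever a short-circuit mechanism such as HDFS-347 would hand you (here we fabricate a temp file so the sketch runs), and non-local blocks would instead go through the normal DFSClient read path.

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: mmap a single local HDFS block file. In a real deployment the
// path would come from the datanode (e.g. via the HDFS-347 patch); here
// a temp file stands in so the example is self-contained.
public class LocalBlockMmap {
    static MappedByteBuffer mapBlock(Path blockFile) throws Exception {
        try (FileChannel ch = FileChannel.open(blockFile, StandardOpenOption.READ)) {
            // Read-only mapping of the whole block; the mapping stays
            // valid after the channel is closed.
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("blk_", ".data");
        Files.write(tmp, "hello block".getBytes("US-ASCII"));
        MappedByteBuffer buf = mapBlock(tmp);
        byte[] out = new byte[buf.remaining()];
        buf.get(out);
        System.out.println(new String(out, "US-ASCII"));
        Files.deleteIfExists(tmp); // best-effort; some OSes pin mapped files
    }
}
```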

On Tue, Apr 12, 2011 at 6:32 AM, Jason Rutherglen <
[EMAIL PROTECTED]> wrote:

> Then one could MMap the blocks pertaining to the HDFS file and piece
> them together.  Lucene's MMapDirectory implementation does just this
> to avoid an obscure JVM bug.
>
> On Mon, Apr 11, 2011 at 9:09 PM, Ted Dunning <[EMAIL PROTECTED]>
> wrote:
> > Yes.  But only one such block. That is what I meant by chunk.
> > That is fine if you want that chunk, but if you want to mmap the entire
> > file, it isn't really useful.
> >
> > On Mon, Apr 11, 2011 at 6:48 PM, Jason Rutherglen
> > <[EMAIL PROTECTED]> wrote:
> >>
> >> What do you mean by local chunk?  I think it's providing access to the
> >> underlying file block?
> >>
> >> On Mon, Apr 11, 2011 at 6:30 PM, Ted Dunning <[EMAIL PROTECTED]>
> >> wrote:
> >> > Also, it only provides access to a local chunk of a file which isn't
> >> > very
> >> > useful.
> >> >
> >> > On Mon, Apr 11, 2011 at 5:32 PM, Edward Capriolo <
> [EMAIL PROTECTED]>
> >> > wrote:
> >> >>
> >> >> On Mon, Apr 11, 2011 at 7:05 PM, Jason Rutherglen
> >> >> <[EMAIL PROTECTED]> wrote:
> >> >> > Yes you can however it will require customization of HDFS.  Take a
> >> >> > look at HDFS-347 specifically the HDFS-347-branch-20-append.txt
> >> >> > patch.
> >> >> >  I have been altering it for use with HBASE-3529.  Note that the
> >> >> > patch noted is for the -append branch, which is mainly for HBase.
> >> >> >
> >> >> > On Mon, Apr 11, 2011 at 3:57 PM, Benson Margulies
> >> >> > <[EMAIL PROTECTED]> wrote:
> >> >> >> We have some very large files that we access via memory mapping in
> >> >> >> Java. Someone's asked us about how to make this conveniently
> >> >> >> deployable in Hadoop. If we tell them to put the files into hdfs,
> >> >> >> can
> >> >> >> we obtain a File for the underlying file on any given node?
> >> >> >>
> >> >> >
> >> >>
> >> >> This feature is not yet part of Hadoop, so doing this is not
> >> >> "convenient".
> >> >
> >> >
> >
> >
>
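The "mmap the blocks and piece them together" approach Jason mentions (the technique Lucene's MMapDirectory uses to stay under the 2 GB ByteBuffer limit) can be sketched as below. The class and the tiny CHUNK size are invented for illustration; MMapDirectory itself uses chunks up to 1 GB, and the point is only that the per-chunk seams are hidden behind a single read interface.

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: map a file in fixed-size chunks and expose position-based reads
// that span chunk boundaries, in the spirit of Lucene's MMapDirectory.
public class ChunkedMmap {
    static final int CHUNK = 4; // tiny on purpose, to exercise the seams

    final MappedByteBuffer[] chunks;
    final long length;

    ChunkedMmap(Path file) throws Exception {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            length = ch.size();
            int n = (int) ((length + CHUNK - 1) / CHUNK);
            chunks = new MappedByteBuffer[n];
            for (int i = 0; i < n; i++) {
                long off = (long) i * CHUNK;
                chunks[i] = ch.map(FileChannel.MapMode.READ_ONLY, off,
                                   Math.min(CHUNK, length - off));
            }
        }
    }

    // Single-byte read; the caller never sees which chunk served it.
    byte get(long pos) {
        return chunks[(int) (pos / CHUNK)].get((int) (pos % CHUNK));
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("chunked_", ".data");
        Files.write(tmp, "abcdefghij".getBytes("US-ASCII"));
        ChunkedMmap m = new ChunkedMmap(tmp);
        StringBuilder sb = new StringBuilder();
        for (long i = 0; i < m.length; i++) sb.append((char) m.get(i));
        System.out.println(sb); // reads straddle two 4-byte chunk seams
        Files.deleteIfExists(tmp);
    }
}
```

Mapping HDFS block files directly would slot in here: each local block becomes one (or more) entries in the chunk array, with remote blocks filled in by conventional reads, which is exactly the caveat Ted raises above.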
Later messages in this thread:
Jason Rutherglen 2011-04-12, 15:24
Ted Dunning 2011-04-12, 15:35
Benson Margulies 2011-04-12, 17:40
Jason Rutherglen 2011-04-12, 18:09
Ted Dunning 2011-04-12, 19:05
Luke Lu 2011-04-12, 19:50
Luca Pireddu 2011-04-13, 07:21
M. C. Srivas 2011-04-13, 02:16
Ted Dunning 2011-04-13, 04:09
Benson Margulies 2011-04-13, 10:54
M. C. Srivas 2011-04-13, 14:33
Benson Margulies 2011-04-13, 14:35
Lance Norskog 2011-04-14, 02:41
Michael Flester 2011-04-12, 14:06