Re: Java Committed Virtual Memory significantly larger than Heap Memory
No, I use only the malloc env var, and I set it (as suggested before) in
hbase-env.sh, and it looks like the process eats less memory now (in my case
4.7G vs. 3.3G, with a 2G heap).
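
For reference, a minimal sketch of what that looks like in hbase-env.sh (the
values 2 and 256m are just the ones discussed in this thread, not tuned
recommendations):

    # cap the number of glibc malloc arenas, to limit the 64MB anon mappings
    export MALLOC_ARENA_MAX=2
    # optionally also cap NIO direct buffer memory
    export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=256m"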

2011/1/12 Friso van Vollenhoven <[EMAIL PROTECTED]>

> Thanks.
>
> I went back to HBase 0.89 with LZO 0.1, which works fine and does not show
> this issue.
>
> I tried with a newer HBase and LZO version, also with the MALLOC... setting
> but without max direct memory set, so I was wondering whether you need a
> combination of the two to fix things (apparently not).
>
> Now I am wondering whether I did something wrong when setting the env var.
> It should just be picked up when it's in hbase-env.sh, right?
>
>
> Friso
>
>
>
> On 12 jan 2011, at 10:59, Andrey Stepachev wrote:
>
> > with MALLOC_ARENA_MAX=2
> >
> > I checked -XX:MaxDirectMemorySize=256m before, but it didn't affect
> > anything (not even OOM exceptions or the like).
> >
> > But it looks like I have exactly the same issue. I have many 64MB anon
> > memory blocks (sometimes they are 132MB), and under heavy load the RSS of
> > the JVM process grows rapidly.
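> >
> > (A rough way to count those blocks, assuming you have the JVM pid at hand,
> > is something like:
> >
> >     pmap -x <pid> | awk '$2 > 60000 && $2 < 70000' | wc -l
> >
> > which filters mappings whose size in KB falls in the ~64MB range.)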
> >
> > 2011/1/12 Friso van Vollenhoven <[EMAIL PROTECTED]>
> >
> >> Just to clarify: you fixed it by setting MALLOC_ARENA_MAX=? in
> >> hbase-env.sh?
> >>
> >> Did you also use -XX:MaxDirectMemorySize=256m ?
> >>
> >> It would be nice to check that this is a different issue than the leakage
> >> with LZO...
> >>
> >>
> >> Thanks,
> >> Friso
> >>
> >>
> >> On 12 jan 2011, at 07:46, Andrey Stepachev wrote:
> >>
> >>> My bad. Everything works. Thanks to Todd Lipcon :)
> >>>
> >>> 2011/1/11 Andrey Stepachev <[EMAIL PROTECTED]>
> >>>
> >>>> I tried to set MALLOC_ARENA_MAX=2, but I still see the same issue as in
> >>>> the LZO problem thread. All those 65M blocks are here, and the JVM
> >>>> continues to eat memory on heavy write load. And yes, I use the
> >>>> "improved" kernel, Linux 2.6.34.7-0.5.
> >>>>
> >>>> 2011/1/11 Xavier Stevens <[EMAIL PROTECTED]>
> >>>>
> >>>>> Are you using a newer Linux kernel with the new and "improved" memory
> >>>>> allocator?
> >>>>>
> >>>>> If so, try setting this in hadoop-env.sh:
> >>>>>
> >>>>> export MALLOC_ARENA_MAX=<number of cores you want to use>
> >>>>>
> >>>>> Maybe start by setting it to 4. You can thank Todd Lipcon if this
> >>>>> works for you.
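> >>>>>
> >>>>> (To check that the variable is actually picked up by a running JVM,
> >>>>> one option, assuming you know its pid, is:
> >>>>>
> >>>>>     tr '\0' '\n' < /proc/<pid>/environ | grep MALLOC
> >>>>>
> >>>>> which prints the MALLOC_* environment the process was started with.)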
> >>>>>
> >>>>> Cheers,
> >>>>>
> >>>>>
> >>>>> -Xavier
> >>>>>
> >>>>> On 1/11/11 7:24 AM, Andrey Stepachev wrote:
> >>>>>> No. I don't use LZO. I even tried removing all native support (i.e.,
> >>>>>> all .so files from the classpath) and using Java gzip. But nothing
> >>>>>> changed.
> >>>>>>
> >>>>>>
> >>>>>> 2011/1/11 Friso van Vollenhoven <[EMAIL PROTECTED]>
> >>>>>>
> >>>>>>> Are you using LZO by any chance? If so, which version?
> >>>>>>>
> >>>>>>> Friso
> >>>>>>>
> >>>>>>>
> >>>>>>> On 11 jan 2011, at 15:57, Andrey Stepachev wrote:
> >>>>>>>
> >>>>>>>> After starting HBase on JRockit, I found the same memory leakage.
> >>>>>>>>
> >>>>>>>> After the launch:
> >>>>>>>>
> >>>>>>>> Every 2,0s: date && ps --sort=-rss -eo pid,rss,vsz,pcpu | head
> >>>>>>>> Tue Jan 11 16:49:31 2011
> >>>>>>>>
> >>>>>>>> Tue Jan 11 16:49:31 MSK 2011
> >>>>>>>>   PID     RSS     VSZ %CPU
> >>>>>>>>  7863 2547760 5576744 78.7
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> JR dumps:
> >>>>>>>>
> >>>>>>>> Total mapped 5576740KB (reserved = 2676404KB)
> >>>>>>>>  - Java heap 2048000KB (reserved = 1472176KB)
> >>>>>>>>  - GC tables 68512KB
> >>>>>>>>  - Thread stacks 37236KB (# threads = 111)
> >>>>>>>>  - Compiled code 1048576KB (used = 2599KB)
> >>>>>>>>  - Internal 1224KB
> >>>>>>>>  - OS 549688KB
> >>>>>>>>  - Other 1800976KB
> >>>>>>>>  - Classblocks 1280KB (malloced = 1110KB #3285)
> >>>>>>>>  - Java class data 20224KB (malloced = 20002KB #15134 in 3285 classes)
> >>>>>>>>  - Native memory tracking 1024KB (malloced = 325KB +10KB #20)
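> >>>>>>>>
> >>>>>>>> (For reference, a dump like this can be produced with JRockit's
> >>>>>>>> diagnostic command, e.g. jrcmd <pid> print_memusage, if I recall
> >>>>>>>> correctly.)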
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> After running the MR job which creates a high write load (~1 hour):
> >>>>>>>>
> >>>>>>>> Every 2,0s: date && ps --sort=-rss -eo pid,rss,vsz,pcpu | head