HBase, mail # user - memory leak


Re: memory leak
ramkrishna vasudevan 2012-11-21, 05:33
Actually we once faced a memory leak issue with GZip, and our profiler
showed the same thing. I am not sure what this profiler is saying, though.

Can you try disabling GZip and running with no compression? That should
help narrow down what is actually causing the problem.

Regards
Ram
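
For context on why GZip can look like a native-memory leak: Java's gzip support is backed by zlib, and every `Deflater` holds compression buffers allocated outside the Java heap, which is exactly the kind of memory that surfaces under `os::malloc`/`zcalloc` in a native profile. A minimal sketch of the mechanism (class name and buffer sizes are arbitrary, not from the thread):

```java
import java.util.zip.Deflater;

public class GzipNativeMemory {
    public static void main(String[] args) {
        byte[] input = new byte[64 * 1024];

        // Each Deflater allocates zlib state and buffers in native memory,
        // invisible to the Java heap and to -Xmx.
        Deflater d = new Deflater(Deflater.BEST_SPEED);
        d.setInput(input);
        d.finish();

        byte[] out = new byte[input.length];
        int n = d.deflate(out);
        System.out.println("compressed " + input.length + " -> " + n + " bytes");

        // Without an explicit end(), the native buffers linger until the
        // object is finalized; many live Deflaters therefore look like a
        // native-memory leak even though the Java heap is healthy.
        d.end();
    }
}
```

If the resident size keeps growing with compression enabled but stabilizes with `COMPRESSION => 'NONE'`, that points at the codec path rather than the Java heap.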

On Wed, Nov 21, 2012 at 10:51 AM, Yusup Ashrap <[EMAIL PROTECTED]> wrote:

> hi Ramkrishna
> Five tables on this node are using gzip compression.
> At first I assumed this was caused by gzip compression, but as you can see
> from my google-perftools profiling result above, there is no gzip-related
> compression memory allocated. (Or am I misreading the profiling results?)
>
> thanks,regards
>
>
> On Wed, Nov 21, 2012 at 12:07 PM, ramkrishna vasudevan <
> [EMAIL PROTECTED]> wrote:
>
> > Hi Yusup
> >
> > Are you using GZip compression for your store files by any chance?
> >
> > Regards
> > Ram
> >
> > On Wed, Nov 21, 2012 at 9:15 AM, Yusup Ashrap <[EMAIL PROTECTED]> wrote:
> >
> > > hi all, I am encountering a high memory usage problem in my production
> > > environment. I suspect a memory leak or something similar, and I hope
> > > someone can tell me what is going on, what I should do to keep a lower
> > > memory footprint, or what tools can help find out what is causing such a
> > > large memory footprint. My cluster has 24 nodes; here is some info from
> > > one randomly chosen node.
> > >
> > > HBase version: 0.90.2
> > > ReadRequest avg: 1,716.02
> > > WriteRequest avg: 435.47
> > > Region count: 281
> > >
> > > *top:*
> > >
> > > top - 11:19:40 up 530 days, 12:29,  1 user,  load average: 4.10, 3.97, 4.28
> > > Tasks: 239 total,   2 running, 237 sleeping,   0 stopped,   0 zombie
> > > Cpu(s):  5.0%us,  1.2%sy,  0.0%ni, 85.3%id,  7.5%wa,  0.0%hi,  1.0%si,  0.0%st
> > > Mem:  24676836k total, 24599764k used,    77072k free,    30052k buffers
> > > Swap:  8385760k total,    20568k used,  8365192k free,  1954280k cached
> > >
> > > 11226 hbase     18   0 23.5g  21g  18m S 75.9 89.5   1219:37 java  (regionserver)
> > > 31579 hbase     19   0 2719m 171m  14m S 35.3  0.7  11502:28 java  (datanode)
> > >
> > >
> > > here is my regionserver configuration.
> > >
> > > export HBASE_REGIONSERVER_OPTS=" -Xms16g -Xmx16g -Xmn2g
> > > -XX:SurvivorRatio=16 -XX:+UseCMSInitiatingOccupancyOnly
> > > -XX:CMSInitiatingOccupancyFraction=75
> > > -Xloggc:$HBASE_HOME/logs/gc-regionserver-`date +%Y%m%d-%H-%M`.log"
> > >
> > >
> > > I used google-perftools to do the heap profiling, and I got this:
> > >
> > > Total: 213.9 MB
> > >    154.4  72.2%  72.2%    154.4  72.2% os::malloc
> > >     18.8   8.8%  81.0%     20.6   9.7% CMSCollector::CMSCollector
> > >     13.0   6.1%  87.1%     13.0   6.1% ParNewGeneration::ParNewGeneration
> > >      9.6   4.5%  91.6%      9.6   4.5% ObjectSynchronizer::omAlloc
> > >      3.0   1.4%  93.0%      3.0   1.4% init
> > >      2.8   1.3%  94.3%      2.8   1.3% AllocateHeap
> > >      2.3   1.1%  95.3%      2.3   1.1% zcalloc
> > >      1.7   0.8%  96.1%      1.7   0.8% nmethod::nmethod
> > >      1.3   0.6%  96.7%      1.3   0.6% SymbolTable::basic_add
> > >      1.2   0.6%  97.3%      1.4   0.6% ConcurrentMarkSweepGeneration::ConcurrentMarkSweepGeneration
> > >      1.1   0.5%  97.8%      1.1   0.5% Thread::operator new
> > >      0.8   0.4%  98.2%      0.8   0.4% ParkEvent::Allocate
> > >      0.6   0.3%  98.5%      0.6   0.3% readCEN
> > >      0.6   0.3%  98.7%      0.6   0.3% Arena::grow
> > >      0.4   0.2%  98.9%      0.4   0.2% Hashtable::new_entry
> > >      0.3   0.2%  99.1%      0.3   0.2% frame::oops_interpreted_do
> > >      0.3   0.1%  99.2%    150.8  70.5% JavaCalls::call
> > >      0.3   0.1%  99.3%      0.3   0.1% CHeapObj::operator new
> > >      0.2   0.1%  99.4%      0.2   0.1% Hashtable::Hashtable
> > >      0.2   0.1%  99.5%      0.2   0.1% JavaThread::initialize
> > >      0.1   0.1%  99.6%      0.2   0.1% Deoptimization::fetch_unroll_info_helper
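
A note on reading these numbers: the pprof total (~214 MB) most likely covers only malloc-style native allocations, while the 16 GB Java heap is reserved via mmap and never appears in it. That is why top can report 21g resident while the profile is tiny, and why `os::malloc` dominating the profile points at the JVM's own native allocations rather than Java objects. The process's own view of heap versus non-heap memory can be checked from inside the JVM; a minimal sketch (class name is arbitrary):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryBreakdown {
    public static void main(String[] args) {
        MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mx.getHeapMemoryUsage();       // the -Xms/-Xmx area
        MemoryUsage nonHeap = mx.getNonHeapMemoryUsage(); // code cache, perm gen, ...

        System.out.printf("heap committed:     %d MB%n", heap.getCommitted() >> 20);
        System.out.printf("non-heap committed: %d MB%n", nonHeap.getCommitted() >> 20);

        // RES as seen in top is roughly heap + non-heap + thread stacks
        // + native buffers (zlib, NIO, JNI). That last group is the part a
        // malloc-level profiler like google-perftools actually sees.
    }
}
```

If heap-committed plus non-heap-committed is far below the resident size from top, the remainder is native memory, which fits a codec or JNI leak rather than a Java-heap one.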