HBase >> mail # user >> Compaction problem


tarang dawer 2013-03-22, 11:44
Jean-Marc Spaggiari 2013-03-22, 12:35
tarang dawer 2013-03-22, 14:44
Anoop John 2013-03-22, 14:51
tarang dawer 2013-03-22, 15:34
tarang dawer 2013-03-26, 06:32
ramkrishna vasudevan 2013-03-26, 06:40
RE: Compaction problem
@tarang
With a 4G max heap size, you will by default get 1.4G of total memory for all the memstores (5/6 regions): by default the memstores share 35% of the heap. Is your process purely write-centric? If reads are rare, think of increasing this global memstore setting. Or else, can you increase the 4G heap size? (Still, 1G for a memstore might be too much. You are now getting flushes because of global heap pressure before any memstore reaches 1 GB.)
// hbase.regionserver.global.memstore.lowerLimit & hbase.regionserver.global.memstore.upperLimit

hbase.hregion.max.filesize is set to 1 GB. Try increasing this; region splits are probably happening frequently in your case.

Check all the compaction-related params as well, and also tell us the status of the flush and compaction queues.

-Anoop-
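For reference, the two global memstore settings Anoop mentions would go into hbase-site.xml roughly as sketched below. This is only an illustration of the knobs, assuming the 0.94-era property names; the 0.5/0.45 values are hypothetical, not recommendations from the thread.

```xml
<!-- Fraction of the RS heap that all memstores together may use
     before new updates are blocked and flushes are forced.
     Default is 0.4; shown here raised for a write-heavy workload. -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.5</value>
</property>
<!-- Once the upper limit is hit, memstores keep flushing until total
     usage drops back below this fraction. Default is 0.35, which is
     where the "35% of heap" figure above comes from. -->
<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.45</value>
</property>
```

Raising these only makes sense if reads are rare, since the memstore fraction competes with the block cache for the same heap.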

________________________________________
From: tarang dawer [[EMAIL PROTECTED]]
Sent: Friday, March 22, 2013 9:04 PM
To: [EMAIL PROTECTED]
Subject: Re: Compaction problem

3 region servers: 2 region servers having 5 regions each, 1 having 6
(+2 for meta and root)
1 CF
HBASE_HEAPSIZE is set to 4 GB in hbase-env.sh.

Is the flush size okay, or do I need to reduce/increase it?

I'll look into the flushQ and compactionQ sizes and get back to you.

Do these parameters seem okay to you? If something seems odd / not in
order, please do tell.

Thanks
Tarang Dawer

On Fri, Mar 22, 2013 at 8:21 PM, Anoop John <[EMAIL PROTECTED]> wrote:

> How many regions per  RS? And CF in table?
> What is the -Xmx for the RS process? You will get 35% of that memory for all
> the memstores in the RS.
> hbase.hregion.memstore.flush.size = 1GB!!
>
> Can you closely observe the flushQ size and compactionQ size? You may be
> getting many small-file flushes (due to global heap pressure) and
> subsequently many minor compactions.
>
> -Anoop-
>
> On Fri, Mar 22, 2013 at 8:14 PM, tarang dawer <[EMAIL PROTECTED]
> >wrote:
>
> > Hi
> > As per my use case, I have to write around 100 GB of data at an ingestion
> > speed of around 200 mbps. While writing, I am taking a performance hit
> > from compaction, which adds to the delay.
> > I am using an 8-core machine with 16 GB RAM and a 2 TB HDD at 7200 RPM.
> > I got some ideas from the archives, tried pre-splitting the regions, and
> > configured HBase with the following parameters (configured the parameters
> > in haste, so please guide me if anything's out of order):
> >
> >
> >         <property>
> >                 <name>hbase.hregion.memstore.block.multiplier</name>
> >                 <value>4</value>
> >         </property>
> >         <property>
> >                  <name>hbase.hregion.memstore.flush.size</name>
> >                  <value>1073741824</value>
> >         </property>
> >
> >         <property>
> >                 <name>hbase.hregion.max.filesize</name>
> >                 <value>1073741824</value>
> >         </property>
> >         <property>
> >                 <name>hbase.hstore.compactionThreshold</name>
> >                 <value>5</value>
> >         </property>
> >         <property>
> >               <name>hbase.hregion.majorcompaction</name>
> >                   <value>0</value>
> >         </property>
> >         <property>
> >                 <name>hbase.hstore.blockingWaitTime</name>
> >                 <value>30000</value>
> >         </property>
> >          <property>
> >                  <name>hbase.hstore.blockingStoreFiles</name>
> >                  <value>200</value>
> >          </property>
> >
> >   <property>
> >         <name>hbase.regionserver.lease.period</name>
> >         <value>3000000</value>
> >   </property>
> >
> >
> > but I'm still not able to achieve the optimal rate; I'm getting around
> > 110 mbps.
> > I need some optimizations, so could you please help out?
> >
> > Thanks
> > Tarang Dawer
> >
> >
> >
> >
> >
> > On Fri, Mar 22, 2013 at 6:05 PM, Jean-Marc Spaggiari <
> > [EMAIL PROTECTED]> wrote:
> >
> > > Hi Tarang,
> > >
> > > I would recommend you take a look at the list archives first to see
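Putting the thread's suggestions together, the deltas to the posted config might look something like the sketch below. The 256 MB and 4 GB values are illustrative guesses for this write-heavy setup, not figures anyone in the thread tested.

```xml
<!-- Smaller per-memstore flush size: with ~16 regions (one CF each)
     sharing ~1.4 GB of global memstore space, a 1 GB per-region flush
     size is never reached; flushes get triggered by global heap
     pressure instead, producing many small files. -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>268435456</value> <!-- 256 MB -->
</property>
<!-- Larger max file size so regions split less often during the
     100 GB bulk write (the posted config used 1 GB). -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>4294967296</value> <!-- 4 GB -->
</property>
```

Fewer, larger flushes should also shrink the minor-compaction queue, which is where the write throughput was being lost.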
Asaf Mesika 2013-03-26, 21:26
Jean-Marc Spaggiari 2013-03-27, 15:05
Asaf Mesika 2013-03-27, 19:55
Jean-Marc Spaggiari 2013-03-27, 21:54