HBase, mail # user - Compaction problem


Re: Compaction problem
Anoop John 2013-03-22, 14:51
How many regions per RS? And how many CFs in the table?
What is the -Xmx for the RS process? You will get about 35% of that memory for all
the memstores in the RS.
hbase.hregion.memstore.flush.size = 1GB!!

Can you closely observe the flushQ size and compactionQ size? You may be
getting many small-file flushes (due to global heap pressure) and
subsequently many minor compactions.

-Anoop-
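
For context on the heap fraction mentioned above: in 0.94-era HBase the share of
the region server heap available to all memstores is controlled by the global
memstore limits. A minimal hbase-site.xml sketch with illustrative values (these
settings and numbers are not taken from this thread):

        <!-- Fraction of the RS heap (-Xmx) shared by ALL memstores; once the
             upper limit is reached, flushes are forced regardless of the
             per-region hbase.hregion.memstore.flush.size. -->
        <property>
                <name>hbase.regionserver.global.memstore.upperLimit</name>
                <value>0.4</value>
        </property>
        <property>
                <name>hbase.regionserver.global.memstore.lowerLimit</name>
                <value>0.35</value>
        </property>
        <!-- A per-region flush size well below 1 GB keeps a handful of regions
             from exhausting that global budget; 128 MB shown as an example. -->
        <property>
                <name>hbase.hregion.memstore.flush.size</name>
                <value>134217728</value>
        </property>

With many regions per RS, the sum of the per-region flush sizes can easily exceed
this global budget, which is what produces the small, premature flushes (and the
extra minor compactions) described above.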

On Fri, Mar 22, 2013 at 8:14 PM, tarang dawer <[EMAIL PROTECTED]> wrote:

> Hi
> As per my use case, I have to write around 100 GB of data with an ingestion
> speed of around 200 mbps. While writing, I am getting a performance hit from
> compaction, which adds to the delay.
> I am using an 8-core machine with 16 GB RAM available and a 2 TB HDD (7200 RPM).
> Got some ideas from the archives and tried pre-splitting the regions, and
> configured HBase with the following parameters (configured in haste, so please
> guide me if anything's out of order):
>
>
>         <property>
>                 <name>hbase.hregion.memstore.block.multiplier</name>
>                 <value>4</value>
>         </property>
>         <property>
>                 <name>hbase.hregion.memstore.flush.size</name>
>                 <value>1073741824</value>
>         </property>
>
>         <property>
>                 <name>hbase.hregion.max.filesize</name>
>                 <value>1073741824</value>
>         </property>
>         <property>
>                 <name>hbase.hstore.compactionThreshold</name>
>                 <value>5</value>
>         </property>
>         <property>
>                 <name>hbase.hregion.majorcompaction</name>
>                 <value>0</value>
>         </property>
>         <property>
>                 <name>hbase.hstore.blockingWaitTime</name>
>                 <value>30000</value>
>         </property>
>         <property>
>                 <name>hbase.hstore.blockingStoreFiles</name>
>                 <value>200</value>
>         </property>
>
>         <property>
>                 <name>hbase.regionserver.lease.period</name>
>                 <value>3000000</value>
>         </property>
>
>
> but I am still not able to achieve the desired rate; I am getting around 110 mbps.
> I need some optimizations, so could you please help out?
>
> Thanks
> Tarang Dawer
>
>
>
>
>
> On Fri, Mar 22, 2013 at 6:05 PM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
> > Hi Tarang,
> >
> > I would recommend that you take a look at the list archives first to see
> > all the discussions related to compaction. You will find many
> > interesting hints and tips.
> >
> >
> >
> > http://search-hadoop.com/?q=compactions&fc_project=HBase&fc_type=mail+_hash_+user
> >
> > After that, you will need to provide more details regarding how you
> > are using HBase and how the compaction is impacting you.
> >
> > JM
> >
> > 2013/3/22 tarang dawer <[EMAIL PROTECTED]>:
> > > Hi
> > > I am currently using HBase 0.94.2. Its write performance is being
> > > affected by compaction.
> > > Could you please suggest some quick tips on how to deal with it?
> > >
> > > Thanks
> > > Tarang Dawer
> >
>
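
For comparison with the configuration quoted above, here is a sketch of the
compaction-related knobs at roughly their stock 0.94-era defaults; these values
are a baseline for comparison and my own assumptions, not advice given anywhere
in this thread:

        <property>
                <!-- Minimum number of StoreFiles before a minor compaction is
                     considered (the configuration above sets this to 5). -->
                <name>hbase.hstore.compactionThreshold</name>
                <value>3</value>
        </property>
        <property>
                <!-- Writes to a region block once any store reaches this many
                     files (the configuration above raises this to 200). -->
                <name>hbase.hstore.blockingStoreFiles</name>
                <value>7</value>
        </property>
        <property>
                <!-- One day in ms; the configuration above sets 0, which
                     disables time-based major compactions entirely. -->
                <name>hbase.hregion.majorcompaction</name>
                <value>86400000</value>
        </property>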