HBase, mail # user - flushing + compactions after config change


Viral Bajaria 2013-06-27, 07:40
Re: flushing + compactions after config change
Anoop John 2013-06-27, 07:51
> the flush size is at 128m and there is no memory pressure
You mean there is enough memstore-reserved heap in the RS, so that there
won't be premature flushes because of global heap pressure? What is the RS
max memory, and how many regions and CFs in each? Can you check whether the
flushes are happening because of too many WAL files?
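For reference, these are the settings behind those two questions. The values
shown are the usual 0.94-era defaults, given here only as an illustration,
not this cluster's actual config:

  <property>
    <name>hbase.regionserver.global.memstore.upperLimit</name>
    <value>0.4</value>   <!-- fraction of RS heap at which flushes are forced globally -->
  </property>

  <property>
    <name>hbase.regionserver.global.memstore.lowerLimit</name>
    <value>0.35</value>  <!-- forced flushing continues until usage drops below this -->
  </property>

  <property>
    <name>hbase.regionserver.maxlogs</name>
    <value>32</value>    <!-- exceeding this many WAL files also forces flushes -->
  </property>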

-Anoop-

On Thu, Jun 27, 2013 at 1:10 PM, Viral Bajaria <[EMAIL PROTECTED]> wrote:

> Hi All,
>
> I wanted some help understanding what's going on with my current setup.
> I updated my config to the following settings:
>
>   <property>
>     <name>hbase.hregion.max.filesize</name>
>     <value>107374182400</value>
>   </property>
>
>   <property>
>     <name>hbase.hregion.memstore.block.multiplier</name>
>     <value>4</value>
>   </property>
>
>   <property>
>     <name>hbase.hregion.memstore.flush.size</name>
>     <value>134217728</value>
>   </property>
>
>   <property>
>     <name>hbase.hstore.blockingStoreFiles</name>
>     <value>50</value>
>   </property>
>
>   <property>
>     <name>hbase.hregion.majorcompaction</name>
>     <value>0</value>
>   </property>
>
> Prior to this, all the settings were default values. I wanted to increase
> the write throughput on my system and also control when major compactions
> happen. In addition to that, I wanted to make sure that my regions don't
> split quickly.
>
> After the change in settings, I am seeing a huge storm of memstore flushes
> and minor compactions, some of which get promoted to major compactions. The
> compaction queue is also way too high. For example, a few of the lines that
> I see in the logs are as follows:
>
> http://pastebin.com/Gv1S9GKX
>
> The regionserver whose logs are pasted above keeps on flushing and creating
> those small files, and shows the following metrics:
> memstoreSizeMB=657, compactionQueueSize=233, flushQueueSize=0,
> usedHeapMB=3907, maxHeapMB=10231
>
> I am unsure why it's doing such small flushes (< 100m) even though
> the flush size is set to 128m and there is no memory pressure.
>
> Any thoughts? Let me know if you need any more information; I also have
> ganglia running and can provide more metrics if needed.
>
> Thanks,
> Viral
>
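For a quick reading of the values in the config above (plain unit
conversions, spelled out for convenience):

  hbase.hregion.max.filesize         107374182400 B = 100 GB  (regions split only at ~100 GB)
  hbase.hregion.memstore.flush.size  134217728 B    = 128 MB  (per-region flush trigger)
  block.multiplier x flush.size      4 x 128 MB     = 512 MB  (per-region point where writes block)
  hbase.hstore.blockingStoreFiles    50                       (flushes block only past 50 store files)
  hbase.hregion.majorcompaction      0                        (time-based major compactions disabled)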
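On the small flushes: one mechanism consistent with Anoop's WAL question is
the per-regionserver WAL file cap. In this era of HBase, once the number of
WAL files exceeds hbase.regionserver.maxlogs (32 by default), the RS
force-flushes whichever regions hold the oldest WAL entries, regardless of
how full their memstores are. A rough sketch with assumed defaults (32 logs,
each rolling at ~60 MB, i.e. 0.95 x a 64 MB HDFS block):

  32 logs x ~60 MB = ~1.9 GB of WAL shared across all regions on the RS

With writes spread over many regions, the regions picked for these forced
flushes can easily hold far less than 128 MB each, which would produce
exactly the sub-100m flushes described above.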
Viral Bajaria 2013-06-27, 08:18
谢良 2013-06-27, 08:21
Viral Bajaria 2013-06-27, 08:29
Anoop John 2013-06-27, 09:03
Azuryy Yu 2013-06-27, 09:22
Viral Bajaria 2013-06-27, 09:47
Azuryy Yu 2013-06-27, 09:48
Viral Bajaria 2013-06-27, 10:08
谢良 2013-06-27, 10:36
Azuryy Yu 2013-06-27, 14:53
Viral Bajaria 2013-06-27, 21:06
Jean-Daniel Cryans 2013-06-27, 21:40
Viral Bajaria 2013-06-27, 23:27
Jean-Daniel Cryans 2013-06-28, 16:31
Viral Bajaria 2013-06-28, 21:39
Jean-Daniel Cryans 2013-06-28, 23:53
Himanshu Vashishtha 2013-07-01, 06:08
Azuryy Yu 2013-06-28, 01:09
Viral Bajaria 2013-06-28, 01:22
Anoop John 2013-06-28, 12:08
谢良 2013-06-27, 08:53