Re: Hbase tuning for heavy write cluster
Yes, it is normal.

On Jan 25, 2014, at 2:12 AM, Rohit Dev <[EMAIL PROTECTED]> wrote:

> I changed these settings:
> - hbase.hregion.memstore.flush.size - 536870912
> - hbase.hstore.blockingStoreFiles - 30
> - hbase.hstore.compaction.max - 15
> - hbase.hregion.memstore.block.multiplier - 3
>
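For reference, the properties listed above are standard HBase region server settings; a minimal hbase-site.xml sketch with the values from this message might look like the following (how the values are actually deployed, e.g. via Cloudera Manager on CDH, may differ):

    <!-- Sketch only: values as listed in this thread -->
    <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <value>536870912</value>  <!-- 512 MB per-region memstore flush threshold -->
    </property>
    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>30</value>  <!-- allow more store files per store before blocking updates -->
    </property>
    <property>
      <name>hbase.hstore.compaction.max</name>
      <value>15</value>  <!-- max store files picked up per compaction -->
    </property>
    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>3</value>  <!-- block writes at multiplier * flush.size per region -->
    </property>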
> Things seem to be getting better now; I'm not seeing any of those
> annoying 'Blocking updates' messages anymore. However, I am seeing an
> increase in 'Compaction Queue' size on some servers.
>
> I noticed memstores are getting flushed, but some with 'compaction
> requested=true' [1]. Is this normal?
>
>
> [1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore
> flush of ~512.0 M/536921056, currentsize=3.0 M/3194800 for region
> tsdb,\x008ZR\xE1t\xC0\x00\x00\x02\x01\xB0\xF9\x00\x00(\x00\x0B]\x00\x008M((\x00\x00Bk\x9F\x0B,1390598160292.7fb65e5fd5c4cfe08121e85b7354bae9.
> in 3422ms, sequenceid=18522872289, compaction requested=true
>
> On Fri, Jan 24, 2014 at 6:51 PM, Bryan Beaudreault
> <[EMAIL PROTECTED]> wrote:
>> Also, I think you can push hbase.hstore.blockingStoreFiles quite a bit
>> higher; you could try something like 50. It will reduce read performance
>> a bit, but it shouldn't be too bad, especially for something like
>> opentsdb. If you are going to up blockingStoreFiles, you're probably
>> also going to want to up hbase.hstore.compaction.max.
>>
>> For my tsdb cluster, which runs on 8 i2.4xlarge instances in EC2, we have
>> 90 regions for the tsdb table. We were also having issues with blocking,
>> and I upped blockingStoreFiles to 35, compaction.max to 15, and
>> memstore.block.multiplier to 3. We haven't had problems since. The
>> memstore flush size for the tsdb table is 512MB.
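A per-table flush size like that is set as a table attribute rather than in hbase-site.xml. Roughly, from the HBase shell (a sketch only; on 0.94-era clusters the table may need to be disabled first unless online schema changes are enabled):

    disable 'tsdb'
    alter 'tsdb', METHOD => 'table_att', MEMSTORE_FLUSHSIZE => '536870912'
    enable 'tsdb'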
>>
>> Finally, a 64GB heap may prove problematic, but it's worth a shot. I'd
>> definitely recommend Java 7 with the G1 garbage collector, though. In
>> general, Java has a hard time with heap sizes greater than 20-25GB
>> without some careful tuning.
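For anyone trying that, the GC choice is normally made in hbase-env.sh; an illustrative sketch only, since the exact flags and pause target would need testing against the real workload:

    # Sketch: enable G1 for the region server JVM on Java 7
    # (heap size itself is usually set separately, e.g. via HBASE_HEAPSIZE)
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=100"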
>>
>>
>> On Fri, Jan 24, 2014 at 9:44 PM, Bryan Beaudreault
>> <[EMAIL PROTECTED]> wrote:
>>
>>> It seems from your ingestion rate that you are still blowing through
>>> HFiles too fast. You're going to want to raise the MEMSTORE_FLUSHSIZE for
>>> the table from the default of 128MB. If opentsdb is the only thing on this
>>> cluster, you can do the math pretty easily to find the maximum allowable
>>> value, based on your heap size and accounting for the 40% (default) used
>>> for the block cache.
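As a rough sketch of that math, assuming the 0.94-era default hbase.regionserver.global.memstore.upperLimit of 0.4 (memstores may use at most 40% of the heap in aggregate) and the 16GB heap mentioned elsewhere in this thread:

    heap                          = 16 GB           (example from this thread)
    aggregate memstore cap        ~ 0.40 * 16 GB    = ~6.4 GB
    per-region flush size         = 512 MB
    regions that can hold a full
      memstore at the same time   ~ 6.4 GB / 512 MB = ~12-13

So with something like 160 regions per server, a larger flush size only helps if writes are concentrated in a relatively small number of hot regions; otherwise the global memstore limit forces flushes before the per-region threshold is ever reached.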
>>>
>>>
>>> On Fri, Jan 24, 2014 at 9:38 PM, Rohit Dev <[EMAIL PROTECTED]> wrote:
>>>
>>>> Hi Kevin,
>>>>
>>>> We have about 160 regions per server, with a 16GB max region size and 10
>>>> drives for HBase. I've looked at disk IO and that doesn't seem to be
>>>> a problem (% utilization is < 2 across all disks).
>>>>
>>>> Any suggestion on what heap size I should allocate? Normally I allocate
>>>> 16GB.
>>>>
>>>> Also, I read that increasing hbase.hstore.blockingStoreFiles and
>>>> hbase.hregion.memstore.block.multiplier is a good idea for a write-heavy
>>>> cluster, but in my case it seems to be heading in the wrong direction.
>>>>
>>>> Thanks
>>>>
>>>> On Fri, Jan 24, 2014 at 6:31 PM, Kevin O'dell <[EMAIL PROTECTED]>
>>>> wrote:
>>>>> Rohit,
>>>>>
>>>>> A 64GB heap is not ideal; you will run into some weird issues. How many
>>>>> regions are you running per server, how many drives are in each node, and
>>>>> what other settings have you changed from the defaults?
>>>>> On Jan 24, 2014 6:22 PM, "Rohit Dev" <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We are running OpenTSDB on a CDH 4.3 HBase cluster with mostly default
>>>>>> settings. The cluster is write-heavy, and I'm trying to see which
>>>>>> parameters I can tune to optimize write performance.
>>>>>>
>>>>>>
>>>>>> # I get messages related to Memstore [1] and Slow Response [2] very
>>>>>> often; is this an indication of a problem?
>>>>>>
>>>>>> I tried increasing some parameters on one node:
>>>>>> - hbase.hstore.blockingStoreFiles - from the default of 7 to 15