HBase, mail # user - more regionservers does not improve performance


Re: more regionservers does not improve performance
Suraj Varma 2012-10-13, 02:30
Hi Jonathan:
What specific metric on ganglia did you notice for "IO is spiking"? Is
it your disk IO? Is the node swapping? Do you see CPU iowait spikes?

I see you have given 8g to the RegionServer ... how much total RAM is
available on that node? What heap sizes are the individual mappers and
the DN set to run with? (i.e. check whether you are over-allocated on
heap when the _mappers_ run, causing swapping and hence the IO.)
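To make that concrete with purely hypothetical numbers: an 8g
RegionServer heap plus a 1g DataNode heap plus eight concurrent mappers
at 1g each already commits about 17g of heap; on a 16g node that leaves
nothing for the TaskTracker or the OS page cache, so the box will swap,
which then shows up as disk IO and iowait.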

There can be multiple causes ... so, you may need to look at ganglia
stats and narrow the bottleneck down as described in
http://hbase.apache.org/book/casestudies.perftroub.html

Here's a good reference for all the memstore related tweaks you can
try (and also to understand what each configuration means):
http://blog.sematext.com/2012/07/16/hbase-memstore-what-you-should-know/
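
For quick reference, here is a minimal hbase-site.xml sketch of the
knobs that post covers, filled in with the 0.94-era defaults (starting
points to tune from, not recommendations):

    <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <value>134217728</value>  <!-- per-region flush threshold, 128m -->
    </property>
    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>2</value>  <!-- block updates at multiplier * flush.size -->
    </property>
    <property>
      <name>hbase.regionserver.global.memstore.upperLimit</name>
      <value>0.4</value>  <!-- max fraction of RS heap for all memstores -->
    </property>
    <property>
      <name>hbase.regionserver.global.memstore.lowerLimit</name>
      <value>0.35</value>  <!-- forced flushes run until usage drops below this -->
    </property>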

Also, provide more details on your schema (CFs, row size), Put sizes,
etc. to see if that triggers an idea from the list.
--S
On Fri, Oct 12, 2012 at 12:46 PM, Bryan Beaudreault
<[EMAIL PROTECTED]> wrote:
> I recommend turning on debug logging on your region servers.  You may need
> to tune down certain packages back to info, because there are a few spammy
> ones, but overall it helps.
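
A sketch of what that usually means, using logger names from the stock
conf/log4j.properties on each region server (adjust to taste; pick up
the change with a restart or, if available, the /logLevel servlet on
the RS web UI):

    log4j.logger.org.apache.hadoop.hbase=DEBUG
    # keep the chattier packages at INFO so the log stays readable
    log4j.logger.org.apache.zookeeper=INFO
    log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO
    log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO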
>
> You should see messages such as "12/10/09 14:22:57 INFO
> regionserver.HRegion: Blocking updates for 'IPC Server handler 41 on 60020'
> on region XXX: memstore size 256.0m is >= than blocking 256.0m size".  As
> you can see, this is an INFO anyway so you should be able to see it now if
> it is happening.
>
> You can try upping the number of IPC handlers and the memstore flush
> threshold.  Also, maybe you are bottlenecked by the WAL.  Try doing
> put.setWriteToWAL(false), just to see if it increases performance.  If so,
> and you want to be a bit safer with regard to the WAL, you can try
> turning on deferred flush on your table.  I don't really know how to
> increase performance of the WAL aside from that, if this does seem to
> have an effect.
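
A rough sketch of the two experiments described above, against the
0.92/0.94-era client API (the table name, family, qualifier and row key
are placeholders; skipping the WAL risks losing unflushed edits if a
region server dies, so treat it purely as a diagnostic):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WalExperiment {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        byte[] tableName = Bytes.toBytes("my_table");  // placeholder

        // Experiment 1: skip the WAL on each Put to see whether the WAL
        // is the write bottleneck.
        HTable table = new HTable(conf, tableName);
        Put put = new Put(Bytes.toBytes("row-1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        put.setWriteToWAL(false);  // diagnostic only
        table.put(put);
        table.close();

        // Experiment 2: deferred log flush -- edits still go to the WAL,
        // but it is synced on a background interval rather than per write.
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = admin.getTableDescriptor(tableName);
        desc.setDeferredLogFlush(true);
        admin.disableTable(tableName);
        admin.modifyTable(tableName, desc);
        admin.enableTable(tableName);
        admin.close();
      }
    }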
>
>
>
> On Fri, Oct 12, 2012 at 3:15 PM, Jonathan Bishop <[EMAIL PROTECTED]> wrote:
>
>> Kevin,
>>
>> Sorry, I am fairly new to HBase. Can you be specific about what settings I
>> can change, and also where they are specified?
>>
>> Pretty sure I am not hotspotting, and increasing memstore does not seem to
>> have any effect.
>>
>> I do not see any messages in my regionserver logs concerning blocking.
>>
>> I am suspecting that I am hitting some limit in our grid, but would like to
>> know where that limit is being imposed.
>>
>> Jon
>>
>> On Fri, Oct 12, 2012 at 6:44 AM, Kevin O'dell <[EMAIL PROTECTED]> wrote:
>>
>> > Jonathan,
>> >
>> >   Let's take a deeper look here.
>> >
>> > What is your memstore flush size set to for the table/CF in question?
>> > Let's compare that value with the flush sizes you are seeing for your
>> > regions.  If the flushes are really small, are they all for the same
>> > region?  If so, that points to a schema issue.  If they are full-size
>> > flushes, you can up your memstore, assuming you have the heap to cover
>> > it.  If they are smaller flushes but to different regions, you are most
>> > likely suffering from global limit pressure and flushing too soon.
>> >
>> > Are you flushing prematurely due to HLogs rolling?  Look for "too many
>> > hlogs" messages and at the flushes that follow them.  It may benefit
>> > you to raise that limit.
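
If the logs do show that, the cap being hit is most likely
hbase.regionserver.maxlogs (32 by default in this era); raising it in
hbase-site.xml on the region servers would look roughly like this, with
a purely illustrative value:

    <property>
      <name>hbase.regionserver.maxlogs</name>
      <value>64</value>  <!-- default 32; delays forced "too many hlogs" flushes -->
    </property>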
>> >
>> > Are you blocking?  As Suraj was saying, you may be blocking updates in
>> > 90-second stretches.  Check the RS logs for those messages as well and
>> > then follow Suraj's advice.
>> >
>> > This is where I would start to optimize your write path.  I hope the
>> > above helps.
>> >
>> > On Fri, Oct 12, 2012 at 3:34 AM, Suraj Varma <[EMAIL PROTECTED]> wrote:
>> >
>> > > What have you configured your hbase.hstore.blockingStoreFiles and
>> > > hbase.hregion.memstore.block.multiplier to? Both of these block
>> > > updates when the limit is hit. Try increasing these to, say, 20 and 4
>> > > from the defaults of 7 and 2 and see if it helps.
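
In hbase-site.xml on the region servers that would look roughly like
this (values straight from the suggestion above; the stated defaults
are 7 and 2):

    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>20</value>
    </property>
    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>4</value>
    </property>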
>> > >
>> > > If this still doesn't help, see if you can set up ganglia to get a