HBase >> mail # user >> more regionservers does not improve performance

Re: more regionservers does not improve performance
I'm intrigued by this statement in your first mail:

> What is strange is that I do not get much run time improvement by
> increasing the number regionservers beyond about 4. Indeed, it seems that
> the system runs slower with 8 regionservers than with 4.

So ... are you saying that if you shut down four of your region
servers and task trackers right now ... you are able to generate more
throughput (requests/sec)? Merely adding more region servers slows
things down?
Or did you change other things (like more region splits, etc) between
these two states?

Also - your 40 mappers ... are they using TableMapReduceUtil based
splits ... or custom splits? Are the mappers going across the network
to region servers on other nodes? Or are they all local calls?
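For reference, a TableMapReduceUtil-driven scan job looks roughly like the sketch below (HBase 0.94-era API; the class and table names are placeholders, and this requires the HBase client jars on the classpath). With the default TableInputFormat splits you get one map task per region, and the framework tries to schedule each mapper on the host serving that region, so scans stay local:

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

public class MyScanJob {
  static class MyMapper extends TableMapper<ImmutableBytesWritable, Result> {
    // map(...) omitted; it is invoked once per row of the input region
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job();             // cluster Configuration setup omitted
    Scan scan = new Scan();
    scan.setCaching(500);            // fewer RPC round trips per mapper
    scan.setCacheBlocks(false);      // don't churn the block cache with MR scans
    // One input split (and thus one mapper) per region by default:
    TableMapReduceUtil.initTableMapperJob("mytable", scan, MyMapper.class,
        ImmutableBytesWritable.class, Result.class, job);
    job.waitForCompletion(true);
  }
}
```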

Just trying to understand your cluster setup a bit more ...

On Fri, Oct 12, 2012 at 7:30 PM, Suraj Varma <[EMAIL PROTECTED]> wrote:
> Hi Jonathan:
> What specific metric on ganglia did you notice for "IO is spiking"? Is
> it your disk IO? Is your disk swapping? Do you see cpu iowait spikes?
> I see you have given 8g to the RegionServer ... how much RAM is
> available total on that node? What heap are the individual mappers &
> DN set to run on (i.e. check whether you are overallocated on heap
> when the _mappers_ run ... causing disk swapping ... leading to IO?).
> There can be multiple causes ... so, you may need to look at ganglia
> stats and narrow the bottleneck down as described in
> http://hbase.apache.org/book/casestudies.perftroub.html
> Here's a good reference for all the memstore related tweaks you can
> try (and also to understand what each configuration means):
> http://blog.sematext.com/2012/07/16/hbase-memstore-what-you-should-know/
> Also, provide more details on your schema (CFs, row size), Put sizes,
> etc as well to see if that triggers an idea from the list.
> --S
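For concreteness, the memstore knobs discussed in that post live in hbase-site.xml. The values below are the 0.94-era defaults, shown only as a starting point; verify them against your own version before changing anything:

```xml
<!-- hbase-site.xml: memstore tuning knobs (0.94-era defaults shown) -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- flush a region's memstore at 128 MB -->
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>2</value> <!-- block updates at multiplier * flush.size, i.e. the 256.0m log message -->
</property>
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value> <!-- all memstores together may use at most 40% of RS heap -->
</property>
```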
> On Fri, Oct 12, 2012 at 12:46 PM, Bryan Beaudreault
> <[EMAIL PROTECTED]> wrote:
>> I recommend turning on debug logging on your region servers.  You may need
>> to tune down certain packages back to info, because there are a few spammy
>> ones, but overall it helps.
>> You should see messages such as "12/10/09 14:22:57 INFO
>> regionserver.HRegion: Blocking updates for 'IPC Server handler 41 on 60020'
>> on region XXX: memstore size 256.0m is >= than blocking 256.0m size".  As
>> you can see, this is an INFO anyway so you should be able to see it now if
>> it is happening.
>> You can try upping the number of IPC handlers and the memstore flush
>> threshold.  Also, maybe you are bottlenecked by the WAL.  Try doing
>> put.setWriteToWAL(false), just to see if it increases performance.  If so
>> and you want to be a bit more safe with regard to the wal, you can try
>> turning on deferred flush on your table.  I don't really know how to
>> increase performance of the wal aside from that, if this does seem to have
>> an effect.
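The two experiments suggested above look roughly like this with the 0.94-era client API (a sketch, not a recipe: the table name is a placeholder, `conf` is your cluster's Configuration, and skipping the WAL risks data loss on a region server crash, so use it only as a diagnostic):

```java
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// 1) Diagnostic only: skip the WAL entirely for these puts.
Put put = new Put(Bytes.toBytes("rowkey"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
put.setWriteToWAL(false);  // unsafe: edits are lost if the RS crashes

// 2) Middle ground: keep the WAL but let it flush asynchronously.
HBaseAdmin admin = new HBaseAdmin(conf);  // conf: org.apache.hadoop.conf.Configuration
admin.disableTable("mytable");
HTableDescriptor desc = admin.getTableDescriptor(Bytes.toBytes("mytable"));
desc.setDeferredLogFlush(true);  // small loss window, far fewer WAL syncs
admin.modifyTable(Bytes.toBytes("mytable"), desc);
admin.enableTable("mytable");
```

If the WAL is the bottleneck, variant 1 should show a clear throughput jump; variant 2 then recovers most of that gain without giving up the log entirely.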
>>> On Fri, Oct 12, 2012 at 3:15 PM, Jonathan Bishop <[EMAIL PROTECTED]> wrote:
>>> Kevin,
>>> Sorry, I am fairly new to HBase. Can you be specific about what settings I
>>> can change, and also where they are specified?
>>> Pretty sure I am not hotspotting, and increasing memstore does not seem to
>>> have any effect.
>>> I do not see any messages in my regionserver logs concerning blocking.
>>> I am suspecting that I am hitting some limit in our grid, but would like to
>>> know where that limit is being imposed.
>>> Jon
>>> On Fri, Oct 12, 2012 at 6:44 AM, Kevin O'dell <[EMAIL PROTECTED]
>>> >wrote:
>>> > Jonathan,
>>> >
>>> >   Let's take a deeper look here.
>>> >
>>> > What is your memstore set at for the table/CF in question?  Let's compare
>>> > that value with the flush size you are seeing for your regions.  If they
>>> > are really small flushes, is it all to the same region?  If so, that is
>>> > going to be a schema issue.  If they are full flushes you can up your
>>> > memstore, assuming you have the heap to cover it.  If they are smaller
>>> > flushes but