HBase user mailing list - Poor HBase map-reduce scan performance


Re: Poor HBase map-reduce scan performance
Bryan Keller 2013-05-01, 15:00
Yes, I would like to try this. If you can point me to the pom.xml patch,
that would save me some time.

On Tuesday, April 30, 2013, lars hofhansl wrote:

> If you can, try 0.94.4+; it should significantly reduce the number of
> bytes copied around in RAM during scanning, especially if you have wide
> rows and/or large key portions. That in turn makes scans scale better
> across cores, since RAM is a shared resource between cores (much like disk).
>
>
> It's not hard to build the latest HBase against Cloudera's version of
> Hadoop. I can send along a simple patch to pom.xml to do that.
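> Roughly, the patch adds Cloudera's Maven repository to the pom and then
> points the build at a CDH Hadoop artifact, along these lines (the CDH
> version string below is only an example - substitute whatever release
> you are running):
>
>   <repository>
>     <id>cloudera</id>
>     <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
>   </repository>
>
>   mvn clean install -DskipTests -Dhadoop.profile=2.0 \
>       -Dhadoop.version=2.0.0-cdh4.2.0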
>
> -- Lars
>
>
>
> ________________________________
>  From: Bryan Keller <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Sent: Tuesday, April 30, 2013 11:02 PM
> Subject: Re: Poor HBase map-reduce scan performance
>
>
> The table has hashed keys so rows are evenly distributed among the
> regionservers, and load on each regionserver is pretty much the same. I
> also have per-table balancing turned on. I get mostly data-local mappers
> with only a few rack-local (maybe 10 of the 250 mappers).
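> (For context, the hashing is along these lines - a simplified sketch of
> the idea, not the actual key code:)
>
>   import java.security.MessageDigest;
>   import org.apache.hadoop.hbase.util.Bytes;
>
>   public class RowKeys {
>     // Prefix the natural id with 4 bytes of its MD5 so the sorted
>     // rowkeys scatter evenly across regions instead of hotspotting.
>     public static byte[] hashedKey(String naturalId) throws Exception {
>       byte[] id = Bytes.toBytes(naturalId);
>       byte[] md5 = MessageDigest.getInstance("MD5").digest(id);
>       return Bytes.add(Bytes.head(md5, 4), id);
>     }
>   }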
>
> Currently the table uses a wide-table schema, with lists of data structures
> stored as columns, with column prefixes grouping the data structures (e.g.
> 1_name, 1_address, 1_city, 2_name, 2_address, 2_city). I was thinking of
> moving those data structures to protobuf, which would cut down on the number
> of columns. The downside is that I can't filter on a single value inside the
> structure, but that is a tradeoff I would make for performance. I was also
> considering restructuring the table into a tall table.
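> To make the tradeoff concrete, a sketch of the two layouts (the family
> and qualifier names here are hypothetical):
>
>   import org.apache.hadoop.hbase.client.Put;
>   import org.apache.hadoop.hbase.util.Bytes;
>
>   public class Layouts {
>     static final byte[] CF = Bytes.toBytes("d");
>
>     // Current wide layout: one cell per field, the numeric prefix
>     // groups the fields of one structure; every cell repeats the key.
>     public static Put wide(byte[] row) {
>       Put p = new Put(row);
>       p.add(CF, Bytes.toBytes("1_name"), Bytes.toBytes("Acme"));
>       p.add(CF, Bytes.toBytes("1_address"), Bytes.toBytes("1 Main St"));
>       p.add(CF, Bytes.toBytes("1_city"), Bytes.toBytes("Springfield"));
>       return p;
>     }
>
>     // Consolidated layout: the whole structure serialized (e.g. with
>     // protobuf) into one cell, paying the key overhead once - at the
>     // cost of no longer filtering on individual fields server-side.
>     public static Put packed(byte[] row, byte[] protoBytes) {
>       Put p = new Put(row);
>       p.add(CF, Bytes.toBytes("1"), protoBytes);
>       return p;
>     }
>   }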
>
> Something interesting is that my old regionserver machines had five 15k
> SCSI drives instead of two SSDs, and performance was about the same. Also,
> my old network was 1 Gbit and now it is 10 Gbit, so neither network nor
> disk I/O appears to be the bottleneck. CPU usage is rather high on the
> regionservers, so that seems like the best candidate to investigate. I will
> try profiling it tomorrow and will report back. I may also revisit
> compression on vs. off, since that is adding load to the CPU.
>
> I'll also come up with a sample program that generates data similar to my
> table.
>
>
> On Apr 30, 2013, at 10:01 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
>
> > Your average row is 35k, so scanner caching would not make a huge
> > difference, although I would have expected some improvement from setting
> > it to 10 or 50, since you have a wide 10GbE pipe.
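> > Setting that on the scan you hand to the job is a one-liner, e.g.
> > (a sketch):
> >
> >   import org.apache.hadoop.hbase.client.Scan;
> >
> >   public class Scans {
> >     public static Scan cachingScan() {
> >       Scan scan = new Scan();
> >       // Ship up to 50 rows per RPC instead of the 0.94 default of 1;
> >       // at ~35k per row that is still under ~2 MB per round trip.
> >       scan.setCaching(50);
> >       return scan;
> >     }
> >   }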
> >
> > I assume your table is split sufficiently to touch all RegionServers...
> > Do you see the same load/IO on all RegionServers?
> >
> > A bunch of scan improvements have gone into HBase since 0.94.2.
> > I blogged about some of these changes here:
> > http://hadoop-hbase.blogspot.com/2012/12/hbase-profiling.html
> >
> > In your case - since you have many columns, each of which carries the
> > rowkey - you might benefit a lot from HBASE-7279.
> >
> > In the end, HBase *is* slower than straight HDFS for full scans. How
> > could it not be?
> > So I would start by looking at HDFS first. Make sure Nagle's is disabled
> > in both HBase and HDFS.
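> > The knobs are the tcpnodelay settings, e.g. (a sketch - these would
> > normally go into hbase-site.xml and the Hadoop configs rather than
> > code):
> >
> >   import org.apache.hadoop.conf.Configuration;
> >   import org.apache.hadoop.hbase.HBaseConfiguration;
> >
> >   public class NoNagle {
> >     public static Configuration create() {
> >       Configuration conf = HBaseConfiguration.create();
> >       // Disable Nagle's algorithm (i.e. set TCP_NODELAY) on the
> >       // HBase RPC client and the underlying Hadoop IPC client.
> >       conf.setBoolean("hbase.ipc.client.tcpnodelay", true);
> >       conf.setBoolean("ipc.client.tcpnodelay", true); // Hadoop IPC
> >       return conf;
> >     }
> >   }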
> >
> > And lastly, SSDs are somewhat new territory for HBase. Maybe Andy Purtell
> > is listening; I think he did some tests with HBase on SSDs.
> > With rotating media you typically see an improvement with compression.
> > With SSDs, the added CPU needed for decompression might outweigh the
> > benefits.
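> > If you do re-test compression on vs. off, it is a per-column-family
> > setting, e.g. (a sketch; the family name is hypothetical):
> >
> >   import org.apache.hadoop.hbase.HColumnDescriptor;
> >   import org.apache.hadoop.hbase.io.hfile.Compression;
> >
> >   public class Families {
> >     public static HColumnDescriptor compressed() {
> >       HColumnDescriptor cf = new HColumnDescriptor("d");
> >       // A/B test: Algorithm.NONE vs. SNAPPY on the same workload.
> >       cf.setCompressionType(Compression.Algorithm.SNAPPY);
> >       return cf;
> >     }
> >   }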
> >
> > At the risk of starting a larger discussion here, I would posit that
> > HBase's LSM-based design, which trades random IO for sequential IO, might
> > be a bit more questionable on SSDs.
> >
> > If you can, it would be nice to run a profiler against one of the
> > RegionServers (or maybe do it with the single-RS setup) and see where it
> > is bottlenecked.
> > (And if you send me a sample program to generate some data - not 700g,
> > though :) - I'll try to do a bit of profiling over the next few days as
> > my day job permits, but I do not have any machines with SSDs.)
> >
> > -- Lars
> >
> >