HBase, mail # user - HBase load distribution vs. scan efficiency


Re: HBase load distribution vs. scan efficiency
James Taylor 2014-01-21, 04:24
The salt byte is a stable hash of the rest of the row key. The system has
to remember the total number of buckets, as that's what the hash value is
modded by. Adding new regions/region servers is fine, as it's orthogonal to
the bucket count, though typically the cluster size determines the total
number of salt buckets. Phoenix does not allow you to change the number of
salt buckets for a table after it's created (you'd need to rewrite the
table in order to do that).
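As a rough sketch of the scheme described above (illustrative Python only; the hash function and bucket count here are made up, not Phoenix's actual implementation):

```python
import hashlib

SALT_BUCKETS = 8  # fixed at table creation; changing it means rewriting the table

def salt_byte(row_key: bytes, buckets: int = SALT_BUCKETS) -> int:
    """Stable hash of the original row key, reduced mod the bucket count."""
    return hashlib.md5(row_key).digest()[0] % buckets

def salted_key(row_key: bytes, buckets: int = SALT_BUCKETS) -> bytes:
    """Prepend the one-byte salt so consecutive keys spread across buckets."""
    return bytes([salt_byte(row_key, buckets)]) + row_key
```

Because the salt is a pure function of the row key and the bucket count, any client that knows the bucket count can recompute it, which is why the count can't change after table creation without rewriting every key.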

The salt byte is completely transparent in Phoenix, as your API is SQL
through JDBC. Phoenix manages setting the salt byte, skipping it when
interpreting the row key columns, and knowing that a range scan needs to
run on all possible bucket numbers while a point get doesn't, etc.
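The point-get vs. range-scan distinction can be sketched the same way (again illustrative, not Phoenix's code; `_salt` here is a stand-in hash):

```python
import hashlib

def _salt(row_key: bytes, buckets: int) -> int:
    # Stand-in stable hash; Phoenix uses its own hash internally.
    return hashlib.md5(row_key).digest()[0] % buckets

def point_get_key(row_key: bytes, buckets: int = 8) -> bytes:
    # A point get knows the full row key, so the salt byte can be
    # recomputed and only one bucket needs to be read.
    return bytes([_salt(row_key, buckets)]) + row_key

def range_scan_ranges(start: bytes, stop: bytes, buckets: int = 8):
    # A range scan cannot know which bucket each matching row landed in,
    # so it fans out one (start, stop) pair per bucket.
    return [(bytes([b]) + start, bytes([b]) + stop) for b in range(buckets)]
```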

Thanks,
James
On Mon, Jan 20, 2014 at 6:59 PM, William Kang <[EMAIL PROTECTED]> wrote:

> Hi James,
> Thanks for the link.
>
> Does this mean that the system has to remember the prefix, and append the
> prefix to the original key before the scan starts?
>
> If this is the case, and I somehow decide to change the prefix (maybe after
> adding many more region servers, or to use a different salting mechanism),
> might it cause all sorts of issues?
>
> If this is not the case, how would a user know what prefix to append to
> start the scan? This is why I asked about row filters, since you can use a
> regex to match the original key and skip the prefix. But I am wondering
> about the performance implications of using a row filter.
>
> Many thanks.
>
>
> William
>
>
> On Mon, Jan 20, 2014 at 8:15 PM, James Taylor <[EMAIL PROTECTED]
> >wrote:
>
> > Hi William,
> > Phoenix uses this "bucket mod" solution as well (
> > http://phoenix.incubator.apache.org/salted.html). For the scan, you have
> > to
> > run it in every possible bucket. You can still do a range scan, you just
> > have to prepend the bucket number to the start/stop key of each scan you
> > do, and then you do a merge sort with the results. Phoenix does all this
> > transparently for you.
> > Thanks,
> > James
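Each per-bucket scan in the quoted approach returns rows already sorted (once the salt byte is stripped), so the merge mentioned above is a standard k-way merge. A minimal sketch, not Phoenix's actual code:

```python
import heapq

def merge_bucket_scans(per_bucket_rows):
    # Each input list is one bucket's scan result, already sorted by the
    # unsalted key; heapq.merge restores one globally sorted stream.
    return list(heapq.merge(*per_bucket_rows))

# e.g. results from three buckets:
print(merge_bucket_scans([[b"a", b"m"], [b"c"], [b"b", b"z"]]))
# -> [b'a', b'b', b'c', b'm', b'z']
```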
> >
> >
> > On Mon, Jan 20, 2014 at 4:51 PM, William Kang <[EMAIL PROTECTED]
> > >wrote:
> >
> > > Hi,
> > > Thank you guys. This is an informative email chain.
> > >
> > > I have one follow-up question about using the "bucket mod" solution.
> > > Once you add the bucket number as the prefix to the key, how do you
> > > retrieve the rows? Do you have to use a row filter? Will there be any
> > > performance issue with using the row filter, since it seems that would
> > > require a full table scan?
> > >
> > > Many thanks.
> > >
> > >
> > > William
> > >
> > >
> > > On Mon, Jan 20, 2014 at 5:06 AM, Amit Sela <[EMAIL PROTECTED]>
> wrote:
> > >
> > > > The number of scans depends on the number of regions a day's data
> > > > uses. You need to manage compaction and splitting manually.
> > > > If a day's data is 100MB and you want regions to be no more than
> > > > 200MB, then it's two regions to scan per day; if it's 1GB, then 10,
> > > > etc.
> > > > Compression will help you maximize data per region and, as I've
> > > > recently learned, if your key occupies most of the bytes in a
> > > > KeyValue (the key is longer than family, qualifier and value
> > > > combined), then compression can be very efficient; I have a case
> > > > where 100GB is compressed to 7GB.
> > > >
> > > >
> > > >
> > > > On Mon, Jan 20, 2014 at 6:56 AM, Vladimir Rodionov
> > > > <[EMAIL PROTECTED]>wrote:
> > > >
> > > > > Ted, how does it differ from row key salting?
> > > > >
> > > > > Best regards,
> > > > > Vladimir Rodionov
> > > > > Principal Platform Engineer
> > > > > Carrier IQ, www.carrieriq.com
> > > > > e-mail: [EMAIL PROTECTED]
> > > > >
> > > > > ________________________________________
> > > > > From: Ted Yu [[EMAIL PROTECTED]]
> > > > > Sent: Sunday, January 19, 2014 6:53 PM
> > > > > To: [EMAIL PROTECTED]
> > > > > Subject: Re: HBase load distribution vs. scan efficiency
> > > > >
> > > > > Bill:
> > > > > See http://blog.sematext.com/2012/04/09/hbasewd