HBase, mail # user - Key formats and very low cardinality leading fields


Re: Key formats and very low cardinality leading fields
Jean-Marc Spaggiari 2012-09-03, 19:20
Yes, you're right, but again, it will depend on the number of
regionservers and the distribution of your data.

If you have 3 region servers and your data is evenly distributed, that
means all the data starting with a 1 will be on server 1, and so on.

So if you write a million lines starting with a 1, they will all
land on the same server.

Of course, you can pre-split your table, like 1a to 1z, and assign each
region to one of your 3 servers. That way you will avoid hotspotting
even if you write millions of lines starting with a 1.
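
As an illustration only, here is a minimal sketch of such a pre-split
using the 0.94-era Java admin API; the table name "mytable", the
family "d" and the exact split points are just assumptions for the
example, not something from the thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("mytable");
    desc.addFamily(new HColumnDescriptor("d"));

    // Two split points inside the '1' prefix spread keys starting with
    // '1' over three regions ( ...-1i, 1i-1r, 1r-... ), so even a burst
    // of writes that all start with '1' can land on three regions and,
    // once balanced, on the three region servers.
    byte[][] splits = new byte[][] {
        Bytes.toBytes("1i"),
        Bytes.toBytes("1r")
    };

    admin.createTable(desc, splits);
    admin.close();
  }
}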

If you have one hundred regions, you will face the same issue at the
beginning, but the more data you add, the more your table will be
split across all the servers and the less hotspotting you will have.

Can't you just reverse your fields and put the 1 to 30 at the end of the key?
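
As a sketch of what the two layouts would look like (the field names,
the ":" separator and the helper methods below are invented for the
example, not something from the thread):

import org.apache.hadoop.hbase.util.Bytes;

public class KeyLayouts {

  // Layout from the question: low-cardinality prefix first. All keys
  // for one prefix sort next to each other, so a burst of writes for a
  // single prefix hits one region at a time.
  static byte[] prefixFirst(int prefix, String distinctValue) {
    return Bytes.toBytes(prefix + ":" + distinctValue);
  }

  // Reversed layout: high-cardinality, non-sequential value first.
  // Keys for the same prefix are scattered across the whole key space,
  // so the writes spread over whatever regions exist.
  static byte[] valueFirst(int prefix, String distinctValue) {
    return Bytes.toBytes(distinctValue + ":" + prefix);
  }
}

The trade-off is that the 1-to-30 field can no longer be used as a
contiguous scan prefix.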

2012/9/3, Eric Czech <[EMAIL PROTECTED]>:
> Thanks for the response Jean-Marc!
>
> I understand what you're saying but in a more extreme case, let's say
> I'm choosing the leading number in the range 1 - 3 instead of 1 - 30.
> In that case, it seems like all of the data for any one prefix would
> already be split well across the cluster and as long as the second
> value isn't written sequentially, there wouldn't be an issue.
>
> Is my reasoning there flawed at all?
>
> On Mon, Sep 3, 2012 at 2:31 PM, Jean-Marc Spaggiari
> <[EMAIL PROTECTED]> wrote:
>> Hi Eric,
>>
>> In HBase, data is stored sequentially based on the lexicographical
>> (byte) order of the keys.
>>
>> It will depend on the number of regions and regionservers you have,
>> but if you write data from 23AAAAAA to 23ZZZZZZ it will most probably
>> go to the same region even if the cardinality of the 2nd part of the
>> key is high.
>>
>> If the first number is always changing between 1 and 30 for each
>> write, then you will reach multiple regions/servers (if you have that
>> many); otherwise, you might have some hot-spotting.
>>
>> JM
>>
>> 2012/9/3, Eric Czech <[EMAIL PROTECTED]>:
>>> Hi everyone,
>>>
>>> I was curious whether or not I should expect any write hot spots if I
>>> structured my composite keys such that the first field is a
>>> low-cardinality value (maybe 30 distinct values) and the next field
>>> contains a very high-cardinality value that would not be written
>>> sequentially.
>>>
>>> More concisely, I want to do this:
>>>
>>> Given one number between 1 and 30, write many millions of rows with
>>> keys like <number chosen> : <some generally distinct, non-sequential
>>> value>
>>>
>>> Would there be any problem with the millions of writes happening with
>>> the same first field key prefix even if the second field is largely
>>> unique?
>>>
>>> Thank you!
>>>
>
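
To make the distribution question above concrete, here is a minimal
sketch that asks the client where a few "<number> : <distinct value>"
keys would land; the table name "mytable" and the key encoding are
assumptions carried over from the sketches above, and the calls are the
0.94-era client API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class WhereDoesItLand {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");

    // Probe one key per prefix and print the region (and server) each
    // write would go to.
    for (int prefix = 1; prefix <= 3; prefix++) {
      byte[] row = Bytes.toBytes(prefix + ":some-distinct-value");
      HRegionLocation loc = table.getRegionLocation(row);
      System.out.println(Bytes.toString(row) + " -> "
          + loc.getHostname() + " / "
          + loc.getRegionInfo().getRegionNameAsString());
    }

    table.close();
  }
}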