HBase, mail # user - Multiple tables vs big fat table


Re: Multiple tables vs big fat table
Ian Varley 2011-11-21, 17:11
Certainly, and that's all valid. I just wanted to make it clear to Mark (and others reading) that scans aren't inherently "bad" in HBase, and they don't need to scan the entire table (and usually shouldn't). Short, local scans are very efficient, provided your row keys are sorted in a way that's meaningful to your app. You don't have to do it that way (you can use a hash in the key) but it's a valid design pattern and makes great use of HBase's architecture (the fact that row keys are sorted on disk and in memory).
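To make the point above concrete: a sorted-map analogy (not the HBase API, just a sketch) shows why a short, local scan over contiguous row keys touches only the requested range. HBase keeps row keys sorted on disk and in memory, much like a `TreeMap` keeps its keys sorted; the key format `user#date` below is a hypothetical example.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class BoundedScanSketch {
    public static void main(String[] args) {
        // HBase stores rows sorted by row key; a TreeMap is a rough stand-in.
        TreeMap<String, String> table = new TreeMap<>();
        table.put("user1#2011-11-19", "login");
        table.put("user1#2011-11-20", "purchase");
        table.put("user1#2011-11-21", "logout");
        table.put("user2#2011-11-21", "login");

        // A short, local "scan": only rows in [startRow, stopRow),
        // never the whole table.
        SortedMap<String, String> range =
            table.subMap("user1#2011-11-20", "user2#");
        System.out.println(range.keySet()); // only user1's rows from the 20th on
    }
}
```

Because the app's query ("everything for user1 since the 20th") lines up with the key sort order, the scan cost is proportional to the range size, not the table size.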

Ian

On Nov 21, 2011, at 11:04 AM, Michael Segel wrote:

>
> Ian,
>
> The long and short...
>
> I was using the example to show scan() vs get() and how HBase scales in a linear fashion.
>
> To your point... if you hash your row key, you can't use start/stop row key values in your scan.
>
> We have an application where you would have to do a complete scan in order to find a subset of rows that required some work. To get around having to do a full scan, you could use a secondary index, or a table to store the row keys that you want to work with.
> The trick is then to query the subset and then split the resulting list. (You can do this by overloading the InputFormat class to take a Java list object as your input into a map/reduce job and then create n even splits... (OK, n even splits + 1 split holding the remainder...) )
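The splitting step described above can be sketched in plain Java (the real version would live inside a custom InputFormat; the class and method names here are hypothetical): n even splits, plus one extra split holding the remainder.

```java
import java.util.ArrayList;
import java.util.List;

public class ListSplitter {
    // n even splits of size keys.size()/n, plus one extra split
    // holding the remainder, as described above.
    static <T> List<List<T>> split(List<T> keys, int n) {
        List<List<T>> splits = new ArrayList<>();
        int base = keys.size() / n;
        for (int i = 0; i < n; i++) {
            splits.add(new ArrayList<>(keys.subList(i * base, (i + 1) * base)));
        }
        if (keys.size() % n > 0) {
            splits.add(new ArrayList<>(keys.subList(n * base, keys.size())));
        }
        return splits;
    }

    public static void main(String[] args) {
        List<Integer> rowKeys = new ArrayList<>();
        for (int i = 0; i < 10; i++) rowKeys.add(i);
        // 10 keys, n = 3 -> three splits of 3 plus one remainder split of 1
        for (List<Integer> s : split(rowKeys, 3)) System.out.println(s);
    }
}
```

Each split then becomes the input to one map task, so the work on the row-key subset is spread roughly evenly across the cluster.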
>
> There are always going to be design tradeoffs. The app was designed to give the best performance on reads and then sacrifice M/R performance... (i.e., reads from outside of a M/R job)
>
>
>> From: [EMAIL PROTECTED]
>> To: [EMAIL PROTECTED]
>> Date: Mon, 21 Nov 2011 08:21:56 -0800
>> Subject: Re: Multiple tables vs big fat table
>>
>> One clarification; Michael, when you say:
>>
>> "If I do a scan(), I'm actually going to go through all of the rows in the table."
>>
>> That's if you're doing a *full* table scan, which you'd have to do if you wanted selectivity based on some attribute that isn't part of the key. This is to be avoided in anything other than a map/reduce scenario; you definitely don't want to scan an entire 100TB table every time you want to return 10 rows to your user in real time.
>>
>> By contrast, however, HBase is perfectly capable of doing *limited* range scans, over some set of sorted rows that are contiguous with respect to their row keys. This continues to be linear in the size of the scanned range, *not* the size of the whole table. In fact, the get() operation is actually built on top of this same scan() operation, but simply restricts itself to one row. (This pre-supposes that you're not manually using a hash for your row keys, of course).
>>
>> So if you're scanning by a fixed range of your row key space, that continues to be constant with respect to the size of the whole table.
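The "get() is built on scan()" point above can be illustrated with the same sorted-map analogy (again a sketch, not the HBase API): a get is just a scan whose range is restricted to exactly one row key.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class GetAsScanSketch {
    // A "get" modeled as a scan over the inclusive range [row, row]:
    // it yields at most one result, regardless of table size.
    static String get(TreeMap<String, String> table, String row) {
        SortedMap<String, String> oneRow = table.subMap(row, true, row, true);
        return oneRow.isEmpty() ? null : oneRow.get(row);
    }

    public static void main(String[] args) {
        TreeMap<String, String> table = new TreeMap<>();
        table.put("row-001", "a");
        table.put("row-002", "b");
        System.out.println(get(table, "row-002")); // b
        System.out.println(get(table, "row-999")); // null
    }
}
```

Since the range covers a single key, the lookup cost depends on finding that key in the sorted structure, not on how many other rows exist, which is why get() latency stays flat as the table grows.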
>>
>> Ian
>>
>> On Nov 21, 2011, at 10:13 AM, Michael Segel wrote:
>>
>>>
>>> Mark,
>>> I sometimes answer these things while on my iPad. It's not the best way to type in long answers.  :-)
>>>
>>> Yes, you are correct, I'm saying exactly that.  
>>>
>>> So imagine you have an HBase Table on a cluster with 10 nodes and 10TB of data.
>>> If I do a get() I'm asking for a specific row and it will take some time, depending on the row size. For the sake of the example, let's say 5ms.
>>> If I do a scan(), I'm actually going to go through all of the rows in the table.
>>>
>>> Now the Table and the cluster grows to 100 nodes and 100TB of data.
>>> If I do the get(), it should still take roughly 5ms.
>>> However, if I do the scan(), it's going to take longer because you're now going through much more data.
>>>
>>> Note: I'm talking about a single threaded scan() from a non M/R app or from HBase shell.
>>>
>>> This is kind of why getting the right row key, understanding how your data is going to be used, and your schema are all important when it comes to performance.
>>> (Even flipping the order of the elements that make up your key can have an impact.)
>>>
>>> IMHO, I think you need to do a lot more thinking and planning when you work with a NoSQL database than you would with an RDBMS.
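The remark above about flipping the order of key elements can be sketched with sorted sets (key formats here are hypothetical): the same data clusters very differently depending on which element comes first, which changes what can be read with a short contiguous scan.

```java
import java.util.TreeSet;

public class KeyOrderSketch {
    public static void main(String[] args) {
        // user-first keys: all of one user's rows sort together,
        // so "everything for alice" is one short contiguous scan.
        TreeSet<String> userFirst = new TreeSet<>();
        // date-first keys: all of one day's rows sort together,
        // so "everything on the 20th" is the cheap scan instead.
        TreeSet<String> dateFirst = new TreeSet<>();

        String[][] events = {
            {"alice", "2011-11-20"}, {"bob", "2011-11-20"},
            {"alice", "2011-11-21"}, {"bob", "2011-11-21"}
        };
        for (String[] e : events) {
            userFirst.add(e[0] + "#" + e[1]);
            dateFirst.add(e[1] + "#" + e[0]);
        }
        System.out.println(userFirst); // alice's rows are adjacent
        System.out.println(dateFirst); // each day's rows are adjacent
    }
}
```

Neither ordering is wrong; the right choice depends on which access pattern the app needs to be fast, which is exactly the kind of up-front planning being described.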