Rita 2012-08-04, 02:43
Bijeet Singh 2012-08-04, 04:54
anil gupta 2012-08-04, 05:39
Hamed Ghavamnia 2012-08-04, 07:44
Ok, a couple of things....
First, a contrarian piece of advice....
Don't base your performance tuning on your initial load, but on your system at its steady state.
It's a simple concept that people forget and it can cause problems down the road....
So we have two problems...
Rita with 13 billion rows and Hamed with 15,000 row inserts per second.
Both are distinct problems...
Rita, what constraints do you have? Have you thought about your schema? Have you thought about your region size? Have you tuned up HBase? How long do you have to load the data?
What is the growth and use of the data?
(these are pretty much the same questions a DBA would face for a DW, ODS, OLTP, or NoSQL system.)
While you were already pointed to the bulk load, I thought you should also think about the other issues too.
15k rows a second?
You have a slightly different problem. Rita asks about initial load; you have an issue with sustained input rate.
You already see a problem with sequential keys...
What are your planned access patterns? Row size, growth rate? Decay rate?
(do you even delete the data?)
Does the Schema make sense, or do you want to look at Asynchronous HBase?
Then there are other considerations...
Like your network and hardware...
What are you running on?
Memory, CPU, disk ... (ssd's?)
A lot of unknown factors... So to help we're going to need more information....
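On the region-size question above: for a load this big you usually want to pre-split the table before writing. A minimal sketch of computing evenly spaced split points — assuming hex-encoded row keys (an assumption; the region count is illustrative) — which could then be handed to the HBase admin API when creating the table:

```java
// Compute evenly spaced split points over a 32-bit hex keyspace.
// The resulting byte[][] could be passed to the HBase admin API's
// createTable(descriptor, splits) call when pre-creating regions.
public class SplitPoints {
    static byte[][] hexSplits(int numRegions) {
        byte[][] splits = new byte[numRegions - 1][];
        long range = 0xFFFFFFFFL; // upper bound of the keyspace
        for (int i = 1; i < numRegions; i++) {
            // Boundary of the i-th slice, as a fixed-width hex key.
            long boundary = range * i / numRegions;
            splits[i - 1] = String.format("%08x", boundary).getBytes();
        }
        return splits;
    }

    public static void main(String[] args) {
        for (byte[] s : hexSplits(4)) {
            System.out.println(new String(s)); // 3fffffff, 7fffffff, bfffffff
        }
    }
}
```

The split points only help if the keys the loader writes are actually spread over that keyspace — which ties back to the schema questions above.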
Sent from a remote device. Please excuse any typos...
On Aug 4, 2012, at 2:44 AM, Hamed Ghavamnia <[EMAIL PROTECTED]> wrote:
> I'm facing a somewhat similar problem. I need to insert 15,000 rows per
> second into HBase. I'm getting really bad results using the simple Put
> API (with multithreading). I've tried map/reduce integration as well. The
> problem seems to be the type of the row keys. My row keys are
> incremental, which makes HBase store them in the same region and
> therefore on the same node. I've tried changing my keys to a more random
> type, but HBase still stores them in the same region.
> Any solutions would be appreciated, some things which have crossed my mind:
> 1. To presplit my regions, but I'm not sure if the problem has anything to
> do with the regions.
> 2. Use the bulk load stated in your emails, but I don't know where to start.
> Do you have a link to a sample code which can be used?
> Any ideas?
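One common fix for the sequential-key hotspot described above is to salt the key: prefix it with a bucket number derived from a stable hash, so consecutive keys scatter across regions. A minimal sketch — the bucket count and key format are illustrative; in practice the bucket count should match the number of pre-split regions, and every reader must apply the same scheme to its gets and scans:

```java
import java.nio.charset.StandardCharsets;

// Scatter monotonically increasing row keys across regions by
// prefixing each key with a hash-derived bucket number.
public class SaltedKey {
    static final int BUCKETS = 16; // illustrative; match your pre-split count

    static byte[] salt(String key) {
        // Stable, non-negative bucket for this key.
        int bucket = (key.hashCode() & Integer.MAX_VALUE) % BUCKETS;
        // Fixed-width prefix keeps keys sortable within each bucket.
        return String.format("%02d-%s", bucket, key)
                     .getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        for (String k : new String[] {"row-0001", "row-0002", "row-0003"}) {
            System.out.println(new String(salt(k), StandardCharsets.UTF_8));
        }
    }
}
```

The trade-off is that a single sequential scan over the original key order now becomes one scan per bucket, so this suits write-heavy workloads better than scan-heavy ones.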
> On Sat, Aug 4, 2012 at 10:09 AM, anil gupta <[EMAIL PROTECTED]> wrote:
>> Hi Rita,
>> HBase Bulk Loader is a viable solution for loading such a huge data set. Even
>> if your import file has a separator other than tab, you can use ImportTsv as
>> long as the separator is a single character. If you want to apply your own
>> business logic while writing the data to HBase, you can write your
>> own mapper class and use it with the bulk loader. Hence, you can heavily
>> customize the bulk loader as per your needs.
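As a rough sketch of what that invocation could look like — the table name, paths, and column mapping below are placeholders, and option names should be checked against your HBase version's documentation:

```shell
# Parse a comma-separated file and write HFiles instead of live Puts.
# Table name, column mapping, and paths are placeholders.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=',' \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:value \
  -Dimporttsv.bulk.output=/tmp/hfiles \
  mytable /data/input.csv

# A custom mapper carrying your own business logic can be plugged in via
# -Dimporttsv.mapper.class=com.example.MyMapper (hypothetical class name).

# Finally, move the generated HFiles into the table's regions.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles mytable
```

These are cluster-side commands and assume the table already exists (ideally pre-split).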
>> These links might be helpful for you:
>> Anil Gupta
>> On Fri, Aug 3, 2012 at 9:54 PM, Bijeet Singh <[EMAIL PROTECTED]>
>>> Well, if the file that you have contains TSV, you can directly use the
>>> ImportTSV utility of HBase to do a bulk load.
>>> More details about that can be found here :
>>> The other option for you is to run an MR job on the file that you have, to
>>> generate the HFiles, which you can later import
>>> into HBase using completebulkload. HFiles are created using the
>>> HFileOutputFormat class. The output of Map should
>>> be Put or KeyValue. For Reduce you need to use configureIncrementalLoad,
>>> which sets up the reduce tasks.
>>> On Sat, Aug 4, 2012 at 8:13 AM, Rita <[EMAIL PROTECTED]> wrote:
>>>> I have a file which has 13 billion rows of key and value which I would
>>>> like to place in HBase. I was wondering if anyone has a good example to