HBase user mailing list: How well does HBase run on low/medium memory/cpu clusters?


RE: How well does HBase run on low/medium memory/cpu clusters?
Ah, the question I have isn't about schema design. What exists as multiple
tables in MySQL would probably become one table in HBase. My comment about
"joining" a 7M and a 15M row table in MySQL refers to our daily "scan" to
update that range of 7M rows. In MySQL, that's a CSV import followed by an
update (requiring a nasty join). That should work well with a properly
designed rowkey in HBase, and perhaps a MapReduce job for the big update.
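
Roughly what I'm picturing for the daily update, sketch only: I'm assuming
a made-up "products" table with a "d" column family and a feed-prefixed
rowkey, written against the 0.94-era HBase client API and untested:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DailyUpdateSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "products"); // hypothetical table
        table.setAutoFlush(false); // buffer puts client-side for the bulk update

        // Rowkey = feed prefix + item id, so the ~7M rows that change daily
        // sit in one contiguous key range and can be range-scanned later.
        List<Put> batch = new ArrayList<Put>();
        Put put = new Put(Bytes.add(Bytes.toBytes("feedA:"), Bytes.toBytes(12345L)));
        put.add(Bytes.toBytes("d"), Bytes.toBytes("price"), Bytes.toBytes("9.99"));
        batch.add(put);
        // ...in the real job this loop would run over every row of the CSV...

        table.put(batch);
        table.flushCommits();
        table.close();
    }
}

The MapReduce version would just be the same Puts coming out of a mapper
that reads the CSV, so no join is needed anywhere.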

My question is more about what kind of hardware I really need in order to
support a reasonable rate of random-access lookups and the occasional
range scan over, say, 7M rows.
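
Concretely, the access pattern I mean is just point Gets from the web app
plus the occasional long Scan over one key range; something like this
(same hypothetical table and names as above, untested):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class AccessPatternSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "products"); // hypothetical table

        // Random read from the web app: a single point lookup by rowkey.
        Result r = table.get(new Get(Bytes.toBytes("feedA:12345")));
        byte[] price = r.getValue(Bytes.toBytes("d"), Bytes.toBytes("price"));

        // Occasional range scan over the ~7M-row key range for one feed.
        // ';' sorts right after ':', so this stop row covers every "feedA:" key.
        Scan scan = new Scan(Bytes.toBytes("feedA:"), Bytes.toBytes("feedA;"));
        scan.setCaching(1000); // fetch rows in big batches to cut RPC round trips
        ResultScanner scanner = table.getScanner(scan);
        for (Result row : scanner) {
            // check each row for updates here
        }
        scanner.close();
        table.close();
    }
}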

I would like to think that a cluster of dual-core, 1.7GB RAM boxes could
perform reasonably well. That is to say, I don't need an expensive cluster
of 15GB RAM boxes.

But perhaps I don't know enough. Is HBase typically CPU bound? Memory bound?
Disk bound? Assume a reasonable rate (given the cluster size) of random
reads from a web app, plus roughly hourly range scans of 7M rows.

Dave
-----Original Message-----
From: Michael Segel [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 10, 2012 8:52 PM
To: [EMAIL PROTECTED]
Subject: Re: How well does HBase run on low/medium memory/cpu clusters?

Well you don't want to do joins in HBase.

There are a couple of ways to do this; however, based on what you have
said, I think the larger issue for either solution (HBase or MySQL) would
be your schema design.
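
Normally you'd denormalize rather than join, e.g. fold what would be a
second MySQL table into another column family on the same row. Purely
illustrative, with made-up table/family names:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DenormalizeSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "items"); // hypothetical table

        // One row per item: what MySQL would split across two joined
        // tables lives in two column families under the same rowkey.
        Put put = new Put(Bytes.toBytes("item12345"));
        put.add(Bytes.toBytes("core"), Bytes.toBytes("name"), Bytes.toBytes("widget"));
        put.add(Bytes.toBytes("feed"), Bytes.toBytes("lastSeen"), Bytes.toBytes("2012-10-10"));
        table.put(put);
        table.close();
    }
}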

Basically you said you have Table A with 50 million rows and Table B with
7 million rows.

You don't really talk about any indexes or foreign key constraints between
the two tables, or about what that data is...

Can you provide more information?

Right now you haven't provided enough information to solve your problem.

On Oct 10, 2012, at 3:16 AM, David Parks <[EMAIL PROTECTED]> wrote:

> In looking at the AWS MapReduce version of HBase, it doesn't even give
> an option to run it on lower-end hardware.
>
> I am considering HBase as an alternative to one large table we have in
> MySQL which is causing problems. It's 50M rows, a pretty straightforward
> set of product items.
>
> The challenge is that I need to do 10+ range scans a day over about
> 7M items each, where we check for updates. This is ideal for HBase, but
> hell for MySQL (a join of a 7M row table with a 50M row table is
> giving us fits aplenty).
>
> But beyond the daily range scans, the actual workload on the boxes
> should be reasonable, just random-access reads. So it doesn't seem
> like I should need significant memory/CPU...
>
> But here's where I don't find a lot of information - as someone
> reasonably new to HBase (I read a book, did the examples), am I
> missing anything in my thinking?
>
> David
>
>