Re: Porting SQL DB into HBASE
Kranthi,

Your tables seem to be small. Why do you want to port them to HBase?

-Amandeep
Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz
On Mon, Apr 12, 2010 at 1:55 AM, kranthi reddy <[EMAIL PROTECTED]> wrote:

> Hi Jonathan,
>
> Sorry for the late response. Missed your reply.
>
> The problem is that around 80% (400) of the tables are static tables and the
> remaining 20% (100) are dynamic tables that are updated on a daily basis.
> Denormalising these 20% of tables is extremely difficult, so we are planning
> to port them directly into HBase. Denormalising these tables would also lead
> to a lot of redundant data.
>
> The static tables have entry counts in the hundreds, mostly fewer than 1,000
> rows, whereas the dynamic tables have more than 20,000 entries, and each
> entry might be updated/modified at least once a week.
>
> Regards,
> kranthi
>
>
> On Wed, Mar 31, 2010 at 10:23 PM, Jonathan Gray <[EMAIL PROTECTED]>
> wrote:
>
> > Kranthi,
> >
> > HBase can handle a good number of tables, but that means tens or maybe a
> > hundred.  If you have 500 tables, you should definitely be rethinking your
> > schema design.  The issue is less about HBase being able to handle lots of
> > tables, and much more about whether scattering your data across lots of
> > tables will be performant at read time.
> >
> >
> > 1)  Impossible to answer that question without knowing the schemas of the
> > existing tables.
> >
> > 2)  There is not really any relation between fault tolerance and the number
> > of tables, except potentially for recovery time, but this would be the same
> > with a few very large tables.
> >
> > 3)  No difference in write performance.  Read performance for simple key
> > lookups would not be impacted, but most likely having data spread out like
> > this will mean you'll need joins of some sort.
> >
> > Can you tell more about your data and queries?
> >
> > JG
> >
> > > -----Original Message-----
> > > From: kranthi reddy [mailto:[EMAIL PROTECTED]]
> > > Sent: Wednesday, March 31, 2010 3:05 AM
> > > To: [EMAIL PROTECTED]
> > > Subject: Porting SQL DB into HBASE
> > >
> > > Hi all,
> > >
> > >         I have run into some trouble while trying to port a SQL DB to
> > > HBase.  The problem is that my SQL DB has around 500 tables (approx) and
> > > it is very badly designed.  Around 45-50 tables could be denormalised into
> > > a single table, and the remaining tables are static tables.  My doubts are:
> > >
> > > 1) Is it possible to port this DB (tables) to HBase? If so, how?
> > > 2) How many tables can HBase support while remaining fault tolerant?
> > > 3) With so many tables, how is performance going to be affected? Will it
> > > remain the same or degrade?
> > >
> > > One possible solution, I think, is using a column family for each table.
> > > But as per my knowledge and previous experiments, I found HBase isn't
> > > stable when there are more than 5 column families.
> > >
> > > Since large quantities of data are ported into the database every day,
> > > stability and a fail-proof system are the highest priority.
> > >
> > > Hoping for a positive response.
> > >
> > > Thank you,
> > > kranthi
> >
>
>
>
> --
> Kranthi Reddy. B
> Room No : 98
> Old Boys Hostel
> IIIT-HYD
>
> -----------
>
> I don't know the key to success, but the key to failure is trying to
> impress others.
>
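
A rough sketch of the denormalisation discussed in this thread: rather than one HBase table (or one column family) per SQL table, keep a couple of column families and copy the small static lookup values into the same row as the dynamic data, so a single key lookup replaces a join. The sketch below uses the current HBase Java client (the 2010-era HTable API differs slightly), and the table, family, and column names are purely illustrative assumptions.

    // Hypothetical sketch: fold a small static lookup (e.g. an order-status
    // table) into the same row as the dynamic data, instead of keeping ~500
    // tables or one column family per SQL table.
    // Assumes the table was created with two families, e.g. in the shell:
    //   create 'orders', 'd', 's'
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DenormalisedWrite {

        private static final byte[] CF_DYNAMIC = Bytes.toBytes("d"); // frequently updated columns
        private static final byte[] CF_STATIC  = Bytes.toBytes("s"); // copied-in lookup values

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("orders"))) {

                // Row key: the dynamic entity's id.  Data that SQL kept in
                // separate lookup tables is written into the same row.
                Put put = new Put(Bytes.toBytes("order-00042"));
                put.addColumn(CF_DYNAMIC, Bytes.toBytes("amount"), Bytes.toBytes("199.00"));
                put.addColumn(CF_DYNAMIC, Bytes.toBytes("status_id"), Bytes.toBytes("3"));
                // Denormalised copy of the static lookup value (status_id=3 -> "SHIPPED"),
                // so a single Get answers what used to need a join.
                put.addColumn(CF_STATIC, Bytes.toBytes("status_name"), Bytes.toBytes("SHIPPED"));
                table.put(put);

                // Read back with one key lookup; no join required.
                Result r = table.get(new Get(Bytes.toBytes("order-00042")));
                System.out.println(Bytes.toString(
                        r.getValue(CF_STATIC, Bytes.toBytes("status_name"))));
            }
        }
    }

The same idea applies regardless of client version: few column families, wide rows keyed by the dynamic entity, with the static lookup data written redundantly at write time.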