
HBase, mail # user - Reg:delete performance on HBase table


Re: Reg:delete performance on HBase table
ramkrishna vasudevan 2012-12-06, 05:15
Generally, if the data is no longer needed after some short duration, people
tend to go with individual per-period tables and then drop the table itself.
Regards
Ram
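The table-per-period pattern Ram describes needs a deterministic way to map a timestamp to a table name, so that writers and the cleanup job agree on boundaries. Below is a minimal sketch, assuming 4-day windows and a `user_events_<year>_d<day>` naming scheme (both are my assumptions, not from the thread); the actual drop is shown in comments because `HBaseAdmin` needs a live cluster.

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

// Hypothetical helper: derive one table name per 4-day window, so expiring a
// window is a cheap table drop instead of millions of row deletes.
public class BucketedTables {
    static final int WINDOW_DAYS = 4;

    public static String tableNameFor(Date when) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.setTime(when);
        int dayOfYear = cal.get(Calendar.DAY_OF_YEAR);            // 1..366
        // First day of the 4-day window this date falls into.
        int bucketStart = ((dayOfYear - 1) / WINDOW_DAYS) * WINDOW_DAYS + 1;
        return String.format("user_events_%d_d%03d",
                cal.get(Calendar.YEAR), bucketStart);
    }

    public static void main(String[] args) throws Exception {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
        sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(tableNameFor(sdf.parse("2012-12-06")));
        // Once a window is aggregated, expiring it is just (0.90-era API):
        //   admin.disableTable(oldTableName);
        //   admin.deleteTable(oldTableName);
    }
}
```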

On Thu, Dec 6, 2012 at 10:05 AM, Anoop Sam John <[EMAIL PROTECTED]> wrote:

> Hi Manoj
>         If I read you correctly, you want to aggregate some 3-4 days of
> data and then have that data deleted. Can you think of creating a table
> per period (one table for 4 days), aggregating, and then dropping the
> table? Then another table for the next 4 days?
>
> Or another option is TTL which HBase provides.
>
> -Anoop-
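The TTL option Anoop mentions is set per column family, in seconds; once the TTL elapses, HBase stops returning the cells and physically removes them during compactions. A minimal sketch follows, with the family/table names as assumptions and the `HColumnDescriptor` usage in comments since it needs the HBase client on the classpath:

```java
// Sketch of the TTL approach: let HBase expire cells instead of deleting
// them. setTimeToLive() takes seconds, so convert the retention window.
public class TtlConfig {
    public static int ttlSeconds(int days) {
        return days * 24 * 60 * 60;
    }

    public static void main(String[] args) {
        int ttl = ttlSeconds(4);
        System.out.println(ttl); // 345600
        // With the 0.90-era client API (family/table names are assumptions):
        //   HColumnDescriptor family = new HColumnDescriptor("metrics");
        //   family.setTimeToLive(ttl);  // expired cells dropped at compaction
        //   HTableDescriptor table = new HTableDescriptor("user_events");
        //   table.addFamily(family);
        //   new HBaseAdmin(conf).createTable(table);
    }
}
```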
> ________________________________________
> From: Manoj Babu [[EMAIL PROTECTED]]
> Sent: Thursday, December 06, 2012 8:44 AM
> To: user
> Subject: Re: Reg:delete performance on HBase table
>
> Team,
>
> Thank you very much for the valuable information.
>
> The HBase version I am using is 0.90.3-cdh3u1.
>
> Our use case:
> We are collecting information on where users spend time on our site
> (tracking user events). We are also migrating historical data from an
> existing system, and from this data we need to populate metrics for the
> year, e.g. Customer A hits option x n times and option y n times;
> Customer B hits option x1 n times and option y1 n times.
>
> Earlier, using Hadoop MapReduce, we aggregated the whole year's data
> once every 2 or 4 days and emitted it to an Oracle table with
> DBOutputFormat. Inserting 181 million rows took only 20 minutes with 20
> reducers writing in parallel. But before repopulating the year table we
> had to delete the existing 181 million rows for that year, and that took
> more than 3 hours without finishing; we ended up killing the session and
> doing a truncate instead. We are still in the development stage, so we
> plan to try HBase for this case, since deleting millions of rows in
> Oracle takes too much time.
>
>
> We need to delete rows for one particular year only, so we cannot simply
> drop the table, even though truncate in Oracle is extremely fast.
>
> Cheers!
> Manoj.
>
>
>
> On Wed, Dec 5, 2012 at 11:44 PM, Nick Dimiduk <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Dec 5, 2012 at 7:46 AM, Doug Meil <[EMAIL PROTECTED]
> > >wrote:
> >
> > > You probably want to read this section on the RefGuide about deleting
> > from
> > > HBase.
> > >
> > > http://hbase.apache.org/book.html#perf.deleting
> >
> >
> > So hold on. From the guide:
> >
> > 11.9.2. Delete RPC Behavior
> > >
> >
> > > Be aware that htable.delete(Delete) doesn't use the writeBuffer. It
> > > will execute a RegionServer RPC with each invocation. For a large
> > > number of deletes, consider htable.delete(List).
> > >
> >
> > > See
> > >
> >
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#delete%28org.apache.hadoop.hbase.client.Delete%29
> >
> >
> > So Deletes are like Puts, except they're not executed the same way.
> > Indeed, HTable.put() is implemented using the write buffer, while
> > HTable.delete() makes a MutateRequest directly. What is the reason for
> > this? Why is the semantic of Delete subtly different from Put?
> >
> > For that matter, why not buffer all mutation operations?
> > HTable.checkAndPut() and checkAndDelete() both make direct
> > MutateRequest calls as well.
> >
> > Thanks,
> > -n
> >
>
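The RefGuide excerpt quoted above suggests htable.delete(List) to avoid one RPC per delete. A minimal sketch of that advice: the plain-Java chunking helper below is runnable as-is, while the HBase calls sit in comments because they need a live cluster; the batch size of 1000 is an arbitrary assumption.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: group row keys into batches and hand each batch to
// HTable.delete(List<Delete>) rather than calling delete(Delete) per row.
public class BatchDeletes {
    public static <T> List<List<T>> chunks(List<T> items, int size) {
        List<List<T>> out = new ArrayList<List<T>>();
        for (int i = 0; i < items.size(); i += size) {
            out.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> rowKeys = new ArrayList<String>();
        for (int i = 0; i < 2500; i++) rowKeys.add("row-" + i);
        System.out.println(chunks(rowKeys, 1000).size()); // 3 batches
        // For each batch, something like (0.90-era client API):
        //   List<Delete> deletes = new ArrayList<Delete>();
        //   for (String k : batch) deletes.add(new Delete(Bytes.toBytes(k)));
        //   htable.delete(deletes);   // batched, instead of one RPC per row
    }
}
```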