Consider that I am running my job today to populate the latest counts for the
current year: I will process the data from Jan 1st 2012 to the day before (Dec
5th 2012) and insert it into the table. The same table also contains the
aggregated data of previous years, like 2011 and 2010.
So if I run the job tomorrow, I will process the data from Jan 1st 2012
to Dec 6th 2012, and before inserting into the table I will clean out all
the rows for the year 2012.
TTL is a nice option, but I want to trigger the cleanup manually from my job.
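The batched-delete pattern the RefGuide recommends further down the thread (htable.delete(List) instead of one RPC per row) can be sketched roughly as below. This is a standalone sketch, not HBase code: the `yearBatches` helper, the `"<year>#<id>"` row-key format, and the batch size of 1000 are all assumptions for illustration. In a real job, the keys would come from a Scan restricted to the year, and each batch would be wrapped as a List<Delete> and handed to htable.delete(batch).

```java
import java.util.ArrayList;
import java.util.List;

public class YearBatchDeleteSketch {

    // Group the row keys belonging to one year into fixed-size batches.
    // In HBase each batch would become a List<Delete> passed to
    // htable.delete(batch), so every batch costs one RPC round trip
    // instead of one RPC per row. (Helper name and key layout are
    // hypothetical; they are not part of the HBase API.)
    static List<List<String>> yearBatches(List<String> rowKeys,
                                          String yearPrefix, int batchSize) {
        List<String> matching = new ArrayList<String>();
        for (String key : rowKeys) {
            if (key.startsWith(yearPrefix)) {
                matching.add(key);
            }
        }
        List<List<String>> batches = new ArrayList<List<String>>();
        for (int i = 0; i < matching.size(); i += batchSize) {
            batches.add(matching.subList(i,
                    Math.min(i + batchSize, matching.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Hypothetical row keys of the form "<year>#<event id>".
        List<String> keys = new ArrayList<String>();
        for (int i = 0; i < 2500; i++) keys.add("2012#" + i);
        for (int i = 0; i < 100; i++) keys.add("2011#" + i);

        List<List<String>> batches = yearBatches(keys, "2012#", 1000);
        System.out.println(batches.size());        // 3 batches for 2500 rows
        System.out.println(batches.get(2).size()); // last partial batch: 500
    }
}
```

With batching like this, deleting a whole year becomes a bounded number of RPCs rather than millions of single-row round trips.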
On Thu, Dec 6, 2012 at 10:05 AM, Anoop Sam John <[EMAIL PROTECTED]> wrote:
> Hi Manoj
> If I read you correctly, I think you want to aggregate some 3-4
> days of data and then have that data deleted. Can you think of
> creating tables for this period (one table per 4 days), aggregating, and
> dropping the table? Then another table for the next 4 days?
> Or another option is TTL which HBase provides.
> From: Manoj Babu [[EMAIL PROTECTED]]
> Sent: Thursday, December 06, 2012 8:44 AM
> To: user
> Subject: Re: Reg:delete performance on HBase table
> Thank you very much for the valuable information.
> The HBase version I am using is:
> HBase Version 0.90.3-cdh3u1
> Use case is:
> We are collecting information on where users spend time on our
> site (tracking user events). We are also doing historical data migration
> from the existing system, and based on that data we need to populate metrics
> for the year, e.g. Customer A hits option x n times and option y n
> times; Customer B hits option x1 n times and option y1 n times.
> Earlier, using Hadoop MapReduce, we were aggregating the whole year's data
> once every 2 or 4 days and emitting it to an Oracle table via DBOutputFormat.
> Inserting 181 million rows took only 20 minutes, with 20 reducers
> writing in parallel. Before repopulating the year's table we used to delete
> the existing 181 million rows for that year alone, but that took more than
> 3 hours and still had not completed, so we killed the session and did a
> truncate instead. We are still in the development stage, so we are planning
> to try HBase for this case, since deleting millions of rows takes too much
> time in Oracle.
> We need to delete rows based on the year only, so we cannot drop the table.
> In Oracle, truncate is extremely fast.
> On Wed, Dec 5, 2012 at 11:44 PM, Nick Dimiduk <[EMAIL PROTECTED]> wrote:
> > On Wed, Dec 5, 2012 at 7:46 AM, Doug Meil <[EMAIL PROTECTED]
> > >wrote:
> > > You probably want to read this section of the RefGuide about deleting
> > > from HBase.
> > >
> > > http://hbase.apache.org/book.html#perf.deleting
> > So hold on. From the guide:
> > 11.9.2. Delete RPC Behavior
> > >
> > > Be aware that htable.delete(Delete) doesn't use the writeBuffer. It
> > > executes a RegionServer RPC with each invocation. For a large number of
> > > deletes, consider htable.delete(List).
> > >
> > So Deletes are like Puts, except they're not executed the same way.
> > HTable.put() is implemented using the write buffer, while HTable.delete()
> > makes a MutateRequest directly. What is the reason for this? Why are the
> > semantics of Delete subtly different from Put?
> > For that matter, why not buffer all mutation operations?
> > HTable.checkAndPut() and checkAndDelete() both make direct MutateRequests
> > as well.
> > Thanks,
> > -n