RE: HBase Table Row Count Optimization - A Solicitation For Help
How long does the RowCounter job take to finish for your largest table on your cluster?

Just curious.

On your options:

1. Probably not worth it - you may overload your cluster.
2. Not sure how this one differs from 1. It looks the same to me, just more complex.
3. The same as 1 and 2.

Counting rows efficiently can be done if you sacrifice some accuracy:

http://highscalability.com/blog/2012/4/5/big-data-counting-how-to-count-a-billion-distinct-objects-us.html

Yeah, you will need coprocessors for that.
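To make the idea concrete: below is a minimal client-side sketch of the HyperLogLog approach described in that article, using the stream-lib library (the library choice, register size, and numbers are illustrative assumptions, not from this thread; in practice each region server would build its sketch inside a coprocessor and ship it back for merging).

    import com.clearspring.analytics.stream.cardinality.CardinalityMergeException;
    import com.clearspring.analytics.stream.cardinality.HyperLogLog;

    public class HllCountSketch {
        public static void main(String[] args) throws CardinalityMergeException {
            // log2m = 14 registers gives a standard error of about
            // 1.04 / sqrt(2^14), i.e. roughly 0.8%, at a cost of only a
            // few KB of memory per sketch.
            HyperLogLog regionA = new HyperLogLog(14);
            HyperLogLog regionB = new HyperLogLog(14);

            // Stand-ins for the row keys each region would feed its local sketch.
            for (int i = 0; i < 1000000; i++) {
                regionA.offer("row-" + i);
            }
            for (int i = 1000000; i < 2000000; i++) {
                regionB.offer("row-" + i);
            }

            // Sketches merge cheaply, so per-region estimates combine client-side.
            HyperLogLog merged = (HyperLogLog) regionA.merge(regionB);
            System.out.println("estimated rows: " + merged.cardinality()
                    + " (true count: 2000000)");
        }
    }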

Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: [EMAIL PROTECTED]

________________________________________
From: James Birchfield [[EMAIL PROTECTED]]
Sent: Friday, September 20, 2013 3:50 PM
To: [EMAIL PROTECTED]
Subject: Re: HBase Table Row Count Optimization - A Solicitation For Help

Hadoop 2.0.0-cdh4.3.1

HBase 0.94.6-cdh4.3.1

110 servers, 0 dead, 238.2364 average load

Some other info, not sure if it helps or not.

Configured Capacity: 1295277834158080 (1.15 PB)
Present Capacity: 1224692609430678 (1.09 PB)
DFS Remaining: 624376503857152 (567.87 TB)
DFS Used: 600316105573526 (545.98 TB)
DFS Used%: 49.02%
Under replicated blocks: 0
Blocks with corrupt replicas: 1
Missing blocks: 0

It is hitting a production cluster, but I am not really sure how to calculate the load placed on the cluster.
On Sep 20, 2013, at 3:19 PM, Ted Yu <[EMAIL PROTECTED]> wrote:

> How many nodes do you have in your cluster ?
>
> When counting rows, what other load would be placed on the cluster ?
>
> What is the HBase version you're currently using / planning to use ?
>
> Thanks
>
>
> On Fri, Sep 20, 2013 at 2:47 PM, James Birchfield <
> [EMAIL PROTECTED]> wrote:
>
>>        After reading the documentation and scouring the mailing list
>> archives, I understand there is no real support for fast row counting in
>> HBase unless you build some sort of tracking logic into your code.  In our
>> case, we do not have such logic, and have massive amounts of data already
>> persisted.  I am running into the issue of very long execution of the
>> RowCounter MapReduce job against very large tables (multi-billion rows for
>> many of them, by our estimate).  I understand why this issue exists and am slowly
>> accepting it, but I am hoping I can solicit some possible ideas to help
>> speed things up a little.
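For context, the "tracking logic" referred to above usually means keeping a live counter up to date as rows are written. A minimal sketch against the HBase 0.94 client API, assuming a hypothetical bookkeeping table named table_counters (the table and column names here are illustrative, not from the original mail):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowCountTracker {
        // Hypothetical bookkeeping table holding one counter cell per user table.
        private static final byte[] COUNTERS_TABLE = Bytes.toBytes("table_counters");
        private static final byte[] CF = Bytes.toBytes("c");
        private static final byte[] QUAL = Bytes.toBytes("rowCount");

        // Call alongside every Put that creates a brand-new row (and with -1
        // on deletes); reading the count back is then a single Get.
        public static long recordNewRow(Configuration conf, String userTable)
                throws IOException {
            HTable counters = new HTable(conf, COUNTERS_TABLE);
            try {
                // Atomic server-side increment; no read-modify-write race.
                return counters.incrementColumnValue(
                        Bytes.toBytes(userTable), CF, QUAL, 1L);
            } finally {
                counters.close();
            }
        }
    }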
>>
>>        My current task is to provide total row counts on about 600
>> tables, some extremely large, some not so much.  Currently, I have a
>> process that executes the MapReduce job in-process like so:
>>
>>        Job job = RowCounter.createSubmittableJob(
>>                ConfigManager.getConfiguration(), new String[] { tableName });
>>        boolean waitForCompletion = job.waitForCompletion(true);
>>        Counters counters = job.getCounters();
>>        Counter rowCounter = counters.findCounter(hbaseadminconnection.Counters.ROWS);
>>        return rowCounter.getValue();
>>
>>        At the moment, each MapReduce job is executed serially, counting one
>> table at a time.  As the process stands right now, my rough timing
>> calculations indicate that fully counting all the rows of these 600 tables
>> will take anywhere from 11 to 22 days.  This is not what I consider a
>> desirable timeframe.
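(As a sanity check on that estimate: 11 to 22 days of serial execution over 600 tables works out to roughly 26 to 53 minutes per table on average.)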
>>
>>        I have considered three alternative approaches to speed things up.
>>
>>        First, since the application is not heavily CPU bound, I could use
>> a ThreadPool and execute multiple MapReduce jobs at the same time looking
>> at different tables.  I have never done this, so I am unsure if this would
>> cause any unanticipated side effects.
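A minimal sketch of this first option, reusing the createSubmittableJob pattern from the snippet above (the pool size and the counter-group string are illustrative assumptions, not from the original mail):

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.RowCounter;
    import org.apache.hadoop.mapreduce.Job;

    public class ParallelRowCounts {
        public static void main(String[] args) throws Exception {
            List<String> tables = Arrays.asList(args); // tables to count
            // Client threads are cheap here since each one mostly waits on
            // its MR job; the real limit is how many concurrent jobs the
            // cluster can absorb, so keep the pool small.
            ExecutorService pool = Executors.newFixedThreadPool(4);

            Map<String, Future<Long>> pending = new HashMap<String, Future<Long>>();
            for (final String table : tables) {
                pending.put(table, pool.submit(new Callable<Long>() {
                    public Long call() throws Exception {
                        Configuration conf = HBaseConfiguration.create();
                        Job job = RowCounter.createSubmittableJob(
                                conf, new String[] { table });
                        job.waitForCompletion(true);
                        // Read the ROWS counter as in the serial snippet; the
                        // group name below assumes the 0.94 enum location.
                        return job.getCounters().findCounter(
                                "org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper$Counters",
                                "ROWS").getValue();
                    }
                }));
            }
            for (Map.Entry<String, Future<Long>> e : pending.entrySet()) {
                System.out.println(e.getKey() + " = " + e.getValue().get());
            }
            pool.shutdown();
        }
    }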
>>
>>        Second, I could distribute the processes.  I could find as many
>> machines as can successfully talk to the desired cluster, give each a
>> subset of tables to work on, and then combine the results in a
>> post-processing step.