HBase >> mail # user >> bulk loading regions number


Re: bulk loading regions number
The decision can be made based on the total number of regions you want
deployed across your 10 machines, and the size you expect the data to
reach before you have to expand the cluster. Additionally, add in a
parallelism factor of, say, 5-10 (or more if you want) regions of the
same table per RS, so that later cluster expansion is easy.
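
For a rough, purely illustrative calculation: with 10 region servers and
5-10 regions of the table per server, you would target roughly 50-100
regions in total; at around 4 GB per region (the figure discussed below)
that is on the order of 200-400 GB of table data before the cluster
needs to grow.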

The penalty of large HFile sizes (I am considering > 4 GB as large) may
be that major compactions start taking a long time on full or nearly
full regions (a major compaction rewrites a single file of that size).
I don't think there's much impact on parallelism (the number of regions
that can be served independently) or on random reads with the new
HFileV2 format, even with such big files.

If it suits your data ingest, go for bigger files.
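
As a concrete illustration of "bigger files", the sketch below raises
the per-table split threshold to the ~4 GB figure mentioned above. It is
only a sketch: the table name "events" is made up, 4 GB is just the
example value from this thread, and the cluster-wide equivalent is the
hbase.hregion.max.filesize property in hbase-site.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RaiseRegionSize {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        byte[] tableName = Bytes.toBytes("events");    // assumed table name

        // A region splits once a store file grows past this size, so a
        // larger value means fewer, larger regions.
        HTableDescriptor desc = admin.getTableDescriptor(tableName);
        desc.setMaxFileSize(4L * 1024 * 1024 * 1024);  // ~4 GB, example value

        // On this HBase vintage the table has to be disabled to alter it.
        admin.disableTable(tableName);
        admin.modifyTable(tableName, desc);
        admin.enableTable(tableName);
      }
    }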

On Mon, Sep 10, 2012 at 2:15 PM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:
> Great
>   That is actually what I am thinking about too.
> What is the best practice for choosing the HFile size?
> What is the penalty of setting it very large?
>
> Thanks
> Oleg.
>
> On Mon, Sep 10, 2012 at 4:24 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>
>> Hi Oleg,
>>
>> If the root issue is a growing number of regions, why not control that
>> instead of looking for a way to control the reducer count? You could,
>> for example, raise the split-point size for HFiles so the table does
>> not split as often, and hence have fewer but larger regions.
>>
>> Given that you have 10 machines, I'd go this way rather than ending up
>> with a lot of regions causing load issues.
>>
>> On Mon, Sep 10, 2012 at 1:49 PM, Oleg Ruchovets <[EMAIL PROTECTED]>
>> wrote:
>> > Hi ,
>> >   I am using bulk loading to write my data to hbase.
>> >
>> > It works fine, but the number of regions is growing very rapidly.
>> > Loading just ONE WEEK of data I got 200 regions (and I am going to
>> > store years of data).
>> > As a result, the job that writes the data to HBase has a REDUCER
>> > count equal to the REGION count.
>> > So after loading only one WEEK of data I already have 200 reducers.
>> >
>> > Questions:
>> >    How do I deal with the constantly growing reducer count when using
>> > bulk loading and TotalOrderPartitioner?
>> >  I have a 10-machine cluster and I think I should have ~ 30 reducers.
>> >
>> > Thanks in advance.
>> > Oleg.
>>
>>
>>
>> --
>> Harsh J
>>

--
Harsh J
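
To make the "reducers equals regions" behavior in Oleg's question
concrete, here is a minimal sketch of a typical HFile bulk-load
preparation job (not the poster's actual code; the table name "events"
and the driver class are made up). HFileOutputFormat.configureIncrementalLoad()
wires in TotalOrderPartitioner over the table's region boundaries and
sets the reduce task count to the table's current region count, which is
why the number of reducers grows with the number of regions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class BulkLoadPrepare {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "bulk-load-prepare");   // Hadoop 1.x style API
        job.setJarByClass(BulkLoadPrepare.class);

        // The mapper (not shown) emits row key + KeyValue pairs for the table.
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(KeyValue.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        // Configures TotalOrderPartitioner over the table's region boundaries
        // and sets the number of reduce tasks to the current region count --
        // one HFile-producing reducer per region.
        HTable table = new HTable(conf, "events");      // assumed table name
        HFileOutputFormat.configureIncrementalLoad(job, table);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }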