HBase user mailing list: Recommended Node Size Limits


RE: Recommended Node Size Limits
How about Integer.MAX_VALUE (or I believe 0 works) to completely disable splits?
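
For reference, a minimal sketch of what that looks like in hbase-site.xml, assuming the split threshold is the standard hbase.hregion.max.filesize setting:

  <property>
    <name>hbase.hregion.max.filesize</name>
    <!-- Integer.MAX_VALUE bytes (~2GB). The property is a long, so an
         even larger value can be used if regions may grow past 2GB. -->
    <value>2147483647</value>
  </property>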

As far as what we are running with today, we do have clusters with regions over 10GB and growing. There has been a lot of work in the compaction logic to make these large regions more efficient with IO (by not compacting big/old files and such).
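
To illustrate the "not compacting big/old files" idea: a hedged sketch, assuming your HBase version exposes the hbase.hstore.compaction.max.size setting (store files above this size are excluded from minor compactions):

  <property>
    <name>hbase.hstore.compaction.max.size</name>
    <!-- Store files larger than this many bytes are skipped by minor
         compactions; here, anything over 1GB. The 1GB figure is an
         illustrative value, not a recommendation. -->
    <value>1073741824</value>
  </property>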

JG

> -----Original Message-----
> From: Ted Dunning [mailto:[EMAIL PROTECTED]]
> Sent: Friday, January 14, 2011 10:12 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Recommended Node Size Limits
>
> Way up = ??
>
> 1GB?
>
> 10GB?
>
> If 1GB, doesn't this mean that you are serving only 64GB of data per node?
>  That seems really, really small.
>
> On Fri, Jan 14, 2011 at 9:39 AM, Jonathan Gray <[EMAIL PROTECTED]> wrote:
>
> > Then you can turn your split size way up, effectively preventing
> > further splits.  Again, this is for randomly distributed requests.
> >
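
[Editor's note: the 64GB figure above is just (max region size) x (regions per server); at a 1GB split ceiling it presumably assumes roughly 64 regions per node, which is the per-node capacity Ted is calling small.]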