max HStoreFile size
Hi all, I'm using HBase 0.94.12, and for some tables I'm managing splitting
and compactions manually.
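
For context, this is roughly how I raise the split threshold for those tables
so splits don't happen on their own (just a sketch against the 0.94 client API;
the table name and the 100 GB figure are placeholders, not what I actually run):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.client.HBaseAdmin;
  import org.apache.hadoop.hbase.util.Bytes;

  public class RaiseSplitThreshold {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HBaseAdmin admin = new HBaseAdmin(conf);
      byte[] table = Bytes.toBytes("my_table");          // placeholder table name

      // The per-table MAX_FILESIZE overrides hbase.hregion.max.filesize
      // for this table only; the cluster-wide default stays untouched.
      HTableDescriptor desc = admin.getTableDescriptor(table);
      desc.setMaxFileSize(100L * 1024 * 1024 * 1024);    // 100 GB, i.e. "don't split on me"

      admin.disableTable(table);
      admin.modifyTable(table, desc);
      admin.enableTable(table);
      admin.close();
    }
  }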

I was wondering whether hbase.hregion.max.filesize refers to the compressed or
the uncompressed file size.
If I'm using compression, and the compressed file size is smaller than
hbase.hregion.max.filesize but the uncompressed size is bigger, then when I
execute a major compaction on the region, it splits.
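
To see what sizes a region actually has on disk (which should be the compressed
size, since the GZ-compressed store files are what sit in HDFS), I list the
store file sizes under the table directory with something like the sketch below;
the /hbase root and the table/region/CF layout are just what I see on my 0.94
cluster, so treat them as assumptions:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class RegionSizesOnDisk {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      FileSystem fs = FileSystem.get(conf);

      // Layout I see in 0.94: /hbase/<table>/<encoded region>/<cf>/<store files>
      Path tableDir = new Path("/hbase/my_table");       // placeholder table name
      for (FileStatus region : fs.listStatus(tableDir)) {
        if (!region.isDir()) continue;                   // skip .tableinfo etc.
        long onDisk = fs.getContentSummary(region.getPath()).getLength();
        System.out.println(region.getPath().getName() + " : " + onDisk + " bytes on disk");
      }
    }
  }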

Should it be like that? More importantly, is the recommendation of 1 GB regions
about compressed or uncompressed StoreFile size?

Since I'm using bulk load, I get about 3 StoreFiles loaded into each CF of
every new region. I executed a compaction on each region (roughly as sketched
below) to unite them into one file, and then got the unwanted splits. If I'm
never updating this data, do I gain anything from uniting the files?
Could I manage regions of ~500 MB compressed (GZ, decompressing to about 7.5 GB)
with 10 GB RAM RegionServers?
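
For reference, the compaction I mentioned is triggered with roughly this
(again only a sketch; the table name is a placeholder, and as far as I can tell
majorCompact just queues the request asynchronously):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HBaseAdmin;

  public class CompactAfterBulkLoad {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HBaseAdmin admin = new HBaseAdmin(conf);

      // Queue a major compaction for the whole table (a region name also works);
      // the request is handled asynchronously by the RegionServers.
      admin.majorCompact("my_table");                    // placeholder table name
      admin.close();
    }
  }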

Thanks,

Amit.