HBase, mail # user - Slow Get Performance (or how many disk I/O does it take for one non-cached read?)


+ Jan Schellenberger 2014-01-31, 23:13
+ Ted Yu 2014-01-31, 23:24
+ lars hofhansl 2014-02-01, 05:26
+ lars hofhansl 2014-02-01, 05:31
Re: Slow Get Performance (or how many disk I/O does it take for one non-cached read?)
Ted Yu 2014-02-01, 01:44
bq. #3. Custom compaction

Stripe compaction would be in the upcoming 0.98.0 release.
See HBASE-7667 Support stripe compaction
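For anyone who wants to try it once 0.98 is out: per the HBASE-7667 docs, stripe compaction is enabled per table by switching the store engine, roughly as below (property and class names taken from the stripe compaction documentation; 'TABLE1' is just the table from the config posted further down - verify against the 0.98 release notes):

```ruby
# hbase shell - enable the stripe store engine for one table (sketch)
alter 'TABLE1', CONFIGURATION => {
  'hbase.hstore.engine.class' =>
    'org.apache.hadoop.hbase.regionserver.StripeStoreEngine'
}
```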

Cheers
On Fri, Jan 31, 2014 at 5:29 PM, Vladimir Rodionov
<[EMAIL PROTECTED]> wrote:

>
> #1 Use GZ compression instead of SNAPPY - usually it gives you an
> additional 1.5x compression.
>
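Switching an existing family from SNAPPY to GZ can be done from the HBase shell, roughly as follows (table and family names taken from the config posted below; on 0.94 the table must be disabled for the alter, and a major compaction is needed before existing store files are rewritten with the new codec):

```ruby
# hbase shell - switch the 'c' family of TABLE1 to GZ compression (sketch)
disable 'TABLE1'
alter 'TABLE1', {NAME => 'c', COMPRESSION => 'GZ'}
enable 'TABLE1'
major_compact 'TABLE1'   # rewrite existing store files with the new codec
```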
> A block cache hit rate of 50% is actually very low, and it is strange. On
> every GET there will be at least 3 block cache accesses:
>
> get INDEX block, get BLOOM block, get DATA block. Therefore, anything
> below 66% means the DATA blocks are essentially never served from cache.
>
> #2: Try increasing the block cache size and see what happens.
>
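The block cache fraction is controlled by hfile.block.cache.size in hbase-site.xml. Jan is already at 0.4, so there is limited headroom: HBase refuses to start if the block cache and memstore fractions together exceed 0.8 of the heap. A sketch:

```xml
<!-- hbase-site.xml: give 50% of the region server heap to the block cache.
     hbase.regionserver.global.memstore.upperLimit may need to be lowered
     so that the two fractions stay under 0.8 combined. -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.5</value>
</property>
```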
> Your Bloom filter is not actually effective, because you have zillions of
> versions. In this case, the only thing that can help you is:
>
> major compaction of regions... or better -
>
> #3. A custom compaction which creates store files that do not overlap by
> timestamp. Yes, it's hard.
>
> #4 Disable CRC32 checks in HDFS and enable inline CRC in HBase - this will
> save you 50% of your IOPS.
> https://issues.apache.org/jira/browse/HBASE-5074
>
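If I read HBASE-5074 correctly, the HBase-side switch (available from 0.94 on) looks like the following; with it enabled, HBase verifies its own checksums stored inline in the HFile blocks and skips the separate HDFS checksum read:

```xml
<!-- hbase-site.xml: verify HBase's inline checksums (HBASE-5074) instead
     of issuing a second read for the HDFS checksum file -->
<property>
  <name>hbase.regionserver.checksum.verify</name>
  <value>true</value>
</property>
```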
> #5 Enable short circuit reads (See HBase book on short circuit reads)
>
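Short-circuit reads are an HDFS-side setting (see the short-circuit local reads section of the HDFS docs). On a CDH 4 cluster the configuration is roughly the following (the socket path is just an example location; the directory must exist and be owned appropriately):

```xml
<!-- hdfs-site.xml, on every DataNode and HBase region server -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/run/hdfs-sockets/dn</value> <!-- example path -->
</property>
```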
> #6 For your use case it is probably a good idea to try SSDs.
>
> and finally,
>
> #7 the rule of thumb is to keep your hot data set in RAM. Does it not fit?
> Increase RAM, or increase the number of servers.
>
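A quick back-of-envelope using the numbers from Jan's post shows why the hit rate is low: only about a tenth of the compressed data fits in the aggregate block cache, and with uniformly random reads the data-block hit rate is capped at roughly that fraction. (Rough sketch; assumes all 17 region servers run a 12 GB heap with 40% given to block cache.)

```python
# Back-of-envelope cache sizing with the numbers from the post:
# 6 tables at ~120 GB each compressed, 17 region servers,
# 12 GB heap each, block cache fraction 0.4.
dataset_gb = 6 * 120                     # ~720 GB compressed on disk
cache_gb = 17 * 12 * 0.4                 # ~81.6 GB aggregate block cache
fraction_cached = cache_gb / dataset_gb  # ~0.11

print(f"aggregate block cache: {cache_gb:.1f} GB")
print(f"fraction of data that can be cached: {fraction_cached:.0%}")
```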
> btw, what is the average size of a GET result, and do you really touch
> every key in your data set with the same probability?
>
> Best regards,
> Vladimir Rodionov
> Principal Platform Engineer
> Carrier IQ, www.carrieriq.com
> e-mail: [EMAIL PROTECTED]
>
> ________________________________________
> From: Jan Schellenberger [[EMAIL PROTECTED]]
> Sent: Friday, January 31, 2014 3:12 PM
> To: [EMAIL PROTECTED]
> Subject: Slow Get Performance (or how many disk I/O does it take for one
> non-cached read?)
>
> I am running a cluster and getting slow performance - about 50
> reads/sec/node, or about 800 reads/sec for the cluster.  The data is too
> big to fit into memory and my access pattern is completely random reads,
> which is presumably difficult for HBase.  Is my read speed reasonable?  I
> feel like the typical read speeds I've seen reported are much higher.
>
>
>
> Hardware/Software Configuration:
> 17 nodes + 1 master
> 8 cores
> 24 gigs ram
> 4x1TB 3.5" hard drives (I know this is low for hbase - we're working on
> getting more disks)
> running Cloudera CDH 4.3 with HBase 0.94.6
> Most configurations are default, except that I'm using a 12GB heap per
> region server and the block cache fraction is 0.4 instead of 0.25, but
> neither of these two things makes much of a difference.  I am NOT having a
> GC issue.  Latencies are around 40ms and the 99th percentile is 200ms.
>
>
> Dataset Description:
> 6 tables, ~300GB each uncompressed or ~120GB each compressed <- compression
> speeds things up a bit.
> I just ran a major compaction so block locality is 100%
> Each Table has a single column family and a single column ("c:d").
> keys are short strings, ~10-20 characters.
> values are short JSON, ~500 characters.
> 100% Gets.  No Puts
> I am heavily using timestamping.  maxversions is set to Integer.MAX_VALUE.
> My Gets retrieve at most 200 versions.  A typical row would have < 10
> versions on average though.  <1% of queries would max out at 200 versions
> returned.
>
> Here are table configurations (I've also tried Snappy compression)
> {NAME => 'TABLE1', FAMILIES => [{NAME => 'c', DATA_BLOCK_ENCODING => 'NONE',
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '2147483647',
> COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647',
> KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false',
> ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}]}
>
>
> I am using the master node to query (with 20 threads) and get about 800
> Gets/second.  Each worker node is completely swamped by disk I/O - I'm
> seeing 80 IO/sec with iostat for each of the 4 disks, with a throughput of
> about 10MB/sec each.  So this means it's reading roughly 120kB/transfer and
 
+ Jan Schellenberger 2014-02-01, 02:39
+ Ted Yu 2014-02-01, 04:28
+ lars hofhansl 2014-02-01, 05:28
+ Ted Yu 2014-02-01, 05:37
+ lars hofhansl 2014-02-01, 06:21
+ Jan Schellenberger 2014-02-01, 06:32
+ lars hofhansl 2014-02-02, 04:07
+ Jay Vyas 2014-02-02, 04:10
+ Andrew Purtell 2014-02-03, 04:13
+ Jan Schellenberger 2014-02-02, 05:38
+ lars hofhansl 2014-02-02, 06:34
+ Varun Sharma 2014-02-02, 19:03