HBase user mailing list: Slow Get Performance (or how many disk I/O does it take for one non-cached read?)


Jan Schellenberger 2014-01-31, 23:13
Re: Slow Get Performance (or how many disk I/O does it take for one non-cached read?)
bq. DATA_BLOCK_ENCODING => 'NONE'

Have you tried enabling data block encoding with e.g. FAST_DIFF (a shell sketch follows below)?

Cheers
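
A minimal HBase shell sketch of that change, using the 'TABLE1' / 'c' names from
the descriptor quoted below (on 0.94 the table is disabled for the alter, and a
major compaction afterwards rewrites the existing HFiles with the new encoding):

  hbase> disable 'TABLE1'
  hbase> alter 'TABLE1', {NAME => 'c', DATA_BLOCK_ENCODING => 'FAST_DIFF'}
  hbase> enable 'TABLE1'
  hbase> major_compact 'TABLE1'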
On Fri, Jan 31, 2014 at 3:12 PM, Jan Schellenberger <[EMAIL PROTECTED]> wrote:

> I am running a cluster and getting slow performance - about 50
> reads/sec/node, or about 800 reads/sec for the cluster.  The data is too
> big to fit into memory and my access pattern is completely random reads,
> which is presumably difficult for HBase.  Is my read speed reasonable?  I
> feel like the typical read speeds I've seen reported are much higher.
>
>
>
> Hardware/Software Configuration:
> 17 nodes + 1 master
> 8 cores
> 24 gigs ram
> 4x1TB 3.5" hard drives (I know this is low for HBase - we're working on
> getting more disks)
> running Cloudera CDH 4.3 with HBase 0.94.6
> Most configurations are default, except I'm using a 12GB heap per region
> server and the block cache is 0.4 instead of 0.25, but neither of these two
> things makes much of a difference.   I am NOT having a GC issue.  Latencies
> are around 40ms and the 99th percentile is 200ms.
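>
> (For reference, a minimal sketch of those two non-default settings, assuming
> the standard 0.94 knobs - the heap via hbase-env.sh and the block cache
> fraction via hbase-site.xml:)
>
>   # hbase-env.sh: ~12GB heap for the HBase daemons (value is in MB)
>   export HBASE_HEAPSIZE=12288
>
>   <!-- hbase-site.xml: fraction of heap given to the block cache (default 0.25) -->
>   <property>
>     <name>hfile.block.cache.size</name>
>     <value>0.4</value>
>   </property>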
>
>
> Dataset Description:
> 6 tables ~300GB each (uncompressed) or 120GB each compressed <- compression
> speeds things up a bit.
> I just ran a major compaction so block locality is 100%
> Each Table has a single column family and a single column ("c:d").
> keys are short strings, ~10-20 characters
> values are short JSON, ~500 characters
> 100% Gets.  No Puts
> I am heavily using time stamping.  maxversions is set to Integer.MAX_VALUE.
> My Gets retrieve at most 200 versions.  A typical row would have < 10
> versions on average though.  <1% of queries would max out at 200 versions
> returned.
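>
> (A shell sketch of that read pattern; 'some-row-key' is just a placeholder,
> while the column and the 200-version cap are the ones described above:)
>
>   hbase> get 'TABLE1', 'some-row-key', {COLUMN => 'c:d', VERSIONS => 200}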
>
> Here are table configurations (I've also tried Snappy compression):
> {NAME => 'TABLE1', FAMILIES => [{NAME => 'c', DATA_BLOCK_ENCODING => 'NONE',
>   BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '2147483647',
>   COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647',
>   KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false',
>   ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}]}
>
>
> I am using the master node to query (with 20 threads) and get about 800
> Gets/second.  Each worker node is completely swamped by disk I/O - I'm
> seeing 80 IO/sec with iostat for each of the 4 disks, with a throughput of
> about 10MB/sec each.  So this means it's reading roughly 120kB/transfer and
> taking about 8 hard-disk I/Os per Get request.  Does that seem
> reasonable?  I've read the HFile specs and I feel that if the block index
> is loaded into memory, it should take 1 hard-disk read to fetch the proper
> block containing my row.
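>
> (Rough arithmetic from the numbers above: 4 disks x ~80 IO/s = ~320 seeks/s
> per node, and at ~50 Gets/s per node that is roughly 6-7 disk I/Os per Get -
> in the same ballpark as the ~8 estimated from throughput, since 10MB/s
> divided by 80 IO/s is ~125kB per transfer, i.e. about two 64kB blocks read
> per seek.)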
>
>
> The region servers have a blockCacheHitRatio of about 33% (no compression)
> or 50% (Snappy compression).
>
> Here are some regionserver stats while I'm running queries.  This is the
> uncompressed table version and queries are only 38/sec
>
> requestsPerSecond=38, numberOfOnlineRegions=212,
>  numberOfStores=212, numberOfStorefiles=212, storefileIndexSizeMB=0,
> rootIndexSizeKB=190, totalStaticIndexSizeKB=172689,
> totalStaticBloomSizeKB=79692, memstoreSizeMB=0, mbInMemoryWithoutWAL=0,
> numberOfPutsWithoutWAL=0, readRequestsCount=1865459,
> writeRequestsCount=0, compactionQueueSize=0, flushQueueSize=0,
> usedHeapMB=4565, maxHeapMB=12221, blockCacheSizeMB=4042.53,
> blockCacheFreeMB=846.07, blockCacheCount=62176,
> blockCacheHitCount=5389770, blockCacheMissCount=9909385,
> blockCacheEvictedCount=2744919, blockCacheHitRatio=35%,
> blockCacheHitCachingRatio=65%, hdfsBlocksLocalityIndex=99,
> slowHLogAppendCount=0, fsReadLatencyHistogramMean=1570049.34,
> fsReadLatencyHistogramCount=1239690.00,
> fsReadLatencyHistogramMedian=20859045.50,
> fsReadLatencyHistogram75th=35791318.75,
> fsReadLatencyHistogram95th=97093132.05,
> fsReadLatencyHistogram99th=179688655.93,
> fsReadLatencyHistogram999th=312277183.40,
> fsPreadLatencyHistogramMean=35548585.63,
> fsPreadLatencyHistogramCount=2803268.00,
> fsPreadLatencyHistogramMedian=37662144.00,
> fsPreadLatencyHistogram75th=55991186.50,
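>
> (If the fsRead/fsPread histogram values above are in nanoseconds - which
> would be consistent with the ~40ms and ~200ms latencies reported earlier -
> the medians work out to roughly 21ms per read and 38ms per pread, i.e. each
> block-cache miss costs tens of milliseconds of disk time.)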

 
Other messages in this thread:

lars hofhansl 2014-02-01, 05:26
lars hofhansl 2014-02-01, 05:31
Ted Yu 2014-02-01, 01:44
Jan Schellenberger 2014-02-01, 02:39
Ted Yu 2014-02-01, 04:28
lars hofhansl 2014-02-01, 05:28
Ted Yu 2014-02-01, 05:37
lars hofhansl 2014-02-01, 06:21
Jan Schellenberger 2014-02-01, 06:32
lars hofhansl 2014-02-02, 04:07
Jay Vyas 2014-02-02, 04:10
Andrew Purtell 2014-02-03, 04:13
Jan Schellenberger 2014-02-02, 05:38
lars hofhansl 2014-02-02, 06:34
Varun Sharma 2014-02-02, 19:03