HBase >> mail # user >> Does HBase RegionServer benefit from OS Page Cache
Re: Does HBase RegionServer benefit from OS Page Cache
Maybe we should adopt some ideas from the RDBMS world?
In the MySQL area:
The InnoDB storage engine has a buffer pool (much like the current block cache). The latest InnoDB version caches both compressed and uncompressed pages in it and brings an adaptive LRU algorithm; see http://dev.mysql.com/doc/innodb/1.1/en/innodb-compression-internals.html.
In short, in my view it handles this detail somewhat more subtly than the LevelDB and HBase implementations. In fact, we (Xiaomi) already have a plan to develop and evaluate this (we logged it in our internal Phabricator system), and hopefully we can contribute it to the community in the future.
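Not HBase or InnoDB code, but a minimal Java sketch of that two-tier idea, with access-ordered LinkedHashMaps standing in for a real (adaptive) LRU and Deflater standing in for the table's block codec; all names and capacities are invented for illustration. L1 holds uncompressed blocks, L2 holds compressed copies, and an L1 miss is served by decompressing from L2 before falling through to disk:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Sketch of a two-tier block cache: L1 uncompressed, L2 compressed. */
public class TwoTierBlockCache {

    private final Map<String, byte[]> l1; // uncompressed blocks (small, fast)
    private final Map<String, byte[]> l2; // compressed blocks (larger, cheaper per byte)

    public TwoTierBlockCache(final int l1Capacity, final int l2Capacity) {
        // Access-ordered LinkedHashMaps give a simple (non-adaptive) LRU.
        this.l1 = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, byte[]> e) {
                return size() > l1Capacity; // evicted blocks may survive compressed in L2
            }
        };
        this.l2 = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, byte[]> e) {
                return size() > l2Capacity;
            }
        };
    }

    public void put(String blockKey, byte[] uncompressed) {
        l1.put(blockKey, uncompressed);
        l2.put(blockKey, compress(uncompressed));
    }

    /** L1 hit, else decompress from L2 and promote; null means "go to disk". */
    public byte[] get(String blockKey) {
        byte[] block = l1.get(blockKey);
        if (block != null) return block;
        byte[] compressed = l2.get(blockKey);
        if (compressed == null) return null;
        block = decompress(compressed);
        l1.put(blockKey, block); // promote back into the uncompressed tier
        return block;
    }

    private static byte[] compress(byte[] in) {
        Deflater d = new Deflater();
        d.setInput(in);
        d.finish();
        byte[] buf = new byte[in.length * 2 + 64];
        int n = d.deflate(buf);
        d.end();
        return Arrays.copyOf(buf, n);
    }

    private static byte[] decompress(byte[] in) {
        try {
            Inflater inf = new Inflater();
            inf.setInput(in);
            byte[] buf = new byte[64 * 1024];
            int n = inf.inflate(buf);
            inf.end();
            return Arrays.copyOf(buf, n);
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        TwoTierBlockCache cache = new TwoTierBlockCache(1, 4);
        cache.put("blk-1", "row data one".getBytes(StandardCharsets.UTF_8));
        cache.put("blk-2", "row data two".getBytes(StandardCharsets.UTF_8));
        // blk-1 fell out of the 1-slot L1, but its compressed copy is still in L2.
        System.out.println(new String(cache.get("blk-1"), StandardCharsets.UTF_8));
    }
}
```

The point of the second tier is that an eviction from L1 only costs a decompression on the next access, not a disk read, which is the fallback behavior described above.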

Another storage engine, Falcon, has a "row cache" feature similar to what Enis mentioned; it is friendlier to random-read scenarios.
In MySQL, every user table can choose a preferred storage engine. So my point is: maybe we should consider supporting more configurable cache strategies at per-table granularity.
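To make the per-table idea concrete, here is a hypothetical sketch (none of these names are HBase APIs; the table names and the strategy set are invented for illustration) of a registry that resolves a cache strategy per table and falls back to a cluster-wide default:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical per-table cache strategy registry (not an HBase API). */
public class CacheStrategyRegistry {

    enum Strategy { BLOCK_CACHE, ROW_CACHE, COMPRESSED_BLOCK_CACHE, NONE }

    private final Map<String, Strategy> byTable = new HashMap<>();
    private final Strategy defaultStrategy;

    public CacheStrategyRegistry(Strategy defaultStrategy) {
        this.defaultStrategy = defaultStrategy;
    }

    public void configure(String table, Strategy s) {
        byTable.put(table, s);
    }

    /** Tables without an explicit choice use the cluster default. */
    public Strategy strategyFor(String table) {
        return byTable.getOrDefault(table, defaultStrategy);
    }

    public static void main(String[] args) {
        CacheStrategyRegistry reg = new CacheStrategyRegistry(Strategy.BLOCK_CACHE);
        reg.configure("user_profiles", Strategy.ROW_CACHE); // random point gets
        reg.configure("event_log", Strategy.NONE);          // sequential scans only
        System.out.println(reg.strategyFor("user_profiles"));
        System.out.println(reg.strategyFor("orders")); // falls back to the default
    }
}
```

This mirrors MySQL's per-table engine choice: the read path would consult the registry once per table and route gets through whichever cache the table opted into.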

From: Enis Söztutar [[EMAIL PROTECTED]]
Sent: March 26, 2013 4:26
To: hbase-user
Cc: lars hofhansl
Subject: Re: Does HBase RegionServer benefit from OS Page Cache

> With very large heaps and a GC that can handle them (perhaps the G1 GC),
> another option which might be worth experimenting with is a value (KV)
> cache independent of the block cache which could be enabled on a per-table
Thanks Andy for bringing this up. We had some discussions a while ago
about a row cache (or KV cache).

The takeaway was that if you are mostly doing point gets rather than
scans, this kind of cache might serve you better.
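A minimal sketch of what such a row cache could look like (hypothetical, not HBase code; the key format and capacity are invented): an LRU map from row key to row, checked before the block cache on a point get, and invalidated on writes to stay consistent:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of a row (KV) cache for point gets: row key -> serialized row. */
public class RowCache {

    private final Map<String, String> rows;

    public RowCache(final int capacity) {
        // Access-ordered LinkedHashMap as a simple LRU.
        this.rows = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                return size() > capacity;
            }
        };
    }

    /** Point get: null means fall through to the block cache / HFiles. */
    public String get(String rowKey) {
        return rows.get(rowKey);
    }

    public void put(String rowKey, String row) {
        rows.put(rowKey, row);
    }

    /** Any write to the row must invalidate its cached copy. */
    public void invalidate(String rowKey) {
        rows.remove(rowKey);
    }

    public static void main(String[] args) {
        RowCache cache = new RowCache(1000);
        cache.put("user#42", "name=alice,city=austin");
        System.out.println(cache.get("user#42")); // served without touching a block
        cache.invalidate("user#42");              // the row was updated
        System.out.println(cache.get("user#42")); // null -> read path repopulates
    }
}
```

The reason this favors point gets over scans: a hit returns just the requested row instead of pinning a whole data block in memory, while a scan would churn through the cache one row at a time with little reuse.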

> 1) [HBASE-7404]: L1/L2 block cache
I knew about the bucket cache, but not that it could hold
compressed blocks. Is that already the case, or are you suggesting we could
add that to this L2 cache?

>  2) [HBASE-5263] Preserving cached data on compactions through
Thanks, this is the same idea. I'll track the ticket.

On Mon, Mar 25, 2013 at 12:18 PM, Liyin Tang <[EMAIL PROTECTED]> wrote:

> Hi Enis,
> Good ideas! The HBase community is driving these two items:
> 1) [HBASE-7404]: L1/L2 block cache
> 2) [HBASE-5263] Preserving cached data on compactions through
> cache-on-write
> Thanks a lot
> Liyin
> ________________________________________
> From: Enis Söztutar [[EMAIL PROTECTED]]
> Sent: Monday, March 25, 2013 11:24 AM
> To: hbase-user
> Cc: lars hofhansl
> Subject: Re: Does HBase RegionServer benefit from OS Page Cache
> Thanks Liyin for sharing your use cases.
> Related to those, I was thinking of two improvements:
>  - AFAIK, MySQL keeps the compressed and uncompressed versions of the blocks
> in its block cache, falling back to the compressed one if the decompressed one
> gets evicted. With very large heaps, maybe keeping the compressed
> blocks around in a secondary cache makes sense?
>  - A compaction will trash the cache. But maybe we can track the keyvalues
> that live inside cached blocks for the files in the compaction, and mark
> the blocks of the resulting compacted file which contain previously cached
> keyvalues to be cached after the compaction. I have to research the
> feasibility of this approach.
> Enis
> On Sun, Mar 24, 2013 at 10:15 PM, Liyin Tang <[EMAIL PROTECTED]> wrote:
> > Block cache is for uncompressed data, while the OS page cache contains the
> > compressed data. Unless the request pattern is a full-table sequential scan,
> > the block cache is still quite useful. I think the size of the block cache
> > should be the amount of hot data we want to retain within a compaction cycle, which