In my environment, I run dozens of clients concurrently, each reading about 5-20K of data per scan, and the average read latency for cached data is around 5-20 ms.
So it seems there must be something wrong with my cluster environment or application. Or did you run that with multiple clients?
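When comparing numbers like these across setups, averages can hide tail behavior; client-side percentiles make the comparison more meaningful. A minimal stdlib-only sketch (the latency samples below are placeholders, not real measurements — in practice you would record timings around each scanner call):

```java
import java.util.Arrays;

public class LatencyPercentiles {
    // Nearest-rank percentile over a sorted copy of the samples.
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // Placeholder scan latencies in ms; in practice, record
        // System.nanoTime() around each ResultScanner.next() call.
        double[] latenciesMs = {4.2, 5.1, 6.0, 5.5, 19.8, 4.9, 5.3, 18.7, 5.0, 4.8};
        System.out.printf("p50=%.1fms p99=%.1fms%n",
                percentile(latenciesMs, 50), percentile(latenciesMs, 99));
        // prints "p50=5.1ms p99=19.8ms"
    }
}
```

If the p99 is an order of magnitude above the p50, a handful of block-cache misses (or GC pauses) may be dominating the average.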
>Depends on so many environment-related variables, and on the data as well.
>But to give you a number after all:
>One of our clusters is on EC2: 6 RegionServers on m1.xlarge machines (network performance 'high' according to AWS), with reads about 90% of the time. Our average data size is 2K, block cache at 20K, 100 rows per scan on average, bloom filters on at the 'ROW' level, and 40% of the heap dedicated to the block cache (note that it also contains several other bits and pieces). I would say our average latency for cached data (~97% blockCacheHitCachingRatio) is 3-4 ms. File system access is much, much more painful, especially on EC2 m1.xlarge, where you really can't tell what's going on, as far as I can tell. To tell you the truth, as I see it this is an abuse (for our use case) of the HBase store, and for cache-like behavior I would recommend going to something like Redis.
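For reference, the "40% of heap dedicated to block cache" figure above maps to a single RegionServer setting; a sketch of the relevant hbase-site.xml fragment (the 0.4 value here just mirrors the setup described above — tune it for your own heap):

```xml
<!-- hbase-site.xml on each RegionServer: fraction of the heap
     reserved for the block cache (0.4 = 40% of heap). -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.4</value>
</property>
```

Bloom filters, by contrast, are set per column family at table creation or via an `alter` in the HBase shell, not in hbase-site.xml.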
On Mon, Jun 3, 2013 at 12:13 PM, ramkrishna vasudevan <[EMAIL PROTECTED]> wrote:
> What is that you are observing now?
> On Mon, Jun 3, 2013 at 2:00 PM, Liu, Raymond <[EMAIL PROTECTED]> wrote:
> > Hi
> > If all the data is already in the RS block cache,
> > then what's the typical scan latency for scanning a few rows
> > from, say, a several-GB table (with dozens of regions) on a small
> > cluster with 4 RS?
> > A few ms? Tens of ms? Or more?
> > Best Regards,
> > Raymond Liu