I could buy these results for a totally disk bound application as far as
reads go. I was running some experiments where I have HFiles on disk.
Memory : data ratio is 1:2 - so half the data can fit in memory. Then I run
"new HFileScanner()" and then scanner.seekTo("someKeyValue"). On a 4 HDD
system, I can get ~400 reads per second. The hard drives end up running
quite hot - and the absolute max I can push this thing to is 500 reads per
second. Note that this is raw HFile seeks - no HBase or HDFS layers are
present. I suspect HBase just issues way more iops than it needs to.
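As a back-of-envelope sanity check (my own arithmetic, not from the report): a typical 7,200 RPM HDD sustains on the order of 100-125 random IOPS, so 4 spindles give roughly the 400-500 reads/sec ceiling observed above for disk-bound seeks. A minimal sketch, assuming that per-drive IOPS figure:

```python
# Sanity check on the observed 400-500 reads/sec for raw HFile seeks.
# Assumption (not from the thread): a 7,200 RPM HDD does ~100-125
# random seeks per second.
num_drives = 4
iops_per_drive_low, iops_per_drive_high = 100, 125

# Aggregate random-read ceiling across the spindles.
disk_reads_low = num_drives * iops_per_drive_low    # 400
disk_reads_high = num_drives * iops_per_drive_high  # 500

print(f"expected disk-bound range: {disk_reads_low}-{disk_reads_high} reads/sec")
```

The observed ~400 steady-state and 500 peak sit right inside that range, which is consistent with the seeks being almost entirely disk bound.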
On Wed, Nov 27, 2013 at 12:01 AM, Vladimir Rodionov
> Oh, I got it. "Next big thing for HBase" is not MapR M7 , but global
> optimization and tuning of HBase itself.
> On Tue, Nov 26, 2013 at 11:56 PM, Vladimir Rodionov
> <[EMAIL PROTECTED]>wrote:
> > Why do you think I got excited? I do not work for MapR. MapR has posted
> > benchmark results, and some numbers for HBase look quite low. I thought
> > the community would be interested in these results.
> > On Tue, Nov 26, 2013 at 10:04 PM, lars hofhansl <[EMAIL PROTECTED]>
> >> Excuse me if I do not get too excited about a report published by MapR
> >> that comes to the conclusion that MapR's M7 is faster than "other
> >> distribution".
> >> -- Lars
> >> ________________________________
> >> From: Vladimir Rodionov <[EMAIL PROTECTED]>
> >> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> >> Sent: Tuesday, November 26, 2013 8:00 PM
> >> Subject: Next big thing for HBase
> >> Global optimization and performance tuning:
> >> Some numbers from this report do not look right for HBase. I do not
> >> believe that 5 RS on Fusion drive scores only 1605 reads per sec per