Re: ANN: The third hbase 0.94.0 release candidate is available for download
Elliott, any plan on running the same on 0.90.x?

Enis

On Mon, May 7, 2012 at 11:07 AM, Elliott Clark <[EMAIL PROTECTED]> wrote:

> Sorry, everything is in elapsed time, as reported in milliseconds.  So
> higher is worse.
>
> The standard deviation on 0.92.1 writes is 4,591,384, so Write 5 is a
> little outside of 1 std dev.  Not really sure what happened on that test,
> but it does appear that PE is very noisy.
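
One way to sanity-check a claim like this is to recompute the mean and
standard deviation over the per-run elapsed times directly.  A minimal
sketch, assuming a hypothetical times.txt with one PE elapsed time in
milliseconds per line (not part of the original thread's tooling):

    # Mean and population standard deviation of the values in times.txt
    awk '{ s += $1; ss += $1 * $1; n++ }
         END { m = s/n; printf "mean=%.0f stddev=%.0f\n", m, sqrt(ss/n - m*m) }' times.txt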
>
> On Mon, May 7, 2012 at 10:47 AM, Todd Lipcon <[EMAIL PROTECTED]> wrote:
>
> > Is higher better or worse? :) Any idea what happened on the "Write 5"
> > test?
> >
> > On Mon, May 7, 2012 at 10:42 AM, Elliott Clark <[EMAIL PROTECTED]> wrote:
> > > http://www.scribd.com/eclark847297/d/92715238-0-94-0-RC3-Cluster-Perf
> > >
> > > On Fri, May 4, 2012 at 7:42 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
> > >
> > >> 0.94 also has LoadTestTool (from FB)
> > >>
> > >> I have used it to do some cluster load testing.
> > >>
> > >> Just FYI
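
For reference, LoadTestTool ships with HBase and is driven from the hbase
command line.  A minimal sketch of the kind of cluster load test Ted
describes, with flags roughly as they existed in the 0.94 era; exact
options and defaults vary by version, so check the tool's usage output:

    # Load 1M keys with 3 columns per key and ~1 KB values, then read
    # back and verify 100% of them.
    bin/hbase org.apache.hadoop.hbase.util.LoadTestTool \
        -num_keys 1000000 -write 3:1024 -read 100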
> > >>
> > >> On Fri, May 4, 2012 at 3:14 PM, Elliott Clark <[EMAIL PROTECTED]> wrote:
> > >>
> > >> > With the cluster size that I'm testing, YCSB was stressing the
> > >> > client machine more than the cluster.  I was saturating the network
> > >> > of the test machine.  So I switched over to PE; while it doesn't
> > >> > have a realistic workload, it is better than nothing.
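
PE here is HBase's bundled PerformanceEvaluation tool.  A minimal sketch
of a run like the ones discussed in this thread, assuming 0.94-era flags
(exact options vary by version):

    # Sequential-write test with 4 client threads, run in-process rather
    # than as a MapReduce job; sequentialRead, randomRead, randomWrite,
    # and scan are among the other available commands.
    bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred sequentialWrite 4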
> > >> >
> > >> > On Fri, May 4, 2012 at 3:07 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
> > >> >
> > >> > > Thanks for the update, Elliott.
> > >> > >
> > >> > > If I read your post correctly, you're using PE.  YCSB is better at
> > >> > > measuring performance, from my experience.
> > >> > >
> > >> > > Cheers
> > >> > >
> > >> > > On Fri, May 4, 2012 at 3:04 PM, Elliott Clark <[EMAIL PROTECTED]> wrote:
> > >> > >
> > >> > > > So I got 0.94.0rc3 up on a cluster and tried to break it,
> > >> > > > killing masters and killing region servers.  Everything seems
> > >> > > > good.  hbck reports everything is good.  And all my reads
> > >> > > > succeed.
> > >> > > >
> > >> > > > I'll post cluster benchmark numbers once they are done running.
> > >> > > > Should only be a couple more hours of PE runs.
> > >> > > >
> > >> > > > Looks great to me.
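
The hbck check mentioned above is HBase's consistency checker.  A minimal
sketch of the kind of verification being described after killing masters
and region servers:

    # Scan META and the region servers and report any inconsistencies;
    # a clean run typically ends with "0 inconsistencies detected" and
    # Status: OK.
    bin/hbase hbck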
> > >> > > > On Thu, May 3, 2012 at 10:36 AM, Elliott Clark <[EMAIL PROTECTED]> wrote:
> > >> > > >
> > >> > > > > I agree it was just a micro benchmark with no guarantee that
> > >> > > > > it relates to the real world.  With it just being standalone,
> > >> > > > > I didn't think anyone should take the numbers as 100%
> > >> > > > > representative.  Really I was just trying to shake out any
> > >> > > > > weird behaviors, and the fact that we got a big speed up was
> > >> > > > > interesting.
> > >> > > > >
> > >> > > > > On Thu, May 3, 2012 at 12:15 AM, Mikael Sitruk <[EMAIL PROTECTED]> wrote:
> > >> > > > >
> > >> > > > >> Hi guys
> > >> > > > >> Looking at the posted slides/pictures for the benchmark, the
> > >> > > > >> following intrigues me:
> > >> > > > >> 1. The recordcount is only 100,000
> > >> > > > >> 2. workloada is: read 50%, update 50%, and zipfian
> > >> > > > >> distribution; even with a 5M operation count, the same keys
> > >> > > > >> are updated again and again.
> > >> > > > >> 3. heap size 10G
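
The workload being picked apart here is YCSB's stock workloada, a 50/50
read/update mix with a zipfian request distribution.  With the counts
Mikael quotes, the property file would look roughly like this (the
property names are YCSB's standard CoreWorkload ones; the counts are
taken from the post above):

    # workloada with the benchmark's counts
    recordcount=100000
    operationcount=5000000
    workload=com.yahoo.ycsb.workloads.CoreWorkload
    readproportion=0.5
    updateproportion=0.5
    scanproportion=0
    insertproportion=0
    requestdistribution=zipfian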
> > >> > > > >>
> > >> > > > >> Therefore it might be that the dataset is too small: even
> > >> > > > >> with 3 versions configured we have 3 (versions) * 100,000
> > >> > > > >> (keys) * 1 KB (record size) => ~300 MB of "live" dataset?
> > >> > > > >> And approximately the number of store files will be 5x10^6
> > >> > > > >> (op count) * 1 KB (record size) / 256 MB (default max store
> > >> > > > >> file size) => ~20 store files; even taking a factor of 10
> > >> > > > >> for metadata (record keys in store files) we