Re: dev Digest of: thread.25579
Jesse Yates 2011-12-02, 02:34
I'm generally on the same page here, but this is what I'm (currently)
thinking about how it could work:
We need to characterize the unit tests because some are longer than
others, and we want to split them up in order of complexity.
Keywal, this next bit is divergent from what I was saying the other day...
The next level up should be testing the integration between components.
Again this should be run by the CI; really this is a finer-grained
separation of what were previously just unit tests.
I've revised my thinking such that this could be a different category of
unit tests (maybe @Integration) that run in their own JVM.
This helps separate concerns between the individual class testing and
the testing of multiple pieces working together.
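To make the idea concrete, here is a minimal sketch of what an @Integration marker could look like. The annotation and class names are hypothetical (JUnit's actual mechanism for this is @Category with marker interfaces); this uses only stdlib reflection so it stands alone.

```java
// Sketch only: '@Integration' is the hypothetical marker discussed above,
// modeled as a plain runtime annotation. A build plugin or test runner
// could filter test classes on it (JUnit's @Category works similarly
// with marker interfaces).
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Integration {}

// Hypothetical integration-level test spanning multiple components.
@Integration
class TestRegionAssignmentIntegration {
    // ... tests exercising several pieces working together ...
}

public class CategoryDemo {
    // A runner could use this check to decide which classes to execute
    // (and, e.g., fork a fresh JVM for each integration class).
    static boolean isIntegration(Class<?> c) {
        return c.isAnnotationPresent(Integration.class);
    }

    public static void main(String[] args) {
        System.out.println(isIntegration(TestRegionAssignmentIntegration.class)); // prints "true"
    }
}
```

The point is just that the marker gives the build a machine-readable way to split the suites, rather than relying on naming conventions alone.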
On to using the failsafe plugin.
I like the idea of really leveraging the failsafe plugin to spin up
the tests against some sort of 'real' instance. The (canonical) simple
example of using failsafe is to spin up a Jetty server and test whatever
stuff you are doing against a 'real' server.
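For reference, the basic failsafe wiring is small; this is a sketch of the pom fragment, not an actual build config:

```xml
<!-- Sketch: bind maven-failsafe-plugin so integration tests run in the
     integration-test phase, separate from surefire's unit tests. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

By default failsafe picks up classes matching *IT.java, so `mvn test` still runs only the surefire unit tests, `mvn verify` additionally runs the ITs, and `-DskipITs` turns them off.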
Integrating with BigTop to do some of that stuff would also be
pretty sweet (and good for both communities).
A good place for these tests would be the hbase-it module discussed in
HBASE-4336, since that module can just depend on the rest of the
modules (giving full access to the components). Within that package,
we could conceivably have the multi-item integration tests
(@Integration from above) and then the 'real cluster' tests as well.
+1 Having tests that just use the top-level API (e.g. what the FB guys
are using on their dev cluster after running the full test suite) is
really important to make sure we have 'real' test cases.
Also, agreeing with Roman, making the plugin to enable that is going
to be rough, but we should be able to at least find a way to (at least
manually at first) have those api level tests run on a real cluster,
even if it is a company X and they just post the results. Even then a
weekly build/run of that suite could be sufficient at first (though
having that for every commit would be much better - anyone know
someone at Amazon we can scrounge credits from?).
However, I do think that if tests start failing in _any_ of the above
phases then the patch that caused it needs to not be committed OR the
tests need to be revised (which would also be part of the patch, so my
original point stands). That is indicative of broken functionality and
trunk should be kept as stable as possible.
Yeah, devs shouldn't have to run the full set of tests before
submitting a patch, but they should easily be able to run everything
up to (but not including) the api-level testing.
What would be cool, though, is if they could even run the api-level
testing, just against a local MiniCluster, but that's a lot of work :).
A lot of that stability comes from having a staged testing cycle as
well as helping devs cut down their testing cycles, so anything we do
to help cut that time down is good stuff in my book..
ps. I hate to be 'that guy', but we need a consistent way of describing
the tests we're talking about; let's push that off until we figure out
what the heck we actually want to do.
> ---------- Forwarded message ----------
> From: Ted Yu <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Date: Mon, 28 Nov 2011 12:23:43 -0800
> Subject: scoping integration tests
> The following discussion is closely related to HBASE-4712.
> We should reach general consensus so that the execution of future test
> strategy is smooth.
> On Mon, Nov 28, 2011 at 11:50 AM, Jesse Yates <[EMAIL PROTECTED]
> > I was considering 'integration tests' as a separate concern from the
> > large/medium/small _unit_ tests.
> > That is, in fact, why the failsafe plugin was added (and is designed
> > Currently, we have a lot of tests that fall in the realm of integration
> > tests (testing integration between various pieces, rather than single