Re: CI for the release builds
Devaraj Das 2013-02-01, 01:22
We have an internal framework for running the tests (and we might
eventually contribute that), but for now we will run the tests using
the framework, and yes we'd need to convert the release tests (as in
the wiki) to run in that framework.
The framework is kind of like the integration tests framework but I am
thinking of extending it so that it can run the set of
system/integration tests on an hbase artifact present in a certain URL
(let's say some RC) at the click of a button.
In some sense, it is complementary to the other thread. From what I
remember of Bigtop and the thread, Bigtop supports/builds/tests a
certain released version of HBase, but here the scope is mostly for
catching regressions in yet-to-be-released artifacts / trunk without
actually locking down on any released version.
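The "run the integration tests against an artifact at some URL, at the click of a button" idea could be sketched roughly as below. This is not the internal framework described above (which is not public); the artifact URL, the security flag, and the mvn invocation shape are all assumptions for illustration.

```python
#!/usr/bin/env python
# Hypothetical sketch: given an RC artifact URL, plan the commands a
# harness might run. The -Dsecurity.enabled flag is a placeholder for
# whatever mechanism toggles a security-enabled cluster config.
import os
from urllib.parse import urlparse

def plan_run(artifact_url, suites, secure=False):
    """Return the shell commands for one run against one RC artifact:
    fetch and unpack the tarball, then one `mvn verify` per suite."""
    tarball = os.path.basename(urlparse(artifact_url).path)
    cmds = [
        f"curl -fsSLO {artifact_url}",
        f"tar xzf {tarball}",
    ]
    for suite in suites:
        flags = " -Dsecurity.enabled=true" if secure else ""  # assumed flag
        cmds.append(f"mvn verify -Dit.test={suite}{flags}")
    return cmds

if __name__ == "__main__":
    # Placeholder RC URL; IntegrationTestBigLinkedList is a real hbase-it suite.
    for cmd in plan_run(
            "https://example.org/hbase-0.95.0-RC0.tar.gz",
            ["IntegrationTestBigLinkedList"],
            secure=True):
        print(cmd)
```

A real harness would of course also stand up (or point at) a cluster, collect results, and publish them, but the planning step above is the part that "takes an RC URL as input".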
On Thu, Jan 31, 2013 at 12:18 PM, Stack <[EMAIL PROTECTED]> wrote:
> Sounds very nice. You going to script the running of the wiki content DD?
> You see any overlap with the effort at getting hbase-it tests run on a
> continual basis, the stuff discussed here and here (currently a
> little stalled till we figure the failed hbase-it test), or, as you see it,
> do the two efforts complement each other (which seems to be the case)?
> Good stuff,
> (about 2/3rds down the thread)
> http://mail-archives.apache.org/mod_mbox/hbase-dev/201212.mbox/%[EMAIL PROTECTED]%3E
> On Wed, Jan 30, 2013 at 10:07 PM, Devaraj Das <[EMAIL PROTECTED]> wrote:
>> Hi folks,
>> Have been toying with the idea of automating the process of running
>> the tests that constitute the release test plan (the initial tests
>> would be from
>> and they would be run with/without security turned on in the cluster).
>> As we continue to develop more system/largescale tests, we would keep
>> adding them to the harness.
>> I have set the ball rolling on this within the company and I hope the
>> community will be interested in seeing such a thing happen.
>> The idea is to be able to easily run a release artifact through a
>> series of tests (on our internal cluster or on AWS), and the result
>> would be published on a machine on AWS (visible to all). In the
>> future, this could be extended to run the tests on trunk artifacts as
>> well (catch regressions early).