Hadoop >> mail # dev >> making a hadoop-common test run if a property is set

Re: making a hadoop-common test run if a property is set
On Mon, Dec 17, 2012 at 11:03 AM, Steve Loughran <[EMAIL PROTECTED]> wrote:
> On 17 December 2012 16:06, Tom White <[EMAIL PROTECTED]> wrote:
>> There are some tests like the S3 tests that end with "Test" (e.g.
>> Jets3tNativeS3FileSystemContractTest) - unlike normal tests which
>> start with "Test". Only those that start with "Test" are run
>> automatically (see the surefire configuration in
>> hadoop-project/pom.xml). You have to run the others manually with "mvn
>> test -Dtest=...".
>> The mechanism that Colin describes is probably better though, since
>> the environment-specific tests can be run as a part of a full test run
>> by Jenkins if configured appropriately.
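The naming rule Tom describes corresponds to a surefire include pattern along these lines (a sketch only; check the actual surefire configuration in hadoop-project/pom.xml):

```xml
<!-- Hypothetical sketch of the convention described above:
     only classes whose names start with "Test" run automatically. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/Test*.java</include>
    </includes>
  </configuration>
</plugin>
```

Classes ending in "Test", like Jets3tNativeS3FileSystemContractTest, fall outside that pattern and so only run via "mvn test -Dtest=...".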
> I'd like that -though one problem with the current system is that you need
> to get the s3 (and soon: openstack) credentials into
> src/test/resources/core-site.xml, which isn't the right approach. If we
> could get them into properties files things would be easier.
> That's overkill for adding a few more openstack tests - but I would like to
> make it easier to run those and the rackspace ones without sticking my
> secrets into an XML file under SCM

I think the way to go is to have one XML file include another.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
     ... etc, etc...
  <xi:include href="../secret-stuff.xml" />
</configuration>

That way, you can keep the boring configuration under version control,
and still have your password sitting in a small separate
non-version-controlled XML file.
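The included file then holds nothing but the sensitive properties, e.g. (property name is the standard s3n one; the value is a placeholder):

```xml
<!-- secret-stuff.xml: kept outside version control -->
<configuration>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>YOUR-SECRET-KEY-HERE</value>
  </property>
</configuration>
```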

We use this trick a bunch with the HA configuration stuff-- 99% of the
configuration is the same between the Active and Standby Namenodes,
but you can't give them the same dfs.ha.namenode.id or dfs.name.dir.
Includes help a lot here.
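As a sketch of that trick (file names and the value are hypothetical), the shared file stays under version control and pulls in a small per-node file:

```xml
<!-- hdfs-site.xml: identical on Active and Standby, version-controlled -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- ... the 99% of settings common to both NameNodes ... -->
  <xi:include href="node-local.xml" />
</configuration>
```

```xml
<!-- node-local.xml: differs per machine, not version-controlled -->
<configuration>
  <property>
    <name>dfs.ha.namenode.id</name>
    <value>nn1</value>
  </property>
</configuration>
```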

> another tactic could be to have specific test projects: test-s3,
> test-openstack, test-... which contain nothing but test cases. You'd set
> jenkins up to run those test projects too - the reason for having the separate
> names is to make it blatantly clear which tests you've not run

I dunno.  Every time a project puts unit or system tests into a
separate project, the developers never run them.  I've seen it happen
enough times that I think I can call it an anti-pattern by now.  I
like having tests alongside the code-- to the maximum extent that is
practical.

>> Tom
>> On Mon, Dec 17, 2012 at 10:06 AM, Steve Loughran <[EMAIL PROTECTED]>
>> wrote:
>> > thanks, I'll have a look. I've always wanted to add the notion of skipped
>> > to test runs -all the way through to the XML and generated reports, but
>> > you'd have to do a new junit runner for this and tweak the reporting
>> code.
>> > Which, if it involved going near maven source, is not something I am
>> > prepared to do
>> >
>> > On 14 December 2012 18:57, Colin McCabe <[EMAIL PROTECTED]> wrote:
>> >
>> >> One approach we've taken in the past is making the junit test skip
>> >> itself when some precondition is not true.  Then, we often create a
>> >> property which people can use to cause the skipped tests to become a
>> >> hard error.
>> >>
>> >> For example, all the tests that rely on libhadoop start with these
>> lines:
>> >>
>> >> > @Test
>> >> > public void myTest() {
>> >> >    Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
>> >> >   ...
>> >> > }
>> >>
>> >> This causes them to be silently skipped when libhadoop.so is not
>> >> available or loaded (perhaps because it hasn't been built.)
>> >>
>> >> However, if you want to cause this to be a hard error, you simply run
>> >> > mvn test -Drequire.test.libhadoop
>> >>
>> >> See TestHdfsNativeCodeLoader.java to see how this is implemented.
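To restate the skip-vs-require decision in plain Java (class and method names here are hypothetical, not Hadoop's actual code; in the real tests it's JUnit's Assume plus a system property):

```java
// Hedged sketch of the pattern: skip silently when a native-library
// precondition fails, unless a -Drequire.* property makes it a hard error.
public class NativeTestGuard {

    /**
     * Decide what a test should do:
     * - precondition holds          -> run the test ("RUN")
     * - fails, property not set     -> skip silently ("SKIP")
     * - fails, property set         -> hard error (AssertionError)
     */
    public static String decide(boolean preconditionMet, boolean required) {
        if (preconditionMet) {
            return "RUN";
        }
        if (required) {
            throw new AssertionError(
                "native library required but not available");
        }
        return "SKIP";
    }

    public static void main(String[] args) {
        // In the real tests, "required" comes from something like:
        //   System.getProperty("require.test.libhadoop") != null
        boolean required =
            System.getProperty("require.test.libhadoop") != null;
        // Pretend the native library failed to load:
        System.out.println(decide(false, required));
    }
}
```

Run plain, this prints SKIP; run with -Drequire.test.libhadoop, it throws, which is exactly the Jenkins-vs-laptop split described below.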
>> >>
>> >> The main idea is that your Jenkins build slaves use all the -Drequire
>> >> lines, but people running tests locally are not inconvenienced by the
>> >> need to build libhadoop.so in every case.  This is especially good