Re: making a hadoop-common test run if a property is set
On Tue, Dec 18, 2012 at 1:05 AM, Colin McCabe <[EMAIL PROTECTED]> wrote:
> On Mon, Dec 17, 2012 at 11:03 AM, Steve Loughran <[EMAIL PROTECTED]> wrote:
>> On 17 December 2012 16:06, Tom White <[EMAIL PROTECTED]> wrote:
>>> There are some tests like the S3 tests that end with "Test" (e.g.
>>> Jets3tNativeS3FileSystemContractTest) - unlike normal tests which
>>> start with "Test". Only those that start with "Test" are run
>>> automatically (see the surefire configuration in
>>> hadoop-project/pom.xml). You have to run the others manually with "mvn
>>> test -Dtest=...".
>>> The mechanism that Colin describes is probably better though, since
>>> the environment-specific tests can be run as a part of a full test run
>>> by Jenkins if configured appropriately.
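For context, the naming convention Tom mentions can be expressed in a surefire configuration along these lines (a sketch only - the exact patterns in hadoop-project/pom.xml may differ):

```xml
<!-- Sketch of a surefire configuration that only picks up classes whose
     names start with "Test". This illustrates why a class named
     Jets3tNativeS3FileSystemContractTest would not be run by default;
     the real hadoop-project/pom.xml patterns may differ. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/Test*.java</include>
    </includes>
  </configuration>
</plugin>
```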
>> I'd like that - though one problem with the current system is that you need
>> to get the S3 (and soon: OpenStack) credentials into
>> src/test/resources/core-site.xml, which isn't the right approach. If we
>> could get them into properties files, things would be easier.
>> That's overkill for adding a few more OpenStack tests - but I would like to
>> make it easier to turn on those and the Rackspace ones without sticking my
>> secrets into an XML file under SCM.
> I think the way to go is to have one XML file include another:
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>      <name>boring.config.1</name>
>      <value>boring-value</value>
>   </property>
>   ... etc, etc...
>   <xi:include href="../secret-stuff.xml" />
> </configuration>
> That way, you can keep the boring configuration under version control,
> and still have your password sitting in a small separate
> non-version-controlled XML file.
> We use this trick a bunch with the HA configuration stuff-- 99% of the
> configuration is the same between the Active and Standby Namenodes,
> but you can't give them the same dfs.ha.namenode.id or dfs.name.dir.
> Includes help a lot here.
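The included file would then hold just the sensitive value, kept out of version control. One plausible shape (the secret value is elided, and using a single `<property>` as the file's root is an assumption made so the spliced-in result stays well-formed):

```xml
<!-- ../secret-stuff.xml: not checked in to SCM. XInclude replaces the
     xi:include element above with this file's root element. A single
     <property> root is assumed here so the combined document remains
     well-formed XML. -->
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR-SECRET-KEY</value>
</property>
```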
>> another tactic could be to have specific test projects: test-s3,
>> test-openstack, test-... which contain nothing but test cases. You'd set
>> Jenkins up for those test projects too - the reason for having the separate
>> names is to make it blatantly clear which tests you've not run.
> I dunno.  Every time a project puts unit or system tests into a
> separate project, the developers never run them.  I've seen it happen
> enough times that I think I can call it an anti-pattern by now.  I
> like having tests alongside the code-- to the maximum extent that is
> possible.

Just to be clear, I'm not referring to any Hadoop-related project
here, just certain other open source (and not) ones I've worked on.
System/unit tests belong with the rest of the code, otherwise they get
stale real fast.

It sometimes makes sense for integration tests to live in a separate
repo, since by their nature they're usually talking to stuff that
lives in multiple repos.


> cheers,
> Colin
>>> Tom
>>> On Mon, Dec 17, 2012 at 10:06 AM, Steve Loughran <[EMAIL PROTECTED]>
>>> wrote:
>>> > thanks, I'll have a look. I've always wanted to add the notion of "skipped"
>>> > to test runs - all the way through to the XML and generated reports - but
>>> > you'd have to do a new JUnit runner for this and tweak the reporting code.
>>> > Which, if it involved going near Maven source, is not something I am
>>> > prepared to do.
>>> >
>>> > On 14 December 2012 18:57, Colin McCabe <[EMAIL PROTECTED]> wrote:
>>> >
>>> >> One approach we've taken in the past is making the junit test skip
>>> >> itself when some precondition is not true.  Then, we often create a
>>> >> property which people can use to cause the skipped tests to become a
>>> >> hard error.
>>> >>
>>> >> For example, all the tests that rely on libhadoop start with these lines:
>>> >>
>>> >> > @Test
>>> >> > public void myTest() {
>>> >> >    Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
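Stripped of JUnit, the skip-or-hard-fail pattern Colin describes boils down to a check like the following sketch. The property name `test.require.native` is an assumption for illustration, not an actual Hadoop property:

```java
// Sketch of the "skip unless a precondition holds, unless a property turns
// skips into hard errors" pattern. The property name is hypothetical.
public class NativePrecondition {
    static final String REQUIRE_PROP = "test.require.native";

    /**
     * Returns true if the test should run, false if it should be quietly
     * skipped. Throws if the precondition failed but the property says
     * skipped tests are a hard error.
     */
    static boolean shouldRun(boolean nativeCodeLoaded) {
        if (nativeCodeLoaded) {
            return true;
        }
        // Boolean.getBoolean reads the system property; unset means false.
        if (Boolean.getBoolean(REQUIRE_PROP)) {
            throw new IllegalStateException(
                "native code required by -D" + REQUIRE_PROP + " but not loaded");
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldRun(true));   // test runs
        System.out.println(shouldRun(false));  // test skipped (property unset)
    }
}
```

In JUnit itself, the first branch corresponds to `Assume.assumeTrue(...)`, which reports the test as skipped rather than failed when the assumption does not hold.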