Re: testing, powermock, jmockit - from pow-wow
Perhaps the best course of action for test failures involving twisty code
weaving / alternate class loaders / multiple mocking libraries would be to
svn rm or @Ignore. We should consider that time and contributor bandwidth
are precious in a volunteer project, especially where unit tests are
concerned. If it's not possible to understand a unit test failure from
source given an exception and stack trace, the test itself is the problem.
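For reference, parking such a test with JUnit 4 looks like the sketch below; the issue id in the reason string is a placeholder, not a real ticket:

import org.junit.Ignore;
import org.junit.Test;

public class TestOpaqueMockFailure {

  // If the failure can't be understood from the source plus the stack
  // trace, disable the test with a pointer to a tracking issue rather
  // than sink contributor time into it. HBASE-NNNN is a placeholder.
  @Ignore("Opaque failure under mixed mocking classloaders; see HBASE-NNNN")
  @Test
  public void testSomethingTwisty() {
  }
}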
On Friday, September 14, 2012, Lars Hofhansl wrote:

> Using Powermock for its reflection library sounds great. I'd refrain from
> using its classloader, that is just asking for trouble.
>
> > To get the low-level access we could instead use jmockit at the cost of
> dealing with code-weaving.
>
> As we had discussed, this scares me :).
> I do not want to have to debug some test code that was weaved (i.e. has no
> matching source code lying around *anywhere*).
>
> Generally I think code should be designed to be unit-testable. Each (set
> of) class(es) should be able to stand alone, enough to be unit-tested; then
> only a few bigger functional tests are needed to test things
> end-to-end... I know that's a pipe dream. I'm also asking for every
> committer to be given a pony, btw.
> Mocking is the next best option.
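As a point of comparison, when a class takes its collaborators as constructor arguments, a plain Mockito mock is all a test needs - no special classloader or weaving. A minimal sketch, with both classes made up for illustration:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical collaborator; the class under test receives it through
// its constructor, so a plain mock can stand in for it.
interface BlockSource {
  long blockCount();
}

// Hypothetical class under test, written to be unit-testable.
class BlockReport {
  private final BlockSource source;
  BlockReport(BlockSource source) { this.source = source; }
  long total() { return source.blockCount(); }
}

public class TestBlockReport {
  @Test
  public void totalComesFromSource() {
    BlockSource source = mock(BlockSource.class);
    when(source.blockCount()).thenReturn(42L);
    assertEquals(42L, new BlockReport(source).total());
  }
}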
>
> Thanks for taking this on!
>
> -- Lars
>
> ________________________________
> From: Jesse Yates <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Sent: Friday, September 14, 2012 7:16 PM
> Subject: testing, powermock, jmockit - from pow-wow
>
> Hi all,
>
> TL;DR powermock is really hard (and ugly) to get working for the
> class-loading abilities, but has a really nice reflection library we can
> leverage. To get the low-level access we could instead use jmockit at the
> cost of dealing with code-weaving.
>
> At the pow-wow there was also some discussion of ways in which we can
> improve the testing infrastructure. Options tossed around were things like
> improving the testing util, fixing classes to make them more testable, and
> powermock.
>
> The first two are definitely the right way to go, but can be pretty hard to
> do (a problem with all legacy code) and oftentimes can lead to awkward code
> to facilitate testing. Here, PowerMock is great - it lets you get into the
> internals more easily.
>
> That said, I've started working on HBASE-5456 (Introduce PowerMock into our
> unit tests to reduce unnecessary method exposure) and it's a hairy mess.
> Powermock's whole functionality is based on using its own classloader.
> However, because of all the reflection that HBase and Hadoop does
> (particularly Hadoop here), we end up having to do a ton of ignore
> statements for classes/packages that PowerMock shouldn't load. I hacked on
> it for a bit and still couldn't get a mini-cluster up and running :-/
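To make the ignore-statement pain concrete, a test under the PowerMock JUnit 4 runner ends up looking roughly like the sketch below; the stubbed class is made up, and the @PowerMockIgnore list is only illustrative of the packages that have to stay on the system classloader:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PowerMockIgnore;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Hypothetical class with a static method we want to stub.
class ClusterIds {
  static String current() { return "real-id"; }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(ClusterIds.class)
// Reflection-heavy packages (JMX, JAXP, Hadoop, logging, ...) must be
// excluded from PowerMock's classloader or they fail in odd ways; in
// practice this list keeps growing.
@PowerMockIgnore({"javax.management.*", "javax.xml.*", "org.w3c.*",
    "org.xml.*", "org.apache.hadoop.*", "org.apache.log4j.*"})
public class TestWithPowerMockClassloader {

  @Test
  public void staticIsStubbed() {
    PowerMockito.mockStatic(ClusterIds.class);
    PowerMockito.when(ClusterIds.current()).thenReturn("mock-id");
    assertEquals("mock-id", ClusterIds.current());
  }
}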
>
> Even given that pain, we can get a lot of use from the reflection utilities
> - they let you replace existing objects with mocks and access private
> methods (no more 'exposed for testing' methods). The PowerMock jars amount
> to about 100K - small overhead for a lot of the helper methods. See
> http://code.google.com/p/powermock/wiki/BypassEncapsulation
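For example, the Whitebox helpers from that reflection library look roughly like this; the Counter class is a made-up stand-in for an HBase internal with private state:

import org.junit.Assert;
import org.junit.Test;
import org.powermock.reflect.Whitebox;

public class TestWhiteboxAccess {

  // Hypothetical class with a private field and a private method,
  // standing in for an HBase internal we don't want to widen for tests.
  static class Counter {
    private long count = 0;
    private long bump(long delta) { return count += delta; }
  }

  @Test
  public void reachIntoPrivates() throws Exception {
    Counter c = new Counter();
    // Replace private state without an 'exposed for testing' setter.
    Whitebox.setInternalState(c, "count", 10L);
    // Invoke the private method directly.
    Long result = Whitebox.invokeMethod(c, "bump", 5L);
    Assert.assertEquals(15L, result.longValue());
    // Read the private field back out.
    Long after = Whitebox.getInternalState(c, "count");
    Assert.assertEquals(15L, after.longValue());
  }
}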
>
> To resolve the cases where the powermock class-loading would be useful
> (e.g. catching object creation to use your own mock, rather than using
> factories or DI everywhere) we could use JMockit. JMockit does the same
> basic stuff as PowerMock (minus the nice reflection library), but it does
> it through run-time code weaving, via the java agent framework.
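A minimal sketch of that object-creation interception, assuming JMockit's JUnit 4 integration (or the -javaagent) is wired into the test run; both classes below are made up for illustration:

import static org.junit.Assert.assertEquals;

import mockit.Expectations;
import mockit.Mocked;
import mockit.integration.junit4.JMockit;
import org.junit.Test;
import org.junit.runner.RunWith;

// Hypothetical collaborator that the code under test instantiates itself.
class RowSource {
  int rowCount() { return 0; }
}

// Hypothetical code under test: it calls 'new' directly, so there is no
// factory or DI seam to hand it a mock through.
class RowCounter {
  int count() {
    return new RowSource().rowCount();
  }
}

@RunWith(JMockit.class)
public class TestConstructorInterception {

  // With @Mocked, every RowSource created while the test runs - including
  // the one 'new'-ed inside RowCounter - is replaced by a mock.
  @Mocked RowSource source;

  @Test
  public void newInstancesAreIntercepted() {
    new Expectations() {{
      source.rowCount(); result = 7;
    }};
    assertEquals(7, new RowCounter().count());
  }
}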
>
> This will give us really fine-grained access to the code under test without
> having to do a lot of funky rewrites. However, code-weaving comes at the
> cost of losing debugger usage, as the code-weaving messes with the
> bytecode. I'd argue that it's a small price to pay for getting highly
> controllable tests, *as long as we don't go overboard with the weaving.*
> Yes, when we start doing too much test weaving it can become
> untenable, but
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)