Accumulo >> mail # dev >> Hadoop 2 compatibility issues


Re: Hadoop 2 compatibility issues
CXF does (4) for the various competing JAX-WS implementations.

The different options are API-compatible, and the profiles just switch
the deps around.
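
A minimal sketch of that pattern (the coordinates are placeholders, not CXF's actual artifacts), where each profile supplies one implementation's dependency:

```xml
<!-- Sketch only: profile-selected dependencies; groupId/artifactId
     here are invented for illustration. -->
<profiles>
  <profile>
    <id>impl-a</id>
    <dependencies>
      <dependency>
        <groupId>org.example</groupId>
        <artifactId>jaxws-impl-a</artifactId>
        <version>1.0</version>
      </dependency>
    </dependencies>
  </profile>
  <!-- one profile per competing implementation -->
</profiles>
```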

There would be slightly more Maven correctness in marking the deps
optional, forcing each user to pick one explicitly.
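
The optional-dependency alternative would look roughly like this (coordinates again invented): the dep is declared for compilation but never propagated, so each consumer must re-declare the one they want:

```xml
<!-- Sketch: an optional dependency is not inherited transitively,
     which forces downstream POMs to choose an implementation
     explicitly. -->
<dependency>
  <groupId>org.example</groupId>
  <artifactId>jaxws-impl-a</artifactId>
  <version>1.0</version>
  <optional>true</optional>
</dependency>
```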

However, (4) with good doc on what to put in the POM is really not a
cause for shame. Maven is weak in this area, and it's all tradeoffs.

On Tue, May 14, 2013 at 4:56 PM, John Vines <[EMAIL PROTECTED]> wrote:
> I'm an advocate of option 4. You say that it's ignoring the problem,
> whereas I think it's waiting until we have the time to solve the problem
> correctly. Your reasoning for this is standardizing on Maven
> conventions, but the other options, while more 'correct' from a Maven
> standpoint, are a larger headache for our user base and ourselves. In either
> case, we're going to be breaking some sort of convention, and while that's
> not good, we should be doing the one that's less bad for US. The important
> thing here, now, is that the POMs work, and we should go with the method
> that leaves the least work for our end users to utilize them.
>
> I do agree that 1. is the correct option in the long run. More
> specifically, I think it boils down to having a single module compatibility
> layer, which is how HBase deals with this issue. But like you said, we
> don't have the time to engineer a proper solution. So let sleeping dogs lie
> and we can revamp the whole system for 1.5.1 or 1.6.0 when we have the
> cycles to do it right.
>
>
> On Tue, May 14, 2013 at 4:40 PM, Christopher <[EMAIL PROTECTED]> wrote:
>
>> So, I've run into a problem with ACCUMULO-1402 that requires a larger
>> discussion about how Accumulo 1.5.0 should support Hadoop2.
>>
>> The problem is basically that profiles should not contain
>> dependencies, because profiles don't get activated transitively. A
>> slide deck by the Maven developers points this out as a bad practice...
>> yet it's a practice we rely on for our current implementation of
>> Hadoop2 support
>> (http://www.slideshare.net/aheritier/geneva-jug-30th-march-2010-maven
>> slide 80).
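
The anti-pattern in question looks roughly like this (a sketch; the profile id, activation property, and version are illustrative, not copied from the real POM):

```xml
<!-- Sketch of a dependency hidden inside a profile: downstream
     projects resolving this artifact never see it, because profile
     activation is not transitive. -->
<profile>
  <id>hadoop-2.0</id>
  <activation>
    <property>
      <name>hadoop.profile</name>
      <value>2.0</value>
    </property>
  </activation>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.0.4-alpha</version>
    </dependency>
  </dependencies>
</profile>
```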
>>
>> What this means is that even if we go through the work of publishing
>> binary artifacts compiled against Hadoop2, neither our Hadoop1
>> binaries nor our Hadoop2 binaries will be able to transitively resolve
>> any dependencies defined in profiles. This has significant
>> implications for user code that depends on Accumulo Maven artifacts.
>> Every user will essentially have to explicitly add Hadoop dependencies
>> for every Accumulo artifact that has dependencies on Hadoop, either
>> because we directly or transitively depend on Hadoop (they'll have to
>> peek into the profiles in our POMs and copy/paste the profile into
>> their project). This becomes more complicated when we consider how
>> users will try to use things like Instamo.
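
Concretely, a downstream POM would have to carry something like the following alongside each Accumulo artifact (versions illustrative), because the Hadoop dependency never arrives transitively:

```xml
<!-- Sketch of what a user must add by hand; with profile-free POMs,
     only the Accumulo dependency would be needed. -->
<dependency>
  <groupId>org.apache.accumulo</groupId>
  <artifactId>accumulo-core</artifactId>
  <version>1.5.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.0.4-alpha</version>
</dependency>
```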
>>
>> There are workarounds, but none of them are really pleasant.
>>
>> 1. The best way to support both major Hadoop APIs is to have separate
>> modules with separate dependencies directly in the POM. This is a fair
>> amount of work, and in my opinion, would be too disruptive for 1.5.0.
>> This solution also gets us separate binaries for separate supported
>> versions, which is useful.
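
A sketch of what option 1 might look like in the parent POM (module names are hypothetical, loosely mirroring how HBase splits out its compatibility layer):

```xml
<!-- Hypothetical module split: each compat module declares its Hadoop
     line's dependencies directly in its POM, with no profiles. -->
<modules>
  <module>accumulo-core</module>
  <module>accumulo-hadoop1-compat</module>
  <module>accumulo-hadoop2-compat</module>
</modules>
```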
>>
>> 2. A second option, and the preferred one I think for 1.5.0, is to put
>> a Hadoop2 patch in the branch's contrib directory
>> (branches/1.5/contrib) that patches the POM files to support building
>> against Hadoop2. (Acknowledgement to Keith for suggesting this
>> solution.)
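
After applying such a patch, the POM would carry the Hadoop2 dependency directly rather than inside a profile, roughly (coordinates illustrative; the actual patch contents are not specified here):

```xml
<!-- Sketch of the patched state: a plain dependency, visible to
     transitive resolution, replacing the profile-guarded one. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.0.4-alpha</version>
</dependency>
```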
>>
>> 3. A third option is to fork Accumulo, and maintain two separate
>> builds (a more traditional technique). This adds a merging nightmare for
>> features/patches, but gets around some reflection hacks that we may
>> have been motivated to do in the past. I'm not a fan of this option,
>> particularly because I don't want to replicate the fork nightmare that
>> has been the history of early Hadoop itself.
>>
>> 4. The last option is to do nothing and to continue to build with the