Re: Hadoop 2 compatibility issues
Eric Newton 2013-05-16, 15:23
I've snuck some necessary changes in... doing integration testing on it
right now.

-Eric

On Wed, May 15, 2013 at 8:03 PM, John Vines <[EMAIL PROTECTED]> wrote:

> I will gladly do it next week, but I'd rather not have it delay the
> release. The question from there is, is doing this type of packaging change
> too large to put in 1.5.1?
>
>
> On Wed, May 15, 2013 at 2:44 PM, Christopher <[EMAIL PROTECTED]> wrote:
>
> > So, I think that'd be great, if it works, but who is willing to do
> > this work and get it in before I make another RC?
> > I'd like to cut RC3 tomorrow if I have time. So, feel free to patch
> > these in to get it to work before then... or, by the next RC if RC3
> > fails to pass a vote.
> >
> > --
> > Christopher L Tubbs II
> > http://gravatar.com/ctubbsii
> >
> >
> > On Wed, May 15, 2013 at 5:31 PM, Adam Fuchs <[EMAIL PROTECTED]> wrote:
> > > It seems like the ideal option would be to have one binary build that
> > > determines Hadoop version and switches appropriately at runtime. Has
> > > anyone attempted to do this yet, and do we have an enumeration of the
> > > places in Accumulo code where the incompatibilities show up?
> > >
> > > One of the incompatibilities is in
> > > org.apache.hadoop.mapreduce.JobContext switching between an abstract
> > > class and an interface. This can be fixed with something to the effect
> > > of:
> > >
> > >   public static Configuration getConfiguration(JobContext context) {
> > >     Configuration configuration = null;
> > >     try {
> > >       Class<?> c = TestCompatibility.class.getClassLoader()
> > >           .loadClass("org.apache.hadoop.mapreduce.JobContext");
> > >       Method m = c.getMethod("getConfiguration");
> > >       Object o = m.invoke(context, new Object[0]);
> > >       configuration = (Configuration) o;
> > >     } catch (Exception e) {
> > >       throw new RuntimeException(e);
> > >     }
> > >     return configuration;
> > >   }
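
For illustration (assuming the static method above lives in a class here called CompatUtil, an assumed name), the change at a call site would be just:

      // Before: compiled against Hadoop 1, where JobContext is an abstract
      // class, this fails on Hadoop 2 (interface) with
      // IncompatibleClassChangeError at runtime
      Configuration conf = context.getConfiguration();

      // After: the method is resolved reflectively at runtime, so the same
      // bytecode works on either Hadoop version
      Configuration conf = CompatUtil.getConfiguration(context);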
> > >
> > > Based on a test I just ran, using that getConfiguration method instead
> > > of just calling the getConfiguration method on context should avoid the
> > > one incompatibility. Maybe with a couple more changes like that we can
> > > get down to one bytecode release for all known Hadoop versions?
> > >
> > > Adam
> >
>
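
For reference, a minimal self-contained sketch of the single-binary approach Adam describes: detect at runtime whether JobContext is an interface (Hadoop 2) or an abstract class (Hadoop 1) and resolve getConfiguration() reflectively, so one compiled artifact runs against either. The class name HadoopCompat and its layout are illustrative assumptions, not Accumulo's actual shim.

    import java.lang.reflect.Method;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.JobContext;

    // Illustrative compatibility shim; the name and structure are assumptions.
    public final class HadoopCompat {

      // In Hadoop 2 (MRv2) JobContext is an interface; in Hadoop 1 it is an
      // abstract class. Referencing the type itself is safe either way; only
      // a directly compiled call to one of its methods breaks across versions.
      private static final boolean HADOOP_2 = JobContext.class.isInterface();

      private static final Method GET_CONFIGURATION;
      static {
        try {
          GET_CONFIGURATION = JobContext.class.getMethod("getConfiguration");
        } catch (NoSuchMethodException e) {
          throw new ExceptionInInitializerError(e);
        }
      }

      private HadoopCompat() {}

      public static boolean isHadoop2() {
        return HADOOP_2;
      }

      // Invoke getConfiguration() reflectively so the call resolves against
      // whichever JobContext is on the classpath at runtime.
      public static Configuration getConfiguration(JobContext context) {
        try {
          return (Configuration) GET_CONFIGURATION.invoke(context);
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      }
    }

Because the method is looked up and invoked at runtime, the invokevirtual vs. invokeinterface distinction the compiler would otherwise bake in never comes into play, which is what makes a single bytecode release for all known Hadoop versions feasible.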