HBase, mail # user - Re: Hadoop not working after replacing hadoop-core.jar with hadoop-core-append.jar


Re: Hadoop not working after replacing hadoop-core.jar with hadoop-core-append.jar
Mike Spreitzer 2011-06-07, 01:49
Where is that citation of Michael Noll's nicely detailed instruction on
how to build the append branch?

Why does hbase include a hadoop-core.jar?  The instructions say I should
replace it, so why am I given it in the first place?

Thanks,
Mike Spreitzer

From:   Stack <[EMAIL PROTECTED]>
To:     [EMAIL PROTECTED]
Date:   06/06/2011 03:40 PM
Subject:        Re: Hadoop not working after replacing hadoop-core.jar
with hadoop-core-append.jar
Sent by:        [EMAIL PROTECTED]

On Mon, Jun 6, 2011 at 11:24 AM, Joe Pallas <[EMAIL PROTECTED]>
wrote:
> Hi St.Ack.  Here is the sense in which the book leads a new user to the
route that Mike (and I) took.  It seems to say this:
>
> <paraphrase>
> You have a choice.  You can download the source for the append branch of
hadoop and build it yourself from scratch, which will take who knows how
long and require additional tools and may not work on your preferred
development platform (see <http://search-hadoop.com/m/8Efvi1EEiaf>, which
says "Building sucks"), or you can take this shortcut that seems to work,
but has no guarantees.  What you cannot do is find a pre-built release of
the append branch anywhere for you to just download and use.  Your call.
> </paraphrase>
>
> Now, maybe that isn't the message you actually intend.

It's not.  In particular, the "...which will take who knows how long
and require additional tools and may not work on your preferred
development platform" bit.  Michael Noll has written up a nicely
detailed instruction on how to build the append branch.  It's cited up
front in our doc.  Is it not helpful?  I'd think that readers would
give this posting more credence than a "Building sucks" comment made
by our Ryan, HBase's (proud!) Statler and Waldorf combined [1].

The 'shortcut' will work; it's just that folks normally go in the
opposite direction to that of your Michael: they copy their cluster's
hadoop jars into hbase rather than copying hbase's hadoop jar to the
cluster.  I'm guessing that Michael went this route because he wanted
to avoid CDH?  (Is that right?)
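For readers following the usual direction described above, a minimal
sketch of the jar swap (the install paths and jar names here are
assumptions; adjust them for your own cluster, and always stop HBase
before touching its classpath):

```shell
# Assumed install locations -- adjust for your environment.
HADOOP_HOME=/usr/local/hadoop
HBASE_HOME=/usr/local/hbase

# Stop HBase before changing jars on its classpath.
"$HBASE_HOME/bin/stop-hbase.sh"

# Remove the hadoop jar bundled with the HBase release...
rm "$HBASE_HOME"/lib/hadoop-core-*.jar

# ...and copy in the jar your cluster's HDFS is actually running,
# so HBase and HDFS agree on RPC and append behavior.
cp "$HADOOP_HOME"/hadoop-core-*.jar "$HBASE_HOME/lib/"

"$HBASE_HOME/bin/start-hbase.sh"
```

The point of copying in this direction is that the jar on the HBase
classpath must match the version the DataNodes and NameNode are
running; a mismatch shows up as RPC version errors at startup.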

> Would it be some sort of horrible Apache faux pas for the HBase project
to distribute its own release of the version of Hadoop that is required by
HBase?

This came up recently over in hadoop.  HBasers were pitching to host a
build of the append branch over in hbase-land.  It was found that if
we did such a thing, we'd have to call it something other than Hadoop;
only Hadoop can host Hadoop releases.  We didn't want to add yet more
confusion to an already torturous landscape so we passed on it.

> Because the Hadoop project isn't likely to do it, apparently, and, if I
understand correctly, HBase is not going to work anytime soon with the
next Hadoop release that has append support.  So this is not a problem
that is going to fix itself.
>

HBase will work with the next Hadoop release, 0.22.0, when it comes
out [2].  The current state of things is temporary (I believe).  Sorry
for the inconvenience.

Thanks for the above input.  Our hadoop section needs updating now that
there are yet more versions afoot.  The above will help when we recast
this section.

St.Ack

1.  http://muppet.wikia.com/wiki/Statler_and_Waldorf
2. https://issues.apache.org/jira/browse/HBASE-2233