Hadoop >> mail # dev >> Hadoop Version 0.20.2, 0.20.100 and 0.21.0.


Re: Hadoop Version 0.20.2, 0.20.100 and 0.21.0.
On Mar 16, 2011, at 12:52 PM, Jane Chen wrote:

> Hi,
>
> I'm quite confused about the status and future of recent Hadoop versions.  

Since I'm sure I'll be struck down by lightning from someone, just let me put in the disclaimer that these are my opinions and do not necessarily (and probably don't) reflect the ASF board, the Hadoop PMC, any other non-PMC committers, any other contributors, or LinkedIn.

(Every protagonist needs an antagonist, right?)

> 0.21.0 has not been declared production ready.  Is there plan to make it production ready?

No.  

>  If not, what is the next release that is going to be production ready? I see that some changes are checked into 0.20.100.  Are the same changes going into 0.21 branch or the next subsequent release?

0.21 is dead.  Thanks, everyone, for testing trunk-as-of-6-months-ago out.  As suspected, it is pretty much busted in weird little ways.
0.22 might as well be dead, since it is doubtful anyone is actually going to run it at scale.
0.23 is looking like a year+ off before it is in any usable form, assuming that everyone can agree on what it should be.

Instead, what we're likely to see is that trunk will remain a dumping ground of "See! We're participating!" while various parties release one-off branches with the Apache stamp on them... that few, if anyone, will actually be running.

> In general, any advice on which version to adopt?

At LinkedIn, we've been running 0.20.2 w/3 patches from JIRA + 1 custom patch to provide better support for Solaris, quite stably, for over a year.  Unless you have a need to do something newer, it really is your best bet if you're trying to avoid fork lock-in.  That said, I've been eyeing the 0.20.203 branch, as it includes a lot of things that I haven't had the time to patch/backport myself that I'm going to need, plus I know that two of the larger installations are using it.  As LI looks towards breaking the 1000-node mark by the end of the year, that scalability is going to be important to us.