Re: [DISCUSS] Spin out MR, HDFS and YARN as their own TLPs and disband Hadoop umbrella project
Looking at the voting, it appears YARN wants to become a TLP RIGHT NOW but
at the price of the complete decoherence of the Apache Hadoop platform. For
all of us who have invested in the Apache Hadoop platform, how does this
benefit us? Certainly our interests seem to get little consideration with
this plan to just blow everything up tomorrow.

How does a downstream project that imports HDFS and MapReduce coordinate
the shared dependencies with those new projects? For example, Guava. One
could have a multi-way library incompatibility problem; this has already
happened in the large with HDFS, HBase, and Pig. It's DLL hell magnified 3
or 4 times just in the smoking ruins of "core". The obvious answer is: Once
these pieces are moving in different trajectories at different rates, end
users and downstream projects will be forced to negotiate with many
parties, and those parties explicitly won't care about the issues concerning
another, according to this discussion. YARN must have broken our
minicluster-based MapReduce tests 5 times over the last year. HDFS took up
a certain version of Guava and this required us to refactor some code to
match that version. We had a coherent group of committers to assist us then
but that would go away. Proponents of the split seem to want exactly this
situation. BigTop was suggested as a vehicle for addressing that concern
but then explicitly rejected on this thread. A commercial vendor looking to
torpedo the ability of anyone to build something on Apache Hadoop directly
couldn't come up with a better plan, because only a full-time operation can
be expected to have the resources to harmonize the pieces plus all of their
dependencies with build patches, code wrangling, testing, testing, testing.
Volunteer contributor and committer time is a precious gift. I wonder if
the many professional full-time Hadoop devs voting here have lost sight of
this. Pushing your integration work downstream doesn't mean resources will
be there to pick it up. Downstream projects could be forced to reluctantly
abandon working with Apache releases for a commercial distribution such as
CDH, or the MapR platform. Or, they will be unable to move from a "known
good" combination in the face of a combinatorial explosion of dependency
changes, so their general utility to the end user steadily declines. Maybe
the consensus is that this is acceptable, but I would find that kind of a sad
ending to this remarkable project.
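
To make that concrete, here is roughly the kind of pin a downstream Maven
build ends up carrying (the artifact coordinates are real, the Guava version
is just a placeholder for whatever "known good" happens to be at the moment),
and someone has to re-verify it every time HDFS or MapReduce moves:

  <dependencyManagement>
    <dependencies>
      <!-- Force one Guava version for the whole build, overriding whatever
           hadoop-hdfs and hadoop-mapreduce-client-* pull in transitively. -->
      <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>11.0.2</version> <!-- placeholder "known good" version -->
      </dependency>
    </dependencies>
  </dependencyManagement>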

On Friday, August 31, 2012, Devaraj Das wrote:

> Andrew's points are fair IMHO. In general, I think it makes sense to have
> the TLPs but we aren't there yet (as others have pointed out). I'd propose
> that we should think about the timelines (maybe an appropriate time is when
> we have Hadoop-2.0 GA'ed).
>
> On Aug 30, 2012, at 7:11 AM, Andrew Purtell wrote:
>
> > As a direct Apache software product consumer and sometimes contributor, I
> > also experienced firsthand the pain of the project splits. It was not
> > possible to build an installable release. It may have been many days or
> > weeks before that was cured by a re-merge. I gave up after burning too many
> > hours on it, went back to the 1.0 code base, and came back only after the
> > damage was repaired.
> >
> > It's also frustrating to hear, even if just one person's proposal, that we
> > have spent months preparing to stabilize our next production deployment
> > based on the 2.0 branch, with the expectation that it will be the new
> > stable, but now maybe 0.23 will be the new stable. 0.23 is quite backwards
> > in comparison and missing all of the critical HA HDFS work.
> >
> > This thread seems to be becoming a competition for which is the more
> > radical proposal to snatch defeat from the jaws of success.
> >
> > These proposals seem to be made with a total lack of care for the end user.
> >
> > From my point of view, things were going reasonably well until this
> > sudden turn into lunacy. I am positive this kind of
> > "foundation" / PMC / project / administrivia tinkering is what will

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)