Hadoop >> mail # general >> Large feature development

Steve Loughran 2012-08-31, 17:07
Todd Lipcon 2012-09-01, 08:20
Steve Loughran 2012-09-02, 14:58
Eli Collins 2012-09-02, 19:47
Arun C Murthy 2012-09-01, 19:47
Eli Collins 2012-09-02, 20:00
Arun Murthy 2012-09-02, 22:11
Todd Lipcon 2012-09-03, 01:12
Arun C Murthy 2012-09-03, 07:05
Todd Lipcon 2012-09-03, 07:31
Arun C Murthy 2012-09-03, 07:48
Arun C Murthy 2012-09-03, 07:22
Re: Large feature development
It's unfortunate that certain work, a year after being accepted into the mainline, is being attributed to a single person. A significant amount of work is done by people who are not on the PMC or committers, especially to get it running in production. For those who were associated with running Hadoop before it became synonymous with 'Big Data', stabilizing a major release takes time. With more critical systems dependent on Hadoop, transitioning to a new feature set will take longer. hadoop-0.20 took ~8 months.
IMHO, months after a feature set is accepted into the mainline, it may not be appropriate to question its quality.

In the next couple of months, we are planning to widely deploy the 0.23.3 release by Bobby. As with any major release, I know this is not going to be a smooth ride.

----- Original Message -----
> From: Todd Lipcon <[EMAIL PROTECTED]>
> Cc:
> Sent: Saturday, September 1, 2012 1:20 AM
> Subject: Re: Large feature development
> Thanks for starting this thread, Steve. I think your points below are
> good. I've snipped most of your comment and will reply inline to one
> bit below:
> On Fri, Aug 31, 2012 at 10:07 AM, Steve Loughran
> <[EMAIL PROTECTED]> wrote:
>>  Of the big changes that have worked, they are
>>     1. HDFS 2's HA and ongoing improvements: collaborative dev on the list
>>     with incremental changes going on in trunk, RTC with lots of tests. This
>>     isn't finished, and the test problem there is that functional testing of
>>     all failure modes requires software-controlled fencing devices and switches
>>     -and tests to generate the expected failure space.
> Actually, most of the HDFS HA code has been done on branches. The
> first work that led towards HA was the redesign of the edits logging
> infrastructure -- HDFS-1073. This was a feature branch with about 60
> patches on it. Then HDFS-1623, the main manual-failover HA
> development, had close to 150 patches on the branch. Automatic HA
> (HDFS-3042) was some 15-20 patches. The current work (removing
> dependency on NAS) is around 35 patches in so far and getting close to
> merge.
> In these various branches, we've experimented with a few policies
> which have differed from trunk. In particular:
> - HDFS-1073 had a "modified review then commit" policy, which was
> that, if a patch sat without a review for more than 24hrs, we
> committed it with the restriction that there would be a post-commit
> review before the branch was merged.
> - All of the branches have done away with the requirement of running
> the full QA suite, findbugs, etc prior to commit. This means that the
> branches at times have broken tests checked in, but also makes it
> quicker to iterate on the new feature. Again, the assumption is that
> these requirements are met before merge.
> - In all cases there has been a design doc and some good design
> discussion up front before substantial code was written. This made it
> easier to forge ahead on the branch with good confidence that the
> community was on-board with the idea.
> Given my experiences, I think all of the above are useful to follow.
> It means development can happen quickly, but ensures that when the
> merge is proposed, people feel like the quality meets our normal
> standards.
>>     2. YARN: Arun on his own branch, CTR, merge once mostly stable, and
>>     completely replacing MRv1.
> I'd actually contend that YARN was merged too early. I have yet to see
> anyone running YARN in production, and it's holding up the "Stable"
> moniker for Hadoop 2.0 -- HDFS-wise we are already quite stable and
> I'm seeing fewer issues in our customers running Hadoop HDFS 2
> compared to Hadoop 1-derived code.
>>  How then do we get (a) more dev projects working and integrated by the
>>  current committers, and (b) a process in which people who are not yet
>>  contributors/committers can develop non-trivial changes to the project in a
Arun Murthy 2012-09-01, 22:33