Hadoop, mail # general - [VOTE] Should we release 0.20.2-rc4 ?


Re: [VOTE] Should we release 0.20.2-rc4 ?
Chris Douglas 2010-02-26, 09:28
With 4 +1s, the vote passes. I'll push out the release. -C

On Tue, Feb 23, 2010 at 2:27 PM, Chris Douglas <[EMAIL PROTECTED]> wrote:
> The rules are here:
>
> http://www.apache.org/foundation/voting.html#ReleaseVotes
>
> This says that one cannot veto a release, but 3 days should be
> sufficient for everyone to test it out and find problems. If nobody
> finds anything sufficient to block the release, I'll try to push it on
> Wednesday or Thursday. -C
>
> On Tue, Feb 23, 2010 at 2:12 PM, Todd Lipcon <[EMAIL PROTECTED]> wrote:
>> Hi Steve,
>>
>> I believe the process is that there need to be 3 "+1" votes from PMC
>> members. I'm not sure if the one who rolled the release counts as one
>> of the binding +1 votes. If so, we should have the requisite number,
>> and we just need to wait a few days to be sure there are no -1s before
>> closing the vote. If not, we need one more PMC +1.
>>
>> So, to answer your question, I would guess < 1 week.
>>
>> Thanks
>> -Todd
>>
>>
>> On Tue, Feb 23, 2010 at 2:08 PM, Stephen Watt <[EMAIL PROTECTED]> wrote:
>>> Bit of a noob question here, but I haven't been around long enough to have
>>> observed the full lifecycle of a point release candidate yet. Given the
>>> generally positive feedback about 0.20.2 rc4, how close are we to actually
>>> releasing it? Is it 2 days, 1 week, 2 weeks?
>>>
>>> Kind regards
>>> Steve Watt
>>>
>>>
>>>
>>> From: Chris Douglas <[EMAIL PROTECTED]>
>>> To: [EMAIL PROTECTED]
>>> Date: 02/23/2010 03:34 PM
>>> Subject: Re: [VOTE] Should we release 0.20.2-rc4 ?
>>>
>>>
>>>
>>> I looked around, and found this repository of public keys:
>>>
>>> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>>>
>>> I'd be happy to upload my key elsewhere if necessary; I found no
>>> documentation on that point.
>>>
>>> We'll inevitably roll a 0.20.3, which, I agree, should include
>>> MAPREDUCE-587. -C
>>>
>>> On Tue, Feb 23, 2010 at 10:50 AM, Todd Lipcon <[EMAIL PROTECTED]> wrote:
>>>> Tested download with md5: 8f40198ed18bef28aeea1401ec536cb9
>>>> Tried to verify the GPG signature, but Chris is not in
>>>> http://download.nextag.com/apache/hadoop/core/KEYS - he should be
>>>> added there if he is going to sign releases.
>>>>
>>>> I ran unit tests on my machine at home - TestStreamingExitStatus
>>>> failed with an OOME. I think it's exactly MAPREDUCE-587. Aside from
>>>> that, all unit tests passed. I also ran a few jobs on a
>>>> pseudo-distributed cluster and it worked fine.
>>>>
>>>> Since this is just a test bug, and in contrib, I think we should
>>>> release anyway. Meanwhile let's commit MAPREDUCE-587 to branch-20
>>>> before the next release. I'll reopen that JIRA.
>>>>
>>>> So, [non-binding] +1 from me. Thanks for the hard work, Chris.
>>>>
>>>> -Todd
>>>>
>>>> On Mon, Feb 22, 2010 at 11:52 PM, Dhruba Borthakur <[EMAIL PROTECTED]>
>>> wrote:
>>>>> +1. Looks good to me. Ran unit tests.
>>>>>
>>>>> -dhruba
>>>>>
>>>>>
>>>>> On Fri, Feb 19, 2010 at 11:08 AM, Stack <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> +1
>>>>>>
>>>>>> I put it up on a small cluster under load.  Seems to work fine.  Trolled
>>>>>> logs a while.  Nothing out of the ordinary.  Checked docs.  They look
>>>>>> grand.
>>>>>>
>>>>>> St.Ack
>>>>>>
>>>>>> On Fri, Feb 19, 2010 at 1:07 AM, Chris Douglas <[EMAIL PROTECTED]>
>>>>>> wrote:
>>>>>> > There are now only two consistently failing testcases in my
>>>>>> > environment, both in the capacity-scheduler contrib module:
>>>>>> >
>>>>>> > org.apache.hadoop.mapred.TestJobInitialization
>>>>>> > org.apache.hadoop.mapred.TestQueueCapacities
>>>>>> >
>>>>>> > neither of which is a regression from 0.20.1.
>>>>>> >
>>>>>> > http://people.apache.org/~cdouglas/0.20.2-rc4
>>>>>> >
>>>>>> > Please try it out. -C
>>>>>> >
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Connect to me at http://www.facebook.com/dhruba
>>>>>
>>>>
>>>
>>>
>>>
>>
>
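The verification steps Todd describes (checking the md5 checksum, then verifying the GPG signature against a key published in the KEYS file) can be sketched roughly as follows. This is an illustrative sketch: the filenames are stand-ins, not the actual release artifacts, and the signature step is shown in comment form since it needs the real tarball and signing key.

```shell
# Stand-in for a downloaded release tarball (illustrative only).
echo "release contents" > artifact.tar.gz

# Publish side: record the checksum, as a release manager would.
md5sum artifact.tar.gz > artifact.tar.gz.md5

# Verify side: md5sum -c re-hashes the file and reports "artifact.tar.gz: OK"
# when it matches the recorded checksum.
md5sum -c artifact.tar.gz.md5

# Signature check as described in the thread (requires the real artifacts and
# the signer's public key imported from the project's KEYS file, so it is
# commented out in this sketch):
#   gpg --import KEYS
#   gpg --verify hadoop-0.20.2.tar.gz.asc hadoop-0.20.2.tar.gz
```

Note that a matching checksum only confirms the download was not corrupted; the GPG signature is what ties the artifact to the release manager, which is why the thread asks for Chris's key to be added to the KEYS file.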