I went ahead and purged those server restrictions.
The way the upstream relationship is set up, it actually builds hadoop2
first and shouldn't build hadoop1 unless that succeeds, I think.
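For reference, that kind of ordering is usually expressed with an upstream trigger on the downstream job. A minimal sketch of what the config might look like in a declarative Jenkinsfile (the job name here is hypothetical, not necessarily what our jobs are called):

```groovy
// Hypothetical sketch: make this (hadoop1) job fire only after the
// hadoop2 job finishes successfully. 'accumulo-trunk-hadoop2' is an
// illustrative job name, not the real one.
pipeline {
    agent any
    triggers {
        // Trigger when the named upstream project completes with SUCCESS;
        // a failed hadoop2 build will not kick off this build.
        upstream(upstreamProjects: 'accumulo-trunk-hadoop2',
                 threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}
```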
At this point, it looks like builds are failing in the infra tests.
On Mon, Sep 9, 2013 at 2:15 PM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
> Might as well add those servers back in, see if they're healthier now.
> I also noticed and was wondering about the upstream build. I didn't
> configure it that way, so I don't know if someone did it by accident or
> purposely. Perhaps it's because there's no point in trying to build with
> hadoop 2 if the regular hadoop build fails?
> On Mon, Sep 9, 2013 at 10:52 AM, John Vines <[EMAIL PROTECTED]> wrote:
>> So, the trunk builds had been failing due to rat failures from the
>> change, it seems. I went ahead and changed the behavior to purge the
>> workspace before it pulls, and so far it seems to be working.
>> However, I did notice a few things that I might have missed or that
>> might warrant a change:
>> 1. We skip ubuntu5 for regular trunk and ubuntu4 for hadoop2. I know we
>> started skipping certain boxes because they were consistently misbehaving.
>> Is this still the case, or should we go ahead and remove those checks?
>> 2. The hadoop2 build is upstream of the regular hadoop build. Is there a
>> reason for this?