Re: Hadoop build slaves software
Guys, should builds@ be copied on this?

Cheers,
Chris

On 1/4/13 11:15 AM, "Todd Lipcon" <[EMAIL PROTECTED]> wrote:

>I've always liked puppet for distributing config files, but I've always
>thought it was kind of silly for distributing big binaries like toolchains.
>Seems just as easy to make a 15-line shell script to wget, tar xzf, and
>make install.
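For illustration, a minimal sketch of the 15-line script idea: it builds protobuf from
source into a given prefix. The download URL, version, and default prefix below are
placeholders, not the actual build-slave setup.

    #!/bin/bash
    # Sketch: fetch, build, and install protobuf into a prefix (all values are placeholders).
    set -e
    PREFIX=${1:-/usr/local}                           # install prefix, taken as first argument
    URL=https://example.org/protobuf-2.4.1.tar.gz     # hypothetical download URL
    wget -O /tmp/protobuf-2.4.1.tar.gz "$URL"
    tar xzf /tmp/protobuf-2.4.1.tar.gz -C /tmp
    cd /tmp/protobuf-2.4.1
    ./configure --prefix="$PREFIX"
    make && make install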
>
>Definitely agree puppet makes sense for ensuring the right deb
>packages are installed on the hosts, though. Are the machines already
>managed by puppet for that?
>
>-Todd
>
>On Fri, Jan 4, 2013 at 11:10 AM, Konstantin Boudnik <[EMAIL PROTECTED]>
>wrote:
>> Do I hear puppet? :)
>>
>> Cos
>>
>> On Fri, Jan 04, 2013 at 11:08AM, Todd Lipcon wrote:
>>> I agree -- I'd like to see us have a shell script of some sort which,
>>> given a prefix, downloads and installs the needed toolchain
>>> dependencies.
>>>
>>> We could then download that script onto the build machines and install
>>> it into something like /opt/hadoop-toolchain/.
>>> AFAIK the only real dependencies we have where the Ubuntu packages are
>>> too old are protoc and maven, so it shouldn't be too tough.
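A rough sketch of how the same script might also drop Maven's binary tarball under the
shared prefix; the Maven version and download URL here are placeholders, not real
mirror paths.

    #!/bin/bash
    # Sketch: unpack a Maven binary tarball under the toolchain prefix (placeholder values).
    set -e
    PREFIX=${1:-/opt/hadoop-toolchain}
    MAVEN_TGZ=apache-maven-3.0.4-bin.tar.gz                      # placeholder version
    mkdir -p "$PREFIX"
    wget -O "/tmp/$MAVEN_TGZ" "https://example.org/$MAVEN_TGZ"   # hypothetical mirror URL
    tar xzf "/tmp/$MAVEN_TGZ" -C "$PREFIX"                       # Maven needs no compile step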
>>>
>>> -Todd
>>>
>>> On Fri, Jan 4, 2013 at 10:59 AM, Rajiv Chittajallu
>>> <[EMAIL PROTECTED]> wrote:
>>> > asf008 has been up for a while. It was probably just added as a slave.
>>> >
>>> > All the dependencies should probably be installed in a build_prefix, to
>>> > avoid conflicts with OS-specific packages and allow multiple projects to
>>> > build on the same machines. This is a better alternative to provisioning
>>> > VMs for unique builds.
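One way a build job could pick up tools from such a build_prefix without touching OS
packages; the prefix path and Maven directory name below are assumptions, not the
actual slave layout.

    # Sketch: point a build at the shared toolchain prefix instead of system packages.
    export TOOLCHAIN=/opt/hadoop-toolchain                               # assumed prefix
    export PATH="$TOOLCHAIN/bin:$TOOLCHAIN/apache-maven-3.0.4/bin:$PATH" # placeholder Maven dir
    export LD_LIBRARY_PATH="$TOOLCHAIN/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    protoc --version   # should now report the toolchain's protoc, not the OS one
    mvn -version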
>>> >
>>> > -rajive
>>> >
>>> > Giridharan Kesavan wrote on 01/04/13 at 09:31:55 -0800:
>>> >>   I'm on it.
>>> >>
>>> >>   -Giri
>>> >>
>>> >>   On Thu, Jan 3, 2013 at 11:24 PM, Todd Lipcon
>>> >>   <[EMAIL PROTECTED]> wrote:
>>> >>
>>> >>     Hey folks,
>>> >>
>>> >>     It looks like hadoop8 has recently come back online as a build
>>> >>     slave, but is failing all the builds because it has an ancient
>>> >>     version of protobuf (2.2.0):
>>> >>     todd@asf008:~$ protoc --version
>>> >>     libprotoc 2.2.0
>>> >>
>>> >>     In contrast, other slaves have 2.4.1:
>>> >>     todd@asf001:~$ protoc --version
>>> >>     libprotoc 2.4.1
>>> >>
>>> >>     asf001 has the newer protoc in /usr/local/bin but asf008 does not.
>>> >>     Does anyone know how software is meant to be deployed on these
>>> >>     build slaves? I'm happy to download and install protobuf 2.4.1 into
>>> >>     /usr/local on asf008 if manual installation is the name of the game,
>>> >>     but it seems like we should be doing something a little more
>>> >>     reproducible than one-off builds by rando developers to manage our
>>> >>     toolchain on the Jenkins slaves.
>>> >>     -Todd
>>> >>     --
>>> >>     Todd Lipcon
>>> >>     Software Engineer, Cloudera
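For reproducibility, a build could also check the slave's toolchain up front instead of
failing partway through. A sketch only: the required version is taken from the thread,
everything else is an assumption.

    #!/bin/bash
    # Sketch: fail fast if the slave's protoc does not match the expected version.
    REQUIRED=2.4.1
    FOUND=$(protoc --version | awk '{print $2}')   # "libprotoc 2.4.1" -> "2.4.1"
    if [ "$FOUND" != "$REQUIRED" ]; then
      echo "protoc $FOUND found, $REQUIRED required" >&2
      exit 1
    fi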
>>> >>
>>>
>>>
>>>
>>> --
>>> Todd Lipcon
>>> Software Engineer, Cloudera
>
>
>
>--
>Todd Lipcon
>Software Engineer, Cloudera