

Re: follow up Hadoop mavenization work
Joep,

Ivy & Maven pull JARs from the maven repos you specify.

Maven verifies checksums, and I assume Ivy does too.
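
If you want the build to hard-fail on a bad checksum rather than just warn,
Maven's strict-checksums flag does that (a standard flag; sketch only):

    # fail the build if a downloaded artifact's checksum does not match
    mvn --strict-checksums clean install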

You could turn your verified ~/.m2 into a Maven proxy and switch off fetching
of JARs not found in the proxy cache.
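
A minimal sketch of that setup, assuming the verified repository lives at
/home/build/verified-m2 (the path and file name are made up; <offline> and
<localRepository> are standard Maven settings elements):

    # point Maven at the pre-verified local repo and forbid remote downloads
    cat > settings-offline.xml <<'EOF'
    <settings>
      <localRepository>/home/build/verified-m2</localRepository>
      <offline>true</offline>
    </settings>
    EOF
    mvn -s settings-offline.xml clean install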

Bottom line: for your concerns, Ivy and Maven are equally good or bad.

Thanks.

Alejandro

On Fri, Jul 29, 2011 at 5:09 PM, Rottinghuis, Joep <[EMAIL PROTECTED]> wrote:

> Thanks for the replies.
>
> To elaborate on why I want to build on a server w/o Internet access: the
> build should not reach out to the Internet and grab jars from unverified
> sources w/o an md5 hash check, etc.
> The resulting code will run on a large production cluster with
> sensitive/private data. From a compliance and risk perspective I want to be
> able to control which jars get pulled in from where.
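>
> (As an illustration, every download can be pinned to a single audited
> repository with a mirror entry in settings.xml; the host name below is
> made up:)
>
>     # route all artifact downloads through one internal repository
>     cat > ~/.m2/settings.xml <<'EOF'
>     <settings>
>       <mirrors>
>         <mirror>
>           <id>internal</id>
>           <mirrorOf>*</mirrorOf>
>           <url>https://repo.internal.example/maven2</url>
>         </mirror>
>       </mirrors>
>     </settings>
>     EOF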
>
> Manual verification of ~/.m2, then tar.gz and scp to the build server, is an
> acceptable workaround.
> A Maven proxy simply bypasses the firewalls, which are there for good reason.
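>
> That workaround, roughly (the host name and paths are placeholders):
>
>     # on a machine with Internet access, after verifying ~/.m2 contents
>     tar czf m2-repo.tar.gz -C ~ .m2/repository
>     scp m2-repo.tar.gz build-server:/tmp/
>     # on the build server, which has no Internet access
>     tar xzf /tmp/m2-repo.tar.gz -C ~
>     mvn -o clean install    # -o = offline: never touch the network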
>
> Looking forward to trying this all on trunk after the patch is committed.
> Until then I'll work on making this function on 0.22.
>
> Thanks,
>
> Joep
>
> -----Original Message-----
> From: Steve Loughran [mailto:[EMAIL PROTECTED]]
> Sent: Friday, July 29, 2011 8:32 AM
> To: [EMAIL PROTECTED]
> Subject: Re: follow up Hadoop mavenization work
>
> On 29/07/11 03:10, Rottinghuis, Joep wrote:
> > Alejandro,
> >
> > Are you trying the use case where people want to locally build a
> > consistent set of common, hdfs, and mapreduce without the downstream
> > projects depending on published Maven SNAPSHOTs?
> > I'm working to get this going on 0.22 right now (see HDFS-843, HDFS-2214,
> > and I'll have to file two equivalent bugs on mapreduce).
> >
> > Part of the problem is the assumption that people always compile
> > hdfs against hadoop-common-0.xyz-SNAPSHOT.
> > When applying one patch at a time from Jira attachments that may be fine.
> >
> > If I set up a Jenkins build I will want to make sure that first
> > hadoop-common builds with a new build number (not a snapshot), then hdfs
> > against that same build number, then mapreduce against hadoop-common and
> > hdfs.
> > Otherwise you can get a situation where the mapreduce build is still
> > running while the hadoop-common build has already produced a new snapshot.
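> >
> > Once the mavenized build lands, something along these lines could enforce
> > that ordering (directory names, the build number, and the version property
> > are illustrative, not the actual ones):
> >
> >     # 1. stamp common with a concrete build number and install it
> >     cd hadoop-common
> >     mvn versions:set -DnewVersion=0.22.0-build42
> >     mvn clean install
> >     # 2. build hdfs against that exact version rather than a SNAPSHOT
> >     cd ../hadoop-hdfs
> >     mvn clean install -Dhadoop-common.version=0.22.0-build42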
> >
> > Local caching in the ~/.m2 and ~/.ivy2 repos makes this situation even
> > more complex.
>
> One option here is to set up more than one virtual machine (the CentOS 6.0
> minimal installs are pretty lightweight), delegate work to those Jenkins
> instances, force different branches onto different virtual hosts, and have
> Jenkins build stuff serially on a single machine. That ensures a strict
> order and isolates you. You can even have Ant targets to purge the
> repository caches.
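>
> (The shell equivalent of such a purge would be something like:)
>
>     # wipe both dependency caches so the next build resolves from scratch
>     rm -rf ~/.m2/repository ~/.ivy2/cache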
>
> I have some CentOS VMs set up to do release work on my desktop, as that
> ensures I never release under-development code; the functional test runs
> don't interfere with my desktop test runs, and I can keep editing the code.
> It works OK if you have enough RAM and disk to spare.
>
> -steve
>
>