This is handled by the Maven reactor.
When you run Maven in a multi-module project (like we have), all modules
that are part of the build (from the dir where you are, down) are used for
the build/test/packaging; all modules that are not part of the build are
picked up from the .m2 repo.
"cd trunk/hadoop-mapreduce; mvn compile" uses hadoop-common & hadoop-hdfs
from the .m2 repo.
"cd trunk; mvn compile" uses hadoop-common, hadoop-hdfs and hadoop-mapreduce
from the build.
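A sketch of the two cases above (module paths as in trunk; the `-pl`/`-am`
variant at the end is standard Maven, not something specific to our build):

```shell
# Case 1: build only the MapReduce module. The reactor contains just
# hadoop-mapreduce, so hadoop-common and hadoop-hdfs are resolved as
# prebuilt artifacts from the local ~/.m2 repository.
cd trunk/hadoop-mapreduce
mvn compile

# Case 2: build from the top. The reactor now contains all modules, so
# hadoop-common, hadoop-hdfs and hadoop-mapreduce are compiled together
# and consumed directly from this build.
cd trunk
mvn compile

# Maven can also build one module plus the sibling modules it depends on,
# from the top-level dir: -pl selects the module, -am (--also-make) pulls
# its reactor dependencies into the build.
cd trunk
mvn -pl hadoop-mapreduce -am compile
```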
On Thu, Aug 18, 2011 at 4:35 PM, Matt Foley <[EMAIL PROTECTED]> wrote:
> Since we put all the effort into "un-splitting" the components, shouldn't
> we have a switch
> that causes, e.g., the MAPREDUCE build to pick up artifacts from COMMON and
> HDFS builds
> in specified sibling directories, without using m2 as an intermediary?
> Of course it should respect dependencies (via maven) so that if HDFS source
> has been modified,
> the HDFS artifacts will also be rebuilt before MAPREDUCE uses them :-)
> On Thu, Aug 18, 2011 at 3:30 PM, Giridharan Kesavan <
> [EMAIL PROTECTED]> wrote:
> > Hello,
> > It's the same -Dresolvers=internal for the ant build system. For the
> > maven/yarn build system, as long as you have the latest common jar in
> > the m2 cache, it's going to resolve common from the maven cache; if not,
> > from the apache maven repo. You can force the builds to use the cache
> > by adding the -o option (offline builds).
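> > A sketch of that workflow: install your modified common into the local
> > m2 cache, then build MR offline so it resolves from the cache (module
> > paths are assumptions):

```shell
# Install the locally modified common jar into ~/.m2 (skipping tests
# for speed).
cd trunk/hadoop-common
mvn install -DskipTests

# Build MapReduce offline (-o), so common is resolved from the local
# m2 cache rather than the apache maven repo.
cd ../hadoop-mapreduce
mvn compile -o
```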
> > Thanks,
> > Giri
> > On Thu, Aug 18, 2011 at 3:19 PM, Eli Collins <[EMAIL PROTECTED]> wrote:
> > > Hey gang,
> > >
> > > What's the new equivalent of resolvers=true in the new MR build? i.e.,
> > > how do you get a local common change picked up by MR?
> > >
> > > Thanks,
> > > Eli
> > >