On Fri, Feb 18, 2011 at 4:23 PM, Eric Sammer <[EMAIL PROTECTED]> wrote:
> You have a few options. You can:
> 1. Package dependent jars in a lib/ directory of the jar file.
> 2. Use something like Maven's assembly plugin to build a self-contained jar.
> Either way, I'd strongly recommend using something like Maven to build your
> artifacts so they're reproducible and in line with commonly used tools. Hand
> packaging files tends to be error prone. This is less of a Hadoop-ism and
> more of a general Java development issue, though.
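A minimal sketch of Eric's second option: the Maven Assembly Plugin's built-in `jar-with-dependencies` descriptor bundles all compile-scope dependencies into one jar. Plugin version and any `mainClass` setting are left out here; adjust for your own pom.

```xml
<!-- In pom.xml under <build><plugins>; a minimal configuration sketch -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <!-- built-in descriptor that unpacks dependencies into the jar -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>
```

Running `mvn assembly:single` (or binding the `single` goal to the `package` phase) then produces a `*-jar-with-dependencies.jar` suitable for `hadoop jar`.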
> On Fri, Feb 18, 2011 at 5:18 PM, Mark Kerzner <[EMAIL PROTECTED]> wrote:
>> I have a script that I use to re-package all the jars (which NetBeans outputs
>> in the dist directory) - it structures everything correctly into a
>> single jar for running a MapReduce job. Here it is below, but I am not sure
>> if it is the best practice. Besides, it hard-codes my paths. I am sure
>> there is a better way.
>> # to be run from the project directory
>> cd ../dist
>> jar -xf MR.jar                                      # unpack the NetBeans-built jar
>> jar -cmf META-INF/MANIFEST.MF /home/mark/MR.jar *   # repack everything, keeping the manifest
>> cd ../bin
>> echo "Repackaged for Hadoop"
>> Thank you,
> Eric Sammer
> twitter: esammer
> data: www.cloudera.com