Hadoop >> mail # user >> How to package multiple jars for a Hadoop job


How to package multiple jars for a Hadoop job
Hi,

I have a script that I use to re-package all the jars that NetBeans outputs
into the dist directory; it structures everything correctly into a single jar
for running a MapReduce job. The script is below, but I am not sure it is
best practice, and it hard-codes my paths. I am sure there is a better way.

#!/bin/sh
# to be run from the project directory
cd ../dist
jar -xf MR.jar                                      # unpack the NetBeans-built jar in place
jar -cmf META-INF/MANIFEST.MF /home/mark/MR.jar *   # re-pack everything (hard-coded output path)
cd ../bin
echo "Repackaged for Hadoop"

Thank you,
Mark