0.23 & trunk tars, will we be publishing 1 tar per component or a single tar? What about source tar?
Alejandro Abdelnur 2011-10-12, 16:07
Currently common, hdfs and mapreduce each create a partial tar that is not
usable unless the tars are stitched together into a single tar.

With HADOOP-7642 the stitching happens as part of the build.

The build currently produces the following tars:

#1 common TAR
#2 hdfs (partial) TAR
#3 mapreduce (partial) TAR
#4 hadoop (full, the stitched one) TAR

#1 on its own does not run anything, and #2 and #3 on their own don't run
either; #4 runs both HDFS and MapReduce.
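
With HADOOP-7642 in, building the stitched tar would be a single Maven
invocation along these lines (the exact profile/flag names and output path
may still change, so take this as a sketch rather than the final invocation):

  $ mvn clean package -Pdist -Dtar -DskipTests
  # the full (stitched) tar ends up under the hadoop-dist module, e.g.
  #   hadoop-dist/target/hadoop-<version>.tar.gz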

Questions:

Q1. Does it make sense to publish #1, #2 & #3? Or is #4 sufficient, so you
just start the services you want (e.g. HBase would just use HDFS)?
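
(For Q1, this is the kind of usage I have in mind: a project that only needs
HDFS would take #4 and simply never start MapReduce. The script names below
assume the current 0.23 layout, so treat this as a sketch.)

  $ tar xzf hadoop-<version>.tar.gz && cd hadoop-<version>
  $ bin/hdfs namenode -format        # one-time HDFS format
  $ sbin/start-dfs.sh                # start HDFS only; MapReduce is never started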

Q2. And what about a source TAR: does it make sense to have a source TAR per
component, or a single TAR for the whole thing?
For simplicity (for the build system and for users) I'd prefer a single
binary TAR and a single source TAR.

Thanks.

Alejandro