0.23 & trunk tars: will we be publishing one tar per component or a single tar? What about a source tar?
Currently common, hdfs and mapred each create partial tars that are not usable
unless they are stitched together into a single tar.
With HADOOP-7642 the stitching happens as part of the build.
The build currently produces the following tars:
1. common TAR
2. hdfs (partial) TAR
3. mapreduce (partial) TAR
4. hadoop (full, the stitched one) TAR
#1 on its own does not run anything, and #2 and #3 on their own don't run
either. #4 runs both HDFS and MapReduce.
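The stitching that HADOOP-7642 does as part of the build can be sketched roughly as follows; all file and directory names here are invented stand-ins for illustration, not the actual build layout:

```shell
set -e
mkdir -p work && cd work

# Stand-ins for the partial tars each component build produces (#1-#3).
for c in common hdfs mapreduce; do
  mkdir -p "hadoop-$c/lib"
  echo "$c" > "hadoop-$c/lib/$c.jar"
  tar czf "hadoop-$c.tar.gz" "hadoop-$c"
done

# "Stitch": overlay each partial tree into one full distribution (#4),
# then package the merged tree as the single hadoop TAR.
mkdir -p hadoop-full
for c in common hdfs mapreduce; do
  tar xzf "hadoop-$c.tar.gz" --strip-components=1 -C hadoop-full
done
tar czf hadoop-full.tar.gz hadoop-full

# The full tar now contains the union of the component trees.
tar tzf hadoop-full.tar.gz
```

The point of the sketch is that the partial tars only make sense as inputs to this overlay step, which is why publishing them individually is of questionable value.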
Q1. Does it make sense to publish #1, #2 & #3? Or is #4 sufficient, and you
start only the services you want (e.g. HBase would just use HDFS)?
Q2. And what about a source TAR: does it make sense to have a source TAR per
component, or a single TAR for the whole?
For simplicity (for the build system and for users) I'd prefer a single
binary TAR and a single source TAR.
Prashant Sharma 2011-10-12, 16:30
Doug Cutting 2011-10-12, 17:28
Ravi Teja 2011-10-13, 04:43
Steve Loughran 2011-10-13, 12:52
Bharath Mundlapudi 2011-10-14, 18:58
giridharan kesavan 2011-10-12, 17:25
Uma Maheswara Rao G 72686... 2011-10-13, 05:11