I'm somewhat unclear on what is prompting this question (so I can't
answer it more specifically), but my response is below:
On Tue, Apr 9, 2013 at 9:05 AM, John Meza <[EMAIL PROTECTED]> wrote:
> The modes for Hadoop are Standalone, Pseudo-Distributed and Fully
> Distributed. It is configured for Pseudo- or Fully Distributed via
> configuration files, but defaults to Standalone otherwise (correct?).
The mapred-default.xml we ship has "mapred.job.tracker"
(0.20.x/1.x/0.22.x) set to "local", or "mapreduce.framework.name"
(0.23.x/2.x/trunk) set to "local". This is why, without reconfiguring
an installation to point at a proper cluster (JT or YARN), the local
job runner gets activated.
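For example, on 2.x you would switch off the local runner by pointing
the framework at YARN in mapred-site.xml. A minimal sketch (the
property name is the one mentioned above; "yarn" is the standard
alternative to the shipped "local" default):

```xml
<!-- mapred-site.xml: overrides the shipped default of "local" -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```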
> Question about the -defaulting- mechanism:
> -Does it get the -default- configuration via one of the config files?
For any Configuration-type invocation:
1. The first level of defaults comes from the *-default.xml files
embedded inside the various relevant jars.
2. Configurations found in classpath resource XMLs (the core-, mapred-,
hdfs- and yarn- *-site.xml files) are then applied on top of those
defaults.
3. User applications' code may then override this set with any
settings of its own, if needed.
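The three levels above can be sketched in plain Java, using
java.util.Properties chains to stand in for Hadoop's actual
Configuration class (this is an illustration of the layering order, not
Hadoop's implementation; the property name is taken from the thread):

```java
import java.util.Properties;

public class LayeredConfigSketch {
    public static void main(String[] args) {
        // Level 1: defaults shipped inside the jar (*-default.xml).
        Properties defaults = new Properties();
        defaults.setProperty("mapreduce.framework.name", "local");

        // Level 2: site configuration found on the classpath
        // (*-site.xml), layered on top of the defaults.
        Properties site = new Properties(defaults);
        site.setProperty("mapreduce.framework.name", "yarn");

        // Level 3: user application code may override either layer.
        Properties user = new Properties(site);
        user.setProperty("mapreduce.framework.name", "local");

        // Lookups fall through to the layer below when a key is unset.
        System.out.println(site.getProperty("mapreduce.framework.name"));
        // prints "yarn"
        System.out.println(user.getProperty("mapreduce.framework.name"));
        // prints "local"
    }
}
```

A lookup that is unset at the top layer falls through to the layer
beneath it, which is why an unconfigured installation ends up with the
shipped "local" default.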
> -Or does it get the -default- configuration via hard-coded values?
There may be a few cases of hard-coded defaults that are missing from
the documentation and from the *-default.xml files, but even those
should still be configurable via (2) and (3).
> -Or another mechanism?