The mapred execution engine is selected in the Cluster.java source: each Service implementation is scanned in turn, and the one matching the configuration property "mapreduce.framework.name" is chosen.
But how and where do the JDK Service implementations that encapsulate this information get packaged into the Hadoop jars? Is there a generic way the JDK Service API is implemented in the Hadoop build?
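For reference, my understanding is that this is the standard JDK ServiceLoader pattern: a jar registers its providers in a text file under META-INF/services/ named after the service interface, and ServiceLoader discovers any provider visible on the classpath. Below is a minimal, self-contained sketch of that pattern. The Engine/LocalEngine/YarnEngine names and the "framework" matching are made-up stand-ins to illustrate the mechanism, not Hadoop's actual classes; the demo writes the descriptor file at runtime in place of a packaged jar:

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ServiceLoader;

public class ServiceLoaderDemo {

    // Hypothetical stand-ins for the service interface and its providers.
    public interface Engine {
        String name();
    }

    public static class LocalEngine implements Engine {
        public String name() { return "local"; }
    }

    public static class YarnEngine implements Engine {
        public String name() { return "yarn"; }
    }

    public static void main(String[] args) throws IOException {
        // Simulate what a jar would ship: a META-INF/services descriptor
        // whose file name is the service interface and whose lines are
        // the fully-qualified provider class names.
        Path root = Files.createTempDirectory("svc-demo");
        Path services = root.resolve("META-INF").resolve("services");
        Files.createDirectories(services);
        Files.write(
            services.resolve(Engine.class.getName()),
            (LocalEngine.class.getName() + "\n" + YarnEngine.class.getName() + "\n")
                .getBytes(StandardCharsets.UTF_8));

        // A classloader that can see the descriptor; the provider classes
        // themselves resolve through the parent classloader.
        try (URLClassLoader loader = new URLClassLoader(
                new URL[] { root.toUri().toURL() },
                ServiceLoaderDemo.class.getClassLoader())) {

            // Plays the role of the configuration property that selects
            // among the discovered providers.
            String framework = "yarn";
            Engine selected = null;
            for (Engine candidate : ServiceLoader.load(Engine.class, loader)) {
                if (candidate.name().equals(framework)) {
                    selected = candidate;
                    break;
                }
            }
            System.out.println("selected=" + (selected == null ? "none" : selected.name()));
        }
    }
}
```

If this is the mechanism in play, then the packaging question reduces to: which build step places the META-INF/services descriptor into each Hadoop jar, and which module owns each provider entry?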