MapReduce >> mail # user >> Re: M/R job to a cluster?


Re: M/R job to a cluster?
To check whether your jobs are running locally, look for the class name
"LocalJobRunner" in the runtime output.
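Another quick check, sketched here for the old mapred API on Hadoop 1.x: the
mapred.job.tracker property defaults to "local", and that default is what makes
Hadoop fall back to LocalJobRunner instead of submitting to a cluster.

```java
import org.apache.hadoop.mapred.JobConf;

public class CheckRunner {
    public static void main(String[] args) {
        // With no *-site.xml on the classpath and no overrides,
        // mapred.job.tracker falls back to its default, "local",
        // which means the job runs in-process via LocalJobRunner.
        JobConf conf = new JobConf();
        String tracker = conf.get("mapred.job.tracker", "local");
        System.out.println("mapred.job.tracker = " + tracker);
        if ("local".equals(tracker)) {
            System.out.println("Jobs will run with LocalJobRunner");
        }
    }
}
```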

Configs are sourced from the classpath (if a directory or jar on the
classpath has the XML config files at its root, they are read), set
programmatically (conf.set("mapred.job.tracker", "foo:349");), or passed
as -D parameters if your driver implements Tool.

The tool + classpath way is usually the best thing to do, for flexibility.
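A minimal driver using that pattern might look like the sketch below (the class
name, job name, and input/output paths are placeholders; no mapper or reducer
is set, so Hadoop's identity defaults apply):

```java
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already carries any -D overrides and any *-site.xml
        // found on the classpath, so no tracker address is hard-coded here.
        JobConf conf = new JobConf(getConf(), WordCountDriver.class);
        conf.setJobName("wordcount");
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses generic options such as -D and -conf
        // before handing the remaining arguments to run().
        System.exit(ToolRunner.run(new WordCountDriver(), args));
    }
}
```

With a driver like this you can point the same jar at a cluster without
recompiling, e.g. `hadoop jar wordcount.jar WordCountDriver
-D mapred.job.tracker=foo:349 in out` (the jar name and paths are examples).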

On Sat, Apr 27, 2013 at 2:29 AM,  <[EMAIL PROTECTED]> wrote:
> I suspect that my MapReduce job is being run locally. I don't have any
> evidence but I am not sure how the specifics of my configuration are
> communicated to the Java code that I write. Based on the text that I have
> read online basically I start with code like:
>
> JobClient client = new JobClient();
> JobConf conf = new JobConf(WordCount.class);
> . . . . .
>
> Where do I communicate the configuration information so that the M/R job
> runs on the cluster and not locally? Or is the configuration location
> "magically determined"?
>
> Thank you.

--
Harsh J