Re: M/R job to a cluster?
sudhakara st 2013-04-27, 19:48
In this case:
JobClient client = new JobClient();
JobConf conf = new JobConf(WordCount.class);
the job client (which defaults to the local system) picks up its configuration
by referring to HADOOP_HOME on the local machine.
If your job configuration is like this:
*Configuration conf = new Configuration();*
it picks up the configuration by referring to HADOOP_HOME of the specified
namenode and job tracker.
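Concretely, whether the job runs locally or on the cluster comes down to the
configuration files on the client's classpath (under HADOOP_HOME/conf or
HADOOP_CONF_DIR). A minimal sketch of the two relevant entries, using the
hostname from this thread — the JobTracker port 9001 is an assumption here,
so check your own mapred-site.xml:

```xml
<!-- core-site.xml: where HDFS lives -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://devubuntu05:9000</value>
  </property>
</configuration>

<!-- mapred-site.xml: where the JobTracker lives (port assumed) -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>devubuntu05:9001</value>
  </property>
</configuration>
```

The same properties can also be set programmatically before submitting, e.g.
conf.set("fs.default.name", "hdfs://devubuntu05:9000") and
conf.set("mapred.job.tracker", "devubuntu05:9001") on the JobConf, which
overrides whatever the XML files say.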
On Sat, Apr 27, 2013 at 2:52 AM, Kevin Burton <[EMAIL PROTECTED]> wrote:
> It is hdfs://devubuntu05:9000. Is this wrong? Devubuntu05 is the name of
> the host where the NameNode and JobTracker should be running. It is also
> the host where I am running the M/R client code.
> On Apr 26, 2013, at 4:06 PM, Rishi Yadav <[EMAIL PROTECTED]> wrote:
> Check core-site.xml and see the value of fs.default.name. If it has localhost,
> you are running locally.
> On Fri, Apr 26, 2013 at 1:59 PM, <[EMAIL PROTECTED]> wrote:
>> I suspect that my MapReduce job is being run locally. I don't have any
>> evidence, but I am not sure how the specifics of my configuration are
>> communicated to the Java code that I write. Based on the text I have read
>> online, I basically start with code like:
>> JobClient client = new JobClient();
>> JobConf conf = new JobConf(WordCount.class);
>> . . . . .
>> Where do I communicate the configuration information so that the M/R job
>> runs on the cluster and not locally? Or is the configuration location
>> "magically determined"?
>> Thank you.