Re: M/R job to a cluster?
Check the JobTracker web UI at namenode:50030. If the job appears there, it is not running in local mode;
otherwise it is.
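
A quick way to confirm this from the client side is to print the effective configuration before submitting.
A minimal sketch, assuming the Hadoop 1.x mapred API used in this thread (the class name is only an illustration):

import org.apache.hadoop.mapred.JobConf;

public class WhereDoesItRun {
    public static void main(String[] args) {
        // JobConf loads core-site.xml and mapred-site.xml from the classpath.
        JobConf conf = new JobConf();

        // "local" means the LocalJobRunner is used; anything else should be a
        // real JobTracker address such as devubuntu05:9001.
        System.out.println("fs.default.name    = " + conf.get("fs.default.name"));
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
    }
}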

Thanks & Regards,


Shashwat Shriparv

On Sun, Apr 28, 2013 at 1:18 AM, sudhakara st <[EMAIL PROTECTED]> wrote:

> Hello Kevin,
>
> In the case:
>
> JobClient client = new JobClient();
> JobConf conf = new JobConf(WordCount.class);
>
> the job client defaults to the local system and picks up its configuration
> from the Hadoop configuration files found via HADOOP_HOME on the local machine.
>
> If your job configuration looks like this:
>
> Configuration conf = new Configuration();
> conf.set("fs.default.name", "hdfs://name_node:9000");
> conf.set("mapred.job.tracker", "job_tracker_node:9001");
>
> then the job client is pointed at the specified NameNode and JobTracker,
> and the job is submitted to the cluster rather than run locally.
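
For context, a complete driver along these lines might look like the sketch below. It is only an
illustration of the approach Sudhakara describes: it uses the old mapred (Hadoop 1.x) API,
WordCountMapper and WordCountReducer are placeholder names for your own Mapper/Reducer implementations,
and the hostnames and ports must be replaced with those of your cluster.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount");

        // Point the client at the cluster rather than the local defaults.
        // Hostnames and ports are placeholders; use your own NameNode and JobTracker.
        conf.set("fs.default.name", "hdfs://name_node:9000");
        conf.set("mapred.job.tracker", "job_tracker_node:9001");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // Placeholder classes implementing org.apache.hadoop.mapred.Mapper / Reducer.
        conf.setMapperClass(WordCountMapper.class);
        conf.setReducerClass(WordCountReducer.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}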
>
> Regards,
> Sudhakara.st
>
>
> On Sat, Apr 27, 2013 at 2:52 AM, Kevin Burton <[EMAIL PROTECTED]> wrote:
>
>> It is hdfs://devubuntu05:9000. Is this wrong? Devubuntu05 is the name of
>> the host where the NameNode and JobTracker should be running. It is also
>> the host where I am running the M/R client code.
>>
>> On Apr 26, 2013, at 4:06 PM, Rishi Yadav <[EMAIL PROTECTED]> wrote:
>>
>> Check core-site.xml and see the value of fs.default.name. If it has
>> localhost, you are running locally.
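
For reference, the corresponding core-site.xml entry on a clustered setup would look roughly like this;
the hostname and port are taken from Kevin's earlier message and are only an example:

<configuration>
  <property>
    <!-- fs.default.name should point at the NameNode host; if it is left at
         the default or at localhost, jobs stay on the local machine. -->
    <name>fs.default.name</name>
    <value>hdfs://devubuntu05:9000</value>
  </property>
</configuration>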
>>
>>
>>
>>
>> On Fri, Apr 26, 2013 at 1:59 PM, <[EMAIL PROTECTED]> wrote:
>>
>>> I suspect that my MapReduce job is being run locally. I don't have any
>>> evidence, but I am not sure how the specifics of my configuration are
>>> communicated to the Java code that I write. Based on what I have read
>>> online, I basically start with code like:
>>>
>>> JobClient client = new JobClient();
>>> JobConf conf = new JobConf(WordCount.class);
>>> . . . . .
>>>
>>> Where do I communicate the configuration information so that the M/R job
>>> runs on the cluster and not locally? Or is the configuration location
>>> "magically determined"?
>>>
>>> Thank you.
>>>
>>
>>
>
>
> --
>
> Regards,
> .....  Sudhakara.st
>
>