MapReduce >> mail # user >> max number of map/reduce per node


Re: max number of map/reduce per node
Hi,

My reply inline.

On Mon, Feb 11, 2013 at 5:15 PM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:
> Hi
>    I found that my job runs with such parameters:
> mapred.tasktracker.map.tasks.maximum    4
> mapred.tasktracker.reduce.tasks.maximum    2
>
>    I tried to change these parameters from my Java code:
>
>     Properties properties = new Properties();
>     properties.put("mapred.tasktracker.map.tasks.maximum" , "8");
>     properties.put("mapred.tasktracker.reduce.tasks.maximum" , "4");

These properties are per-TaskTracker configuration. Each TaskTracker
daemon reads them from its own config files at startup; they are not
applicable to, or read from, job clients.

Also, even for properties that *can* be set client-side,
java.util.Properties is not the right way to go about it; Hadoop reads
job settings through its Configuration API:
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/conf/Configuration.html
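
As a rough sketch of the difference (class name and the "4" value are
illustrative; requires the Hadoop MR1 jars on the classpath): a
client-side, per-job property such as mapred.reduce.tasks is set via
Configuration/JobConf, while java.util.Properties is never consulted
by Hadoop at all.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;

public class SubmitSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // A property a client CAN set: the reduce-task count for
        // this particular job.
        conf.set("mapred.reduce.tasks", "4");

        JobConf jobConf = new JobConf(conf);
        // Setting mapred.tasktracker.*.tasks.maximum here would have
        // no effect -- those are read only by the TaskTracker daemon.
        System.out.println(jobConf.get("mapred.reduce.tasks"));
    }
}
```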

> But when executing the job I didn't get the updated values of these
> parameters; they remain:
>
> mapred.tasktracker.map.tasks.maximum 4
> mapred.tasktracker.reduce.tasks.maximum 2
>
>
> Should I change the parameters on hadoop XML configuration files?

Yes, as these are per *tasktracker* properties, not client ones.
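
For example, on each TaskTracker node the change would go into
conf/mapred-site.xml (the slot counts below are illustrative, and a
TaskTracker restart is needed for them to take effect):

```xml
<!-- mapred-site.xml on each TaskTracker node -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>
```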

> Please advise.

--
Harsh J