mapred.tasktracker.map.tasks.maximum
Hi,

I have a cluster with 4 nodes, each with 32 cores. My default value
for the maximum number of concurrent map tasks per node (TaskTracker) is 1:

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <!-- see other kb entry about this one. -->
    <value>1</value>
    <final>true</final>
  </property>
(which I think is wrong).

However, when sizable jobs run, I see 65 mappers working, so it seems that
more than 1 mapper per node does get created.

Questions: what maximum number of mappers would be appropriate in this
situation, and is this the right way to set it?
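
For reference, this is the kind of change I had in mind; the values below are
only a sketch based on the common rule of thumb of roughly one slot per core,
split between map and reduce slots, and are assumptions I have not verified
for my workload:

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <!-- hypothetical value for a 32-core node: leave some cores
         for the TaskTracker/DataNode daemons and for reduce slots -->
    <value>24</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <!-- hypothetical value; reduce slots are typically fewer than map slots -->
    <value>8</value>
  </property>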

Thank you. Sincerely,
Mark