MapReduce >> mail # user >> Any way to limit the total tasks running on a node in 0.20.2


Any way to limit the total tasks running on a node in 0.20.2
Hadoop can set the maximum number of mappers and reducers running on a node, but under
0.20.2 I do not see a way to stop the system from running mappers and reducers together,
with the total exceeding the individual limits.

I find that when my mappers are about 50% done, the system kicks off reducers. I have
raised the maximum memory in mapred.child.java.opts because I have been hitting GC
limits, and the values work well when I am running 6 mappers OR 6 reducers, but when my
mappers are halfway done I see 6 mappers AND 6 reducers running, and this strains the
total memory on the node.

How can I keep the total number of tasks on a node under control without limiting the
maximum mappers and reducers to half the total I want?
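[The thread as archived shows no reply. One common workaround, not from this thread, is to delay reducer launch with `mapred.reduce.slowstart.completed.maps` in mapred-site.xml, which exists in 0.20.2 (default 0.05, i.e. reducers start once 5% of the maps finish). A sketch, assuming a node with 6 map and 6 reduce slots:]

```xml
<!-- mapred-site.xml (sketch): hold back reducers until all maps complete,
     so 6 mappers and 6 reducers never run on a node at the same time -->
<property>
  <name>mapred.reduce.slowstart.completed.maps</name>
  <value>1.00</value>
</property>
```

The trade-off: with slowstart at 1.00 the shuffle no longer overlaps the map phase, so jobs run longer even though peak per-node memory drops. 0.20.2 itself has no single "total tasks per node" knob; only the separate `mapred.tasktracker.map.tasks.maximum` and `mapred.tasktracker.reduce.tasks.maximum` slot limits.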

--
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com