Map container is assigned default memory size rather than user-configured value, which causes TaskAttempt failure
Hi,

I've been running Terasort on Hadoop-2.0.4.

Every time there is a small number of map failures (like 4 or 5) because those
containers are running beyond the virtual memory limit.

I've set mapreduce.map.memory.mb to a safe value (like 2560MB), so most
TaskAttempts go fine, while the failed maps were assigned the default
1024MB.
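
For context, this is roughly how I'm setting the per-map container memory; the snippet below is a minimal sketch using the standard MRv2 Job API (the job name and the -Xmx heap value are just illustrative placeholders, not taken from my actual setup):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class MapMemoryExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Ask YARN for 2560 MB per map container.
            conf.set("mapreduce.map.memory.mb", "2560");
            // JVM heap is typically set a bit below the container size (illustrative value).
            conf.set("mapreduce.map.java.opts", "-Xmx2048m");

            Job job = Job.getInstance(conf, "terasort-like-job"); // hypothetical job name
            // ... set mapper/reducer classes and input/output paths, then submit:
            // job.waitForCompletion(true);
        }
    }

The same setting can of course come from mapred-site.xml or a -D flag on the command line; the point is that it is applied job-wide, so I would expect every map container to request 2560MB.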

My question is: why are a small number of containers assigned the default
memory size rather than the user-configured value?

Any thoughts?

Thanks,
Manu Zhang