Re: map container is assigned default memory size rather than user-configured, which causes TaskAttempt failure
This should not be the case. All tasks should have the configuration values you specified propagated to them. Are you sure your setup is correct? Is it always the same nodes that run with 1024 MB? Perhaps those nodes have a local mapred-site.xml that overrides your setting?
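For reference, the client-side setting that should be propagated to every task looks like this in mapred-site.xml (the 2560 value is just the figure mentioned below; adjust to your cluster):

```xml
<configuration>
  <!-- Memory requested for each map task's container, in MB.
       If a node-local mapred-site.xml omits or overrides this,
       tasks scheduled there may fall back to the 1024 MB default. -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2560</value>
  </property>
</configuration>
```

Comparing this file across all NodeManager hosts (and the client submitting the job) should show whether a stale copy is the culprit.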
On Tuesday, October 22, 2013 9:09 PM, Manu Zhang <[EMAIL PROTECTED]> wrote:
I've been running Terasort on Hadoop-2.0.4.
Every time, a small number of map tasks (4 or 5) fail because their containers run beyond virtual memory limits.
I've set mapreduce.map.memory.mb to a safe value (2560 MB), so most TaskAttempts go fine, while the failed maps report the default 1024 MB.
My question is: why are a small number of containers assigned the default memory value rather than the user-configured one?
Any thoughts?