map container is assigned default memory size rather than user configured which will cause TaskAttempt failure


Re: map container is assigned default memory size rather than user configured which will cause TaskAttempt failure
Hi,

How about checking the value of mapreduce.map.java.opts? Are your JVMs
being launched with the heap size you expect?
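
For reference, a minimal mapred-site.xml sketch of how the two settings relate
(the values here are illustrative assumptions, not a recommendation): the JVM heap
in mapreduce.map.java.opts should stay comfortably below the container size in
mapreduce.map.memory.mb, otherwise the task can be killed for exceeding its container.

  <!-- illustrative values only; tune them for your cluster -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2560</value>      <!-- container size requested from YARN for each map task -->
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2048m</value> <!-- heap given to the map JVM; keep it below the container size -->
  </property>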

On Thu, Oct 24, 2013 at 11:31 AM, Manu Zhang <[EMAIL PROTECTED]> wrote:
> Just confirmed that the problem still exists even though the mapred-site.xml files on
> all nodes have the same configuration (mapreduce.map.memory.mb = 2560).
>
> Any more thoughts?
>
> Thanks,
> Manu
>
>
> On Thu, Oct 24, 2013 at 8:59 AM, Manu Zhang <[EMAIL PROTECTED]> wrote:
>>
>> Thanks Ravi.
>>
>> I do have mapred-site.xml under /etc/hadoop/conf/ on those nodes, but it
>> seems odd to me that they would read configuration from those mapred-site.xml
>> files, since it is the client that applies for the resources. I have another
>> mapred-site.xml in the directory where I run my job. I suppose my job should
>> read its conf from that mapred-site.xml. Please correct me if I am mistaken.
>>
>> Also, it is not always the same nodes. The number of failures is random, too.
>>
>> Anyway, I will put my settings in all the nodes' mapred-site.xml and see
>> if the problem goes away.
>>
>> Manu
>>
>>
>> On Thu, Oct 24, 2013 at 1:40 AM, Ravi Prakash <[EMAIL PROTECTED]> wrote:
>>>
>>> Manu!
>>>
>>> This should not be the case. All tasks should have the configuration
>>> values you specified propagated to them. Are you sure your setup is correct?
>>> Are they always the same nodes which run with 1024MB? Perhaps you have a
>>> different mapred-site.xml on those nodes?
>>>
>>> HTH
>>> Ravi
>>>
>>>
>>> On Tuesday, October 22, 2013 9:09 PM, Manu Zhang
>>> <[EMAIL PROTECTED]> wrote:
>>> Hi,
>>>
>>> I've been running Terasort on Hadoop-2.0.4.
>>>
>>> Every time there is a small number of map failures (like 4 or 5)
>>> because of containers running beyond virtual memory limits.
>>>
>>> I've set mapreduce.map.memory.mb to a safe value (like 2560MB), so most
>>> TaskAttempts go fine, while the values of those failed maps are the default
>>> 1024MB.
>>>
>>> My question is thus: why are a small number of containers' memory values
>>> set to the default rather than the user-configured value?
>>>
>>> Any thoughts?
>>>
>>> Thanks,
>>> Manu Zhang
>>>
>>>
>>>
>>
>
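
Regarding the "running beyond virtual memory limits" message in the quoted report:
that limit is enforced by the NodeManager, and as a rough sketch (assuming the
standard YARN 2.x property names and their default values, shown below as they
would appear in yarn-site.xml), a container that fell back to the default 1024MB
gets a virtual memory limit of about 1024MB * 2.1 ≈ 2150MB, which a Terasort map
JVM can easily exceed. That would be consistent with only the default-sized
containers failing.

  <!-- defaults shown for illustration; these live in yarn-site.xml on the NodeManagers -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>true</value>  <!-- whether the NodeManager enforces the virtual memory limit -->
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>   <!-- virtual memory allowed per unit of physical container memory -->
  </property>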

--
- Tsuyoshi