Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.
YouPeng Yang 2013-12-06, 01:32
Hi

  Have you spread your configuration across the whole cluster?

  Also, have you checked whether the failing containers are concentrated on
a few particular nodes (for example, by grepping the container IDs in the
ResourceManager or NodeManager logs)?
regards
2013/12/5 panfei <[EMAIL PROTECTED]>

> Hi YouPeng, thanks for your advice. I have read the docs and configured the
> parameters as follows:
>
> Physical Server: 8 cores CPU, 16GB memory.
>
> For YARN:
>
> yarn.nodemanager.resource.memory-mb set to 12 GB, keeping 4 GB for the OS.
>
> yarn.scheduler.minimum-allocation-mb set to 2048 MB as the minimum
> allocation unit for a container.
>
> yarn.nodemanager.vmem-pmem-ratio is the default value 2.1
>
>
> For MapReduce:
>
> mapreduce.map.memory.mb set to 2048 for map task containers.
>
> mapreduce.reduce.memory.mb set to 4096 for reduce task containers.
>
> mapreduce.map.java.opts set to -Xmx1536m
>
> mapreduce.reduce.java.opts set to -Xmx3072m
>
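> For reference, here is roughly how these settings look in yarn-site.xml and
> mapred-site.xml (just a sketch of the values listed above; the exact file
> layout may differ if the cluster is managed through Cloudera Manager):
>
>   <!-- yarn-site.xml -->
>   <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>12288</value> <!-- 12 GB for containers, 4 GB left for the OS -->
>   </property>
>   <property>
>     <name>yarn.scheduler.minimum-allocation-mb</name>
>     <value>2048</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>     <!-- default; vmem limit = 2.1 x container pmem,
>          e.g. 2.1 x 2048 MB = 4300.8 MB for a map container -->
>     <value>2.1</value>
>   </property>
>
>   <!-- mapred-site.xml -->
>   <property>
>     <name>mapreduce.map.memory.mb</name>
>     <value>2048</value>
>   </property>
>   <property>
>     <name>mapreduce.reduce.memory.mb</name>
>     <value>4096</value>
>   </property>
>   <property>
>     <name>mapreduce.map.java.opts</name>
>     <value>-Xmx1536m</value> <!-- heap kept below the 2048 MB container -->
>   </property>
>   <property>
>     <name>mapreduce.reduce.java.opts</name>
>     <value>-Xmx3072m</value> <!-- heap kept below the 4096 MB container -->
>   </property>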
>
>
> after setting these parameters, the problem is still there; I think it's
> time to go back to the Hadoop 1.0 infrastructure.
>
> thanks for your advice again.
>
>
>
> 2013/12/5 YouPeng Yang <[EMAIL PROTECTED]>
>
>> Hi
>>
>> please refer to
>> http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
>>
>>
>>
>> 2013/12/5 panfei <[EMAIL PROTECTED]>
>>
>>> we have already tried several values for these two parameters, but it
>>> seems to make no difference.
>>>
>>>
>>> 2013/12/5 Tsuyoshi OZAWA <[EMAIL PROTECTED]>
>>>
>>>> Hi,
>>>>
>>>> Please check properties such as mapreduce.reduce.memory.mb and
>>>> mapreduce.map.memory.mb in mapred-site.xml. These properties determine
>>>> the resource limits for mappers and reducers.
>>>>
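>>>> For example, a minimal mapred-site.xml sketch for the map side (the
>>>> values are only illustrative; the heap in mapreduce.map.java.opts should
>>>> stay below mapreduce.map.memory.mb, because the physical-memory check
>>>> counts the whole container process, not just the Java heap):
>>>>
>>>>   <property>
>>>>     <name>mapreduce.map.memory.mb</name>
>>>>     <value>2048</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>mapreduce.map.java.opts</name>
>>>>     <value>-Xmx1536m</value>
>>>>   </property>
>>>>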
>>>> On Wed, Dec 4, 2013 at 10:16 PM, panfei <[EMAIL PROTECTED]> wrote:
>>>> >
>>>> >
>>>> > ---------- Forwarded message ----------
>>>> > From: panfei <[EMAIL PROTECTED]>
>>>> > Date: 2013/12/4
>>>> > Subject: Container
>>>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>>>> > container.
>>>> > To: CDH Users <[EMAIL PROTECTED]>
>>>> >
>>>> >
>>>> > Hi All:
>>>> >
>>>> > We are using CDH 4.5 Hadoop in production. When we submit some (not
>>>> > all) jobs from Hive, we get the following exception; it seems that
>>>> > neither the physical memory nor the virtual memory is enough for the
>>>> > job to run:
>>>> >
>>>> >
>>>> > Task with the most failures(4):
>>>> > -----
>>>> > Task ID:
>>>> >   task_1386156666044_0001_m_000000
>>>> >
>>>> > URL:
>>>> >
>>>> > http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>>>> > -----
>>>> > Diagnostic Messages for this Task:
>>>> > Container [pid=22885,containerID=container_1386156666044_0001_01_000013]
>>>> > is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>>>> > container.
>>>> > Dump of the process-tree for container_1386156666044_0001_01_000013 :
>>>> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>> >            SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES)
>>>> >            FULL_CMD_LINE
>>>> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>>>> >            /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>>>> >            -Dhadoop.metrics.log.level=WARN -Xmx200m
>>>> >            -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>>>> >            -Dlog4j.configuration=container-log4j.properties
>>>> >            -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>>>> >            -Dyarn.app.mapreduce.container.log.filesize=0
>>>> >            -Dhadoop.root.logger=INFO,CLA
>>>> >            org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>>>> >            attempt_1386156666044_0001_m_000000_3 13
>>>> >
>>>> > The following is part of our configuration:
>>>> >