MapReduce >> mail # user >> Application Master getting killed randomly reporting excess usage of memory


Krishna Kishore Bonagiri 2013-03-08, 07:42
Hi,

  I am running a date command with the Distributed Shell example in a loop
of 500 iterations. It ran successfully every time except once, when it
failed with the following error.

2013-03-22 04:33:25,280 INFO  [main] distributedshell.Client
(Client.java:monitorApplication(605)) - Got application report from ASM
for, appId=222, clientToken=null, appDiagnostics=Application
application_1363938200742_0222 failed 1 times due to AM Container for
appattempt_1363938200742_0222_000001 exited with  exitCode: 143 due to:
Container [pid=21141,containerID=container_1363938200742_0222_01_000001] is
running beyond virtual memory limits. Current usage: 47.3 Mb of 128 Mb
physical memory used; 611.6 Mb of 268.8 Mb virtual memory used. Killing
container.
Dump of the process-tree for container_1363938200742_0222_01_000001 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 21147 21141 21141 21141 (java) 244 12 532643840 11802
/home_/dsadm/yarn/jdk//bin/java -Xmx128m
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster
--container_memory 10 --num_containers 2 --priority 0 --shell_command date
        |- 21141 8433 21141 21141 (bash) 0 0 108642304 298 /bin/bash -c
/home_/dsadm/yarn/jdk//bin/java -Xmx128m
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster
--container_memory 10 --num_containers 2 --priority 0 --shell_command date
1>/tmp/logs/application_1363938200742_0222/container_1363938200742_0222_01_000001/AppMaster.stdout
2>/tmp/logs/application_1363938200742_0222/container_1363938200742_0222_01_000001/AppMaster.stderr
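For what it's worth, the 268.8 MB virtual memory ceiling in the log looks like the 128 MB container allocation multiplied by the default yarn.nodemanager.vmem-pmem-ratio of 2.1 (128 x 2.1 = 268.8). If that is what is happening, the check could presumably be relaxed in yarn-site.xml along these lines (the property names are from the YARN configuration; the values below are only illustrative, not recommendations):

```xml
<!-- yarn-site.xml: relax the NodeManager virtual-memory check -->
<property>
  <!-- default is 2.1, which yields the 268.8 MB limit seen in the log -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>5</value>
</property>
<property>
  <!-- alternatively, disable the virtual-memory check entirely -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

I have not verified which of these is the right fix for this case, so treat the snippet as a sketch of where the limit comes from rather than a confirmed workaround.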
  Is this a known issue? I am using the latest version of
Hadoop, i.e. hadoop-2.0.3-alpha.

Thanks,
Kishore