HDFS >> mail # user >> Too many open files error with YARN


Too many open files error with YARN
Hi,

 I am running the date command with YARN's distributed shell example in a
loop, 1000 times, like this:

yarn jar
/home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar
org.apache.hadoop.yarn.applications.distributedshell.Client --jar
/home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar
--shell_command date --num_containers 2
Around the 730th iteration or so, I get an error in the NodeManager's log
saying that it failed to launch the container because there are "Too many
open files". When I look with the lsof command, I find that one connection
of this kind is left open for each run of the Application Master, and the
count keeps growing as the loop runs:

node1:44871->node1:50010
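For reference, this is roughly how I count those leftover connections. The sample lsof-style lines below are just illustrative (the PID, user, and port numbers other than 50010 are made up); on a live node you would pipe the real `lsof -p <NodeManager PID>` output instead:

```shell
# Sample lsof-style output standing in for a real NodeManager process.
# Each Application Master run appears to leave one ESTABLISHED connection
# to the DataNode port (50010) behind.
sample_lsof_output='java 4321 yarn 210u IPv4 TCP node1:44871->node1:50010 (ESTABLISHED)
java 4321 yarn 211u IPv4 TCP node1:44873->node1:50010 (ESTABLISHED)
java 4321 yarn 212u IPv4 TCP node1:44875->node1:8032 (ESTABLISHED)'

# Count only the connections to the DataNode port that are still open.
leaked=$(printf '%s\n' "$sample_lsof_output" | grep -c ':50010 (ESTABLISHED)')
echo "connections to DataNode port 50010: $leaked"
```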

Is this a known issue? Or am I missing something? Please help.
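As a side note, the error itself only fires once the NodeManager process exhausts its open-file limit, so the leak takes a while to show up. The limit can be checked like this (the PID in the comment is a placeholder, not from my setup):

```shell
# Show the maximum number of open files allowed for processes started
# from this shell; a process leaking one socket per AM run will
# eventually hit this cap.
ulimit -n

# For an already-running NodeManager, read its effective limit from /proc
# (replace 4321 with the real NodeManager PID):
# grep 'open files' /proc/4321/limits
```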

Note: I am working on Hadoop 2.0.0-alpha.

Thanks,
Kishore
Sandy Ryza 2013-03-20, 17:39
Hemanth Yamijala 2013-03-21, 04:27