Too many open files error with YARN


Krishna Kishore Bonagiri 2013-03-20, 11:24
Re: Too many open files error with YARN
Sandy Ryza 2013-03-20, 17:39
Hi Kishore,

50010 is the datanode port. Does your lsof indicate that the sockets are in
CLOSE_WAIT?  I had come across an issue like this where that was a symptom.

-Sandy
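
A minimal sketch of that check from the shell, assuming the NodeManager's
PID is known (<nm_pid> is a placeholder, not something from the original
messages):

  # Show this process's TCP connections to the datanode port, with their
  # state (-a ANDs the -p and -i selections together):
  lsof -a -p <nm_pid> -i TCP:50010

  # Count sockets anywhere on the node stuck in CLOSE_WAIT against 50010:
  lsof -i TCP:50010 | grep -c CLOSE_WAIT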

On Wed, Mar 20, 2013 at 4:24 AM, Krishna Kishore Bonagiri <
[EMAIL PROTECTED]> wrote:

> Hi,
>
>  I am running a date command with YARN's distributed shell example 1000
> times in a loop, like this:
>
> yarn jar
> /home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar
> org.apache.hadoop.yarn.applications.distributedshell.Client --jar
> /home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar
> --shell_command date --num_containers 2
>
>
> Around the 730th run or so, I get an error in the node manager's log saying
> that it failed to launch a container because there are "Too many open
> files". When I look with the lsof command, I find that one connection like
> the one below is left behind for each run of the Application Master, and the
> count keeps growing as the loop continues:
>
> node1:44871->node1:50010
>
> Is this a known issue? Or am I missing something? Please help.
>
> Note: I am working on hadoop-2.0.0-alpha
>
> Thanks,
> Kishore
>
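
The looped run and the leak check described above could be scripted roughly
as below. This is a sketch only: the jar path and flags come from the quoted
message, while the socket count and limit check are assumptions about how one
might watch the leak grow.

  # Check the per-process descriptor limit first (commonly 1024 by default):
  ulimit -n

  # Re-run the distributed shell example 1000 times, counting open sockets
  # to the datanode port after each run (tail skips lsof's header line):
  for i in $(seq 1 1000); do
    yarn jar /home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar \
      org.apache.hadoop.yarn.applications.distributedshell.Client \
      --jar /home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar \
      --shell_command date --num_containers 2
    echo "run $i: $(lsof -nP -i TCP:50010 | tail -n +2 | wc -l) open sockets to port 50010"
  done

If one socket leaks per run, a default limit of 1024 descriptors would be
consistent with failures starting around the 730th iteration, once the
process's other open files are counted in.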
Hemanth Yamijala 2013-03-21, 04:27