But there was no trace looking like "OutOfMemoryError". Wouldn't nproc
exhaustion have resulted in that, rather than a SocketException?
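If it happens again, I will check the effective limits of the running RS
process directly instead of just the shell's ulimit; something like this
should show both ceilings (the pid below is a placeholder for the
RegionServer's process id):

  cat /proc/<pid>/limits    # shows "Max open files" vs "Max processes" as the JVM sees them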
Anyway, I have increased it to 32768. I will see if I face that again.
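For the record, I made it permanent with entries along these lines in
/etc/security/limits.conf (assuming the RS runs as the hbase user; adjust
the user and values for your setup):

  # raise both the open-files and max-processes ceilings for the hbase user
  hbase  -  nofile  32768
  hbase  -  nproc   32768

A re-login (or service restart) is needed before the new limits take effect.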
2013/4/9 Ted Yu <[EMAIL PROTECTED]>:
> According to http://hbase.apache.org/book.html#ulimit , you should increase
> the nproc setting.
> On Tue, Apr 9, 2013 at 8:33 AM, Jean-Marc Spaggiari <[EMAIL PROTECTED]
>> I just faced an issue this morning on one of my RS.
>> Here is an extract of the logs
>> 2013-04-09 11:05:33,164 ERROR org.apache.hadoop.hdfs.DFSClient:
>> Exception closing file
>> : java.net.SocketException: Too many open files
>> java.net.SocketException: Too many open files
>> at sun.nio.ch.Net.socket0(Native Method)
>> at sun.nio.ch.Net.socket(Net.java:323)
>> at sun.nio.ch.Net.socket(Net.java:316)
>> at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:101)
>> at java.nio.channels.SocketChannel.open(SocketChannel.java:142)
>> ulimit is unlimited on all my servers.
>> It seems there were too many network connections open. Is there
>> anything HBase can do to handle such a scenario? It's only Hadoop in the
>> stack trace, so I'm not sure.
>> Can this be related to nproc? I don't think so. I have another tool
>> running on the RS that uses low CPU and low bandwidth but makes MANY
>> HTTP network connections...
>> Any suggestion?
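For next time, a quick way to see whether it is the RS or the other tool
that is actually holding the descriptors would be to count them per
process, e.g. (both pids are placeholders):

  ls /proc/<rs-pid>/fd | wc -l      # descriptors held by the RegionServer
  ls /proc/<tool-pid>/fd | wc -l    # descriptors held by the other tool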