HBase user mailing list: Too many open files (java.net.SocketException)


Re: Too many open files (java.net.SocketException)
You might also want to check what file-max currently is:

more /proc/sys/fs/file-max

I just checked on my Fedora and Ubuntu systems, and they appear to
default to 785130 and 2452636 respectively, so be careful not to
accidentally decrease those numbers.
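
If you want to compare that limit against actual usage, the kernel also
exposes a counter of handles in use (this is just a generic Linux check,
nothing HBase-specific):

cat /proc/sys/fs/file-nr

The three numbers are allocated handles, allocated-but-unused handles,
and the system-wide maximum (the same value as file-max).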

On 4/10/13, Andrew Purtell <[EMAIL PROTECTED]> wrote:
> Correct, nproc has nothing to do with file table issues.
>
> I typically do something like this when setting up a node:
>
> echo "@hadoop soft nofile 65536" >> /etc/security/limits.conf
> echo "@hadoop hard nofile 65536" >> /etc/security/limits.conf
>
> where all accounts launching Hadoop daemons are in the 'hadoop' group. Make
> sure you've done something like this.
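>
> One caveat: limits.conf is applied by pam_limits at login, so daemons
> started outside a fresh login session may not pick up the new values. A
> quick way to check what a given account actually gets (here 'hbase' is
> just a placeholder for whichever account launches the daemons):
>
>     su - hbase -c 'ulimit -Sn; ulimit -Hn'
>
> which prints the soft and hard nofile limits for that account.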
>
> You may need to increase the maximum number of file-handles that the Linux
> kernel will allocate overall. Try adding this to an init script:
>
>     sysctl -w fs.file-max=131072
>
> ... for example. Or you can add "fs.file-max=131072" into /etc/sysctl.conf.
> If you do that, then be sure to execute sysctl -p as root for the change to
> take effect.
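>
> You can confirm the change took with something like:
>
>     sysctl fs.file-max
>
> or by reading /proc/sys/fs/file-max again.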
>
>
>
> On Tue, Apr 9, 2013 at 9:08 AM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:
>
>
>> But there was no trace looking like "OutOfMemoryError". Wouldn't nproc
>> have resulted in that, rather than a SocketException?
>> Anyway, I have increased it to 32768. I will see if I face that again.
>>
>> Thanks,
>>
>> JM
>>
>> 2013/4/9 Ted Yu <[EMAIL PROTECTED]>:
>> > According to http://hbase.apache.org/book.html#ulimit , you should
>> > increase the nproc setting.
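>> >
>> > For example, something like this in /etc/security/limits.conf (the
>> > 32000 below is just an illustrative number, and '@hadoop' assumes your
>> > daemon accounts are in a 'hadoop' group; check the book for its
>> > current recommendation):
>> >
>> > @hadoop soft nproc 32000
>> > @hadoop hard nproc 32000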
>> >
>> > Cheers
>> >
>> > On Tue, Apr 9, 2013 at 8:33 AM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:
>> >
>> >
>> >> Hi,
>> >>
>> >> I just faced an issue this morning on one of my RS.
>> >>
>> >> Here is an extract of the logs:
>> >> 2013-04-09 11:05:33,164 ERROR org.apache.hadoop.hdfs.DFSClient:
>> >> Exception closing file
>> >> /hbase/entry_proposed/ae4a5d72d4613728ddbcc5a64262371b/.tmp/ed6a0154ef714cd88faf26061cf248d3
>> >> : java.net.SocketException: Too many open files
>> >> java.net.SocketException: Too many open files
>> >>         at sun.nio.ch.Net.socket0(Native Method)
>> >>         at sun.nio.ch.Net.socket(Net.java:323)
>> >>         at sun.nio.ch.Net.socket(Net.java:316)
>> >>         at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:101)
>> >>         at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)
>> >>         at java.nio.channels.SocketChannel.open(SocketChannel.java:142)
>> >>         at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>> >>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:3423)
>> >>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3381)
>> >>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
>> >>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
>> >>
>> >> ulimit is unlimited on all my servers.
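>> >>
>> >> A direct way to double-check on the running RS itself (<pid> below is
>> >> just a placeholder for the RegionServer pid):
>> >>
>> >> cat /proc/<pid>/limits
>> >> ls /proc/<pid>/fd | wc -l
>> >>
>> >> The first shows the "Max open files" the process actually got, the
>> >> second counts the descriptors it currently holds.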
>> >>
>> >> It seems there were too many network connections open. Is there
>> >> anything HBase can do to handle such a scenario? It's only Hadoop in
>> >> the stack trace, so I'm not sure.
>> >>
>> >> Can this be related to nproc? I don't think so. I have another tool
>> >> running on the RS that uses little CPU and bandwidth but makes MANY
>> >> HTTP connections...
>> >>
>> >> Any suggestion?
>> >>
>> >> JM
>> >>
>>
>
>
>
> --
> Best regards,
>
>    - Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
>
--
Ted.