Kafka user mailing list — java.net.SocketException: Too many open files


Nandigam, Sujitha 2013-08-01, 16:04
Jun Rao 2013-08-02, 04:08
Re: java.net.SocketException: Too many open files
We've had this problem with Zookeeper...

Setting ulimit properly can occasionally be tricky because you need to
log out and re-SSH into the box for the changes to take effect on the
processes you start next. Another problem we've hit was that our puppet
service was running in the background and silently restoring settings to
their original values, which would bite us a while later, when we'd need to
restart a service (currently running processes keep the limit they had at
start time).
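For reference, a minimal sketch of the kind of limits.conf entries involved and how to verify they took effect (the user name "kafka" is an assumption; use whatever account the broker runs as):

```shell
# /etc/security/limits.conf -- the first field must be a user name or
# an @group, not a path. Assuming the broker runs as user "kafka":
#
#   kafka  soft  nofile  98304
#   kafka  hard  nofile  98304

# pam_limits applies these entries at session start, so after logging
# out and back in, check what the new shell actually got:
ulimit -n

# Note: already-running daemons keep the limit they had at start time;
# only processes started after the new login pick up the change.
```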

You can double-check that your processes are running with the ulimit you
expect by finding their PID (using ps) and then running:
sudo cat /proc/<PID>/limits

If you don't see the value you configured in the "Max open files" line,
then something somewhere prevented your process from using the number of
file handles you want it to.
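As a concrete sketch of that check (the pgrep pattern is an assumption; adjust it for however your broker process is launched):

```shell
# Find the broker's PID; fall back to "self" so the command still
# runs even if no process matches the (assumed) pattern.
pid=$(pgrep -f kafka.Kafka | head -n1)

# Show the open-file limit the running process actually has:
grep "Max open files" "/proc/${pid:-self}/limits"
```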

Of course, what I just said doesn't address the possibility that there
could be some sort of file handle leak somewhere in the 0.8 code... Though
I guess such a bug would have surfaced in heavy-duty environments such as
LinkedIn's, if it existed.

--
Felix
On Fri, Aug 2, 2013 at 12:07 AM, Jun Rao <[EMAIL PROTECTED]> wrote:

> If you do netstat, what hosts are those connections for and what state are
> those connections in?
>
> Thanks,
>
> Jun
>
>
> On Thu, Aug 1, 2013 at 9:04 AM, Nandigam, Sujitha <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > In producer I was continuously getting this exception
> > java.net.SocketException: Too many open files
> > even though I added the below line to /etc/security/limits.conf
> >
> >
> >
> > kafka-0.8.0-beta1-src    -    nofile    983040
> >
> >
> > ERROR Producer connection to localhost:9093 unsuccessful
> > (kafka.producer.SyncProducer)
> > java.net.SocketException: Too many open files
> >
> > Please help me resolve this.
> >
> > Thanks,
> > Sujitha
> > "This message (including any attachments) is intended only for the use of
> > the individual or entity to which it is addressed, and may contain
> > information that is non-public, proprietary, privileged, confidential and
> > exempt from disclosure under applicable law or may be constituted as
> > attorney work product. If you are not the intended recipient, you are
> > hereby notified that any use, dissemination, distribution, or copying of
> > this communication is strictly prohibited. If you have received this
> > message in error, notify sender immediately and delete this message
> > immediately."
> >
>
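To answer Jun's netstat question quickly, a sketch using ss (the modern netstat replacement; "netstat -tan" produces similar output if ss isn't available):

```shell
# Count TCP connections by state. A pile-up of CLOSE_WAIT usually
# points at a socket leak; heavy TIME_WAIT suggests rapid reconnects.
ss -tan | awk 'NR > 1 {print $1}' | sort | uniq -c | sort -rn

# Group by remote peer (5th column of "ss -tan" output) to see which
# host is holding the most sockets.
ss -tan | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn | head
```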
