Kafka >> mail # user >> Too many open files


Too many open files
Our 0.7.2 Kafka cluster keeps crashing with:

2013-09-24 17:21:47,513 -  [kafka-acceptor:Acceptor@153] - Error in acceptor
java.io.IOException: Too many open files

The obvious fix is to bump up the open-file limit, but I'm wondering if there is a leak on the Kafka side and/or our application side. We currently have the ulimit set to a generous 4096, but we are obviously hitting that ceiling. What's a recommended value?
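For anyone hitting this: before raising the limit blindly, it's worth checking what the broker process is actually using versus what it is allowed. A minimal sketch for a Linux host (the `pgrep` pattern for finding the broker PID is an assumption; adjust it for your deployment):

```shell
# count_fds: print how many file descriptors a process currently has open.
# Linux-only; reads the per-process fd directory under /proc.
count_fds() {
  ls "/proc/$1/fd" | wc -l
}

# fd_limit: print the "Max open files" limit of the *running* process,
# which can differ from the ulimit of the shell you inspect from.
fd_limit() {
  grep 'Max open files' "/proc/$1/limits"
}
```

Usage would be something like `count_fds "$(pgrep -f kafka.Kafka | head -n1)"`; if that number climbs steadily toward the limit while traffic is flat, that points at a leak rather than an undersized ulimit.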

We are running Rails, and our Unicorn workers are connecting to our Kafka cluster via round-robin load balancing. We have about 1500 workers, so that would be 1500 connections right there, but they should be split across our 3 nodes. Instead, netstat shows thousands of connections that look like this:

tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:22503     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:48398     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.2:29617     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:32444     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:34415     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:56901     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.2:45349     ESTABLISHED
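One way to see whether a particular client (or LB node) is leaking sockets is to count established connections per source IP. A sketch using the netstat lines above as sample input; in practice you would pipe live `netstat -tn` output into the same awk:

```shell
# Count ESTABLISHED connections per remote address from netstat -tn output.
# The sample here is the output pasted above, trimmed to two clients.
netstat_sample='tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:22503     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:48398     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.2:29617     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:32444     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:34415     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.1:56901     ESTABLISHED
tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc ::ffff:10.99.99.2:45349     ESTABLISHED'

# Field 5 is the remote endpoint; strip the ephemeral port, then tally per IP.
printf '%s\n' "$netstat_sample" |
  awk '$6 == "ESTABLISHED" { sub(/:[0-9]+$/, "", $5); counts[$5]++ }
       END { for (ip in counts) print counts[ip], ip }' |
  sort -rn
```

If one source IP dominates with a count far above workers-per-host, the leak is on that client or the load balancer rather than in the broker.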

Has anyone come across this problem before? Is this a 0.7.2 leak, LB misconfiguration… ?

Thanks
 
Replies:
Jun Rao 2013-09-25, 04:02
Mark 2013-09-25, 13:09
Jun Rao 2013-09-25, 16:07
Mark 2013-09-25, 23:30
Mark 2013-09-25, 23:48
Jun Rao 2013-09-26, 04:39
Jun Rao 2013-09-26, 14:38
Mark 2013-09-26, 22:07
Mark 2013-09-26, 22:08
Mark 2013-09-27, 16:35
Florian Weingarten 2013-10-04, 12:15