Re: Getting "java.io.IOException: Too many open files"
Without knowing the intricacies of Kafka, I think the default open file
descriptor limit on Unix is 1024. It can be raised by setting a higher
ulimit value (typically 8192, though sometimes as high as 100000).
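For reference, a rough sketch of how I'd check and raise the limit on a
Linux host; the "kafka" user and <broker-pid> below are just placeholders
for whatever account and process run the broker in your setup:

    # current soft limit for the shell, and FDs actually held by the broker
    ulimit -n
    ls /proc/<broker-pid>/fd | wc -l

    # raise it for the current session (bounded by the hard limit)
    ulimit -n 8192

    # make it persistent via pam_limits, e.g. in /etc/security/limits.conf:
    kafka  soft  nofile  8192
    kafka  hard  nofile  8192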
Before modifying the ulimit, I would recommend checking the number of
sockets stuck in the TIME_WAIT state. In this case it looks like the broker
has too many open sockets, which could be caused by a rogue client
connecting and disconnecting repeatedly.
You might also have to reduce the TIME_WAIT interval to 30 seconds or lower.
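A quick way to check, assuming a Linux broker host (if netstat isn't
installed, ss from iproute2 works too):

    # count sockets currently in TIME_WAIT
    netstat -an | grep -c TIME_WAIT
    # or look at the "timewait" figure in the summary
    ss -s

How (and whether) the TIME_WAIT interval itself can be shortened is
kernel-dependent; on Linux the sysctls people usually reach for are
net.ipv4.tcp_fin_timeout and net.ipv4.tcp_tw_reuse, so treat the 30 seconds
above as a target rather than a single knob.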

On Wed, Jun 25, 2014 at 10:19 AM, Lung, Paul <[EMAIL PROTECTED]> wrote:
 