We've had this problem with Zookeeper...

Setting ulimit properly can occasionally be tricky because you need to
log out and re-ssh into the box for the changes to take effect on the next
processes you start up. Another problem we hit was that our Puppet
service was running in the background and silently restoring settings to
their original values, which would bite us a while later, when we'd need to
restart a service (currently running processes keep the limit they had at
start time).
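For reference, this is roughly what the dance looks like (a sketch; the
"kafka" user and the 65536 value are illustrative assumptions, not anything
from this thread):

```shell
# Show the current shell's open-file limit (the soft limit)
ulimit -n

# Raise it for this session only; going above the hard limit needs root,
# so swallow the error if it isn't allowed here
ulimit -n 65536 2>/dev/null || true

# For a persistent change, lines like these go in /etc/security/limits.conf
# ("kafka" is a hypothetical service user). They only apply to NEW login
# sessions -- which is exactly why you have to log out and ssh back in,
# and why a config-management tool quietly rewriting this file bites you
# only at the next restart:
#   kafka  soft  nofile  65536
#   kafka  hard  nofile  65536
```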

You can double-check that your processes are running with the ulimit you
expect them to by finding out their PID (using ps) and then doing sudo cat
/proc/<PID>/limits.

If you don't see the value you configured in the "Max open files" line,
then something somewhere prevented your process from using the number of
file handles you want it to.
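Concretely, the check might look like this (using the current shell's own
PID, $$, as a stand-in so it runs anywhere; for a real service you'd find
the PID with ps or pgrep first):

```shell
# For a real broker you'd do something like: PID=$(pgrep -f kafka.Kafka)
# Here we use the current shell's PID as a stand-in
PID=$$

# The kernel reports the limits the process was started with;
# the "Max open files" line is the one that matters for file handles
grep "Max open files" /proc/$PID/limits
```

Remember the value shown here was captured at process start time, so it can
differ from what `ulimit -n` says in your current shell.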

Of course, what I just said doesn't address the possibility that there
could be some sort of file handle leak somewhere in the 0.8 code... Though
I guess such a bug would have surfaced in heavy-duty environments such as
LinkedIn's, if it existed.
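If you do want to rule out a leak, one rough way (a sketch, again using $$
as a stand-in for the PID of the process you're watching) is to count the
process's open descriptors over time and see whether the number keeps
climbing under steady load:

```shell
# Each entry in /proc/<pid>/fd is one open file descriptor;
# a count that grows without bound under constant load suggests a leak
PID=$$   # substitute your broker's PID here
ls /proc/$PID/fd | wc -l
```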

On Fri, Aug 2, 2013 at 12:07 AM, Jun Rao <[EMAIL PROTECTED]> wrote: