Re: Open Files Limits

On 02/08/2013 03:16 PM, Marco Gallotta wrote:
> Hey guys
>
> I'm running hbase on Ubuntu, and I'm experiencing problems with "too many open files". I've got the following in limits.conf:
>
> *                -    nofile          50000
> *                -    nproc           50000
I think the correct configuration is to set these limits for the hdfs, mapred and hbase users:

# user        type    resource    value
hdfs          -       nofile      32768
mapred        -       nofile      32768
hbase         -       nofile      32768
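A quick way to double-check that these limits actually reach the running daemons (the process pattern below just matches the standard HRegionServer class name and assumes a single region server per host, adjust to your setup):

# limit a fresh login shell for the hbase user would get
su - hbase -c 'ulimit -n'

# limit of the already-running RegionServer process
cat /proc/$(pgrep -f HRegionServer)/limits | grep -i 'open files'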
>
>
> And added "session required pam_limits.so" to /etc/pam.d/common-session . Hbase master is getting the correct limit of 50k, but the regionserver (which is dying) and zookeeper are getting the default 1024 limit.
>
> Is there anything in the pipeline where the RS/ZK set the limit? I can't find anything written in this or in the config.
>
> Appreciate any help.
>
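Worth noting on the pam_limits side: limits.conf is only applied to processes that go through a PAM session (login, sshd, su, cron, ...). Daemons brought up at boot by init scripts, e.g. via start-stop-daemon, bypass PAM entirely and keep the default 1024, which would explain why the master started from your shell sees 50k while the region server and ZooKeeper do not. A minimal sketch of the usual workarounds, assuming the daemons are launched from init scripts (the value is just an example, match it to your limits.conf):

# if the init script starts the daemon via su, make sure su's PAM config also
# pulls in pam_limits (check /etc/pam.d/su for a line like this):
session    required   pam_limits.so

# otherwise, raise the limit in the init script itself, while it is still
# running as root and before it drops to the hbase/zookeeper user:
ulimit -n 32768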

--
Marcos Ortiz Valmaseda,
Product Manager && Data Scientist at UCI
Blog: http://marcosluis2186.posterous.com
Twitter: @marcosluis2186 <http://twitter.com/marcosluis2186>