HBase >> mail # user >> Re: Region Servers crashing following: "File does not exist", "Too many open files" exceptions


Dhaval Shah 2013-02-10, 01:24
David Koch 2013-02-10, 02:17
Marcos Ortiz 2013-02-10, 03:22
David Koch 2013-02-10, 12:51
shashwat shriparv 2013-02-10, 14:53
Re: Region Servers crashing following: "File does not exist", "Too many open files" exceptions
Like I said, the maximum permissible number of file handles is set to 65535
for the users hbase (the one that starts HBase), mapred, and hdfs.

The "Too many open files" warning occurs on the region servers but not on the
HDFS namenode.

/David
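If the limit is supposedly raised but the exceptions persist, it can help to confirm what the running RegionServer process actually sees, since a daemon started from an init script does not always pick up limits.conf changes. A minimal check on Linux, assuming the JVM process name matches "HRegionServer" (the pgrep pattern is an assumption, not from the thread):

```shell
# Find a RegionServer PID (process-name pattern is an assumption; adjust as needed)
RS_PID=$(pgrep -f HRegionServer | head -n 1)

# The open-file limit the process is actually running with
grep 'open files' "/proc/${RS_PID}/limits"

# How many file descriptors it currently holds
ls "/proc/${RS_PID}/fd" | wc -l
```

Comparing the second number against the first shows how close the process is to the ceiling when the exceptions start.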
On Sun, Feb 10, 2013 at 3:53 PM, shashwat shriparv <
[EMAIL PROTECTED]> wrote:

> On Sun, Feb 10, 2013 at 6:21 PM, David Koch <[EMAIL PROTECTED]> wrote:
>
> > problems but could not find any. The settings
>
>
> increase the ulimit for the user you are using to start the Hadoop and
> HBase services, in the OS
>
>
>
> ∞
> Shashwat Shriparv
>
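To make the ulimit suggestion concrete, here is one common way to check and raise the limit on Linux, sketched under the assumption that the services run as the users named earlier in the thread (hbase, hdfs, mapred):

```shell
# Current soft limit for this shell (inherited by processes it starts)
ulimit -n

# Persist a higher limit in /etc/security/limits.conf, e.g.:
#   hbase  -  nofile  65535
#   hdfs   -  nofile  65535
#   mapred -  nofile  65535
# Then open a fresh login session (or restart the daemons) so it takes effect.
```

Note that limits.conf is applied via PAM at login, which is why daemons must be restarted from a new session before the higher limit is visible.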
ramkrishna vasudevan 2013-02-11, 03:58
David Koch 2013-02-11, 15:24
ramkrishna vasudevan 2013-02-11, 16:50
David Koch 2013-02-11, 22:14
David Koch 2013-02-10, 01:07