HBase, mail # user - Region Servers crashing following: "File does not exist", "Too many open files" exceptions


Re: Region Servers crashing following: "File does not exist", "Too many open files" exceptions
David Koch 2013-02-11, 15:24
Hello,

No, we did not change anything, so compactions should run automatically (I
guess it's once a day). However, I don't know to what extent jobs running on
the cluster have impeded compactions, if that is even a possibility.
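
For reference, a major compaction can also be requested by hand, either with
major_compact 'table_name' in the hbase shell or through the Java client API.
A minimal sketch, assuming a 0.94-era HBaseAdmin and a placeholder table name
("my_table" is not a table from this thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class ManualMajorCompact {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml from the classpath.
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);
            try {
                // Queues a major compaction request for every region of the
                // table; "my_table" is a placeholder.
                admin.majorCompact("my_table");
            } finally {
                admin.close();
            }
        }
    }

Note that majorCompact only queues the request; the region servers do the work
asynchronously, so the store file count should drop over time rather than
immediately.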

/David

On Mon, Feb 11, 2013 at 4:58 AM, ramkrishna vasudevan <
[EMAIL PROTECTED]> wrote:

> Hi David,
>
> Have you changed anything on the configurations related to compactions?
>
> If more store files are created and compactions are not run frequently, we
> end up with this problem.  At the least, there will be a consistent
> increase in the file handle count.
>
> Could you run compactions manually to see if it helps?
>
> Regards
> Ram
>
> On Mon, Feb 11, 2013 at 1:41 AM, David Koch <[EMAIL PROTECTED]> wrote:
>
> > Like I said, the maximum permissible number of file handles is set to
> > 65535 for the users hbase (the one that starts HBase), mapred and hdfs.
> >
> > The "too many open files" warning occurs on the region servers but not on
> > the HDFS namenode.
> >
> > /David
> >
> >
> > On Sun, Feb 10, 2013 at 3:53 PM, shashwat shriparv <
> > [EMAIL PROTECTED]> wrote:
> >
> > > On Sun, Feb 10, 2013 at 6:21 PM, David Koch <[EMAIL PROTECTED]>
> > wrote:
> > >
> > > > problems but could not find any. The settings
> > >
> > >
> > > Increase the ulimit, at the OS level, for the user you are using to
> > > start the Hadoop and HBase services (see the sketch after this thread
> > > for one way to confirm the limit the running process actually sees).
> > >
> > >
> > >
> > > ∞
> > > Shashwat Shriparv
> > >
> >
>
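
Following up on the ulimit advice quoted above: what matters is the limit the
running region server process actually sees, not just the value configured for
the user. A minimal sketch, assuming remote JMX is enabled on the region
server (the host and port below are placeholders, not values from this
thread), that reads the open and maximum file descriptor counts from the JVM's
OperatingSystem MXBean:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RegionServerFdCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint: point at a region server with remote JMX enabled.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://regionserver-host:10102/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbean = connector.getMBeanServerConnection();
                ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
                // On Unix JVMs these attributes report the descriptors currently
                // open and the limit the process was started with.
                System.out.println("Open file descriptors: "
                        + mbean.getAttribute(os, "OpenFileDescriptorCount"));
                System.out.println("Max file descriptors:  "
                        + mbean.getAttribute(os, "MaxFileDescriptorCount"));
            } finally {
                connector.close();
            }
        }
    }

If the reported maximum is well below 65535, the per-user limit is not
reaching the process, for example because the services are started through an
init script or wrapper that does not apply the user's limits.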