Re: Getting ScannerTimeoutException even after several calls in the specified time limit
For pretty graphs with JVM GC info + system + HBase metrics you could also
easily hook up SPM to your cluster.  See URL in signature.

Otis
--
Performance Monitoring - http://sematext.com/spm
On Sep 11, 2012 6:30 AM, "HARI KUMAR" <[EMAIL PROTECTED]> wrote:

> For GC monitoring, add
> export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"
> to hbase-env.sh, then view the resulting gc-hbase.log with a tool like GCViewer, or use a tool like VisualVM to look at your GC consumption.
>
> ./hari
>
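Besides opening gc-hbase.log in GCViewer or attaching VisualVM, the same GC time figures can be read over JMX. A minimal sketch, not from the thread: it assumes remote JMX has been enabled on the region server JVM, the host and port below are placeholders, and it needs Java 7+ for getPlatformMXBeans.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class GcTimes {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: the region server must be started with remote
        // JMX enabled (e.g. com.sun.management.jmxremote.port) for this to connect.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://regionserver-host:10102/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // One MXBean per collector; counts and times are cumulative since JVM start.
            List<GarbageCollectorMXBean> gcs = ManagementFactory
                    .getPlatformMXBeans(mbsc, GarbageCollectorMXBean.class);
            for (GarbageCollectorMXBean gc : gcs) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        } finally {
            connector.close();
        }
    }
}

Long pauses show up as sudden jumps in the cumulative time; that is the kind of pause that can keep a scanner from being renewed before its lease expires.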
>
> On Tue, Sep 11, 2012 at 2:11 PM, Dhirendra Singh <[EMAIL PROTECTED]> wrote:
>
> > No, I am not doing parallel scans.
> >
> > *If yes, check the time taken for GC and
> > the number of calls that can be served at your end point.*
> >
> > Could you please tell me how to do that? Where can I see the GC logs?
> >
> >
> > On Tue, Sep 11, 2012 at 12:54 PM, HARI KUMAR <[EMAIL PROTECTED]>
> > wrote:
> >
> >> Hi,
> >>
> >> Are you trying to do parallel scans? If yes, check the time taken for GC
> >> and the number of calls that can be served at your end point.
> >>
> >> Best Regards
> >> N.Hari Kumar
> >>
> >> > On Tue, Sep 11, 2012 at 8:22 AM, Dhirendra Singh <[EMAIL PROTECTED]>
> >> > wrote:
> >>
> >> > I tried with a smaller caching, i.e. 10, and it failed again; no, it's not
> >> > really a big cell. This small cluster (4 nodes) is only used for HBase, and
> >> > I am currently using hbase-0.92.1-cdh4.0.1. Could you let me know how
> >> > I could debug this issue?
> >> >
> >> >
> >> > Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException:
> >> > 99560ms passed since the last invocation, timeout is currently set to 60000
> >> >         at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1302)
> >> >         at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1399)
> >> >         ... 5 more
> >> > Caused by: org.apache.hadoop.hbase.UnknownScannerException:
> >> > org.apache.hadoop.hbase.UnknownScannerException: Name: -8889369042827960647
> >> >         at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2114)
> >> >         at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
> >> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >         at java.lang.reflect.Method.invoke(Method.java:597)
> >> >         at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
> >> >         at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
> >> >
> >> >
> >> >
> >> > On Mon, Sep 10, 2012 at 10:53 PM, Stack <[EMAIL PROTECTED]> wrote:
> >> >
> >> > > On Mon, Sep 10, 2012 at 10:13 AM, Dhirendra Singh <[EMAIL PROTECTED]>
> >> > > wrote:
> >> > > > I am facing this exception while iterating over a big table; by default
> >> > > > I have specified caching as 100.
> >> > > >
> >> > > > I am getting the below exception, even though I checked that there were
> >> > > > several calls made to the scanner before it threw this exception, but
> >> > > > somehow it is saying 86095ms passed since the last invocation.
> >> > > >
> >> > > > I also observed that if I set scan.setCaching(false), it succeeds. Could
> >> > > > someone please explain or point me to some documentation as to what is
> >> > > > happening here and what are the best practices to avoid it?
> >> > > >
> >> > > >
> >> > >
> >> > > Try again with caching < 100.  See if it works.  A big cell?  A GC pause?
> >> > > You should be able to tell roughly which server is being traversed
> >> > > when you get the timeout.  Anything else going on on that server at
> >> > > the time?  What version of HBase?
> >> > > St.Ack
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Warm Regards,
> >> > Dhirendra Pratap
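What the exchange above comes down to: each call to ResultScanner.next() fetches one batch of `caching` rows in a single RPC, and the region server only keeps the open scanner alive for the lease period (hbase.regionserver.lease.period, 60000 ms by default in 0.92, matching the "timeout is currently set to 60000" in the trace). If the client spends longer than that processing one cached batch, the server expires the scanner and the next call fails with UnknownScannerException wrapped in ScannerTimeoutException, which is why a large caching value combined with slow per-row work fails while a smaller one succeeds. A minimal sketch of the pattern, not taken from the thread (the table name is made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Hypothetical table name; the thread never names the actual table.
        HTable table = new HTable(conf, "my_big_table");
        try {
            Scan scan = new Scan();
            // Rows fetched per next() RPC. Everything done with these rows on the
            // client must finish before the scanner lease expires, so a smaller
            // value keeps each round trip to the region server well inside the lease.
            scan.setCaching(10);
            scan.setCacheBlocks(false); // common for large one-off scans
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result row : scanner) {
                    // per-row processing goes here; keep it fast relative to the lease
                }
            } finally {
                scanner.close();
            }
        } finally {
            table.close();
        }
    }
}

If a batch genuinely needs more processing time than the lease allows, the lease period itself can be raised in the configuration, but the smaller caching value Stack suggests is usually the simpler fix.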