RE: Getting ScannerTimeoutException even after several calls in the specified time limit
> Could someone please clarify: when I say caching 100 (or any number),
> where does this actually happen, on the server (cluster) or the client?

It happens in both places. When the scan is opened with caching = N, the client passes this number N to the first region under scan for that scan. The server side (RS) tries to collect up to N rows from that region. If it finds them, the client gets the results for that next() call in a single RPC. If it gets fewer than N rows, the client will try to get the remaining rows from the next region, and so on. Mostly this happens within one region alone [it may well find all N rows in a single region], but when you have some Filter conditions it might not find N rows in one region...
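
For illustration, a minimal client-side sketch of the above (the table name is a hypothetical placeholder; API as in the 0.92-era client):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");  // hypothetical table name

    Scan scan = new Scan();
    scan.setCaching(100);  // each next() RPC fetches up to 100 rows into the client cache

    ResultScanner scanner = table.getScanner(scan);
    try {
        for (Result row : scanner) {
            // rows are served from the client-side cache; a new RPC is
            // issued only when the cache is exhausted
        }
    } finally {
        scanner.close();
        table.close();
    }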

Note: since N is specified as the caching value, the client will try to gather N rows within one next() call, so it might contact many regions across different RSs. There is also a max result size config param available on the client side. If the total size of the results exceeds this value while there are fewer than N results, the client will stop scanning even though it has not yet got N results... If that size limit is never crossed, one call of next() might go through all the regions.. [You may be getting ScannerTimeouts due to RPC timeouts.]
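
A sketch of the two client-side knobs involved, assuming the 0.92-era property names hbase.client.scanner.max.result.size and hbase.regionserver.lease.period (check hbase-default.xml for your release):

    Configuration conf = HBaseConfiguration.create();

    // Cap the total bytes one next() call may accumulate across regions,
    // so a single call cannot walk the whole table before returning.
    conf.setLong("hbase.client.scanner.max.result.size", 2 * 1024 * 1024); // 2 MB

    // Scanner lease period; its 60000 ms default is the "timeout is
    // currently set to 60000" in the stack trace quoted below. Raising it
    // on the client only helps if the region servers use the same value.
    conf.setLong("hbase.regionserver.lease.period", 120000); // 2 minutes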

Hope I have answered your question..  :)

-Anoop-
________________________________________
From: Dhirendra Singh [[EMAIL PROTECTED]]
Sent: Wednesday, September 12, 2012 7:55 AM
To: [EMAIL PROTECTED]
Subject: Re: Getting ScannerTimeoutException even after several calls in the specified time limit

Could someone please clarify: when I say caching 100 (or any number),
where does this actually happen, on the server (cluster) or the client? If I
assume it happens on the cluster, is this ScannerTimeout then caused by
caching, i.e. could the server have run out of memory and hence not be able
to respond within the specified timeout?

Any link explaining the caching mechanism in HBase would be of great help.

Thanks,

On Wed, Sep 12, 2012 at 7:41 AM, Otis Gospodnetic <
[EMAIL PROTECTED]> wrote:

> For pretty graphs with JVM GC info + system + HBase metrics you could also
> easily hook up SPM to your cluster.  See URL in signature.
>
> Otis
> --
> Performance Monitoring - http://sematext.com/spm
> On Sep 11, 2012 6:30 AM, "HARI KUMAR" <[EMAIL PROTECTED]> wrote:
>
> > For GC monitoring, add the following to hbase-env.sh:
> >
> >   export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
> >     -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"
> >
> > then try to view the log file using a tool like "GCViewer", or use a
> > tool like VisualVM to look at your GC consumption.
> >
> > ./hari
> >
> > On Tue, Sep 11, 2012 at 2:11 PM, Dhirendra Singh <[EMAIL PROTECTED]>
> wrote:
> >
> > > No, I am not doing parallel scans.
> > >
> > > *If yes, check the time taken for GC and
> > > the number of calls that can be served at your end point.*
> > >
> > > Could you please tell me how to do that? Where can I see the GC logs?
> > >
> > >
> > > On Tue, Sep 11, 2012 at 12:54 PM, HARI KUMAR <[EMAIL PROTECTED]
> > >wrote:
> > >
> > >> Hi,
> > >>
> > >> Are you trying to do parallel scans? If yes, check the time taken for
> > >> GC and the number of calls that can be served at your end point.
> > >>
> > >> Best Regards
> > >> N.Hari Kumar
> > >>
> > >> On Tue, Sep 11, 2012 at 8:22 AM, Dhirendra Singh <[EMAIL PROTECTED]>
> > >> wrote:
> > >>
> > >> > I tried with a smaller caching value, i.e. 10, and it failed again;
> > >> > no, it's not really a big cell. This small cluster (4 nodes) is only
> > >> > used for HBase, and I am currently using hbase-0.92.1-cdh4.0.1.
> > >> > Could you let me know how I could debug this issue?
> > >> >
> > >> >
> > >> > Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException:
> > >> > 99560ms passed since the last invocation, timeout is currently set
> > >> > to 60000
> > >> >         at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1302)

Warm Regards,
Dhirendra Pratap
+91. 9717394713