HBase >> mail # user >> ScannerTimeoutException


Re: ScannerTimeoutException
Hi Geoff,

I believe you don't need the scan.addColumn() when you add the whole
family, although this should not affect the timeouts. If the timeouts
are getting more frequent, do you see compactions in your RegionServer's
log? Do the timeouts occur while scanning for the same row(s)?

Jan
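
[For readers hitting the same error: in HBase of this vintage (0.90.x), the scanner lease discussed in this thread is governed by the `hbase.regionserver.lease.period` property on the region servers, with a default of 60000 ms matching the message quoted below. A sketch of raising it, assuming the 0.90-era property name; note that later releases rename/split this setting (e.g. into a client-side scanner timeout), so check the docs for your version:]

```xml
<!-- hbase-site.xml (region servers): lengthen the scanner lease so slow
     next() calls are not expired at the default 60000 ms. -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>120000</value>
</property>
```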

On 8.9.2011 17:23, Geoff Hendrey wrote:
> Hi Jan -
>
> The relevant code looks like this. I added addFamily. Still getting the ScannerTimeoutException. Arghhh. This is really becoming a mission-critical problem for my team... Furthermore, the problem becomes more and more frequent, to the point that the job almost never finishes. At the beginning of the job, the timeout never happens. By the end, it's happening 9 out of 10 attempts...
>
>                  Scan scan = new Scan(Bytes.toBytes(key));
>                  scan.setCaching(1);
>                  scan.setMaxVersions(1);
>                  scan.addFamily(Bytes.toBytes("V1"));
>                  scan.addColumn(Bytes.toBytes("V1"), Bytes.toBytes("cluster_map"));
>                  scan.addColumn(Bytes.toBytes("V1"), Bytes.toBytes("version_control_number"));
>
> -geoff
>
> -----Original Message-----
> From: Geoff Hendrey
> Sent: Tuesday, September 06, 2011 9:51 AM
> To: Jan Lukavský; [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: RE: ScannerTimeoutException
>
> I'll try your suggestions!
>
> -----Original Message-----
> From: Jan Lukavský [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, September 06, 2011 9:48 AM
> To: [EMAIL PROTECTED]
> Cc: Geoff Hendrey; [EMAIL PROTECTED]
> Subject: Re: ScannerTimeoutException
>
> Hi Geoff,
>
> we are having these issues when the scanner uses scan.addColumn() and
> the column is sparse, in the sense that there are many rows with some other
> column in the same column family. I suppose your problems will vanish if
> you use the scan.addFamily() call instead. The same behavior may appear if
> you are reading from a region after a massive delete (then the timeouts
> settle down after a major compaction), or when using server-side Filters.
>
> Changing scan.addColumn() to scan.addFamily() brings some overhead,
> which I think could be removed by the RegionServer renewing the lease of
> the scanner while reading data, not only upon the first entry to
> HRegionServer.next().
>
> Would this be worth opening a JIRA?
>
> Jan
>
> On 6.9.2011 04:11, Geoff Hendrey wrote:
>> Hi -
>>
>>
>>
>> I found some odd behavior with ResultScanner.next(). Usually the times
>> for next() are a couple hundred ms. But occasionally the call to next()
>> spikes VERY long. In fact, I have the timeout set to 60 seconds (60000),
>> but once in a while the call to next() itself is interrupted by the
>> ScannerTimeoutException after more than 60 seconds. It seems odd that
>> the call to next() itself can be interrupted because "61107ms passed since
>> the last invocation, timeout is currently set to 60000"
>>
>> The only thing I can think of is that a GC kicks in after the call to
>> next() begins, but before the call returns, and the server is still
>> ticking the timeout. But this seems to happen regularly; the odds of
>> the GC kicking in at that exact instant, so often, seem pretty low.
>>
>> -geoff
>>
>>
>>
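
[The failure mode described in this thread (a long scan that dies mid-job with ScannerTimeoutException) can often be worked around on the client by remembering the last row returned and reopening the scanner just past it. The sketch below is a self-contained illustration of that restart pattern; it deliberately uses hypothetical stand-ins (RowScanner, ScannerTimeout) rather than the real HBase client classes (ResultScanner, Scan) so it can run anywhere, but the loop structure transfers directly.]

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical stand-in for org.apache.hadoop.hbase.client.ScannerTimeoutException.
class ScannerTimeout extends RuntimeException {}

// Hypothetical stand-in for an HBase ResultScanner: returns row keys, null when done.
interface RowScanner {
    String next() throws ScannerTimeout;
}

public class RestartableScan {
    // Simulated table: sorted row keys, as in an HBase region.
    static NavigableMap<String, String> table = new TreeMap<>();

    // Opens a scanner starting strictly after startRowExclusive; when
    // failOnce is set, the scanner's lease "expires" after two rows.
    static RowScanner openScanner(String startRowExclusive, boolean failOnce) {
        Iterator<String> it = table.tailMap(startRowExclusive, false).keySet().iterator();
        return new RowScanner() {
            int served = 0;
            boolean failed = false;
            public String next() {
                if (failOnce && !failed && served == 2) { failed = true; throw new ScannerTimeout(); }
                served++;
                return it.hasNext() ? it.next() : null;
            }
        };
    }

    // The pattern: remember the last row returned; on a lease timeout,
    // reopen the scanner just past it instead of failing the whole job.
    static List<String> scanAll() {
        List<String> rows = new ArrayList<>();
        String lastRow = "";           // "" sorts before every real key
        boolean injectFailure = true;  // make the first scanner time out once
        while (true) {
            RowScanner scanner = openScanner(lastRow, injectFailure);
            try {
                String row;
                while ((row = scanner.next()) != null) {
                    rows.add(row);
                    lastRow = row;
                }
                return rows;           // scan completed normally
            } catch (ScannerTimeout e) {
                injectFailure = false; // restart from lastRow on next loop
            }
        }
    }

    public static void main(String[] args) {
        for (String k : new String[]{"a", "b", "c", "d", "e"}) table.put(k, "v");
        System.out.println(scanAll()); // all five rows despite the mid-scan timeout
    }
}
```

With the real client, the restart would rebuild the Scan with a new start row and call HTable.getScanner() again; the bookkeeping is identical.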