Re: Giving a chance to buggy coprocessors to clean up
+ dev list
I don't see any good reason for that.
Andy, Gary, any insights?

You can also try to place your "global" variables in the shared map you get via RegionCoprocessorEnvironment.getSharedData().
That map is automatically cleaned up when all instances of a coprocessor class are gone.
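
For illustration only, a minimal sketch against the RegionObserver API (class and key names are made up):

import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class SharedStateObserver extends BaseRegionObserver {

  // Arbitrary key for this coprocessor's state in the shared map.
  private static final String STATE_KEY = "my-shared-state";

  @Override
  public void start(CoprocessorEnvironment e) throws IOException {
    if (e instanceof RegionCoprocessorEnvironment) {
      // One map per coprocessor class per RegionServer, shared by all
      // region-level instances, and dropped when the last one goes away.
      ConcurrentMap<String, Object> shared =
          ((RegionCoprocessorEnvironment) e).getSharedData();
      shared.putIfAbsent(STATE_KEY, new ConcurrentHashMap<String, Long>());
    }
  }
}

Every region-level instance of the same class on a RegionServer sees that same map, so it's a reasonable home for anything you'd otherwise stash in a static.
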
-- Lars

________________________________
 From: tsuna <[EMAIL PROTECTED]>
To: HBase users <[EMAIL PROTECTED]>
Sent: Monday, December 9, 2013 9:46 PM
Subject: Giving a chance to buggy coprocessors to clean up
 

Hi there,
If a coprocessor is buggy and throws an uncaught exception, it gets
removed without having its stop() method called, and it therefore
can't free up resources.

Any resources that are held by global variables (e.g. statics on a
class loaded by the coprocessor) can't be freed because of bug
HBASE-9046 (Coprocessors can't be upgraded in service reliably). And
the coprocessor can't be removed because of HBASE-9046.  Therefore
there is no way that I can see to release those resources, short of
restarting the RegionServer (yikes!).
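
To make this concrete, here is a contrived example (names made up) of the sort of coprocessor I mean: a static thread pool that only stop() would ever shut down.

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;

public class StaticPoolObserver extends BaseRegionObserver {

  // Lives in the coprocessor's classloader, not in any single instance.
  private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

  @Override
  public void stop(CoprocessorEnvironment e) throws IOException {
    // Only runs on an orderly unload.  If the coprocessor is kicked out
    // because it threw an uncaught exception, this never executes, the
    // pool's threads stay alive, and they pin the classloader (and all
    // the statics it loaded) in memory until the RegionServer restarts.
    POOL.shutdownNow();
  }
}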

Is there any rationale behind not calling stop() when forcefully
removing the buggy coprocessor?  Or should we maybe add some sort of
cleanUp() method to give the coprocessor a chance to save face and
die gracefully?
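
Purely to illustrate that second option (nothing like this exists today, all the names below are invented), the host side could look roughly like this:

// Hypothetical sketch only: a last-chance hook the host would invoke
// before forcefully removing a coprocessor that threw an uncaught
// exception.
interface CleanUpHook {
  void cleanUp();
}

final class CoprocessorRemoval {
  /** Called once the host has decided to remove a misbehaving coprocessor. */
  static void removeBuggy(Object coprocessorInstance) {
    if (coprocessorInstance instanceof CleanUpHook) {
      try {
        ((CleanUpHook) coprocessorInstance).cleanUp();
      } catch (Throwable ignored) {
        // It's already known to be buggy; don't let its clean-up take
        // the RegionServer down too.
      }
    }
    // ... then drop the instance as the host does today ...
  }
}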

--
Benoit "tsuna" Sigoure