HBase >> mail # user >> Bulk load from OSGi running client

Re: Bulk load from OSGi running client
First issue I found was that I didn't bundle the libhadoop.so in my hadoop
bundle (I saw a lot of "Got brand new decompressor" in the log), that is
fixed now.
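For anyone hitting the same thing: one quick way to sanity-check whether the native library is even visible to the JVM is to scan java.library.path for it. This is a generic JVM sketch (plain java.lang.System APIs, nothing Hadoop- or OSGi-specific; "hadoop" is just the conventional logical library name):

```java
import java.io.File;

public class NativeLibCheck {
    public static void main(String[] args) {
        // Map the logical name to the platform file name,
        // e.g. "libhadoop.so" on Linux, "libhadoop.dylib" on macOS.
        String libFile = System.mapLibraryName("hadoop");
        boolean found = false;
        for (String dir : System.getProperty("java.library.path").split(File.pathSeparator)) {
            if (new File(dir, libFile).isFile()) {
                found = true;
                break;
            }
        }
        System.out.println(libFile + (found ? " found" : " not found") + " on java.library.path");
    }
}
```

Inside an OSGi container the bundle's own native-library resolution (Bundle-NativeCode) takes over, so this only tells you about the plain JVM path, but it rules out the simple misconfiguration first.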

The main issue still remains: it looks like the Configuration in
Compression.Algorithm holds a class loader that references the bundle at
revision 0 (before the jar update) instead of revision 1 (after the jar
update). This could be due to caching (or static state), but then why does it
start working right after I get the NullPointerException (it does,
immediately, with no restarts or bundle updates)?
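To make the suspicion concrete, here is a minimal self-contained sketch (hypothetical names, not actual HBase code) of the pattern I think is at play: a lazily filled static cache captures the first caller's class loader and keeps handing it back even after a bundle update swaps in a new loader:

```java
// Minimal sketch: a static cache created under one class loader keeps that
// loader reachable after a "bundle update" replaces it with a new one.
public class StaleLoaderDemo {
    // Stands in for lazily built, statically cached state
    // (like a cached codec/Configuration).
    static ClassLoader cachedLoader;

    static ClassLoader lookup(ClassLoader current) {
        if (cachedLoader == null) {
            cachedLoader = current; // first caller's loader is captured
        }
        return cachedLoader;        // later callers get the stale loader back
    }

    public static void main(String[] args) {
        ClassLoader rev0 = new java.net.URLClassLoader(new java.net.URL[0]);
        ClassLoader rev1 = new java.net.URLClassLoader(new java.net.URL[0]);

        System.out.println(lookup(rev0) == rev0); // true: cache filled at "revision 0"
        System.out.println(lookup(rev1) == rev0); // true: "revision 1" still sees rev0
    }
}
```

If this is what happens in Compression.Algorithm, the cached state would pin the revision-0 bundle's class loader exactly as I'm observing.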

If anyone has any idea please share, I will keep posting my findings.


On Sun, Sep 8, 2013 at 3:57 AM, Stack <[EMAIL PROTECTED]> wrote:

> On Sun, Sep 8, 2013 at 4:19 AM, Amit Sela <[EMAIL PROTECTED]> wrote:
> > I did some debugging and I have more input about my issue. The Configuration
> > in Compression.Algorithm has a class loader that holds a reference to the
> > original package (loaded at restart) and not to the current one (loaded
> > after the package update). Is the compression algorithm cached somewhere,
> > such that after a first read (get, scan) from HBase, subsequent uses get a
> > cached instance?
> Yes.  It does this rather than reload it each time.
> Let me know if you need more help getting this all up and running.  Am
> interested in your findings.
> St.Ack