HBase >> mail # user >> Bulk load from OSGi running client


Re: Bulk load from OSGi running client
I did some debugging and have more input on my issue. The Configuration in
Compression.Algorithm has a class loader that holds a reference to the
original package (loaded at restart) and not to the current one (loaded after
the package update). Is the compression algorithm cached somewhere, such that
after a first read (get, scan) from HBase, all subsequent uses get a
cached instance?
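The failure pattern described above is consistent with lazy, one-time caching. Below is a minimal, Hadoop-free sketch of that pattern, an assumption about what the HBase code does rather than a copy of it; the class and field names are illustrative only:

```java
// Sketch of the suspected caching pattern: a lazily initialized singleton
// captures the context class loader active on first use and keeps serving
// it, even after a bundle refresh swaps in a new loader.
public class StaleLoaderDemo {

    // Hypothetical stand-in for a cached codec/Configuration inside
    // Compression.Algorithm (NOT the actual HBase implementation).
    static final class CachedCodec {
        private static ClassLoader cachedLoader;  // populated on first use only

        static synchronized ClassLoader get() {
            if (cachedLoader == null) {
                cachedLoader = Thread.currentThread().getContextClassLoader();
            }
            return cachedLoader;  // later calls ignore the current TCCL
        }
    }

    public static void main(String[] args) {
        ClassLoader oldBundle = new ClassLoader() {};  // pre-update bundle revision
        ClassLoader newBundle = new ClassLoader() {};  // refreshed bundle revision

        Thread.currentThread().setContextClassLoader(oldBundle);
        ClassLoader first = CachedCodec.get();   // caches oldBundle

        Thread.currentThread().setContextClassLoader(newBundle);
        ClassLoader second = CachedCodec.get();  // still oldBundle: stale reference

        System.out.println(first == oldBundle);
        System.out.println(second == oldBundle);
        System.out.println(second == newBundle);
    }
}
```

If this is what happens, the first read after a refresh would resolve resources through the stale (now invalid) bundle revision, which matches the NullPointerException deep in Felix's resource lookup.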
On Sep 3, 2013 6:37 PM, "Amit Sela" <[EMAIL PROTECTED]> wrote:

> Hi all,
>
> I'm running on Hadoop 1.0.4 with HBase 0.94.2 and I've bundled both (for
> client-side use only) so that I could support execution of MR and/or HBase
> queries (and other client operations) from an OSGi environment (in my case
> Felix).
>
> So far, with some context class loader adjustments, I've managed to execute
> MR jobs and to query HBase (get, put...) with no problem.
>
> *I'm trying to execute Bulk Load into HBase and I seem to encounter a
> strange NullPointerException:*
> Caused by: java.lang.NullPointerException: null
> at org.apache.felix.framework.BundleRevisionImpl.getResourceLocal(BundleRevisionImpl.java:474)
> at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1432)
> at org.apache.felix.framework.BundleWiringImpl.getResourceByDelegation(BundleWiringImpl.java:1360)
> at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.getResource(BundleWiringImpl.java:2256)
> at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:1002)
> at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1156)
> at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1112)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1056)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:401)
> at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:471)
> at org.apache.hadoop.io.compress.GzipCodec.createInputStream(GzipCodec.java:131)
> at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createDecompressionStream(Compression.java:223)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.decompress(HFileBlock.java:1392)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1897)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1637)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1286)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1294)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:126)
> at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:552)
> at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
> at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:603)
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:402)
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:323)
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:321)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>
> *I get this every time I try to bulk load after a server restart followed
> by a bundle update (the update is done after the restart, so it triggers a
> package refresh).*
> *Strangely, if I immediately try again, it succeeds, and any following
> attempts succeed as well.*
>
> *Any ideas, anyone?*
>
> *Thanks,*
>
> *Amit.*
>
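For reference, the "context class loader adjustments" mentioned in the quoted message typically follow a swap-and-restore pattern around each Hadoop/HBase client call, so that Configuration resource lookups resolve through the bundle's loader. This is a minimal sketch under that assumption; only the standard Thread/Callable APIs are real, the helper class and names are made up:

```java
import java.util.concurrent.Callable;

// Hypothetical TCCL helper: swap in the bundle's class loader for the
// duration of a client call, then restore the previous loader.
public final class Tccl {

    /** Runs the given action with {@code loader} as the thread context class loader. */
    public static <T> T callWith(ClassLoader loader, Callable<T> action) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(loader);
        try {
            return action.call();  // e.g. table.get(...), job.waitForCompletion(...)
        } finally {
            current.setContextClassLoader(previous);  // always restore
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader before = Thread.currentThread().getContextClassLoader();
        ClassLoader bundleLoader = new ClassLoader() {};  // stand-in for the bundle's loader

        String result = callWith(bundleLoader, () ->
                Thread.currentThread().getContextClassLoader() == bundleLoader
                        ? "swapped" : "not-swapped");
        System.out.println(result);
        System.out.println(Thread.currentThread().getContextClassLoader() == before
                ? "restored" : "leaked");
    }
}
```

Note the catch: this only helps code that reads the TCCL at call time. Anything that captured and cached the loader earlier (as the Configuration in Compression.Algorithm appears to) will keep the stale reference regardless.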