Re: Bulk load fails with NullPointerException
Maybe one option would be to disable compactions, load the data,
re-enable compactions, and then major-compact the table?
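
A rough sketch of that sequence with the 0.94-era client API, assuming compactions have been held off on the region servers beforehand (e.g. a very high hbase.hstore.compaction.min / hbase.hstore.compactionThreshold in hbase-site.xml) and are allowed again before the last step; the table name and HFile path are placeholders, not from the thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadThenCompact {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // 1. Compactions are assumed to be disabled/throttled on the cluster
    //    before this point (server-side configuration, not shown here).

    // 2. Bulk-load the HFiles produced by the MapReduce job.
    HTable table = new HTable(conf, "mytable");               // placeholder table
    LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
    loader.doBulkLoad(new Path("/tmp/hfile-output"), table);  // placeholder path

    // 3. Once compactions are re-enabled, trigger a major compaction so the
    //    newly loaded files get merged with the existing store files.
    HBaseAdmin admin = new HBaseAdmin(conf);
    admin.majorCompact("mytable");

    admin.close();
    table.close();
  }
}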

2012/11/6, Amit Sela <[EMAIL PROTECTED]>:
> Seems like that's the one alright... Any ideas how to avoid it? Maybe a
> patch?
>
> On Tue, Nov 6, 2012 at 8:05 PM, Jean-Daniel Cryans <[EMAIL PROTECTED]> wrote:
>
>> This sounds a lot like https://issues.apache.org/jira/browse/HBASE-5458
>>
>> On Tue, Nov 6, 2012 at 2:28 AM, Amit Sela <[EMAIL PROTECTED]> wrote:
>> > Hi all,
>> >
>> > I'm trying to bulk load using LoadIncrementalHFiles and I get a
>> > NullPointerException at
>> > org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63).
>> >
>> > It looks like DefaultCodec has no configuration set...
>> >
>> > Has anyone encountered this before?
>> >
>> > Thanks.
>> >
>> > Full exception thrown:
>> >
>> > java.util.concurrent.ExecutionException: java.lang.NullPointerException
>> >     at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>> >     at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>> >     at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:333)
>> >     at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:232)
>> >     at com.infolinks.hadoop.jobrunner.UrlsHadoopJobExecutor.executeURLJob(UrlsHadoopJobExecutor.java:204)
>> >     at com.infolinks.hadoop.jobrunner.UrlsHadoopJobExecutor.runJobIgnoreSystemJournal(UrlsHadoopJobExecutor.java:86)
>> >     at com.infolinks.hadoop.jobrunner.HadoopJobExecutor.main(HadoopJobExecutor.java:182)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >     at java.lang.reflect.Method.invoke(Method.java:597)
>> >     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> > Caused by: java.lang.NullPointerException
>> >     at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63)
>> >     at org.apache.hadoop.io.compress.GzipCodec.getDecompressorType(GzipCodec.java:142)
>> >     at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:125)
>> >     at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getDecompressor(Compression.java:290)
>> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.decompress(HFileBlock.java:1391)
>> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1897)
>> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1637)
>> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1286)
>> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1294)
>> >     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:126)
>> >     at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:552)
>> >     at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
>> >     at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:603)
>> >     at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:402)
>> >     at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:323)
>> >     at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:321)
>> >     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>> >     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
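
For what it's worth, the NPE in the quoted trace is ZlibFactory.isNativeZlibLoaded() being handed a null Configuration: the GzipCodec behind the HFile reader was created without setConf() ever being called, which would line up with the HBASE-5458 lead above. A small sketch of that failure mode (just an illustration, not the author's code); whether the bare call actually throws depends on the Hadoop build, since the null conf is only dereferenced when the native zlib library is loaded, as it apparently is on the cluster in question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class CodecConfNpe {
  public static void main(String[] args) {
    // Path 1: codec created directly, so setConf() is never called and its
    // Configuration stays null. With native zlib loaded,
    // ZlibFactory.isNativeZlibLoaded(null) then fails exactly as in the
    // bulk load trace above.
    GzipCodec bare = new GzipCodec();
    try {
      bare.getDecompressorType();
    } catch (NullPointerException expected) {
      System.out.println("NPE from codec with null Configuration");
    }

    // Path 2: codec created through ReflectionUtils, which calls setConf().
    // With a non-null Configuration the same call succeeds.
    Configuration conf = new Configuration();
    GzipCodec configured = ReflectionUtils.newInstance(GzipCodec.class, conf);
    System.out.println(configured.getDecompressorType());
  }
}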