HBase >> mail # user >> Bulk load fails with NullPointerException


Re: Bulk load fails with NullPointerException
I'm not talking about the major compaction, but about the CF compression.

What's your table definition? Do you have compression (GZ) defined on the column family?

Based on the stack trace, that's where the failure is.

So if you disable it while you are doing your load, you should not
face this again. Then you can alter your CF to re-activate it.
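The workaround above can be sketched in the HBase shell; the table and family names ('mytable', 'cf') are placeholders for your own schema:

```ruby
describe 'mytable'                                      # check whether COMPRESSION => 'GZ' is set
disable 'mytable'
alter 'mytable', {NAME => 'cf', COMPRESSION => 'NONE'}  # switch compression off before the bulk load
enable 'mytable'
# ... run the bulk load (LoadIncrementalHFiles) ...
disable 'mytable'
alter 'mytable', {NAME => 'cf', COMPRESSION => 'GZ'}    # re-activate compression
enable 'mytable'
major_compact 'mytable'                                 # rewrite the loaded HFiles as GZ
```

Re-enabling GZ and then major-compacting rewrites the uncompressed HFiles as compressed ones, so nothing stays uncompressed for long.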

2012/11/6, Amit Sela <[EMAIL PROTECTED]>:
> Do you mean setting: hbase.hregion.majorcompaction to 0 ?
> Because it's already set this way. We pre-create new regions before writing
> to HBase and initiate a major compaction once a day.
>
> On Tue, Nov 6, 2012 at 8:51 PM, Jean-Marc Spaggiari
> <[EMAIL PROTECTED]> wrote:
>
>
>> Maybe one option will be to disable the compaction, load the data,
>> re-activate the compaction, major-compact the data?
>>
>> 2012/11/6, Amit Sela <[EMAIL PROTECTED]>:
>> > Seems like that's the one alright... Any ideas how to avoid it?
>> > Maybe a patch?
>> >
>> > On Tue, Nov 6, 2012 at 8:05 PM, Jean-Daniel Cryans
>> > <[EMAIL PROTECTED]> wrote:
>> >
>> >> This sounds a lot like
>> >> https://issues.apache.org/jira/browse/HBASE-5458
>> >>
>> >> On Tue, Nov 6, 2012 at 2:28 AM, Amit Sela <[EMAIL PROTECTED]> wrote:
>> >> > Hi all,
>> >> >
>> >> > I'm trying to bulk load using LoadIncrementalHFiles and I get a
>> >> > NullPointerException at:
>> >> > org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63).
>> >> >
>> >> > It looks like DefaultCodec has no set configuration...
>> >> >
>> >> > Has anyone encountered this before?
>> >> >
>> >> > Thanks.
>> >> >
>> >> > Full exception thrown:
>> >> >
>> >> > java.util.concurrent.ExecutionException: java.lang.NullPointerException
>> >> >   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>> >> >   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>> >> >   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:333)
>> >> >   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:232)
>> >> >   at com.infolinks.hadoop.jobrunner.UrlsHadoopJobExecutor.executeURLJob(UrlsHadoopJobExecutor.java:204)
>> >> >   at com.infolinks.hadoop.jobrunner.UrlsHadoopJobExecutor.runJobIgnoreSystemJournal(UrlsHadoopJobExecutor.java:86)
>> >> >   at com.infolinks.hadoop.jobrunner.HadoopJobExecutor.main(HadoopJobExecutor.java:182)
>> >> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >   at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> > Caused by: java.lang.NullPointerException
>> >> >   at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63)
>> >> >   at org.apache.hadoop.io.compress.GzipCodec.getDecompressorType(GzipCodec.java:142)
>> >> >   at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:125)
>> >> >   at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getDecompressor(Compression.java:290)
>> >> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.decompress(HFileBlock.java:1391)
>> >> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1897)
>> >> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1637)
>> >> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1286)
>> >> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1294)
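For anyone hitting the same trace: per HBASE-5458, the NPE happens because the GzipCodec is instantiated without ever receiving a Hadoop Configuration, so ZlibFactory.isNativeZlibLoaded(conf) dereferences a null conf. A pure-JDK sketch of that failure pattern — the class names below are hypothetical stand-ins, not the real Hadoop types:

```java
// Stand-in for Hadoop's Configuration (hypothetical, for illustration only).
class Configuration {
    boolean nativeZlibAvailable() { return false; } // stand-in for the native-lib lookup
}

// Stand-in mirroring a Configurable codec like GzipCodec.
class GzipLikeCodec {
    private Configuration conf; // stays null until setConf() is called

    void setConf(Configuration c) { this.conf = c; }

    // Mirrors GzipCodec.getDecompressorType -> ZlibFactory.isNativeZlibLoaded(conf):
    // dereferences conf, so it throws NullPointerException if the codec was
    // created (e.g. reflectively) without being configured.
    String decompressorType() {
        return conf.nativeZlibAvailable() ? "native-zlib" : "built-in-java";
    }
}

public class CodecConfSketch {
    public static void main(String[] args) {
        GzipLikeCodec codec = new GzipLikeCodec();
        try {
            codec.decompressorType(); // used before setConf(), as in the bug
            System.out.println("unexpected: no NPE");
        } catch (NullPointerException e) {
            System.out.println("NPE: codec used before setConf, as in HBASE-5458");
        }
        codec.setConf(new Configuration()); // the fix: configure the codec before use
        System.out.println(codec.decompressorType());
    }
}
```

The real fix (applied upstream in HBASE-5458) follows the same shape: ensure the codec gets a Configuration before any compressor/decompressor is requested from it.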