Re: Bulk load fails with NullPointerException
Does this bug affect Snappy as well? Maybe I'll just use it instead of GZ
(it's also recommended in the book).
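(For reference: switching the family from GZ to Snappy is a one-line change on the
column family descriptor. Below is a minimal sketch, assuming the 0.94-era Java
client API; the table name "urls" and family "d" are made up, and the Snappy
native libraries must be installed on every region server before enabling it.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.io.hfile.Compression;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SwitchFamilyToSnappy {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        String table = "urls";                 // hypothetical table name
        byte[] family = Bytes.toBytes("d");    // hypothetical column family

        // Reuse the existing descriptor so the other CF settings are kept.
        HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes(table));
        HColumnDescriptor cf = htd.getFamily(family);
        cf.setCompressionType(Compression.Algorithm.SNAPPY);

        admin.disableTable(table);
        admin.modifyColumn(table, cf);
        admin.enableTable(table);

        // Existing HFiles stay GZ until rewritten; a major compaction rewrites them.
        admin.majorCompact(table);
        admin.close();
      }
    }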

On Tue, Nov 6, 2012 at 10:27 PM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:

> I'm not talking about the major compaction, but about the compression
> set on the CF.
>
> What's your table definition? Do you have compression (GZ) defined
> there?
>
> It seems there is some failure with this based on the stack trace.
>
> So if you disable it while you are doing your load, you should not
> face this again. Then you can alter your CF to re-activate it?
>
> 2012/11/6, Amit Sela <[EMAIL PROTECTED]>:
> > Do you mean setting hbase.hregion.majorcompaction to 0?
> > Because it's already set this way. We pre-create new regions before
> > writing to HBase and initiate a major compaction once a day.
> >
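(For reference, a minimal sketch of that setup, assuming the 0.94-era client API:
with hbase.hregion.majorcompaction set to 0 in hbase-site.xml, time-based major
compactions are disabled, and a scheduled client job can request the daily one
explicitly. The table name "urls" is made up.)

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class DailyMajorCompaction {
      public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        // Asynchronously asks the region servers to major-compact every
        // region of the table; run this once a day from a scheduler.
        admin.majorCompact("urls");
        admin.close();
      }
    }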
> > On Tue, Nov 6, 2012 at 8:51 PM, Jean-Marc Spaggiari
> > <[EMAIL PROTECTED]> wrote:
> >
> >> Maybe one option would be to disable the compression, load the data,
> >> re-activate the compression, and major-compact the data?
> >>
> >> 2012/11/6, Amit Sela <[EMAIL PROTECTED]>:
> >> > Seems like that's the one alright... Any ideas how to avoid it?
> >> > Maybe a patch?
> >> >
> >> > On Tue, Nov 6, 2012 at 8:05 PM, Jean-Daniel Cryans
> >> > <[EMAIL PROTECTED]> wrote:
> >> >
> >> >> This sounds a lot like
> >> >> https://issues.apache.org/jira/browse/HBASE-5458
> >> >>
> >> >> > On Tue, Nov 6, 2012 at 2:28 AM, Amit Sela <[EMAIL PROTECTED]> wrote:
> >> >> > Hi all,
> >> >> >
> >> >> > I'm trying to bulk load using LoadIncrementalHFiles and I get a
> >> >> > NullPointerException at:
> >> >> > org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63).
> >> >> >
> >> >> > It looks like DefaultCodec has no set configuration...
> >> >> >
> >> >> > Has anyone encountered this before?
> >> >> >
> >> >> > Thanks.
> >> >> > Full exception thrown:
> >> >> >
> >> >> > java.util.concurrent.ExecutionException: java.lang.NullPointerException
> >> >> >   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> >> >> >   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> >> >> >   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:333)
> >> >> >   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:232)
> >> >> >   at com.infolinks.hadoop.jobrunner.UrlsHadoopJobExecutor.executeURLJob(UrlsHadoopJobExecutor.java:204)
> >> >> >   at com.infolinks.hadoop.jobrunner.UrlsHadoopJobExecutor.runJobIgnoreSystemJournal(UrlsHadoopJobExecutor.java:86)
> >> >> >   at com.infolinks.hadoop.jobrunner.HadoopJobExecutor.main(HadoopJobExecutor.java:182)
> >> >> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> >> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >   at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> > Caused by: java.lang.NullPointerException
> >> >> >   at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63)
> >> >> >   at org.apache.hadoop.io.compress.GzipCodec.getDecompressorType(GzipCodec.java:142)
> >> >> >   at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:125)
> >> >> >   at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getDecompressor(Compression.java:290)
> >> >> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.decompress(HFileBlock.java:1391)
> >> >> >   at ...
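(For context on the trace above, which is truncated here: the NullPointerException
happens because the GzipCodec/DefaultCodec instance consulted for the HFile has no
Configuration set when ZlibFactory is asked about native zlib, which matches the
"DefaultCodec has no set configuration" observation and the HBASE-5458 report.
Below is a minimal, hypothetical sketch of that condition, not the actual HBase fix.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.Decompressor;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class GzipCodecConfDemo {
      public static void main(String[] args) {
        // A freshly constructed GzipCodec has no Configuration yet. If it is
        // asked for a decompressor in that state, GzipCodec.getDecompressorType()
        // calls ZlibFactory.isNativeZlibLoaded() with a null conf -- the line
        // the NPE in the trace above points at.
        GzipCodec bare = new GzipCodec();
        System.out.println("conf before setConf: " + bare.getConf()); // prints null

        // Creating the codec through ReflectionUtils (or calling setConf
        // explicitly) hands it a Configuration first, so the null-conf path
        // is never taken.
        Configuration conf = new Configuration();
        GzipCodec configured = ReflectionUtils.newInstance(GzipCodec.class, conf);
        Decompressor d = CodecPool.getDecompressor(configured);
        System.out.println("decompressor with conf set: " + d);
      }
    }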