HBase, mail # user - Why is this region compacting?


Re: Why is this region compacting?
Jean-Marc Spaggiari 2013-09-24, 20:11
Hi Tom,

Thanks for this information and the offer. I think we have enough to start
looking at this issue. I'm still trying to reproduce it locally. In the
meantime, I sent a patch to fix the NullPointerException you faced before.
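
The idea is just a null guard before stringifying the first key; roughly
this (a sketch only, the committed patch may differ):

    // AbstractHFileReader.toStringFirstKey calls
    // KeyValue.keyToString(getFirstKey()), and keyToString throws a
    // NullPointerException when the key is null, as it is for an HFile
    // with no entries. Guarding against the null avoids the crash:
    protected String toStringFirstKey() {
      byte[] firstKey = getFirstKey();
      return firstKey == null ? null : KeyValue.keyToString(firstKey);
    }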

I will post back here if I'm able to reproduce it. Have you tried Sergey's
workaround?

JM
2013/9/24 Tom Brown <[EMAIL PROTECTED]>

> Yes, it is empty.
>
> 13/09/24 13:03:03 INFO hfile.CacheConfig: Allocating LruBlockCache with
> maximum size 2.9g
> 13/09/24 13:03:03 ERROR metrics.SchemaMetrics: Inconsistent configuration.
> Previous configuration for using table name in metrics: true, new
> configuration: false
> 13/09/24 13:03:03 WARN metrics.SchemaConfigured: Could not determine table
> and column family of the HFile path /fca0882dc7624342a8f4fce4b89420ff.
> Expecting at least 5 path components.
> 13/09/24 13:03:03 WARN snappy.LoadSnappy: Snappy native library is
> available
> 13/09/24 13:03:03 INFO util.NativeCodeLoader: Loaded the native-hadoop
> library
> 13/09/24 13:03:03 INFO snappy.LoadSnappy: Snappy native library loaded
> 13/09/24 13:03:03 INFO compress.CodecPool: Got brand-new decompressor
> Stats:
> no data available for statistics
> Scanned kv count -> 0
>
> If you want to examine the actual file, I would be happy to email it to you
> directly.
>
> --Tom
>
>
> On Tue, Sep 24, 2013 at 12:42 PM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
> > Can you try with fewer parameters and see if you are able to get something
> > from it? This exception is caused by "printMeta", so if you remove -m it
> > should be OK. However, printMeta was what I was looking for ;)
> >
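> > For example, the same command shown below, just without -m (the path here
> > is a placeholder for the HFile you want to inspect):
> >
> >     /usr/lib/hbase/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -s -v -f <path-to-hfile>
> >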
> > getFirstKey for this file seems to return null. So it might simply be an
> > empty file, not necessarily a corrupted one.
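> >
> > To double-check, the reader API can report the entry count directly;
> > something along these lines should do it (a sketch against the 0.94-era
> > HFile API, so verify names like createReader/getEntries in your version):
> >
> >     import org.apache.hadoop.conf.Configuration;
> >     import org.apache.hadoop.fs.FileSystem;
> >     import org.apache.hadoop.fs.Path;
> >     import org.apache.hadoop.hbase.HBaseConfiguration;
> >     import org.apache.hadoop.hbase.io.hfile.CacheConfig;
> >     import org.apache.hadoop.hbase.io.hfile.HFile;
> >
> >     // Open the suspect HFile and print its entry count and whether a
> >     // first key exists (0 entries / null first key => empty file).
> >     Configuration conf = HBaseConfiguration.create();
> >     FileSystem fs = FileSystem.get(conf);
> >     Path path = new Path(
> >         "/hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff");
> >     HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf));
> >     try {
> >       reader.loadFileInfo();
> >       System.out.println("entries = " + reader.getEntries());
> >       System.out.println("firstKey null? " + (reader.getFirstKey() == null));
> >     } finally {
> >       reader.close();
> >     }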
> >
> >
> > 2013/9/24 Tom Brown <[EMAIL PROTECTED]>
> >
> > > /usr/lib/hbase/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -s -v -f /hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff
> > > 13/09/24 12:33:40 INFO util.ChecksumType: Checksum using
> > > org.apache.hadoop.util.PureJavaCrc32
> > > Scanning ->
> > > /hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff
> > > 13/09/24 12:33:41 INFO hfile.CacheConfig: Allocating LruBlockCache with
> > > maximum size 2.9g
> > > 13/09/24 12:33:41 ERROR metrics.SchemaMetrics: Inconsistent configuration.
> > > Previous configuration for using table name in metrics: true, new
> > > configuration: false
> > > 13/09/24 12:33:41 WARN snappy.LoadSnappy: Snappy native library is
> > > available
> > > 13/09/24 12:33:41 INFO util.NativeCodeLoader: Loaded the native-hadoop
> > > library
> > > 13/09/24 12:33:41 INFO snappy.LoadSnappy: Snappy native library loaded
> > > 13/09/24 12:33:41 INFO compress.CodecPool: Got brand-new decompressor
> > > Block index size as per heapsize: 336
> > > Exception in thread "main" java.lang.NullPointerException
> > >         at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:716)
> > >         at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toStringFirstKey(AbstractHFileReader.java:138)
> > >         at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toString(AbstractHFileReader.java:149)
> > >         at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.printMeta(HFilePrettyPrinter.java:318)
> > >         at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:234)
> > >         at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:189)
> > >         at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:756)
> > >
> > >
> > > Does this mean the problem might have been caused by a corrupted file
> > > (or files)?
> > >
> > > --Tom
> > >
> > >
> > > On Tue, Sep 24, 2013 at 12:21 PM, Jean-Marc Spaggiari <
> > > [EMAIL PROTECTED]> wrote: