Re: Why is this region compacting?
/usr/lib/hbase/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -s -v -f /hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff
13/09/24 12:33:40 INFO util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
Scanning -> /hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff
13/09/24 12:33:41 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 2.9g
13/09/24 12:33:41 ERROR metrics.SchemaMetrics: Inconsistent configuration. Previous configuration for using table name in metrics: true, new configuration: false
13/09/24 12:33:41 WARN snappy.LoadSnappy: Snappy native library is available
13/09/24 12:33:41 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/09/24 12:33:41 INFO snappy.LoadSnappy: Snappy native library loaded
13/09/24 12:33:41 INFO compress.CodecPool: Got brand-new decompressor
Block index size as per heapsize: 336
Exception in thread "main" java.lang.NullPointerException
        at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:716)
        at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toStringFirstKey(AbstractHFileReader.java:138)
        at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toString(AbstractHFileReader.java:149)
        at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.printMeta(HFilePrettyPrinter.java:318)
        at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:234)
        at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:189)
        at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:756)
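Judging from the trace, the tool dies in toStringFirstKey() while printing metadata rather than while scanning data, which would fit an HFile with no KeyValues at all. A minimal sketch of that code path with a null guard (my reconstruction against the 0.94-era API, not the shipped source):

import org.apache.hadoop.hbase.KeyValue;

public class FirstKeyGuard {
  // AbstractHFileReader.toStringFirstKey() passes the reader's first key
  // straight to KeyValue.keyToString(); for a file with zero entries
  // getFirstKey() presumably returns null, and keyToString() dereferences
  // it, which matches the NPE above.
  static String toStringFirstKey(byte[] firstKey) {
    return firstKey == null ? "null (empty file?)" : KeyValue.keyToString(firstKey);
  }

  public static void main(String[] args) {
    System.out.println(toStringFirstKey(null)); // prints the guard, no NPE
  }
}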
Does this mean the problem might have been caused by a corrupted file (or files)?

--Tom
On Tue, Sep 24, 2013 at 12:21 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:

> One more thing, Tom.
>
> When you have been able to capture the HFile locally, please run the
> HFile class on it to see the number of keys (is it empty?) and the other
> specific information.
>
> bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -s -v -f HFILENAME
>
> Thanks,
>
> JM
>
>
> 2013/9/24 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
>
> > We get -1 because of this:
> >
> >       byte [] timerangeBytes = metadataMap.get(TIMERANGE_KEY);
> >       if (timerangeBytes != null) {
> >         this.reader.timeRangeTracker = new TimeRangeTracker();
> >         Writables.copyWritable(timerangeBytes, this.reader.timeRangeTracker);
> >       }
> > this.reader.timeRangeTracker will return -1 for the maximumTimestamp
> > value. So now we need to figure out whether it's normal to have
> > TIMERANGE_KEY non-null here.
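> >
> > For what it's worth, a tiny sketch of why -1 shows up (assuming 0.94's
> > TimeRangeTracker, where both timestamps start at -1 and only move when
> > a KeyValue is included):
> >
> > import org.apache.hadoop.hbase.regionserver.TimeRangeTracker;
> >
> > public class EmptyTrackerDemo {
> >   public static void main(String[] args) {
> >     // A writer that never sees a KeyValue serializes a tracker whose
> >     // maximumTimestamp is still its initial -1.
> >     TimeRangeTracker tracker = new TimeRangeTracker();
> >     System.out.println(tracker.getMaximumTimestamp()); // prints -1
> >   }
> > }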
> >
> > I have created the same table locally on 0.94.10 with the same
> > attributes and I'm not facing this issue.
> >
> > We need to look at the related HFile, but files are rolled VERY
> > quickly, so it might be difficult to get one.
> >
> > Maybe something like
> >
> > hadoop fs -get hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/* .
> >
> > might help to get the file? Then we can start to look at it and see
> > what exactly triggers this behaviour.
> >
> > JM
> >
> >
> > 2013/9/24 Sergey Shelukhin <[EMAIL PROTECTED]>
> >
> >> Yeah, I think the c3580bdb62d64e42a9eeac50f1c582d2 store file is a
> >> good example.
> >> Can you grep for c3580bdb62d64e42a9eeac50f1c582d2 and post the log
> >> just to be sure? Thanks.
> >> It looks like an interaction between deleting expired files and this
> >> code in the compactor:
> >>
> >>           // Create the writer even if no kv(Empty store file is also ok),
> >>           // because we need record the max seq id for the store file, see
> >>           // HBASE-6059
> >>
> >> The newly created file is immediately collected the same way and
> >> replaced by another file, which does not seem like intended behavior,
> >> even though both pieces of code are technically correct (the empty
> >> file is expired, and the new file is generally needed).
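> >>
> >> To make the cycle concrete, here is a toy simulation (hypothetical
> >> names, not actual HBase code) of that interaction:
> >>
> >> import java.util.ArrayList;
> >> import java.util.List;
> >>
> >> public class ExpiredEmptyFileLoop {
> >>   // Stand-in for a store file: only the fields that matter here.
> >>   static class FakeStoreFile {
> >>     final long maxSeqId;
> >>     final long maxTimestamp; // -1 when the file holds no KeyValues
> >>     FakeStoreFile(long seqId, long ts) { maxSeqId = seqId; maxTimestamp = ts; }
> >>   }
> >>
> >>   public static void main(String[] args) {
> >>     long ttlCutoff = System.currentTimeMillis() - 60000L;
> >>     List<FakeStoreFile> store = new ArrayList<FakeStoreFile>();
> >>     store.add(new FakeStoreFile(10, -1)); // empty file from a prior compaction
> >>
> >>     for (int round = 1; round <= 3; round++) {
> >>       FakeStoreFile f = store.get(0);
> >>       if (f.maxTimestamp < ttlCutoff) { // max timestamp -1: always "expired"
> >>         store.remove(f);                // drop the expired empty file...
> >>         // ...but write a new empty one to carry the max seq id forward,
> >>         // which the next round will select as expired again, forever.
> >>         store.add(new FakeStoreFile(f.maxSeqId, -1));
> >>         System.out.println("round " + round + ": replaced empty file again");
> >>       }
> >>     }
> >>   }
> >> }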