HBase user mailing list: Why is this region compacting?


Tom Brown 2013-09-24, 15:50
Jean-Marc Spaggiari 2013-09-24, 15:55
Bharath Vissapragada 2013-09-24, 15:59
Tom Brown 2013-09-24, 16:08
Jean-Marc Spaggiari 2013-09-24, 16:13
Tom Brown 2013-09-24, 16:33
Jean-Marc Spaggiari 2013-09-24, 16:51
Jean-Marc Spaggiari 2013-09-24, 16:53
Tom Brown 2013-09-24, 17:02
Jean-Marc Spaggiari 2013-09-24, 17:14
Tom Brown 2013-09-24, 17:18
Jean-Marc Spaggiari 2013-09-24, 17:20
Sergey Shelukhin 2013-09-24, 17:55
Sergey Shelukhin 2013-09-24, 18:07
Jean-Marc Spaggiari 2013-09-24, 18:10
Jean-Marc Spaggiari 2013-09-24, 18:21

Re: Why is this region compacting?
/usr/lib/hbase/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -s -v -f
/hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff
13/09/24 12:33:40 INFO util.ChecksumType: Checksum using
org.apache.hadoop.util.PureJavaCrc32
Scanning ->
/hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff
13/09/24 12:33:41 INFO hfile.CacheConfig: Allocating LruBlockCache with
maximum size 2.9g
13/09/24 12:33:41 ERROR metrics.SchemaMetrics: Inconsistent configuration.
Previous configuration for using table name in metrics: true, new
configuration: false
13/09/24 12:33:41 WARN snappy.LoadSnappy: Snappy native library is available
13/09/24 12:33:41 INFO util.NativeCodeLoader: Loaded the native-hadoop
library
13/09/24 12:33:41 INFO snappy.LoadSnappy: Snappy native library loaded
13/09/24 12:33:41 INFO compress.CodecPool: Got brand-new decompressor
Block index size as per heapsize: 336
Exception in thread "main" java.lang.NullPointerException
        at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:716)
        at
org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toStringFirstKey(AbstractHFileReader.java:138)
        at
org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toString(AbstractHFileReader.java:149)
        at
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.printMeta(HFilePrettyPrinter.java:318)
        at
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:234)
        at
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:189)
        at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:756)
Does this mean the problem might have been caused by a corrupted file (or files)?

--Tom
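
For reference, the NPE above does not by itself prove corruption. Judging from the stack trace, printMeta asks the reader for its first key; an HFile written with zero entries has none, so getFirstKey() returns null and KeyValue.keyToString() dereferences it. A minimal sketch of that interaction, with paraphrased stand-ins rather than the actual 0.94 methods:

// Sketch: why "hbase ... HFile -m" can NPE on a zero-entry HFile.
// The helpers paraphrase the call chain in the stack trace above
// (AbstractHFileReader.toStringFirstKey -> KeyValue.keyToString);
// they are illustrations, not the real HBase source.
public class EmptyHFileNpeSketch {

    // Stand-in for AbstractHFileReader#getFirstKey(): an HFile that was
    // written with no KeyValues has no first key, so this returns null.
    static byte[] getFirstKey(int entryCount, byte[] firstKeyBytes) {
        return entryCount == 0 ? null : firstKeyBytes;
    }

    // Stand-in for KeyValue.keyToString(byte[]): it reads fields out of
    // the key buffer without a null check, so a null key throws NPE.
    static String keyToString(byte[] key) {
        return "keylen=" + key.length; // NullPointerException when key == null
    }

    public static void main(String[] args) {
        byte[] firstKey = getFirstKey(0, null); // zero-entry store file
        System.out.println(keyToString(firstKey)); // throws, like the trace above
    }
}

If that reading is right, the NPE points to an empty store file rather than a corrupt one, which matches the theory discussed in the quoted messages below.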
On Tue, Sep 24, 2013 at 12:21 PM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:

> One more thing, Tom,
>
> Once you have been able to capture the HFile locally, please run the
> HFile class on it to see the number of keys (is it empty?) and the other
> specific information.
>
> bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -s -v -f HFILENAME
>
> Thanks,
>
> JM
>
>
> 2013/9/24 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
>
> > We get -1 because of this:
> >
> >       byte [] timerangeBytes = metadataMap.get(TIMERANGE_KEY);
> >       if (timerangeBytes != null) {
> >         this.reader.timeRangeTracker = new TimeRangeTracker();
> >         Writables.copyWritable(timerangeBytes, this.reader.timeRangeTracker);
> >       }
> >
> > this.reader.timeRangeTracker will then report -1 for the maximumTimestamp
> > value. So now we need to figure out whether or not it is normal to have
> > TIMERANGE_KEY non-null here.
> >
> > I have created the same table locally on 0.94.10 with the same attributes
> > and I'm not facing this issue.
> >
> > We need to look at the related HFile, but the files are rolled VERY
> > quickly, so it might be difficult to get one.
> >
> > Maybe something like
> >
> > hadoop fs -get hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/* .
> >
> > might help to get the file? Then we can start to look at it and see what
> > exactly triggers this behaviour?
> >
> > JM
> >
> >
> > 2013/9/24 Sergey Shelukhin <[EMAIL PROTECTED]>
> >
> >> Yeah, I think the c3580bdb62d64e42a9eeac50f1c582d2 store file is a good
> >> example.
> >> Can you grep for c3580bdb62d64e42a9eeac50f1c582d2 and post the log, just
> >> to be sure? Thanks.
> >> It looks like an interaction between deleting expired files and this
> >> part of the compactor:
> >>           // Create the writer even if no kv(Empty store file is also ok),
> >>           // because we need record the max seq id for the store file, see
> >>           // HBASE-6059
> >> The newly created file is immediately collected the same way and replaced
> >> by another file, which does not seem like intended behavior, even though
> >> both pieces of code are technically correct (the empty file is expired,
> >> and the new file is generally needed).
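
Putting JM's TimeRangeTracker observation and Sergey's HBASE-6059 note together, the suspected loop looks roughly like this: a store file written with no KeyValues keeps the tracker's initial maximumTimestamp of -1 in its TIMERANGE metadata, the TTL check then always classifies that file as expired, and compacting it produces yet another empty file. A minimal sketch of that feedback loop, using illustrative names rather than the actual 0.94 classes:

// Sketch of the suspected loop: an empty store file carries
// maximumTimestamp == -1 in its TIMERANGE metadata (the TimeRangeTracker
// never saw a cell), the TTL expiry check therefore always classifies it
// as expired, and compacting it writes another empty file because the max
// sequence id must be recorded (HBASE-6059). Names are illustrative.
public class ExpiredEmptyFileLoopSketch {

    // Minimal stand-in for a store file's TIMERANGE metadata.
    static class StoreFileInfo {
        final long maxTimestamp; // -1 when no KeyValue was ever tracked
        StoreFileInfo(long maxTimestamp) { this.maxTimestamp = maxTimestamp; }
    }

    // Stand-in for the TTL check: files whose newest cell is older than
    // (now - ttl) are expired; a maxTimestamp of -1 is always "older".
    static boolean isExpired(StoreFileInfo f, long now, long ttlMs) {
        return f.maxTimestamp < now - ttlMs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long ttlMs = 24L * 60 * 60 * 1000; // a one-day TTL, for illustration

        StoreFileInfo file = new StoreFileInfo(-1); // empty store file
        for (int round = 1; round <= 3; round++) {
            if (isExpired(file, now, ttlMs)) {
                // Compaction drops the expired file but, per HBASE-6059,
                // still writes a new empty file to record the max seq id...
                file = new StoreFileInfo(-1); // ...which is again "expired".
                System.out.println("round " + round + ": rewrote empty store file");
            }
        }
        // The region keeps compacting even though no new data arrives.
    }
}

Under those assumptions the region would keep compacting indefinitely even though nothing new is being written, which is exactly the behaviour reported at the top of the thread.
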
Jean-Marc Spaggiari 2013-09-24, 18:42
Tom Brown 2013-09-24, 19:04
Jean-Marc Spaggiari 2013-09-24, 20:11
Tom Brown 2013-09-24, 20:27
Jean-Marc Spaggiari 2013-09-24, 23:08
Sergey Shelukhin 2013-09-25, 02:21
Tom Brown 2013-09-24, 17:19