HBase >> mail # user >> Why is this region compacting?


Thread:
  Tom Brown 2013-09-24, 15:50
  Jean-Marc Spaggiari 2013-09-24, 15:55
  Bharath Vissapragada 2013-09-24, 15:59
  Tom Brown 2013-09-24, 16:08
  Jean-Marc Spaggiari 2013-09-24, 16:13
  Tom Brown 2013-09-24, 16:33
  Jean-Marc Spaggiari 2013-09-24, 16:51
  Jean-Marc Spaggiari 2013-09-24, 16:53
  Tom Brown 2013-09-24, 17:02
  Jean-Marc Spaggiari 2013-09-24, 17:14
  Tom Brown 2013-09-24, 17:18
  Jean-Marc Spaggiari 2013-09-24, 17:20
  Sergey Shelukhin 2013-09-24, 17:55
  Sergey Shelukhin 2013-09-24, 18:07
  Jean-Marc Spaggiari 2013-09-24, 18:10
  Jean-Marc Spaggiari 2013-09-24, 18:21
  Tom Brown 2013-09-24, 18:35
  Jean-Marc Spaggiari 2013-09-24, 18:42

Re: Why is this region compacting?
Yes, it is empty.

13/09/24 13:03:03 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 2.9g
13/09/24 13:03:03 ERROR metrics.SchemaMetrics: Inconsistent configuration. Previous configuration for using table name in metrics: true, new configuration: false
13/09/24 13:03:03 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path /fca0882dc7624342a8f4fce4b89420ff. Expecting at least 5 path components.
13/09/24 13:03:03 WARN snappy.LoadSnappy: Snappy native library is available
13/09/24 13:03:03 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/09/24 13:03:03 INFO snappy.LoadSnappy: Snappy native library loaded
13/09/24 13:03:03 INFO compress.CodecPool: Got brand-new decompressor
Stats:
no data available for statistics
Scanned kv count -> 0

If you want to examine the actual file, I would be happy to email it to you
directly.
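[Editor's note: the SchemaConfigured warning in the log above fires because the file was copied out of its usual HBase directory layout and now sits at the filesystem root. A minimal sketch of the component count the metrics code is complaining about — the /hbase/&lt;table&gt;/&lt;region&gt;/&lt;family&gt;/&lt;hfile&gt; layout is inferred from paths elsewhere in this thread, not quoted from HBase source:]

```java
import java.util.Arrays;

// Sketch only: count the non-empty '/'-separated segments of a path.
// A file in the usual HBase layout /hbase/<table>/<region>/<family>/<hfile>
// has at least 5 components; a file copied to the root has just 1, which
// is why the SchemaConfigured warning appears for the local copy.
public class PathComponents {
    static int count(String path) {
        // split on '/' and keep only non-empty segments
        return (int) Arrays.stream(path.split("/"))
                           .filter(s -> !s.isEmpty())
                           .count();
    }

    public static void main(String[] args) {
        System.out.println(count("/fca0882dc7624342a8f4fce4b89420ff"));  // prints 1 -> triggers the warning
        System.out.println(count(
            "/hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff")); // prints 5
    }
}
```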

--Tom
On Tue, Sep 24, 2013 at 12:42 PM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:

> Can you try with fewer parameters and see if you are able to get something
> from it? This exception is caused by the "printMeta", so if you remove -m
> it should be ok. However, printMeta was what I was looking for ;)
>
> getFirstKey for this file seems to return null. So it might simply be an
> empty file, not necessarily a corrupted one.
>
>
> 2013/9/24 Tom Brown <[EMAIL PROTECTED]>
>
> > /usr/lib/hbase/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -s -v -f /hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff
> > 13/09/24 12:33:40 INFO util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
> > Scanning -> /hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/fca0882dc7624342a8f4fce4b89420ff
> > 13/09/24 12:33:41 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 2.9g
> > 13/09/24 12:33:41 ERROR metrics.SchemaMetrics: Inconsistent configuration. Previous configuration for using table name in metrics: true, new configuration: false
> > 13/09/24 12:33:41 WARN snappy.LoadSnappy: Snappy native library is available
> > 13/09/24 12:33:41 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> > 13/09/24 12:33:41 INFO snappy.LoadSnappy: Snappy native library loaded
> > 13/09/24 12:33:41 INFO compress.CodecPool: Got brand-new decompressor
> > Block index size as per heapsize: 336
> > Exception in thread "main" java.lang.NullPointerException
> >         at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:716)
> >         at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toStringFirstKey(AbstractHFileReader.java:138)
> >         at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toString(AbstractHFileReader.java:149)
> >         at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.printMeta(HFilePrettyPrinter.java:318)
> >         at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:234)
> >         at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:189)
> >         at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:756)
> >
> >
> > Does this mean the problem might have been caused by a corrupted file (or files)?
> >
> > --Tom
> >
> >
> > On Tue, Sep 24, 2013 at 12:21 PM, Jean-Marc Spaggiari <
> > [EMAIL PROTECTED]> wrote:
> >
> > > One more thing, Tom,
> > >
> > > Once you have been able to capture the HFile locally, please run the
> > > HFile class on it to see the number of keys (is it empty?) and the
> > > other specific information.
> > >
> > > bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -s -v -f HFILENAME
> > >
> > > Thanks,
> > >
> > > JM
> > >
> > >
> > > 2013/9/24 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
> > >
> > > > We get -1 because of this:
> > > >
> > > >       byte [] timerangeBytes = metadataMap.get(TIMERANGE_KEY);
> > > >       if (timerangeBytes != null) {
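[Editor's note: JM's quoted snippet cuts off mid-branch (left as-is above). As an editorial sketch of the pattern he is describing, the -1 discussed in the thread is the fall-through value when the TIMERANGE metadata key is absent from the file-info map. The key name, map type, and byte decoding below are simplified placeholders, not the real HBase TimeRangeTracker API:]

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: an HFile with no TIMERANGE entry in its metadata map
// yields -1 instead of a real maximum timestamp, which is consistent
// with an empty file rather than a corrupted one.
public class TimeRangeSketch {
    static final String TIMERANGE_KEY = "TIMERANGE"; // placeholder key name

    static long maxTimestamp(Map<String, byte[]> metadataMap) {
        byte[] timerangeBytes = metadataMap.get(TIMERANGE_KEY);
        if (timerangeBytes != null) {
            // the real code deserializes a TimeRangeTracker here;
            // for this sketch, treat the first byte as the max timestamp
            return timerangeBytes[0];
        }
        return -1; // no metadata -> sentinel value seen in the thread
    }

    public static void main(String[] args) {
        System.out.println(maxTimestamp(new HashMap<>())); // prints -1
    }
}
```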
Thread (continued):
  Jean-Marc Spaggiari 2013-09-24, 20:11
  Tom Brown 2013-09-24, 20:27
  Jean-Marc Spaggiari 2013-09-24, 23:08
  Sergey Shelukhin 2013-09-25, 02:21
  Tom Brown 2013-09-24, 17:19