HBase dev mailing list: Recent OOM in TestHFileBlock.testConcurrentReading[1] in 0.94


Re: Recent OOM in TestHFileBlock.testConcurrentReading[1] in 0.94
Thanks for bringing this up, Lars.

I looked at three builds where this test OOM'ed: 631, 634 and 635.

It seems that this problem is reproducible on ubuntu3.

I wonder if moving the build to a different server would make this OOM disappear.

Cheers

On Mon, Dec 17, 2012 at 9:58 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:

> Does anybody know what may have caused the recent OOM failures in
> TestHFileBlock.testConcurrentReading[1]?
>
>
> This is the exception:
>
>
> Caused by: java.lang.OutOfMemoryError
>     at java.util.zip.Inflater.init(Native Method)
>     at java.util.zip.Inflater.<init>(Inflater.java:83)
>     at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.<init>(BuiltInGzipDecompressor.java:45)
>     at org.apache.hadoop.io.compress.GzipCodec.createDecompressor(GzipCodec.java:136)
>     at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:127)
>     at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getDecompressor(Compression.java:290)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.decompress(HFileBlock.java:1397)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1830)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1643)
>     at org.apache.hadoop.hbase.io.hfile.TestHFileBlock$BlockReaderThread.call(TestHFileBlock.java:639)
>     at org.apache.hadoop.hbase.io.hfile.TestHFileBlock$BlockReaderThread.call(TestHFileBlock.java:603)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
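
An OutOfMemoryError thrown from Inflater.init points at native zlib memory rather than the Java heap: each java.util.zip.Inflater allocates its native inflate state in the constructor, and that memory is released only by end() or, eventually, by finalization after GC. So even short-lived decompressors can OOM if allocation outpaces reclamation. A minimal sketch of that failure mode, independent of HBase and only meant to illustrate the mechanism:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.zip.Inflater;

    // Minimal sketch: every Inflater acquires native zlib memory in its
    // constructor (the Inflater.init frame in the trace above). Never
    // calling end() and keeping the objects reachable grows native memory
    // until construction itself fails with java.lang.OutOfMemoryError.
    public class InflaterOomSketch {
        public static void main(String[] args) {
            List<Inflater> held = new ArrayList<Inflater>();
            while (true) {
                // true = raw deflate, the mode BuiltInGzipDecompressor uses
                held.add(new Inflater(true));
            }
        }
    }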
>
>
> Here's the latest test run with that failure:
> https://builds.apache.org/job/HBase-0.94/635/
>
> Looks like this is creating a new Decompressor for every block. Looking at
> the code, that seems to be by design when the BuiltInGzipDecompressor is
> used. Seems somewhat inefficient, though.
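
For reference, my reading of the "by design" part: Hadoop annotates BuiltInGzipDecompressor with @DoNotPool, so CodecPool.returnDecompressor() drops it instead of caching it, and every CodecPool.getDecompressor() call falls through to GzipCodec.createDecompressor(), constructing a fresh decompressor (and with it a fresh native Inflater). A condensed, self-contained sketch of that pattern; the class names below are stand-ins, not the actual Hadoop source:

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.zip.Inflater;

    // Condensed stand-in for the CodecPool path in the stack trace;
    // names and bodies are illustrative, not the Hadoop implementation.
    public class CodecPoolSketch {
        @Retention(RetentionPolicy.RUNTIME)
        @interface DoNotPool {}

        @DoNotPool // Hadoop marks BuiltInGzipDecompressor this way
        static class GzipDecompressor {
            final Inflater inflater = new Inflater(true); // native allocation per instance
        }

        private final Deque<GzipDecompressor> pool = new ArrayDeque<GzipDecompressor>();

        GzipDecompressor getDecompressor() {
            GzipDecompressor d = pool.poll();
            // The pool stays empty for the gzip case (see returnDecompressor),
            // so every block pays for a brand-new decompressor.
            return d != null ? d : new GzipDecompressor();
        }

        void returnDecompressor(GzipDecompressor d) {
            if (d.getClass().isAnnotationPresent(DoNotPool.class)) {
                return; // dropped rather than cached: the "by design" part
            }
            pool.push(d);
        }
    }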
>
>
> I initially thought this was caused by HBASE-7336, but that turned out not
> to be the case (OOMs still occurred with that change reverted).
>
> If anybody knows anything about this, please let me know. It might also
> just be an environment issue.
>
>
> Thanks.
>
>
> -- Lars
>
>