Re: Problems with block compression using native codecs (Snappy, LZO) and MapFile.Reader.get()
Hi Jason,

Sounds like a bug. Unfortunately the mailing list strips attachments.

Can you file a JIRA in the HADOOP project and attach your test case there?

Thanks
Todd

On Mon, May 21, 2012 at 3:57 PM, Jason B <[EMAIL PROTECTED]> wrote:
> I am using Cloudera distribution cdh3u1.
>
> When evaluating native codecs such as Snappy or LZO for better
> decompression performance, I ran into issues with random access using
> the MapFile.Reader.get(key, value) method.
> The first call to MapFile.Reader.get() works, but a second call fails.
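
For reference, the failing access pattern looks roughly like this (a
minimal sketch, not the stripped attachment; the Text key/value types
and the file name are placeholder assumptions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class MapFileGetSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);  // map file on the local FS
    MapFile.Reader reader = new MapFile.Reader(fs, "test.map", conf);
    Text value = new Text();
    // First random lookup succeeds with every codec.
    reader.get(new Text("key-001"), value);
    // Second lookup is where the block-compressed native codecs fail.
    reader.get(new Text("key-002"), value);
    reader.close();
  }
}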
>
> Also, I am getting different exceptions depending on the number of
> entries in a map file.
> With LzoCodec and a 10-record file, the JVM aborts.
>
> At the same time, DefaultCodec works fine in all cases, as does
> record compression for the native codecs.
>
> I created a simple test program (attached) that creates map files
> locally with 10 and 100 records for three codecs: Default, Snappy,
> and LZO.
> (The test requires the corresponding native libraries to be available.)
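
The writer side of such a test is presumably along these lines (again a
sketch under assumptions: Text keys/values appended in sorted order, a
local path, and SnappyCodec standing in for whichever codec is tested):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.SnappyCodec;

public class MapFileWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    // BLOCK compression is the failing mode; RECORD works for all codecs.
    MapFile.Writer writer = new MapFile.Writer(conf, fs, "test.map",
        Text.class, Text.class,
        SequenceFile.CompressionType.BLOCK, new SnappyCodec(), null);
    // MapFile requires keys in sorted order; zero-padding keeps them sorted.
    for (int i = 0; i < 100; i++) {
      writer.append(new Text(String.format("key-%03d", i)),
                    new Text("value-" + i));
    }
    writer.close();
  }
}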
>
> A summary of the problems is given below:
>
> Map Size: 100
> Compression: RECORD
> =================
> DefaultCodec: OK
> SnappyCodec: OK
> LzoCodec: OK
>
> Map Size: 10
> Compression: RECORD
> =================
> DefaultCodec: OK
> SnappyCodec: OK
> LzoCodec: OK
>
> Map Size: 100
> Compression: BLOCK
> =================
> DefaultCodec: OK
>
> SnappyCodec: java.io.EOFException at org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:114)
>
> LzoCodec: java.io.EOFException at org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:114)
>
> Map Size: 10
> Compression: BLOCK
> =================
> DefaultCodec: OK
>
> SnappyCodec: java.lang.NoClassDefFoundError: Ljava/lang/InternalError
> at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native Method)
>
> LzoCodec:
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x00002b068ffcbc00, pid=6385, tid=47304763508496
> #
> # JRE version: 6.0_21-b07
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (17.0-b17 mixed mode linux-amd64 )
> # Problematic frame:
> # C  [liblzo2.so.2+0x13c00]  lzo1x_decompress+0x1a0
> #
> # An error report file with more information is saved as:
> # /hadoop/user/yurgis/testapp/hs_err_pid6385.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://java.sun.com/webapps/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #

--
Todd Lipcon
Software Engineer, Cloudera