MapReduce >> mail # user >> RE: Question related to Decompressor interface


java8964 2013-02-10, 15:13
java8964 2013-02-09, 20:49
Re: Question related to Decompressor interface
Hello,

> Can someone share some idea what the Hadoop source code of class
> org.apache.hadoop.io.compress.BlockDecompressorStream, method
> rawReadInt() is trying to do here?

The BlockDecompressorStream class is used for block-based decompression
codecs (e.g. Snappy).  Each compressed chunk is preceded by a header giving
its length in bytes.  That header is read by the rawReadInt method, so it
is expected to return a non-negative value (a length can't be negative).
George
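To illustrate the idea, here is a minimal sketch of reading such a 4-byte big-endian length header from a stream. The class and field names are illustrative, not Hadoop's exact code, but the pattern (combine four bytes, fail on premature end-of-stream) is what a method like rawReadInt amounts to:

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class LengthHeader {
    // Read a 4-byte big-endian integer (the chunk-length header).
    // A well-formed header always yields a non-negative value.
    static int rawReadInt(InputStream in) throws IOException {
        int b1 = in.read();
        int b2 = in.read();
        int b3 = in.read();
        int b4 = in.read();
        if ((b1 | b2 | b3 | b4) < 0) {
            // One of the reads hit end-of-stream mid-header.
            throw new EOFException("truncated chunk header");
        }
        return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4;
    }

    public static void main(String[] args) throws IOException {
        // A chunk: 4-byte header 0x00000005, then 5 payload bytes.
        byte[] chunk = {0, 0, 0, 5, 'h', 'e', 'l', 'l', 'o'};
        InputStream in = new ByteArrayInputStream(chunk);
        int len = rawReadInt(in);
        System.out.println(len); // prints 5
    }
}
```

A negative return value from such a read would therefore indicate a corrupt or mismatched stream, which is why callers can treat the header as a non-negative length.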