
Tatu Saloranta 2012-04-18, 17:52
Re: Avro + Snappy changing blocksize of snappy compression
On Wed, Apr 18, 2012 at 2:18 PM, Scott Carey <[EMAIL PROTECTED]> wrote:
> Try a range from smaller block sizes (4k) and up.  256K is a larger block
> size than many compression codecs are sensitive to.

Agreed: most codecs only go up to 32k or 64k (in fact, Snappy may use
just 32k, not 64k).
Deflate doesn't benefit from blocks above 64k either, nor does LZF.
The only codecs that I think use larger buffers are bzip2 and LZMA,
both of which are typically way too slow for streaming data
processing anyway.

So testing up to 64k is usually enough.
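The plateau described above is easy to observe with deflate, whose match window is far smaller than 256k: once the block size passes the point where the codec can exploit redundancy, making blocks bigger only shaves per-block overhead. A minimal sketch using Python's stdlib `zlib` (the sample data and block sizes are arbitrary illustrations, not taken from this thread, and the exact numbers will vary with the input):

```python
import zlib

def compressed_size(data: bytes, block_size: int) -> int:
    """Compress data in independent blocks and return total compressed bytes."""
    total = 0
    for i in range(0, len(data), block_size):
        # Each block is compressed on its own, as a container format would.
        total += len(zlib.compress(data[i:i + block_size], 6))
    return total

# Moderately repetitive sample data (~1 MiB) so deflate has matches to find.
data = (b"avro snappy block size test " * 64 + b"\n") * 600

for bs in (4 << 10, 16 << 10, 64 << 10, 256 << 10, 1 << 20):
    print(f"{bs >> 10:4d} KiB blocks -> {compressed_size(data, bs)} bytes")
```

Running a sweep like this typically shows the compressed size shrinking quickly from 4k up, then flattening out, which is why sweeping 4k through 64k usually covers the interesting range.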

-+ Tatu +-