Pig >> mail # user >> Snappy compression with pig


Re: Snappy compression with pig
I think I need to write both store and load functions. It appears that only the
intermediate output stored in a temp location can be compressed, using:

SET mapred.compress.map.output true;

SET mapred.map.output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;

Any pointers as to how I can store and load using snappy would be helpful.
On Thu, Apr 26, 2012 at 12:32 PM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:

> I am able to write with Snappy compression. But I don't think Pig
> provides anything to read such records. Can someone suggest or point me to
> relevant code that might help me write a LoadFunc for it?
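
[Editor's note] A minimal Pig Latin sketch of the two compression knobs discussed above. The `mapred.*` settings in the reply compress only the map output within a job; Pig's own `pig.tmpfilecompression` properties cover the temp files written between jobs, and on Pig 0.10+ `output.compression.*` covers the final `STORE` output through `PigStorage`. The input/output paths are placeholders, and whether `SnappyCodec` is available depends on the Hadoop build having the Snappy native libraries; treat this as a sketch under those assumptions, not a verified recipe for the poster's cluster:

```pig
-- Compress Pig's intermediate temp files between MR jobs.
-- gz and lzo are the documented codec values; Snappy support
-- here varies by Pig version.
SET pig.tmpfilecompression true;
SET pig.tmpfilecompression.codec gz;

-- Compress the final output of STORE ... USING PigStorage()
-- (supported in Pig 0.10 and later).
SET output.compression.enabled true;
SET output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;

data = LOAD 'input_dir' USING PigStorage('\t');   -- placeholder path
STORE data INTO 'output_snappy' USING PigStorage('\t');
```

For the load side, PigStorage relies on Hadoop's input format machinery, which picks a decompression codec from the file extension of codecs registered in `io.compression.codecs`; if `SnappyCodec` is registered there, compressed text files should load without a custom LoadFunc, though Snappy files are not splittable.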