Avro >> mail # user >> Avro file Compression


amit nanda 2013-08-19, 16:12
Harsh J 2013-08-22, 06:31

Re: Avro file Compression
As with any compression, how much you get depends on the size and nature of
the data.  I have objects that take 4 or 5k unserialized and serialize to
1.5 to 3k, or about 2 to 1.  However, for the same object structure (which
contains several nested arrays ... lots of strings, numbers ... basic
business data), 17MB uncompressed deflates to 1MB (or 17 to 1).  For very
small objects, deflate will actually produce larger output, but it does
quite well as the size of the data being deflated grows.

Bill
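Bill's point, that deflate's fixed overhead can swamp tiny inputs but pays off as repetitive data grows, can be sketched with Python's stdlib `zlib` (the same DEFLATE algorithm behind Avro's deflate codec). The payloads here are made-up illustrations, not Avro data:

```python
import zlib

# A tiny payload: the zlib header/checksum overhead alone can exceed
# any savings, so the "compressed" output may be larger than the input.
small = b"id:42"
small_out = zlib.compress(small)
print(len(small), "->", len(small_out))

# A larger, repetitive payload (recurring field names and values, as in
# typical business data): the ratio improves dramatically.
large = b'{"name": "widget", "qty": 7, "price": 19.99}' * 10000
large_out = zlib.compress(large)
print(len(large), "->", len(large_out))
```

Running this shows the small input growing while the large repetitive one shrinks by orders of magnitude, which matches the 2:1 vs 17:1 ratios above.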
On Wed, Aug 21, 2013 at 11:31 PM, Harsh J <[EMAIL PROTECTED]> wrote:

> Can you share your test? There is an example at
> http://svn.apache.org/repos/asf/avro/trunk/lang/c/examples/quickstop.c
> which has the right calls for using a file writer with a deflate codec
> - is yours similar?
>
> On Mon, Aug 19, 2013 at 9:42 PM, amit nanda <[EMAIL PROTECTED]> wrote:
> > I am trying to compress the Avro files that I am writing. For that
> > I am using the latest Avro C with the "deflate" option, but I am not
> > able to see any difference in the file size.
> >
> > Is there any special type of data that this works on, or are there
> > any more settings that need to be done for this to work?
> >
>
>
>
> --
> Harsh J
>
Scott Carey 2013-08-22, 21:35