Ruslan Al-Fakikh 2012-07-04, 13:32
Russell Jurney 2012-07-04, 21:58
Ruslan Al-Fakikh 2012-07-05, 14:53
Doug Cutting 2012-07-05, 17:24
Ruslan Al-Fakikh 2012-07-05, 22:11
You can use the Avro command-line tool to dump the metadata, which
will show the schema and codec:
java -jar avro-tools.jar getmeta <file>
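If avro-tools isn't at hand, the same metadata can be read directly, since it sits in the container file header. Below is a stdlib-only Python sketch (function names are my own) that parses the metadata map per the Avro object container format: the magic bytes `Obj\x01`, then a map of string keys to bytes values encoded with zig-zag varint longs:

```python
def _read_long(buf, pos):
    """Decode an Avro zig-zag varint long starting at pos; return (value, new_pos)."""
    n = 0
    shift = 0
    while True:
        b = buf[pos]
        pos += 1
        n |= (b & 0x7F) << shift
        if not (b & 0x80):
            break
        shift += 7
    return (n >> 1) ^ -(n & 1), pos

def read_avro_metadata(data):
    """Parse the metadata map from the header of an Avro object container file.

    The header is: 4 magic bytes, then a map<string, bytes> written as one or
    more counted blocks terminated by a zero count.
    """
    if data[:4] != b"Obj\x01":
        raise ValueError("not an Avro object container file")
    pos = 4
    meta = {}
    while True:
        count, pos = _read_long(data, pos)
        if count == 0:
            break
        if count < 0:  # negative count: a block byte-size follows it
            count = -count
            _, pos = _read_long(data, pos)
        for _ in range(count):
            klen, pos = _read_long(data, pos)
            key = data[pos:pos + klen].decode("utf-8")
            pos += klen
            vlen, pos = _read_long(data, pos)
            meta[key] = data[pos:pos + vlen]
            pos += vlen
    return meta
```

On a typical file this yields keys like `avro.schema` and `avro.codec`, the same values `getmeta` prints.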
On Thu, Jul 5, 2012 at 3:11 PM, Ruslan Al-Fakikh <[EMAIL PROTECTED]> wrote:
> Hey Doug,
> Here is a little more explanation.
> I'll answer your questions later, after some investigation.
> Thank you!
> On Thu, Jul 5, 2012 at 9:24 PM, Doug Cutting <[EMAIL PROTECTED]> wrote:
>> This is unexpected. Perhaps we can understand it if we have more information.
>> What Writable class are you using for keys and values in the SequenceFile?
>> What schema are you using in the Avro data file?
>> Can you provide small sample files of each and/or code that will reproduce this?
>> On Wed, Jul 4, 2012 at 6:32 AM, Ruslan Al-Fakikh <[EMAIL PROTECTED]> wrote:
>>> In my organization we are currently evaluating Avro as a format, and
>>> our concern is file size, so I've done some comparisons on a piece of
>>> our data. Say we have compressed sequence files whose payload (values)
>>> is just lines of text. As far as I know we use the line number as the
>>> key and the default codec for compression inside the sequence files.
>>> The size is 1.6G; when I put the same data into Avro with the deflate
>>> codec at deflate level 9, it becomes 2.2G.
>>> This is interesting, because the values in the sequence files are just
>>> strings, while Avro has a proper schema with primitive types, and those
>>> are stored in binary. Shouldn't the Avro files be smaller?
>>> I also took another dataset of 28G (gzip files, plain tab-delimited
>>> text; I don't know what deflate level was used), converted it to Avro,
>>> and it became 38G.
>>> Why are the Avro files so much bigger? Am I missing some size optimization?
>>> Thanks in advance!
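One variable worth isolating in the comparison above is the deflate level itself, since Avro's deflate codec is the same deflate algorithm zlib implements. A quick stdlib-only sketch (synthetic tab-delimited lines standing in for the real dataset) shows how output size varies with the level:

```python
import zlib

# Synthetic stand-in for tab-delimited text lines (not the real dataset).
line = "\t".join(str(i % 100) for i in range(10)) + "\n"
data = (line * 10000).encode("utf-8")

print("raw size:", len(data))
for level in (1, 6, 9):
    out = zlib.compress(data, level)
    print("deflate level", level, "->", len(out), "bytes")
```

On repetitive data like this the gap between levels is small, which suggests the size difference in the thread is more likely down to schema/encoding overhead or codec choice than to the deflate level.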
Ey-Chih chow 2012-07-18, 23:59
Harsh J 2012-07-20, 02:07
Ey-Chih chow 2012-07-20, 17:02
Ey-Chih chow 2012-07-20, 17:12
Doug Cutting 2012-07-20, 20:00
Ey-Chih chow 2012-07-20, 20:32