Re: Using AVRO C with a large schema
On Fri, Aug 16, 2013 at 11:22 AM,  <[EMAIL PROTECTED]> wrote:
> 1. There's a 1:1 relationship between schema and file.  You can't mix
> different schemas in the same file.
>
> 2. Each value written to a file represents the file's full schema.  You
> can't write pieces of a schema.

These are both correct.  If you want to intermix, use a union as the
file's schema.
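
For example, here is a minimal sketch of that approach using the libavro
generic-value API; the two-branch ["string", "long"] union, the file name,
and the sample values are made up purely for illustration:

    #include <avro.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* A union as the file's schema lets values of either branch
         * be appended to the same file. */
        avro_schema_t schema;
        if (avro_schema_from_json_literal("[\"string\", \"long\"]", &schema)) {
            fprintf(stderr, "schema error: %s\n", avro_strerror());
            return EXIT_FAILURE;
        }

        avro_file_writer_t writer;
        if (avro_file_writer_create("mixed.avro", schema, &writer)) {
            fprintf(stderr, "writer error: %s\n", avro_strerror());
            return EXIT_FAILURE;
        }

        avro_value_iface_t *iface = avro_generic_class_from_schema(schema);
        avro_value_t value, branch;
        avro_generic_value_new(iface, &value);

        /* Append one value from each branch of the union. */
        avro_value_set_branch(&value, 0, &branch);   /* branch 0: string */
        avro_value_set_string(&branch, "hello");
        avro_file_writer_append_value(writer, &value);

        avro_value_set_branch(&value, 1, &branch);   /* branch 1: long */
        avro_value_set_long(&branch, 42);
        avro_file_writer_append_value(writer, &value);

        avro_value_decref(&value);
        avro_value_iface_decref(iface);
        avro_file_writer_close(writer);
        avro_schema_decref(schema);
        return EXIT_SUCCESS;
    }

Readers then dispatch on the union discriminant when reading the file back.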

> 3. AVRO C cannot write values that are bigger than the file writer's
> specified block_size.  I don't think there's enough memory to hold both the
> original structures and a gigantic block_size.

I don't know enough about the C implementation to verify this one and
will leave it to others.
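
If a larger block does turn out to be needed, one knob that appears to be
available is the block_size argument of avro_file_writer_create_with_codec.
A minimal sketch follows; the 16 MB size, the "bytes" schema, and the file
name are arbitrary, and whether this actually resolves the memory concern
raised above is still the open question:

    #include <avro.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        avro_schema_t schema;
        if (avro_schema_from_json_literal("\"bytes\"", &schema)) {
            fprintf(stderr, "schema error: %s\n", avro_strerror());
            return EXIT_FAILURE;
        }

        /* Ask for a 16 MB block and no compression ("null" codec)
         * instead of the writer's default block size. */
        avro_file_writer_t writer;
        if (avro_file_writer_create_with_codec("big.avro", schema, &writer,
                                               "null", 16 * 1024 * 1024)) {
            fprintf(stderr, "writer error: %s\n", avro_strerror());
            return EXIT_FAILURE;
        }

        avro_file_writer_close(writer);
        avro_schema_decref(schema);
        return EXIT_SUCCESS;
    }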

Doug