The major reason for looking at row/column major differences is the amount
of transformation necessary to go from one format to the other. With
row-major formats we will have to do a transformation similar to providing
results to an ODBC or JDBC system, which will expect them represented as
individual records. This process is very simple, but it is also
time-consuming, and many existing Java programs/libraries are likely using
practices that will slow Drill down too much. The biggest one I am
concerned about right now is new object allocation, but I'm sure we will
find other inefficiencies as well.
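To make the allocation concern concrete, here is a minimal sketch (not Drill's actual value vector API; the `Record` class and column arrays are illustrative stand-ins) of the row-major materialization a JDBC/ODBC-style consumer forces on us:

```java
import java.util.ArrayList;
import java.util.List;

public class RowMajorSketch {
    // One object per record -- this is the per-row allocation being discussed.
    public static class Record {
        public final int id;
        public final String name;
        public Record(int id, String name) { this.id = id; this.name = name; }
    }

    // Pivot columnar arrays into row-major records, one allocation per row.
    public static List<Record> toRecords(int[] idColumn, String[] nameColumn) {
        List<Record> records = new ArrayList<>(idColumn.length);
        for (int i = 0; i < idColumn.length; i++) {
            // Every iteration allocates a fresh Record; over millions of rows
            // this garbage-collection pressure is the main worry above.
            records.add(new Record(idColumn[i], nameColumn[i]));
        }
        return records;
    }
}
```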
For column-major formats, we are likely to be able to write long runs of
values from value vectors into the files. That said, there are some
problems with value-level compression like dictionary, RLE, and bit-packed
encodings. There is also a consideration I forgot to mention in the
document about the representation of nulls in the various formats. In Drill
we leave empty spaces for nulls in VVs because we want random access to
values for fast pointer sorting. In storage formats, the primary concern is
limiting the size of the data without adding too much reading/record
re-assembly overhead. Most of the space-efficient binary formats will
likely leave nulls out (as is the case with Parquet, and I believe ORC as
well).
While this will slow down the transformation a bit, since we will have to
find runs of defined values to write all at once, skip each sequence of
nulls in our VV, and continue writing when we find more defined values, it
will in many cases still be faster than pulling individual integers or
strings out of a value vector and writing them through the row-major
interfaces provided by those libraries.
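The run-finding step above can be sketched roughly as follows. This is an assumption-laden illustration: a plain boolean array stands in for a value vector's null bitmap, and `Run` is a hypothetical helper, not an existing Drill type. Each run it reports could then be handed to a columnar writer in a single call:

```java
import java.util.ArrayList;
import java.util.List;

public class DefinedRuns {
    // A contiguous run of defined (non-null) values: [start, start + length).
    public static class Run {
        public final int start, length;
        public Run(int start, int length) { this.start = start; this.length = length; }
    }

    // Scan the validity flags, skipping null stretches and collecting each
    // maximal run of defined values.
    public static List<Run> findRuns(boolean[] defined) {
        List<Run> runs = new ArrayList<>();
        int i = 0;
        while (i < defined.length) {
            if (!defined[i]) { i++; continue; }        // skip over a null
            int start = i;
            while (i < defined.length && defined[i]) i++;
            runs.add(new Run(start, i - start));       // one defined run
        }
        return runs;
    }
}
```

For a vector with validity flags [T, T, F, F, T], this yields two runs, (0, 2) and (4, 1), so two bulk writes replace five per-value calls.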
On Fri, Oct 4, 2013 at 7:32 PM, Timothy Chen <[EMAIL PROTECTED]> wrote:
> I see, I didn't know we had plans to write results into various formats. If
> we can do that, it could even allow other data processing tools to
> integrate with Drill (which is probably the aim too?)
> So if we're just writing results to disk, I wonder why we need a writer
> interface that needs to consider row/column major differences?
> Can't we just take the in-memory VVs being produced and write a
> RecordBatch at a time directly to the formats we want?
> On Fri, Oct 4, 2013 at 11:24 AM, Jason Altekruse
> <[EMAIL PROTECTED]>wrote:
> > Tim,
> > Answers to your questions are below. I am almost always available after
> > your time, feel free to send me some dates/times that work for you.
> > - Maybe a bit more context? A writer interface doesn't seem to suggest what
> > it really is about. Also, if this is focused on writing (from record
> > into drill vv), why are there many comments around reading in your
> > consideration?
> > - I don't see any writer interface proposed?
> > There actually isn't a writer interface written yet. The document contains
> > some thoughts I'm compiling about what the writer interface needs to
> > handle. I hope to gather as much information about various formats before
> > proposing a hard interface. I believe there could be a lot of value in
> > trying to generalize the readers and writers, even across formats. I'm
> > hoping it will minimize the burden of maintaining support for formats as
> > they evolve, and of updating the readers and writers as the value
> > vectors become more complex (compressed representations of data,
> > dictionary encodings, etc.)
> > The reader interface was included for reference in the document, because I
> > believe we should work on the reader and writer together, as both have
> > many similar properties and really just perform a translation in opposite
> > directions.
> > For clarity the writer interface is what will allow us to enable a create
> > table operation and store results to disk. Obviously we will want to
> > support a variety of formats, as most users will likely want to export in