
Drill >> mail # dev >> Query regarding Scalar Functions implementation

Re: Query regarding Scalar Functions implementation
On Oct 2, 2013, at 4:46 PM, Jacques Nadeau <[EMAIL PROTECTED]> wrote:

> I haven't run the test yet on MSSQL, but reading this suggests that int/int == int there, as opposed to Oracle, where int/int == float4:
> http://technet.microsoft.com/en-us/library/ms175009.aspx
> We should probably pick one and stick to it.  I personally prefer MS but Oracle is more prevalent.  If I remember correctly, Optiq is modeled more after one of the two and we should probably continue that trend.  Maybe Julian can comment here...
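The int/int divergence described above can be modeled in plain Java. This is a hedged illustration, not Drill or Optiq code; the class and method names are made up for the example.

```java
// Hypothetical sketch of the two integer-division conventions under
// discussion, modeled as plain Java functions.
public class IntDivisionConventions {
    // SQL Server style: int / int yields int; the fraction is truncated.
    static int divideSqlServerStyle(int a, int b) {
        return a / b;          // 5 / 2 -> 2
    }

    // Oracle style: int / int yields an approximate numeric result.
    static double divideOracleStyle(int a, int b) {
        return (double) a / b; // 5 / 2 -> 2.5
    }

    public static void main(String[] args) {
        System.out.println(divideSqlServerStyle(5, 2)); // prints 2
        System.out.println(divideOracleStyle(5, 2));    // prints 2.5
    }
}
```

Whichever convention is chosen, it has to be applied consistently in both the planner's type derivation and the runtime code generation, or results will silently disagree.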

SQL type-derivation rules can be quite complex. You have different sizes of exact and approximate numeric types, parameterized types (e.g. VARCHAR(10) and DECIMAL(6, 2)), and nullability: whether the result is NOT NULL.
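As one concrete instance of those rules, here is a sketch of SQL-standard-style result-type derivation for DECIMAL addition. The class names are illustrative, not Optiq or Drill API; the precision/scale formula is the standard's rule for dyadic addition.

```java
// Hedged sketch: deriving the result type of DECIMAL + DECIMAL.
public class DecimalTypeDerivation {
    static final class DecimalType {
        final int precision, scale;
        final boolean nullable;
        DecimalType(int precision, int scale, boolean nullable) {
            this.precision = precision;
            this.scale = scale;
            this.nullable = nullable;
        }
        @Override public String toString() {
            return "DECIMAL(" + precision + ", " + scale + ")"
                + (nullable ? "" : " NOT NULL");
        }
    }

    // SQL standard rule for addition:
    //   scale = max(s1, s2)
    //   precision = max(p1 - s1, p2 - s2) + max(s1, s2) + 1
    // The result is nullable if either operand is nullable.
    static DecimalType plus(DecimalType a, DecimalType b) {
        int scale = Math.max(a.scale, b.scale);
        int precision =
            Math.max(a.precision - a.scale, b.precision - b.scale) + scale + 1;
        return new DecimalType(precision, scale, a.nullable || b.nullable);
    }

    public static void main(String[] args) {
        // DECIMAL(6, 2) NOT NULL + nullable DECIMAL(4, 3) -> DECIMAL(8, 3)
        System.out.println(plus(new DecimalType(6, 2, false),
                                new DecimalType(4, 3, true)));
    }
}
```

Note how nullability propagates separately from precision and scale; each operator needs both pieces of the derivation.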

When implementing LucidDB, we sweated over the SQL standard, and when the standard said "implementation defined", we used MS SQL and Oracle as reference points. Oracle isn't the greatest for this, because their numeric types are all NUMBER under the covers and their date types are all DATE.

The wisdom that we accumulated from that process all ended up in SqlOperatorBaseTest. I think you'd save yourself a lot of effort if you just use that test as a starting point and change the code until the tests all pass. I strongly suggest you do that.

The different result types and overloadings suggest that there are several variants of each operator. For "+", say, you need "int + int", "long + long", "long + int" (upcasting to long), "double + double", "date + interval", "interval + interval", and so forth; and you may wish to generate optimal versions when either or both of the arguments are known to be not null.
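The per-type-combination variants described above can be sketched with plain Java overloads; this is an illustration of the idea, not how any particular engine generates its code.

```java
// Hypothetical sketch: one "+" implementation per operand-type
// combination, plus a nullable variant with SQL NULL-in, NULL-out
// semantics for when operands are not known to be NOT NULL.
public class PlusVariants {
    // int + int -> int
    static int plus(int a, int b) { return a + b; }

    // long + long -> long ("long + int" resolves here via upcast)
    static long plus(long a, long b) { return a + b; }

    // double + double -> double
    static double plus(double a, double b) { return a + b; }

    // Nullable variant: if either input is NULL, the result is NULL.
    static Integer plusNullable(Integer a, Integer b) {
        return (a == null || b == null) ? null : a + b;
    }

    public static void main(String[] args) {
        System.out.println(plus(2, 3));            // int variant: 5
        System.out.println(plus(2L, 3));           // long variant via upcast: 5
        System.out.println(plusNullable(null, 3)); // null
    }
}
```

The not-null-only variants avoid per-row null checks and boxing, which is why generating them is worthwhile when the planner can prove the operands are NOT NULL.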