change column type of orc table will throw exception in query time
hi,
 Currently, if we change the column type of an ORC-format Hive table using
"alter table orc_table change c1 c1 bigint", it throws an exception from the
SerDe ("org.apache.hadoop.io.IntWritable cannot be cast to
org.apache.hadoop.io.LongWritable") at query time. This is different from
Hive's behavior with other file formats, where it tries to perform a cast
(producing null values for incompatible types).
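
A minimal sketch of that unpartitioned scenario (the src table used here to
populate the ORC table is an assumed pre-existing sample table, not part of
the report above):

create table orc_table (c1 int) stored as orc;
insert overwrite table orc_table select 1 from src limit 1;
alter table orc_table change c1 c1 bigint;
-- reading the old int data back now fails in the ORC reader with the
-- IntWritable-to-LongWritable cast exception:
select c1 from orc_table;
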
  I found that HIVE-6784 <https://issues.apache.org/jira/browse/HIVE-6784>
happens to be the same issue with Parquet, although that ticket says it
currently works with partitioned tables:

  According to my test with Hive branch-0.13, it still fails with an ORC
partitioned table. I think this behavior is unexpected, and I'm digging into
the code to find a way to fix it. Any help is appreciated.

I used the following script to test it with a partitioned table on branch-0.13:

use test;
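-- (illustrative continuation, not the original script: the orc_part table,
--  the dt partition column, and the src source table are assumed names)
create table orc_part (c1 int) partitioned by (dt string) stored as orc;
insert overwrite table orc_part partition (dt='1') select 1 from src limit 1;
alter table orc_part change c1 c1 bigint;
-- querying the old partition is where the cast exception shows up:
select c1 from orc_part where dt='1';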
and it throws the same cast exception with branch-0.13.
Thanks.

 