Re: Review Request 24289: MetadataUpdater: provide a mechanism to edit the statistics of a column in a table (or a partition of a table)

This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24289/#review50047
Please add .q tests for these. Test a partitioned table with more than one partition column, covering a variety of column types and a variety of stats types.
ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
<https://reviews.apache.org/r/24289/#comment87572>

    Include an example SQL statement for which this task is meant.
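
    For context, a hedged sketch of the kind of class-level Javadoc that could
    carry such an example; the statement syntax shown is an assumption about
    the grammar under review, not taken from the patch:

        /**
         * ColumnStatsUpdateTask: updates the column statistics stored in the
         * metastore for one column of a table (or of a single partition).
         *
         * Example statement this task serves (syntax assumed):
         *
         *   ALTER TABLE t1 PARTITION (p1='c1')
         *   UPDATE STATISTICS FOR COLUMN c1 SET ('numDVs'='30','numNulls'='12');
         */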

ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
<https://reviews.apache.org/r/24289/#comment87575>

    Add a comment saying the grammar prohibits more than 1 column, so we are guaranteed to have only 1 element in these lists.

ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
<https://reviews.apache.org/r/24289/#comment87576>

    Is clear() needed here?

ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
<https://reviews.apache.org/r/24289/#comment87579>

    Add else {
      throw new SemanticException("Unknown stat");
    }

    Add this to all of the subsequent blocks.

    You may also want to factor some of this repetition into a private method.
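
    One possible shape for that private method, as a hedged sketch; the method
    name and parameters are illustrative, not from the patch:

        // Illustrative helper: centralizes the "unknown stat" check so each
        // per-type block does not repeat the same else { throw ... } clause.
        private void validateStatType(boolean recognized, String statType)
            throws SemanticException {
          if (!recognized) {
            throw new SemanticException("Unknown stat type: " + statType);
          }
        }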

ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
<https://reviews.apache.org/r/24289/#comment87580>

    Add else {
      throw new Exception("Unsupported type");
    }

ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
<https://reviews.apache.org/r/24289/#comment87574>

    Copy-paste comments?

ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
<https://reviews.apache.org/r/24289/#comment87573>

    Comments seem out of place. Copy-paste?

ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
<https://reviews.apache.org/r/24289/#comment87563>

    throw new SemanticException("table " + tbl + " not found");

ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
<https://reviews.apache.org/r/24289/#comment87564>

    if (colType == null) throw new SemanticException("col not found");

ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
<https://reviews.apache.org/r/24289/#comment87565>

    There can be multiple partitioning columns, in which case this assert will fail. I don't think you want that.

ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
<https://reviews.apache.org/r/24289/#comment87566>

    Instead of this for loop, you want to use Warehouse.makePartName(partSpec, false);
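
    A hedged sketch of the suggested call (variable names are illustrative):

        // partSpec: Map<String, String> of partition column -> value.
        // Warehouse.makePartName builds the canonical partition name,
        // e.g. "p1=c1/p2=c2", and declares MetaException on bad input.
        String partName = Warehouse.makePartName(partSpec, false);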

ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
<https://reviews.apache.org/r/24289/#comment87567>

    Throw a SemanticException here.

ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
<https://reviews.apache.org/r/24289/#comment87568>

    check colType != null

ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java
<https://reviews.apache.org/r/24289/#comment87562>

    I don't think this if block is required. Further, you need to add a HiveOperation corresponding to this new token.
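
    As a hedged illustration of adding such a HiveOperation; the enum constant
    name and privilege arguments are assumptions, not from the patch:

        // In org.apache.hadoop.hive.ql.plan.HiveOperation:
        ALTERTABLE_UPDATECOLSTATS("ALTERTABLE_UPDATECOLSTATS",
            new Privilege[] { Privilege.ALTER_METADATA }, null),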

ql/src/java/org/apache/hadoop/hive/ql/plan/ColumnStatsUpdateWork.java
<https://reviews.apache.org/r/24289/#comment87571>

    Add a comment like: work corresponding to the statement
    alter table t1 partition (p1=c1,p2=c2) update...

ql/src/java/org/apache/hadoop/hive/ql/plan/ColumnStatsUpdateWork.java
<https://reviews.apache.org/r/24289/#comment87569>

    This field doesn't seem to be used; it can be removed.

ql/src/java/org/apache/hadoop/hive/ql/plan/ColumnStatsUpdateWork.java
<https://reviews.apache.org/r/24289/#comment87570>

    Good to implement this. Useful for debugging.
- Ashutosh Chauhan
On Aug. 5, 2014, 6:40 p.m., pengcheng xiong wrote: