RE: alter table add partition error
I tried debugging in code a little more. Here is what I found:

The code in ThriftHiveMetastore eventually makes a call to get_partition(),
passing the key values of the partition I am trying to add via ALTER TABLE.
I assume this is to check that the partition doesn't already exist.
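
For reference, the call it ends up making is essentially the following; this is
a rough standalone sketch against the metastore Thrift server (libthrift plus
the generated hive_metastore classes on the classpath), with the host/port
taken from the ThriftHiveMetastore-remote example further down and the class
name GetPartitionCheck made up for illustration:

// Hypothetical standalone reproduction of the get_partition() check that
// runs before the partition is added.
import java.util.Arrays;

import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;

public class GetPartitionCheck {
  public static void main(String[] args) throws Exception {
    // Host/port of the metastore Thrift server (9080 as in the
    // ThriftHiveMetastore-remote example further down).
    TSocket transport = new TSocket("localhost", 9080);
    ThriftHiveMetastore.Client client =
        new ThriftHiveMetastore.Client(new TBinaryProtocol(transport));
    transport.open();
    // Same db/table/key values the ALTER TABLE ... ADD PARTITION supplies.
    Partition existing = client.get_partition("default", "dummy",
        Arrays.asList("20100602", "100", "click", "10"));
    System.out.println("existing partition: " + existing);
    transport.close();
  }
}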

I added a debug line in the following code:

  public Partition recv_get_partition() throws MetaException, TException
  {
    TMessage msg = iprot_.readMessageBegin();
    if (msg.type == TMessageType.EXCEPTION) {
      TApplicationException x = TApplicationException.read(iprot_);
      iprot_.readMessageEnd();
      throw x;
    }
    get_partition_result result = new get_partition_result();
    result.read(iprot_);
    System.err.println("XXX: result:" + result);
    iprot_.readMessageEnd();
    if (result.isSetSuccess()) {
      return result.success;
    }
    if (result.o1 != null) {
      throw result.o1;
    }
    throw new TApplicationException(TApplicationException.MISSING_RESULT,
        "get_partition failed: unknown result");
  }

 

I also put debug statements in the read() method:

    public void read(TProtocol iprot) throws TException {
      TField field;
      iprot.readStructBegin();
      System.err.println("XXX: Reading TProtocol object:");
      while (true)
      {
        field = iprot.readFieldBegin();
        System.err.println("XXX: field just read:" + field);
        if (field.type == TType.STOP) {
          break;

 

Here is what I got:

XXX: Reading TProtocol object:
XXX: field just read:<TField name:'' type:0 field-id:0>
XXX: result:get_partition_result(success:null, o1:null)

 

The field read from the Thrift response message is of type STOP, with a
field id corresponding to SUCCESS. This seems right, since the partition
doesn't exist yet. But the way the rest of the code handles it results in
the exception: success is never set and o1 is null, so recv_get_partition()
falls through to the MISSING_RESULT TApplicationException.
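
One way to double-check this (a sketch, with the made-up class name
EmptyResultCheck; it only needs libthrift and the generated hive_metastore
classes): serialize an empty get_partition_result and read it back, which
should reproduce the success:null, o1:null state above and make
recv_get_partition() fall through to the MISSING_RESULT exception.

import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore.get_partition_result;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TMemoryBuffer;

public class EmptyResultCheck {
  public static void main(String[] args) throws Exception {
    TMemoryBuffer buf = new TMemoryBuffer(64);
    TBinaryProtocol proto = new TBinaryProtocol(buf);
    // An empty result struct serializes to nothing but a STOP field,
    // matching the single <type:0 field-id:0> field seen above.
    new get_partition_result().write(proto);
    get_partition_result result = new get_partition_result();
    result.read(proto);
    // Prints get_partition_result(success:null, o1:null) isSetSuccess=false:
    // neither branch in recv_get_partition() fires, so it throws
    // TApplicationException(MISSING_RESULT).
    System.out.println(result + " isSetSuccess=" + result.isSetSuccess());
  }
}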

 

Any pointers?

 

TIA,

Pradeep

________________________________

From: Pradeep Kamath [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 18, 2010 2:51 PM
To: [EMAIL PROTECTED]
Subject: RE: alter table add partition error

 

Looks like the standalone script works fine against the existing
partition:

./ThriftHiveMetastore-remote -h localhost:9080 get_partition_by_name default dummy datestamp=20100602/srcid=100/action=view/testid=10

Partition(parameters={'transient_lastDdlTime': '1276881277'}, tableName='dummy',
  createTime=1276881277, lastAccessTime=0, values=['20100602', '100', 'view', '10'],
  dbName='default',
  sd=StorageDescriptor(outputFormat='org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat',
    sortCols=[], inputFormat='org.apache.hadoop.mapred.TextInputFormat',
    cols=[FieldSchema(comment=None, type='string', name='partition_name'),
          FieldSchema(comment=None, type='int', name='partition_id')],
    compressed=False, bucketCols=[], numBuckets=-1, parameters={},
    serdeInfo=SerDeInfo(serializationLib='org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe',
      name=None, parameters={'serialization.format': '1'}),
    location='hdfs://wilbur21.labs.corp.sp1.yahoo.com/user/pradeepk/dummy/20100602/100/view/10'))

[pradeepk@chargesize:~/dev/howl/src/metastore/src/gen-py/hive_metastore]

 

However, when I tried to add another partition with the Hive CLI (which goes
through Thrift):

hive -e "ALTER TABLE dummy add partition(datestamp = '20100602', srcid = '100', action='click', testid='10') location '/user/pradeepk/dummy/20100602/100/click/10';"

10/06/18 14:49:13 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively

Hive history file=/tmp/pradeepk/hive_job_log_pradeepk_201006181449_1158492515.txt

FAILED: Error in metadata: org.apache.thrift.TApplicationException: get_partition failed: unknown result

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

 

tail -30 /tmp/pradeepk/hive.log

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

2010-06-18 14:49:14,124 ERROR exec.DDLTask (SessionState.java:printError(277)) - FAILED: Error in metadata: org.apache.thrift.TApplicationException: get_partition failed: unknown result
org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: get_partition failed: unknown result
        at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:778)
        at org.apache.hadoop.hive.ql.exec.DDLTask.addPartition(DDLTask.java:255)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:169)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:267)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.thrift.TApplicationException: get_partition failed: unknown result
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMe