Hive >> mail # user >> Hive 0.7.1 with MySQL 5.5 as metastore

RE: Hive 0.7.1 with MySQL 5.5 as metastore
Hi Mark,
I just started to restore the data to a separate MySQL 5.1 schema, will try to create a table and post back here.
I copied the error stack trace below.
Nov  5 22:24:02 local3:[ETLManager] ERROR [pool-2-thread-1] exec.MoveTask - Failed with exception Insert of object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@1db0454f" using statement "INSERT INTO `SDS` (`SD_ID`,`LOCATION`,`OUTPUT_FORMAT`,`IS_COMPRESSED`,`NUM_BUCKETS`,`INPUT_FORMAT`,`SERDE_ID`) VALUES (?,?,?,?,?,?,?)" failed : Duplicate entry '5152711' for key 'PRIMARY'

javax.jdo.JDODataStoreException: Insert of object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@1db0454f" using statement "INSERT INTO `SDS` (`SD_ID`,`LOCATION`,`OUTPUT_FORMAT`,`IS_COMPRESSED`,`NUM_BUCKETS`,`INPUT_FORMAT`,`SERDE_ID`) VALUES (?,?,?,?,?,?,?)" failed : Duplicate entry '5152711' for key 'PRIMARY'
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:313)
    at org.datanucleus.jdo.JDOTransaction.commit(JDOTransaction.java:132)
    at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:315)
    at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterTable(HiveAlterHandler.java:172)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$29.run(HiveMetaStore.java:1687)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$29.run(HiveMetaStore.java:1684)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table(HiveMetaStore.java:1684)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:166)
    at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:354)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1194)
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:197)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:131)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1063)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:900)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:748)
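One plausible cause of a duplicate-primary-key failure like this (an assumption, not something confirmed in the thread) is that the metastore assigns SD_ID values from a counter row (DataNucleus keeps these in a SEQUENCE_TABLE), and the dump/restore left that counter behind MAX(SD_ID), so the next insert reuses an existing key. A minimal sketch of that failure mode, with sqlite3 standing in for MySQL and the SDS table simplified to two columns:

```python
import sqlite3

# sqlite3 stands in for MySQL; SDS is simplified to two columns and the
# sequence name is an invented placeholder, not the real metastore value.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE SDS (SD_ID INTEGER PRIMARY KEY, LOCATION TEXT)")
cur.execute(
    "CREATE TABLE SEQUENCE_TABLE (SEQUENCE_NAME TEXT PRIMARY KEY, NEXT_VAL INTEGER)"
)

# Restored data already holds a row with SD_ID = 5152711 ...
cur.execute("INSERT INTO SDS VALUES (5152711, 'hdfs://old/warehouse/t1')")
# ... but the counter row came back pointing at that same value.
cur.execute("INSERT INTO SEQUENCE_TABLE VALUES ('MStorageDescriptor', 5152711)")

def next_sd_id(cur):
    """Hand back the counter's current value and advance it, like a value generator."""
    cur.execute(
        "SELECT NEXT_VAL FROM SEQUENCE_TABLE WHERE SEQUENCE_NAME = 'MStorageDescriptor'"
    )
    val = cur.fetchone()[0]
    cur.execute(
        "UPDATE SEQUENCE_TABLE SET NEXT_VAL = NEXT_VAL + 1 "
        "WHERE SEQUENCE_NAME = 'MStorageDescriptor'"
    )
    return val

reproduced = False
try:
    cur.execute("INSERT INTO SDS VALUES (?, ?)", (next_sd_id(cur), "hdfs://new/t2"))
except sqlite3.IntegrityError:
    reproduced = True  # same duplicate-primary-key failure mode as the stack trace

# Bumping the counter past the live maximum clears the condition.
cur.execute("UPDATE SEQUENCE_TABLE SET NEXT_VAL = (SELECT MAX(SD_ID) + 1 FROM SDS)")
cur.execute("INSERT INTO SDS VALUES (?, ?)", (next_sd_id(cur), "hdfs://new/t2"))
print("reproduced:", reproduced,
      "rows:", cur.execute("SELECT COUNT(*) FROM SDS").fetchone()[0])
```

If this is what happened, comparing NEXT_VAL against MAX(SD_ID) on the restored MySQL 5.5 schema would confirm it before touching anything.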
Date: Mon, 5 Nov 2012 17:17:46 -0800
Subject: Re: Hive 0.7.1 with MySQL 5.5 as metastore

Venkatesh,

What's the exact integrity constraint error you are seeing?
I'd be curious whether you still get the error if you restore the data from the mysqldump onto a separate schema/db on a MySQL 5.1 server.


On Mon, Nov 5, 2012 at 3:37 PM, Venkatesh Kavuluri <[EMAIL PROTECTED]> wrote:
Sorry for the confusion, the problem is not with the MySQL version upgrade - I have indeed performed the upgrade by doing a mysqldump and restoring the data.
The problem is with how Hive 0.7.1 is interacting with the same metastore data on a different version of MySQL server.
> Date: Mon, 5 Nov 2012 18:31:37 -0500
> Subject: Re: Hive 0.7.1 with MySQL 5.5 as metastore

> Moving underlying data files around is not the correct way to perform
> an upgrade.
> https://dev.mysql.com/doc/refman/5.5/en/upgrading-from-previous-series.html

> I would do a mysqldump and then re-insert the data for maximum compatibility.
> On Mon, Nov 5, 2012 at 6:21 PM, Venkatesh Kavuluri
> <[EMAIL PROTECTED]> wrote:

> > I am working on copying existing Hive metadata (Hive 0.7.1 with MySQL 5.1)
> > to a new cluster environment (Hive 0.7.1 with MySQL 5.5). I copied over the
> > metastore tables and modified the data under the SDS (storage descriptors) table to

> > reflect the new data path. However I am getting MySQL integrity constraint
> > violation against SDS.SD_ID column while trying to create new Hive tables.
> > Is this a problem with the MySQL version I am using? Does Hive 0.7.1
> > support MySQL 5.5 as the metastore?
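The path rewrite described in the question boils down to a single UPDATE with REPLACE() against SDS.LOCATION. A hedged sketch, with sqlite3 standing in for MySQL (the same statement works in MySQL 5.x) and made-up NameNode URIs:

```python
import sqlite3

# Table/column names follow the Hive metastore schema; the hdfs:// URIs
# below are invented placeholders for the old and new cluster paths.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE SDS (SD_ID INTEGER PRIMARY KEY, LOCATION TEXT)")
cur.executemany(
    "INSERT INTO SDS VALUES (?, ?)",
    [
        (1, "hdfs://old-nn:8020/user/hive/warehouse/t1"),
        (2, "hdfs://old-nn:8020/user/hive/warehouse/t2"),
    ],
)

# One statement rewrites every LOCATION prefix in place.
cur.execute(
    "UPDATE SDS SET LOCATION = REPLACE(LOCATION, ?, ?)",
    ("hdfs://old-nn:8020", "hdfs://new-nn:8020"),
)

locations = [row[0] for row in cur.execute("SELECT LOCATION FROM SDS ORDER BY SD_ID")]
print(locations)
```

Note this only rewrites the paths; it does nothing to any auto-generated ID counters, which is why a prefix rewrite alone can still leave the metastore unable to insert new SDS rows.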