If it's any help, I've done this kind of thing frequently:
1. Create the table on the new cluster.
2. distcp the data right into the HDFS directory where the table resides on
the new cluster - no temp storage required.
3. Run this Hive command: msck repair table <table>; -- this command
will create your partitions for you - it's pretty slick that way.
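The three steps above can be sketched as commands against a live cluster (table name, host names, ports, and warehouse paths below are placeholders - adjust them for your own layout):

```shell
# 1. Create the table on the new cluster, using the same DDL as the source
#    table (column list elided here - copy it from SHOW CREATE TABLE).
hive -e "CREATE TABLE mytable (...) PARTITIONED BY (partition STRING);"

# 2. Copy the data straight into the table's warehouse directory -
#    no intermediate staging area needed.
hadoop distcp \
  hdfs://old-cluster:8020/user/hive/warehouse/mytable \
  hdfs://new-cluster:8020/user/hive/warehouse/

# 3. Have the metastore discover the copied partition directories.
hive -e "MSCK REPAIR TABLE mytable;"
```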
Let us know how it goes.
On Mon, Sep 23, 2013 at 10:46 AM, Edward Capriolo <[EMAIL PROTECTED]> wrote:
> Did you try ALTER TABLE table ADD IF NOT EXISTS PARTITION (partition=NULL);
> If that does not work you will need to create a dynamic partition type
> query that will create the dummy partition. File a jira if the above syntax
> does not work. There should be SOME way to create the default partition by
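A dynamic-partition query of the kind suggested above might look like the following. This is only a sketch: it assumes nonstrict dynamic-partition mode is enabled, and that inserting a NULL partition value makes Hive route the row into the default partition; `mytable`, `some_source_table`, and `col1` are hypothetical names.

```sql
-- Session settings assumed necessary for dynamic partition inserts.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- A NULL dynamic-partition value should fall back to
-- __HIVE_DEFAULT_PARTITION__, creating that partition as a side effect.
INSERT INTO TABLE mytable PARTITION (partition)
SELECT col1, CAST(NULL AS STRING) AS partition
FROM some_source_table
LIMIT 1;
```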
> On Mon, Sep 23, 2013 at 10:48 AM, Ivan Kruglov <[EMAIL PROTECTED]> wrote:
>> Hello to everyone,
>> I'm working on the task of syncing data between two tables that have a
>> similar structure (i.e. the same set of partitions). The tables are in
>> different data centers, and one table is a backup copy of the other. I'm
>> trying to achieve this by distcp-ing the data into a temporary folder in
>> the target DC, recreating all needed partitions in the target table, and
>> moving the files from the temporary location to their final place. But I'm
>> stuck on the issue of creating partitions with the value
>> '__HIVE_DEFAULT_PARTITION__'.
>> So, my question is: is it possible in Hive to manually create a partition
>> with the '__HIVE_DEFAULT_PARTITION__' value?
>> None of these ways works:
>> ALTER TABLE table ADD IF NOT EXISTS PARTITION (partition=);
>> ALTER TABLE table ADD IF NOT EXISTS PARTITION (partition='');
>> ALTER TABLE table ADD IF NOT EXISTS PARTITION
>> Thank you.
>> Ivan Kruglov.