Hive >> mail # user >> Alter table is giving error


Re: Alter table is giving error
Hi Mark,
Sorry, I forgot to mention. I have also tried
                msck repair table <Table name>;
and I got the same output that I got from plain msck.
Do I need to do any other settings for this to work? I have
prepared the Hadoop and Hive setup from scratch on EC2.

Thanks,
Chunky.
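For anyone hitting the same issue: the per-date workaround discussed below (a script that adds each external partition individually) can be sketched roughly like this in Python. The table name, S3 path layout, and the dt=YYYY-MM-DD partition key are assumptions for illustration, not details from the thread:

```python
# Sketch: generate one ALTER TABLE ... ADD PARTITION statement per date,
# then feed the printed output to the hive CLI (e.g. `hive -f ddl.hql`).
# Table name, base location, and dt=YYYY-MM-DD layout are hypothetical.
from datetime import date, timedelta

def partition_ddl(table, base_location, start, end):
    """Yield one ADD PARTITION statement for every date in [start, end]."""
    day = start
    while day <= end:
        dt = day.isoformat()  # formats as YYYY-MM-DD
        yield (f"ALTER TABLE {table} ADD IF NOT EXISTS "
               f"PARTITION (dt='{dt}') "
               f"LOCATION '{base_location}/dt={dt}';")
        day += timedelta(days=1)

if __name__ == "__main__":
    for stmt in partition_ddl("access_logs", "s3://my-bucket/logs",
                              date(2012, 11, 1), date(2012, 11, 7)):
        print(stmt)
```

Piping the generated statements into `hive -f` (or `hive -e`) adds the partitions one by one; `IF NOT EXISTS` should make reruns safe on Hive versions that support it.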

On Wed, Nov 7, 2012 at 11:58 AM, Mark Grover <[EMAIL PROTECTED]>wrote:

> Chunky,
> You should have run:
> msck repair table <Table name>;
>
> Sorry, I should have made that clear in my last reply. I have added an entry
> to the Hive wiki for the benefit of others:
>
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Recoverpartitions
>
> Mark
>
>
> On Tue, Nov 6, 2012 at 9:55 PM, Chunky Gupta <[EMAIL PROTECTED]>wrote:
>
>> Hi Mark,
>> I didn't get any error.
>> I ran this on the hive console:
>>          "msck table Table_Name;"
>> It said OK and showed an execution time of 1.050 sec.
>> But when I checked the partitions for the table using
>>           "show partitions Table_Name;"
>> it didn't show me any partitions.
>>
>> Thanks,
>> Chunky.
>>
>>
>> On Tue, Nov 6, 2012 at 10:38 PM, Mark Grover <[EMAIL PROTECTED]
>> > wrote:
>>
>>> Glad to hear, Chunky.
>>>
>>> Out of curiosity, what errors did you get when using msck?
>>>
>>>
>>> On Tue, Nov 6, 2012 at 5:14 AM, Chunky Gupta <[EMAIL PROTECTED]>wrote:
>>>
>>>> Hi Mark,
>>>> I tried msck, but it is not working for me. I have written a python
>>>> script to partition the data individually.
>>>>
>>>> Thank you Edward, Mark and Dean.
>>>> Chunky.
>>>>
>>>>
>>>> On Mon, Nov 5, 2012 at 11:08 PM, Mark Grover <
>>>> [EMAIL PROTECTED]> wrote:
>>>>
>>>>> Chunky,
>>>>> I have used "recover partitions" command on EMR, and that worked fine.
>>>>>
>>>>> However, take a look at https://issues.apache.org/jira/browse/HIVE-874. It
>>>>> seems like the msck command in Apache Hive does the same thing. Try it out
>>>>> and let us know how it goes.
>>>>>
>>>>> Mark
>>>>>
>>>>> On Mon, Nov 5, 2012 at 7:56 AM, Edward Capriolo <[EMAIL PROTECTED]
>>>>> > wrote:
>>>>>
>>>>>> Recover partitions should work the same way for different file
>>>>>> systems.
>>>>>>
>>>>>> Edward
>>>>>>
>>>>>> On Mon, Nov 5, 2012 at 9:33 AM, Dean Wampler
>>>>>> <[EMAIL PROTECTED]> wrote:
>>>>>> > Writing a script to add the external partitions individually is the
>>>>>> only way
>>>>>> > I know of.
>>>>>> >
>>>>>> > Sent from my rotary phone.
>>>>>> >
>>>>>> >
>>>>>> > On Nov 5, 2012, at 8:19 AM, Chunky Gupta <[EMAIL PROTECTED]>
>>>>>> wrote:
>>>>>> >
>>>>>> > Hi Dean,
>>>>>> >
>>>>>> > Actually, I had a Hadoop and Hive cluster on EMR, with S3 storage
>>>>>> > containing logs that update daily and are partitioned by date (dt), and
>>>>>> > I was using this recover partitions command.
>>>>>> > Now I want to shift to EC2 and run my own Hadoop and Hive cluster. So,
>>>>>> > what is the alternative to recover partitions in this case, if you have
>>>>>> > any idea?
>>>>>> > I found one way: partitioning all dates individually, so I would have
>>>>>> > to write a script to do that for every date. Is there any easier way
>>>>>> > than this?
>>>>>> >
>>>>>> > Thanks,
>>>>>> > Chunky
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > On Mon, Nov 5, 2012 at 6:28 PM, Dean Wampler
>>>>>> > <[EMAIL PROTECTED]> wrote:
>>>>>> >>
>>>>>> >> RECOVER PARTITIONS is an enhancement added by Amazon to their
>>>>>> >> version of Hive.
>>>>>> >>
>>>>>> >>
>>>>>> >>
>>>>>> http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/emr-hive-additional-features.html
>>>>>> >>
>>>>>> >> <shameless-plug>
>>>>>> >>   Chapter 21 of Programming Hive discusses this feature and other
>>>>>> aspects
>>>>>> >> of using Hive in EMR.
>>>>>> >> </shameless-plug>
>>>>>> >>
>>>>>> >> dean
>>>>>> >>
>>>>>> >>
>>>>>> >> On Mon, Nov 5, 2012 at 5:34 AM, Chunky Gupta <
>>>>>> [EMAIL PROTECTED]>
>>>>>> >> wrote:
>>>>>> >