Re: Mapping existing HBase table with many columns to Hive.
kulkarni.swarnim@... 2012-12-10, 00:52
The data type corresponds to the type of data that the qualifier holds, not
the data type of the qualifier itself. So, with the above query, for the
column "value" of type map<string,string>, your qualifiers mapped to map
keys are of type "string" and the underlying values are of type "string".
The patch will just allow Hive to read data which is stored under a binary
qualifier like you have and, once the mapping is done, treat the binary
qualifier as a "string" as well, since that is the only type Hive allows for
map keys.
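For illustration, a minimal sketch of what such a whole-family mapping might
look like (the table name "events", column family "cf", and column names are
hypothetical, not the actual DDL from the earlier messages):

    CREATE EXTERNAL TABLE hbase_events (
      rowkey string,
      value map<string,string>
    )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:")
    TBLPROPERTIES ("hbase.table.name" = "events");

    -- every qualifier in family "cf" becomes a string key of the map
    SELECT value['someQualifier'] FROM hbase_events;

Here "cf:" (the family name with a trailing colon) maps the whole column
family to the Hive MAP column, so each qualifier shows up as a map key of
type string.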
Hope that helps.
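For the question below about decoding Bytes.toBytes-encoded values into typed
Hive columns, a hedged sketch, assuming a Hive build that supports the binary
storage specifier ("#b") in hbase.columns.mapping and using hypothetical
table and qualifier names. This covers row keys and individual value columns,
not the map keys of a whole-family mapping:

    CREATE EXTERNAL TABLE hbase_events_typed (
      rowkey int,
      event_ts bigint
    )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES (
      "hbase.columns.mapping" = ":key#b,cf:ts#b"
    )
    TBLPROPERTIES ("hbase.table.name" = "events");

With the "#b" suffix, Hive interprets the stored bytes as the binary encoding
produced by Bytes.toBytes rather than as a UTF-8 string, so the column can be
declared with a typed Hive column such as int or bigint.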
On Sun, Dec 9, 2012 at 6:03 PM, David Koch <[EMAIL PROTECTED]> wrote:
> I tried the shell command which Swarnim kindly provided, and it allows me
> to map an existing HBase table into Hive. However, since my qualifiers are
> longs but the map only accepts strings as keys, the result is garbled. Even
> with the suggested patch, which allows binary keys, the resulting data type
> in Hive would not be long but binary, making it hard to query from the
> shell. It seems there is no API for this for now, right?
> Currently, is there any way to map HBase bytes to Hive data types?
> The assumption is that all bytes were generated using Hadoop's
> Bytes.toBytes(<type>) method and that all row keys, qualifiers and
> values respectively share a single data type (for example: row keys are
> ints, qualifiers are longs and values are strings).
> Thank you,
> On Thu, Dec 6, 2012 at 9:23 PM, David Koch <[EMAIL PROTECTED]> wrote:
>> Hello Swarnim,
>> Thank you for your answer. I will try the options you pointed out.
>> On Thu, Dec 6, 2012 at 9:10 PM, [EMAIL PROTECTED] <
>> [EMAIL PROTECTED]> wrote: