Hive >> mail # user >> Compiling Hive ODBC


Re: Compiling Hive ODBC
Hi Sebastien,

Most (if not all) of the ant targets will fail if run from the submodule
directories. You must run the targets from the root source directory
instead.
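In other words, something like this should work (a sketch using the paths from your report below; adjust them to your actual layout, and note that /home/hive/thrift-0.9.0 is assumed to be where you installed Thrift):

```shell
# Run the ant target from the root of the extracted source tree,
# not from the src/odbc submodule directory:
cd /home/hive/hive-0.10.0/src
ant compile-cpp -Dthrift.home=/home/hive/thrift-0.9.0
```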

Thanks.

Carl
On Fri, Apr 26, 2013 at 6:09 AM, Sebastien FLAESCH <[EMAIL PROTECTED]> wrote:

> I have now installed Thrift, but I still get the error with the "osfamily"
> task:
>
> Please help!
>
> hive@orca:~/hive-0.10.0/src/odbc$ ant compile-cpp
> -Dthrift.home=/home/hive/thrift-0.9.0
>
>
> Buildfile: /home/hive/hive-0.10.0/src/odbc/build.xml
>
> BUILD FAILED
> /home/hive/hive-0.10.0/src/odbc/build.xml:30: The following error
> occurred while executing this line:
> /home/hive/hive-0.10.0/src/build-common.xml:117: Problem: failed to
> create task or type osfamily
> Cause: The name is undefined.
> Action: Check the spelling.
> Action: Check that any custom tasks/types have been declared.
> Action: Check that any <presetdef>/<macrodef> declarations have taken
> place.
>
>
> Total time: 0 seconds
>
> Seb
>
>
> On 04/26/2013 01:40 PM, Sebastien FLAESCH wrote:
>
>> Got it, need to install Apache Thrift first ... sorry for this mail.
>> Seb
>>
>> On 04/26/2013 01:33 PM, Sebastien FLAESCH wrote:
>>
>>> Hi all,
>>>
>>> Just started with a Hadoop / Hive POC to write a
>>>
>>> So far I have downloaded the Hadoop and Hive packages from:
>>>
>>> http://hadoop.apache.org/releases.html
>>> http://hive.apache.org/releases.html
>>>
>>> I took:
>>>
>>> http://hadoop.apache.org/releases.html#18+April%2C+2013%3A+Release+0.23.7+available
>>>
>>>
>>>
>>> and
>>>
>>> http://hive.apache.org/releases.html#11+January%2C+2013%3A+release+0.10.0+available
>>>
>>>
>>>
>>>
>>> After setting the env, it seems that hive is working:
>>>
>>> hive@orca:~$ $HIVE_HOME/bin/hive
>>> ...
>>> hive> create table t1 ( k int, s string );
>>> OK
>>>
>>> hive> select count(*) from t1;
>>> Total MapReduce jobs = 1
>>> ...
>>> OK
>>> 0
>>> Time taken: 7.634 seconds
>>>
>>>
>>> Now I want to connect through ODBC...
>>>
>>> Following the instructions of this page:
>>>
>>> https://cwiki.apache.org/confluence/display/Hive/HiveODBC
>>>
>>> Section "Hive Client Build/Setup", I tried:
>>>
>>> hive@orca:~/hive-0.10.0$ cd $HIVE_HOME
>>> hive@orca:~/hive-0.10.0$ ant compile-cpp
>>> -Dthrift.home=/home/hive/hive-0.10.0/odbc
>>> Buildfile: build.xml does not exist!
>>> Build failed
>>>
>>> Then, from the sources directory src/odbc:
>>>
>>> hive@orca:~/hive-0.10.0$ cd src/odbc/
>>> hive@orca:~/hive-0.10.0/src/odbc$ ant compile-cpp
>>> -Dthrift.home=/home/hive/hive-0.10.0/odbc
>>> Buildfile: /home/hive/hive-0.10.0/src/odbc/build.xml
>>>
>>> BUILD FAILED
>>> /home/hive/hive-0.10.0/src/odbc/build.xml:30: The following error
>>> occurred while executing this line:
>>> /home/hive/hive-0.10.0/src/build-common.xml:117: Problem: failed to
>>> create task or type osfamily
>>> Cause: The name is undefined.
>>> Action: Check the spelling.
>>> Action: Check that any custom tasks/types have been declared.
>>> Action: Check that any <presetdef>/<macrodef> declarations have taken
>>> place.
>>>
>>>
>>> Total time: 0 seconds
>>>
>>>
>>> Can someone help or point me to an up to date documentation?
>>>
>>> Thanks!
>>> Seb
>>>
>>>
>>
>