If your requirement is that queries are not going to be run on the fly, then I
would suggest the following:
1) Create a Hive script.
2) Combine it with an Oozie workflow to run at a scheduled time and push the
results to some DB, say MySQL.
3) Use some application to talk to MySQL and generate those reports.
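The batch half of the steps above (run the Hive script, push results to MySQL) can be sketched roughly as below. This is a minimal sketch, not a full Oozie action: the script name, table name, and the assumption that the results go through the Hive CLI's tab-separated output are all illustrative.

```python
# Sketch of the scheduled batch job: run a Hive script via the CLI and
# prepare its output for loading into MySQL. All names are illustrative.
import subprocess

HIVE_SCRIPT = "daily_report.hql"  # hypothetical Hive script (step 1)

def run_hive_script(script_path):
    """Run `hive -f <script>` and return its stdout.

    The Hive CLI prints query results as tab-separated lines.
    """
    return subprocess.run(
        ["hive", "-f", script_path],
        check=True, capture_output=True, text=True,
    ).stdout

def parse_tsv(output):
    """Split the CLI's tab-separated output into rows of fields."""
    return [line.split("\t") for line in output.splitlines() if line]

def to_mysql_insert(table, rows):
    """Build a parameterized INSERT for a DB-API driver (e.g.
    mysql-connector); pass the result to cursor.executemany()."""
    placeholders = ", ".join(["%s"] * len(rows[0]))
    sql = "INSERT INTO {} VALUES ({})".format(table, placeholders)
    return sql, rows
```

In a real Oozie workflow the Hive query would typically run as a Hive action and the load step as a Sqoop export, but the data flow is the same.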
On Thu, Dec 13, 2012 at 7:15 PM, Manish Malhotra <
[EMAIL PROTECTED]> wrote:
> Ideally, push the aggregated data to some RDBMS like MySQL and expose a REST
> API (or some other API) so the UI can build reports or queries out of it.
> If the use case is an ad-hoc query, then once that query is submitted and the
> result is generated in batch mode, a REST API can be provided to fetch the
> results from HDFS directly.
> For this you can use WebHDFS, or build your own service that internally uses
> the FileSystem API.
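The WebHDFS route mentioned above can be sketched with nothing but the standard library: WebHDFS is a plain HTTP REST API, and reading a file is a GET with `op=OPEN`. The namenode host, the result path, and the port (50070 was the default namenode HTTP port at the time) are illustrative assumptions.

```python
# Sketch of fetching a batch result file from HDFS over WebHDFS.
# Hostnames, port, and paths are assumptions for illustration.
import urllib.request

def webhdfs_open_url(host, path, port=50070, user=None):
    """Build the WebHDFS v1 URL for reading a file (op=OPEN)."""
    url = "http://{}:{}/webhdfs/v1{}?op=OPEN".format(host, port, path)
    if user:
        url += "&user.name={}".format(user)
    return url

def read_result(host, path):
    """Fetch the file; WebHDFS redirects OPEN to a datanode, and
    urllib follows that redirect automatically."""
    with urllib.request.urlopen(webhdfs_open_url(host, path)) as resp:
        return resp.read()
```

A REST service wrapping HDFS would do essentially this per request, or use the Java FileSystem API server-side instead.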
> On Wed, Dec 12, 2012 at 11:30 PM, Nitin Pawar <[EMAIL PROTECTED]>wrote:
>> Hive takes longer to respond to queries as the data grows. The best way to
>> handle this is to process the data in Hive and store the results in some
>> RDBMS like MySQL.
>> On top of that, you can then write your own API or use an interface like
>> Pentaho, where users can write queries or view predefined reports.
>> Alternatively, Pentaho has a Hive connection as well. There are other
>> platforms such as Talend, Datameer, etc. that you can have a look at.
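The "write your own API" suggestion above amounts to a thin read-only endpoint over the precomputed tables. A minimal sketch, using sqlite3 purely as a stand-in for MySQL (any DB-API driver such as mysql-connector works the same way; the report and column names are illustrative):

```python
# Sketch of a read-only reporting endpoint body: serve rows from a
# precomputed report table as JSON. sqlite3 stands in for MySQL here.
import json
import sqlite3

# Whitelist of report tables the UI may request; never accept raw SQL
# from the frontend.
ALLOWED_REPORTS = {"daily_pageviews"}

def fetch_report(conn, report):
    """Return the named report table as a list of JSON-able dicts."""
    if report not in ALLOWED_REPORTS:
        raise ValueError("unknown report: {}".format(report))
    # Safe to interpolate: `report` was checked against the whitelist.
    cur = conn.execute("SELECT * FROM {}".format(report))
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

def report_json(conn, report):
    """What a GET /reports/<name> handler would write to the response."""
    return json.dumps(fetch_report(conn, report))
```

The frontend then only ever calls the API; it never touches Hive or MySQL directly, which keeps query latency predictable.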
>> On Thu, Dec 13, 2012 at 1:15 AM, Leena Gupta <[EMAIL PROTECTED]>wrote:
>>> We are using Hive as our data warehouse to run various queries on large
>>> amounts of data. There are some users who would like to get access to the
>>> output of these queries and display the data on an existing UI application.
>>> What is the best way to give them the output of these queries? Should we
>>> write REST APIs that the Front end can call to get the data? How can this
>>> be done?
>>> I'd like to know what other people have done to meet this requirement.
>>> Any pointers would be very helpful.
>> Nitin Pawar