First off, processing data from (say) Apache logs in Hive and storing aggregates in a reporting server, as you mentioned, is a fairly common paradigm.
You have some large scale data (Apache logs) and some dimension data (user data). The problem you really have is how to make use of this dimension data during your analysis.
For each of your options:
1. Running away from your problem isn't really a solution :-)
2. You wouldn't want to connect to the Application DB from your Hive/Hadoop jobs. In the worst case, all your Hadoop nodes could be hitting your Application DB at the same time.
3. Now this sounds promising. Have a periodic job that runs and populates/updates a file on S3 with the user status information. Then create an external Hive table on top of this S3 file and use it in your analysis. Alternatively (depending on the use case, size, and other considerations), you could add this S3 file to the distributed cache; that way the file becomes available to all mappers and reducers for possible consumption.
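To make option 3 concrete, here's a minimal HiveQL sketch. The table names, columns, delimiter, and S3 path are all illustrative assumptions, not taken from your setup:

```sql
-- Hypothetical file layout: one tab-delimited row per user with their
-- current status and location, refreshed periodically by a batch job.
CREATE EXTERNAL TABLE user_status (
  user_id   BIGINT,
  status    STRING,
  location  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://your-bucket/user_status/';

-- The dimension data can then be joined into the log analysis,
-- e.g. counting log hits per user location:
SELECT u.location, COUNT(*) AS hits
FROM apache_logs l
JOIN user_status u ON (l.user_id = u.user_id)
GROUP BY u.location;
```

If the dimension table is small, Hive can turn this into a map-side join, which is essentially the same idea as the distributed-cache suggestion above.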
Here is option 4: what if you analyzed the data for all locations of each user? Then, when you export this aggregate data to the reporting server, drop/ignore each user's aggregates for all but their present location. That way, Hive just needs a list of new/changed users to run its periodic analysis on. Since your stakeholders would be interacting only with the reporting server, you just need to make sure that they see the latest location data.
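A rough sketch of option 4, again with made-up table and column names: aggregate over every (user, location) pair in Hive, then filter down to each user's current location at export time:

```sql
-- Aggregate per user and per location seen in the logs:
INSERT OVERWRITE TABLE agg_user_location
SELECT l.user_id, l.location, COUNT(*) AS hits
FROM apache_logs l
GROUP BY l.user_id, l.location;

-- At export time, keep only the row matching each user's present
-- location (user_current is assumed to hold one latest-location
-- row per user):
SELECT a.user_id, a.location, a.hits
FROM agg_user_location a
JOIN user_current c
  ON (a.user_id = c.user_id AND a.location = c.location);
```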
----- Original Message -----
From: "shrikanth shankar" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Sent: Tuesday, May 15, 2012 2:41:21 PM
Subject: Re: What's the right data storage/representation?
Hive tables can sit on top of S3 storage, so you don't really need a separate export process.
On May 15, 2012, at 11:35 AM, Jon Palmer wrote:
> That seems like a very reasonable approach. However, if we use a technology like Amazon Elastic Map Reduce my Hive cluster is (potentially) going to be destroyed and recreated. As a result I'd really need to export the update history Hive table to some other store (like S3) so that it can be re-imported on the next spin up of the Hive cluster. Do I have that right?
> -----Original Message-----
> From: shrikanth shankar [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, May 15, 2012 1:14 PM
> To: [EMAIL PROTECTED]
> Subject: Re: What's the right data storage/representation?
> I would agree on keeping track of the history of updates in a separate table in Hive (you may not need to maintain it in the application tier). This looks like the "Slowly Changing Dimension" pattern used in other (more traditional) data warehouses. I suspect the challenge here would be writing an ETL process to maintain the Hive table based on the current status of the application DB table.
> On May 15, 2012, at 9:41 AM, Owen O'Malley wrote:
>> On Tue, May 15, 2012 at 5:11 AM, Jon Palmer <[EMAIL PROTECTED]> wrote:
>>> I can see a few potential solutions:
>>> 1. Don't solve it. Accept that you have some artifacts in your
>>> reporting data that cannot be recovered from the source data.
>>> 2. Create status and location history tables in the application db and
>>> use that during the analytics process.
>>> 3. Log the status and location change 'events' to some other log file
>>> and use those logs in the Hive analysis.
>> I would probably create a Hive table that includes the status and
>> location updates. One of the advantages of Hive & Hadoop is that it is
>> easy to store the raw information in bulk and continue to process it.
>> Once you have the information, you will likely find new uses for it.
>> -- Owen