Re: Efficient way to read a large number of files in S3 and upload their content to HBase
Marcos Ortiz 2012-05-24, 19:52
On 05/24/2012 03:21 PM, Amandeep Khurana wrote:
> Can you elaborate on your use case a little bit? What is the nature of
> data in S3 and why you want to use HBase? Why do you want to combine
> HFiles and upload back to S3? It'll help us answer your questions.
Ok, let me explain more.
We are working on an ads optimization platform on top of Hadoop and HBase.
Another team in my organization creates a log file per user click and
stores it in S3. I discussed with them that a better approach would be to
store this "workflow" log in HBase instead of S3, because that way we can
skip the extra step of reading the file content from S3, building the
HFile, and uploading it.
Each file stored in S3 contains the basic information for the operation:
- Source URL
- User Id
- User agent of the user
- Campaign id
and more fields.
We then want to run MapReduce jobs on top of HBase to do calculations and
build reports from this data.
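For example, a job to count clicks per campaign could look roughly like
this (only a sketch; the table name "clicks", the column family "data",
and the qualifier "campaign_id" are placeholder names, since we haven't
fixed the schema yet):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class CampaignClickCount {

    // Emits (campaign_id, 1) for every row of the workflow table.
    static class ClickMapper extends TableMapper<Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(ImmutableBytesWritable row, Result result,
                           Context ctx) throws IOException, InterruptedException {
            byte[] campaign = result.getValue(Bytes.toBytes("data"),
                                              Bytes.toBytes("campaign_id"));
            if (campaign != null) {
                ctx.write(new Text(Bytes.toString(campaign)), ONE);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "campaign-click-count");
        job.setJarByClass(CampaignClickCount.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // bigger scan batches for MR throughput
        scan.setCacheBlocks(false);  // don't pollute the region server cache

        TableMapReduceUtil.initTableMapperJob("clicks", scan,
            ClickMapper.class, Text.class, IntWritable.class, job);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}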
We are evaluating HBase because our current solution is built on
PostgreSQL, and the main issue is that when a campaign is launched on the
platform, the INSERTs and UPDATEs to PostgreSQL can jump from 1 to
100 clicks per second in a short time. In some preliminary tests, the
table where we store the "workflow" log grew to 350,000 tuples in only
two days, so it could become a problem.
For that reason, we want to migrate this to HBase.
But I think generating a file in S3 and then uploading it to HBase is not
the best way to do this; you can always create the workflow log entry for
every user click, build a Put for it, and write it to HBase directly. To
avoid blocking the application, I'm evaluating the asynchronous HBase
client (asynchbase) from StumbleUpon.
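Something like this is what I have in mind (a rough sketch; the ZooKeeper
host, the table "clicks", the family "data" and the userId-timestamp row
key are just assumptions for illustration):

import org.hbase.async.HBaseClient;
import org.hbase.async.PutRequest;

public class ClickWriter {

    public static void main(String[] args) throws Exception {
        // Connects through the ZooKeeper quorum of the cluster (placeholder host).
        final HBaseClient client = new HBaseClient("zk-host.example.com");

        final byte[] table  = "clicks".getBytes();   // hypothetical table
        final byte[] family = "data".getBytes();     // hypothetical column family
        // e.g. userId-timestamp, so the rows of a user sort together
        final byte[] key    = "user42-1337893200".getBytes();

        // One PutRequest per field of the workflow log entry; the calls
        // return immediately and asynchbase batches them in the background.
        client.put(new PutRequest(table, key, family,
                "source_url".getBytes(), "http://example.com/ad".getBytes()));
        client.put(new PutRequest(table, key, family,
                "user_agent".getBytes(), "Mozilla/5.0".getBytes()));
        client.put(new PutRequest(table, key, family,
                "campaign_id".getBytes(), "1234".getBytes()));

        // shutdown() flushes pending RPCs before closing the connections.
        client.shutdown().join();
    }
}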
What do you think about this?
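For comparison, the distcp + bulk-import path you suggested below would be
roughly this (again a sketch with the same placeholder names; the mapper
assumes one tab-separated record per line, which is made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class S3LogBulkLoad {

    // Turns each log line (assumed tab-separated: url, userId, userAgent,
    // campaignId) into a Put keyed by userId-offset.
    static class LogMapper
            extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws java.io.IOException, InterruptedException {
            String[] f = line.toString().split("\t");
            byte[] rowKey = Bytes.toBytes(f[1] + "-" + offset.get());
            Put put = new Put(rowKey);
            put.add(Bytes.toBytes("data"), Bytes.toBytes("source_url"),
                    Bytes.toBytes(f[0]));
            put.add(Bytes.toBytes("data"), Bytes.toBytes("user_agent"),
                    Bytes.toBytes(f[2]));
            put.add(Bytes.toBytes("data"), Bytes.toBytes("campaign_id"),
                    Bytes.toBytes(f[3]));
            ctx.write(new ImmutableBytesWritable(rowKey), put);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "s3-log-bulk-load");
        job.setJarByClass(S3LogBulkLoad.class);
        job.setMapperClass(LogMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);

        FileInputFormat.addInputPath(job, new Path(args[0])); // HDFS dir after distcp
        Path hfileDir = new Path(args[1]);                    // staging dir for HFiles
        FileOutputFormat.setOutputPath(job, hfileDir);

        HTable table = new HTable(conf, "clicks");
        // Sets the sort reducer, the partitioner and HFileOutputFormat,
        // so the output HFiles line up with the table's regions.
        HFileOutputFormat.configureIncrementalLoad(job, table);

        if (job.waitForCompletion(true)) {
            // Moves the generated HFiles into the regions.
            new LoadIncrementalHFiles(conf).doBulkLoad(hfileDir, table);
        }
    }
}

It works, but it is still an extra hop compared to writing the Puts
directly as the clicks happen.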
> On May 24, 2012, at 12:19 PM, Marcos Ortiz <[EMAIL PROTECTED]> wrote:
>> Thanks a lot for your answer, Amandeep.
>> On 05/24/2012 02:55 PM, Amandeep Khurana wrote:
>>> You could do a distcp from S3 to HDFS and then do a bulk import into HBase.
>> The number of files is very large, so we want to combine some files and
>> then construct the HFiles to upload to HBase.
>> Any example of a custom FileMerger for this?
>>> Are you running HBase on EC2 or on your own hardware?
>> We have created a small HBase cluster on our own hardware, but we want
>> to build another cluster on top of Amazon EC2. That could be very good
>> for the integration between S3 and the HBase cluster.
>>> On Thursday, May 24, 2012 at 11:52 AM, Marcos Ortiz wrote:
>>>> Regards to all on the list.
>>>> We are using Amazon S3 to store millions of files in a certain format,
>>>> and we want to read the content of these files and then upload it to
>>>> an HBase cluster.
>>>> Has anyone done this?
>>>> Can you recommend an efficient way to do it?
>>>> Best wishes.
>>>> Marcos Luis Ortíz Valmaseda
>>>> Data Engineer & Sr. System Administrator at UCI
>>>> Twitter: @marcosluis2186
>> Marcos Luis Ortíz Valmaseda
>> Data Engineer & Sr. System Administrator at UCI
>> Twitter: @marcosluis2186
Marcos Luis Ortíz Valmaseda
Data Engineer & Sr. System Administrator at UCI