

Re: incremental loads into hadoop
There are two approaches to getting this kind of OLTP data into Hadoop:

   1. Stream it in continuously as it arrives, using a log-collection tool
   such as HStreaming or Scribe.
   2. Otherwise, use Chukwa to collect and buffer the incoming data, and
   move it to HDFS once a decent volume has accumulated.
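The second approach (accumulate records locally, then ship them to HDFS in bulk) can be sketched as below. This is an illustrative sketch only, not Chukwa's actual API: the SpoolingLoader class, the roll-size threshold, and the stubbed _upload method are all hypothetical; a real loader would hand the rolled file to `hadoop fs -put` or a collector daemon.

```python
import os
import tempfile

class SpoolingLoader:
    """Buffer incoming records in a local spool file; when the file reaches
    a size threshold, roll it and hand it off for bulk upload to HDFS."""

    def __init__(self, spool_dir, roll_bytes):
        self.spool_dir = spool_dir
        self.roll_bytes = roll_bytes   # flush once this much data accumulates
        self.seq = 0
        self.current = os.path.join(spool_dir, "spool.log")
        self.uploaded = []             # files handed off to HDFS (stubbed)

    def append(self, record):
        # Append one record to the current spool file.
        with open(self.current, "a") as f:
            f.write(record + "\n")
        if os.path.getsize(self.current) >= self.roll_bytes:
            self._roll()

    def _roll(self):
        # Rename the full spool file so new writes start a fresh file,
        # then hand the completed part file off for bulk upload.
        done = os.path.join(self.spool_dir, "part-%05d" % self.seq)
        os.rename(self.current, done)
        self.seq += 1
        self._upload(done)

    def _upload(self, path):
        # Stub: a real loader would run `hadoop fs -put <path> /data/oltp/`.
        self.uploaded.append(path)

# Usage: small roll threshold so the hand-off is visible quickly.
loader = SpoolingLoader(tempfile.mkdtemp(), roll_bytes=64)
for i in range(20):
    loader.append("event-%d" % i)
```

Rolling on size (rather than per record) keeps HDFS files large, which matters because HDFS and MapReduce handle a few big files far better than many tiny ones.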

            Thanks and Regards,
On Sat, Oct 1, 2011 at 4:32 AM, Sam Seigal [via Lucene] <
ml-node+[EMAIL PROTECTED]> wrote:

> Hi,
> I am relatively new to Hadoop and was wondering how to do incremental
> loads into HDFS.
> I have a continuous stream of data flowing into a service which is
> writing to an OLTP store. Due to the high volume of data, we cannot do
> aggregations on the OLTP store, since this starts affecting the write
> performance.
> We would like to offload this processing into a Hadoop cluster, mainly
> for doing aggregations/analytics.
> The question is how can this continuous stream of data be
> incrementally loaded and processed into Hadoop ?
> Thank you,
> Sam
View this message in context: http://lucene.472066.n3.nabble.com/incremental-loads-into-hadoop-tp3383949p3385689.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.