I am working on a project that has three databases backed by flat files. Our
plan is to normalize these databases into one. We will need to follow the
data-warehouse concept (ETL: Extract, Transform, Load).

We are thinking of using Hadoop for the Transform step, because we need to
relate data from the three databases. Do you think this is a good option?
Is there any tutorial or article about it?
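To make the question concrete, the kind of cross-database relating we have in mind could be sketched as below. This is only an in-memory Python sketch under assumed data shapes (the key field "customer_id" and the record sets are hypothetical placeholders); in practice this join would run as a MapReduce job or a Hive query, not in local Python:

```python
# Sketch: relating records from three flat-file databases on a shared key.
# All field names (customer_id, etc.) are hypothetical placeholders.

def index_by_key(records, key):
    """Build a lookup table from a list of dicts, keyed on one field."""
    return {r[key]: r for r in records}

def join_three(records_a, records_b, records_c, key="customer_id"):
    """Inner-join three record sets on `key`, merging their fields."""
    b_idx = index_by_key(records_b, key)
    c_idx = index_by_key(records_c, key)
    joined = []
    for a in records_a:
        k = a[key]
        # keep only keys present in all three sources (inner join)
        if k in b_idx and k in c_idx:
            merged = {}
            merged.update(a)
            merged.update(b_idx[k])
            merged.update(c_idx[k])
            joined.append(merged)
    return joined
```

The same inner-join semantics is what a three-way JOIN in Hive would express over the extracted tables.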

We are also thinking of using Hive to extract the files, load them into
Hadoop (HDFS), and then query them with Hive. At this step we would trim
blank spaces, remove duplicate records, and transform a name field into an ID.
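The cleaning step above could be sketched like this. Again a hedged, illustrative sketch only: the column layout and the surrogate-ID scheme are assumptions, and in the real pipeline this would more likely be expressed as Hive queries (TRIM, DISTINCT, a dimension table for the IDs) or a MapReduce job:

```python
# Sketch of the Transform-step cleaning: trim blank spaces, drop exact
# duplicate rows, and replace a free-text name with a surrogate integer ID.
# The column positions used here are illustrative assumptions.

def clean_rows(rows):
    """Strip whitespace from every field and drop exact duplicate rows."""
    seen = set()
    cleaned = []
    for row in rows:
        trimmed = tuple(field.strip() for field in row)
        if trimmed not in seen:      # deduplicate after trimming
            seen.add(trimmed)
            cleaned.append(trimmed)
    return cleaned

def names_to_ids(rows, name_col=0):
    """Replace the name column with a surrogate ID; return rows and the map."""
    name_to_id = {}
    out = []
    for row in rows:
        name = row[name_col]
        if name not in name_to_id:
            name_to_id[name] = len(name_to_id) + 1  # assign IDs 1, 2, ...
        new_row = list(row)
        new_row[name_col] = name_to_id[name]
        out.append(tuple(new_row))
    return out, name_to_id
```

The name-to-ID map would become the lookup (dimension) table in the normalized warehouse schema.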

What is your experience with this?

Thanks a lot for any contribution!

---- Felipe Oliveira Gutierrez -- [EMAIL PROTECTED]
