Re: Hadoop and MainFrame integration
The problem is that Hadoop relies on HDFS for storage: data is split into blocks of 64 or 128 MB (or whatever size you configure; 64 MB is the default), and the computation then runs against those blocks.
So if you want Hadoop to do the sorting, you first need to move all of your data into an HDFS cluster so the MapReduce jobs can read it.
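A minimal sketch of that data movement, assuming the mainframe extract has already been transferred to a machine with a Hadoop client; the paths and block-size value are hypothetical, not from the original thread:

```shell
# Create a target directory in HDFS (hypothetical path).
hadoop fs -mkdir -p /user/etl/input

# Copy the flat file in, overriding the 64 MB default block size
# with 128 MB (value is in bytes).
hadoop fs -D dfs.block.size=134217728 \
    -put /staging/mainframe_extract.dat /user/etl/input/

# Inspect how the file was split into blocks.
hadoop fsck /user/etl/input/mainframe_extract.dat -files -blocks
```

Once the data is in HDFS, a MapReduce job can sort it; but as the original question notes, the sorted output would still have to be transferred back to the mainframe afterwards.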
Best wishes

On 28/08/2012 12:24, Siddharth Tiwari wrote:
> Hi Users.
>
> We have flat files on mainframes with around a billion records. We
> need to sort them and then use them with different jobs on mainframe
> for report generation. I was wondering was there any way I could
> integrate the mainframe with hadoop do the sorting and keep the file
> on the sever itself ( I do not want to ftp the file to a hadoop
> cluster and then ftp back the sorted file to Mainframe as it would
> waste MIPS and nullify the advantage ). This way I could save on MIPS
> and ultimately improve profitability.
>
> Thank you in advance
>
>
> **------------------------**
> *_Cheers !!!_*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of
> worship of God."*
> *"Maybe other people will try to limit me but I don't limit myself"*
>
>