HDFS >> mail # user >> Uber Job!


    Suppose your input is 10 files with a total size of 64 MB; I think you will get 10 maps.

    By the way, uber mode exists only on YARN. Suppose you actually have 1 map: YARN will normally create at least two containers, one for the application master and one for the map. If uber mode is enabled, YARN creates a single container that runs both the application master and the map.
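For reference, the uber-mode knobs can be set in mapred-site.xml or per job. This is a sketch assuming the Hadoop 2.x (YARN) property names; the thresholds shown are the defaults as I recall them, so double-check mapred-default.xml for your version:

```xml
<!-- Enable uber mode: run small jobs inside the MRAppMaster's own container. -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>
</property>
<!-- A job runs as uber only if it stays within all three thresholds below. -->
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>9</value> <!-- maximum number of map tasks -->
</property>
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value> <!-- maximum number of reduce tasks -->
</property>
<property>
  <name>mapreduce.job.ubertask.maxbytes</name>
  <!-- maximum total input bytes; defaults to the DFS block size if unset -->
  <value>67108864</value>
</property>
```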

Sent from my iPhone

On 2013-5-6, 22:45, Rahul Bhattacharjee <[EMAIL PROTECTED]> wrote:

> Hi,
> I was going through the definition of Uber Job of Hadoop.
> A job is considered uber when it has 10 or fewer maps, one reducer, and the complete input is smaller than one DFS block.
> I have some doubts here-
> Splits are created as per the DFS block size. Creating 10 mappers from one block of data is possible with a settings change (lowering the max split size). But I am trying to understand why a job would need to run around 10 maps for 64 MB of data.
> One reason may be that the job is immensely CPU intensive. Would that be a correct assumption, or is there any other reason for this?
> Thanks,
> Rahul
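The settings change Rahul mentions (lowering the max split size so one 64 MB block yields around 10 splits) can be sketched as follows, assuming the Hadoop 2.x property name; older releases use mapred.max.split.size for the same knob:

```xml
<!-- Cap each input split at roughly 6.4 MB so a single 64 MB block
     is carved into about 10 splits, and hence about 10 map tasks. -->
<property>
  <name>mapreduce.input.fileinputformat.split.maxsize</name>
  <value>6710886</value> <!-- bytes; approximately 64 MB / 10 -->
</property>
```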