Re: Memory based scheduling
Not true; take a look at my previous response.
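
For context: the Hadoop 1.x CapacityScheduler can schedule on memory directly. The cluster declares the memory size of one slot, a job declares how much memory its tasks need, and a task that asks for more than one slot's worth occupies multiple slots. A minimal sketch, assuming 2 GB slots (all values here are illustrative):

    <!-- mapred-site.xml: one map slot is worth 2 GB -->
    <property>
      <name>mapred.cluster.map.memory.mb</name>
      <value>2048</value>
    </property>
    <!-- upper bound on what a single map task may request -->
    <property>
      <name>mapred.cluster.max.map.memory.mb</name>
      <value>8192</value>
    </property>

    <!-- per job, e.g. via -Dmapred.job.map.memory.mb=4096:
         each map task then occupies 4096 / 2048 = 2 slots,
         so at most half the usual number of mappers run per node -->
    <property>
      <name>mapred.job.map.memory.mb</name>
      <value>4096</value>
    </property>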

On Oct 30, 2012, at 9:08 AM, lohit wrote:

> As far as I recall this is not possible. Per-job or per-user configurations like these are a little difficult in the existing version.
> What you could try is to set the maximum number of maps per job to, say, half of the cluster capacity. (This is possible with the FairScheduler; I do not know about the CapacityScheduler.)
> For example, if you have 10 nodes with 4 slots each, you would create a pool and set its max maps to 20 (see the fair-scheduler sketch at the end of this message).
> The JobTracker will try its best to spread tasks across nodes provided there are empty slots, but again, this is not guaranteed.
>
>
> 2012/10/30 Marco Zühlke <[EMAIL PROTECTED]>
> Hi,
>
> on our cluster, our jobs are usually satisfied with less than 2 GB of heap space,
> so we allow a maximum of 3 maps on our 8 GB machines and 4 maps on our
> 16 GB machines (we only have quad-core CPUs and want to leave memory
> for the reducers). This works very well.
>
> But now we have a new kind of job. Each mapper requires at least 4 GB
> of heap space.
>
> Is it possible to limit the number of tasks (mappers) per computer to 1 or 2 for
> these kinds of jobs?
>
> Regards,
> Marco
>
>
>
>
> --
> Have a Nice Day!
> Lohit

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/
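
For reference, the pool suggestion quoted above corresponds to an allocation file like the following. A minimal sketch, assuming the Hadoop 1.x FairScheduler; the pool name "highmem" is made up for illustration:

    <!-- fair-scheduler.xml (the file pointed to by
         mapred.fairscheduler.allocation.file) -->
    <allocations>
      <pool name="highmem">
        <!-- cap this pool at 20 concurrent map tasks:
             10 nodes * 4 slots, halved -->
        <maxMaps>20</maxMaps>
      </pool>
    </allocations>

A job opts into the pool by setting mapred.fairscheduler.pool=highmem, e.g. hadoop jar job.jar -Dmapred.fairscheduler.pool=highmem ... Note that maxMaps caps the pool's total concurrent maps across the cluster, not the number per node, which is why the spreading is best-effort.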