Re: Max Maps for default queues in FairScheduler
You're correct that there's no way to put a hard limit on the number of
maps or reduces for a given user, and a user can potentially consume all of
the cluster resources.  However, if there are multiple users contending for
resources, the scheduler makes an effort to schedule tasks equally, so it
would be unlikely for a single user to get that much of the cluster.
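
For reference, here is a minimal sketch of an allocations file (fair-scheduler.xml) for the 1.x-era FairScheduler; the pool and user names are made up. Pools can be capped with maxMaps/maxReduces, but the per-user knobs are limited to running-job counts:

<?xml version="1.0"?>
<!-- Hypothetical allocations file; pool and user names are illustrative. -->
<allocations>
  <!-- Per-pool caps: maxMaps/maxReduces bound the slots this pool can use at once. -->
  <pool name="analytics">
    <minMaps>10</minMaps>
    <minReduces>5</minReduces>
    <maxMaps>40</maxMaps>
    <maxReduces>20</maxReduces>
    <maxRunningJobs>10</maxRunningJobs>
  </pool>
  <!-- Per-user limits are job counts only; there is no userMaxMaps/userMaxReduces. -->
  <user name="lohit">
    <maxRunningJobs>4</maxRunningJobs>
  </user>
  <!-- Default running-job limit for users without an explicit <user> element. -->
  <userMaxJobsDefault>3</userMaxJobsDefault>
</allocations>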

Can I ask what you need a userMaxMaps/Reducers for?

On Thu, Oct 18, 2012 at 4:41 PM, lohit <[EMAIL PROTECTED]> wrote:

> I am trying to understand the FairScheduler configs and to see if there
> is a way to achieve the below.
> I see that if there are no pools configured (or only a few pools are
> configured) and a user submits a job, it would end up in his own pool,
> right?
> Now, I see there are some limits you can set globally for such users, for
> example userMaxJobsDefault.
> Is there a way to set userMaxMaps or userMaxReducers? It looks like if I
> have a few pools configured, a user who submits a job without specifying a
> pool will be given his own pool. He can potentially consume 100% of the
> Map/Reduce slots. Is my understanding correct?
>
> --
> Have a Nice Day!
> Lohit
>