

Re: default capacity scheduler only one job in running status
I did some tests; the results are below. I also noticed that the user only
used 11 GB of memory. Since the resource calculator is based on memory, I
guess that if I allocate more memory, more jobs can run in parallel. The
total memory listed in my RM UI (http://RM_IP:8088) is 24 GB. I wonder why
the user cannot take all 24 GB for running its jobs. Is some option
limiting it?
yarn.scheduler.capacity.maximum-am-resource-percent = 0.8
yarn.scheduler.capacity.root.default.user-limit-factor = 0.3
  -> 2 jobs running (small-memory jobs)

yarn.scheduler.capacity.maximum-am-resource-percent = 0.8
yarn.scheduler.capacity.root.default.user-limit-factor = 0.2
  -> 1 job running (small-memory jobs)

yarn.scheduler.capacity.maximum-am-resource-percent = 0.9
yarn.scheduler.capacity.root.default.user-limit-factor = 0.4
  -> 3 jobs running (small-memory jobs)
  -> 1 job running, 3 jobs blocked (large-memory jobs)

yarn.scheduler.capacity.maximum-am-resource-percent = 0.9
yarn.scheduler.capacity.root.default.user-limit-factor = 0.2
  -> 2 jobs running
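A back-of-the-envelope sketch of how maximum-am-resource-percent gates how many jobs can leave ACCEPTED at once. The 24 GB figure comes from the RM UI mentioned above; the 2 GB per-ApplicationMaster container size is a made-up assumption for illustration, not something stated in this thread:

```python
import math

def max_concurrent_ams(total_mem_gb, am_resource_percent, am_container_gb):
    """maximum-am-resource-percent caps the share of cluster memory that
    ApplicationMaster containers may occupy; once that AM pool is full,
    additional jobs wait in ACCEPTED. Simplified model."""
    am_pool_gb = total_mem_gb * am_resource_percent
    return math.floor(am_pool_gb / am_container_gb)

# 24 GB cluster, AM share 0.8, hypothetical 2 GB per AM container:
print(max_concurrent_ams(24, 0.8, 2))  # 9
```

In practice the per-user limit (user-limit-factor) usually kicks in before the AM pool is exhausted, which is consistent with the small job counts observed above.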
On Tue, Nov 26, 2013 at 6:58 PM, Olivier Renault
<[EMAIL PROTECTED]>wrote:

> At the queue level, you've defined a certain amount of resources. For
> argument's sake, let's say your queue is allowed to consume 50% of your
> cluster, with a maximum of 100%. As a single user, you won't be able to
> consume more than 50%. If you've got two different users within the queue,
> together they would be able to use 100% of the overall cluster. You can
> define how much of the overall queue a user is entitled to take by playing
> with yarn.scheduler.capacity.root.production.user-limit-factor.
>
> If, with job1, user A has reached the maximum he is entitled to, he will
> need to wait for some resources to become free before job2 starts.
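A simplified sketch of the per-user limit described above, using the 50%/100% numbers from this reply. The real CapacityScheduler computation also involves minimum-user-limit-percent and active-user counts; this only models the user-limit-factor cap:

```python
def user_memory_cap(cluster_mem_gb, queue_capacity, user_limit_factor,
                    queue_max_capacity=1.0):
    """Rough per-user memory cap: the queue's configured capacity times
    user-limit-factor, never exceeding the queue's maximum capacity.
    Simplified model of the CapacityScheduler user limit."""
    cap = cluster_mem_gb * queue_capacity * user_limit_factor
    return min(cap, cluster_mem_gb * queue_max_capacity)

# Queue at 50% of a 24 GB cluster, factor 1: one user stops at 12 GB.
print(user_memory_cap(24, 0.50, 1))  # 12.0
# Raising the factor to 2 lets a single user grow into the 100% max.
print(user_memory_cap(24, 0.50, 2))  # 24.0
```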
>
> Olivier
>
>
> On 26 November 2013 10:46, ch huang <[EMAIL PROTECTED]> wrote:
>
>> So, by default, if user A submits 5 jobs, only 1 job is running; if I
>> modify the option value to 5, all the jobs will run in parallel, right?
>>
>>
>>  On Tue, Nov 26, 2013 at 6:29 PM, Olivier Renault <
>> [EMAIL PROTECTED]> wrote:
>>
>>> If you're running all the jobs as the same user, by default you can't
>>> take more than the queue's configured value. It can be changed by setting
>>> the following in capacity-scheduler.xml:
>>>
>>> <property>
>>>   <name>yarn.scheduler.capacity.root.production.user-limit-factor</name>
>>>   <value>1</value>
>>> </property>
>>>
>>> Olivier
>>>
>>>
>>> On 26 November 2013 09:20, ch huang <[EMAIL PROTECTED]> wrote:
>>>
>>>> hi, maillist:
>>>>             I set the following option in yarn-site.xml to make the YARN
>>>> framework use the capacity scheduler, but when I submit three jobs, only
>>>> one job is in RUNNING status and the other two stay in ACCEPTED status.
>>>> Why? The default queue has only 50% of its capacity used, so I do not
>>>> understand why.
>>>>
>>>> <property>
>>>>     <name>yarn.resourcemanager.scheduler.class</name>
>>>>     <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
>>>> </property>
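For reference, the knobs discussed later in this thread live in capacity-scheduler.xml rather than yarn-site.xml. A minimal illustrative fragment might look like the following; the 50% figure echoes the default-queue capacity mentioned above, and all values are examples, not recommendations:

```xml
<!-- capacity-scheduler.xml: illustrative values only -->
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <!-- lets a single user exceed the queue's configured capacity -->
  <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
  <value>2</value>
</property>
```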
>>>>
>>>
>>>
>>>
>>>
>>>