From: Arun C Murthy [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 11, 2013 7:25 PM
To: [EMAIL PROTECTED]
Subject: Re: worker affinity and YARN scheduling
On Nov 11, 2013, at 6:28 AM, John Lilley <[EMAIL PROTECTED]> wrote:
I would like to better understand YARN's scheduling with named worker nodes and relaxLocality==true. For example, suppose I have a three-node cluster with nodes A, B, and C, and each node has the capacity to run two tasks of the kind I want simultaneously. My AM then requests nine containers with the worker (node) names set, so that I am requesting three containers per worker. The cluster starts idle and has no other users. My questions:
* Is it optimal to issue three ResourceRequests, each with numContainers==3? (As opposed to nine requests)
Correct. That is why the resource-request protocol is designed the way it is, i.e. to reduce the number of requests required.
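For illustration, here is a minimal sketch of that shape of request using the Hadoop 2.x records API; the hostnames nodeA/nodeB/nodeC and the 2 GB / 1 vcore capability are placeholders, not anything from your setup:

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class NodeLocalAsks {
  public static List<ResourceRequest> buildAsks() {
    // Placeholder capability: 2 GB, 1 vcore per container.
    Resource capability = Resource.newInstance(2048, 1);
    Priority priority = Priority.newInstance(1);

    // One ResourceRequest per node, each with numContainers == 3,
    // instead of nine single-container requests.
    ResourceRequest onA = ResourceRequest.newInstance(priority, "nodeA", capability, 3);
    ResourceRequest onB = ResourceRequest.newInstance(priority, "nodeB", capability, 3);
    ResourceRequest onC = ResourceRequest.newInstance(priority, "nodeC", capability, 3);

    return Arrays.asList(onA, onB, onC);
  }
}

Note that when relaxLocality is left true, node-level requests are normally accompanied by matching rack-level and ResourceRequest.ANY requests so the scheduler can fall back; if you use AMRMClient rather than the raw records, it builds those for you from the individual ContainerRequests you add.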
* Initially, I expect the RM to allocate two containers per node, and I expect to have the containers match the named workers. Is this always the case?
Generally - yes. It does depend on the scheduler implementation though.
* If the first task completes on worker "B", can I rely on the ResourceRequest for "B" to be fulfilled next?
Generally - yes.
* What techniques should be used to get the containers on the workers I expect most often?
Nothing special; you can set relaxLocality = false if you really need the containers on a specific node or rack.
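As a rough sketch of what that looks like with AMRMClient (the host name, priority and container size here are placeholders):

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class StrictLocality {
  public static void askOnHost(AMRMClient<ContainerRequest> amClient, String host) {
    // Placeholder capability: 2 GB, 1 vcore.
    Resource capability = Resource.newInstance(2048, 1);
    Priority priority = Priority.newInstance(1);

    // relaxLocality == false: this request can only be satisfied on 'host',
    // with no fallback to the rack or to an arbitrary node.
    ContainerRequest request = new ContainerRequest(
        capability,
        new String[] { host },  // nodes
        null,                   // racks
        priority,
        false);                 // relaxLocality
    amClient.addContainerRequest(request);
  }
}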
* What techniques should be used to reduce container allocation latency, if possible?
Typically, allocation latency is very small; there are ongoing enhancements to make it better still.
Arun C. Murthy