RE: Containers and CPU
I believe this is the default behavior.
By default, only the memory limit on resources is enforced.
The CapacityScheduler uses DefaultResourceCalculator to compute container allocations by default, and that calculator does not take CPU into account.
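
For reference, the calculator is set in capacity-scheduler.xml. A minimal sketch of switching to the CPU-aware alternative, DominantResourceCalculator (property and class names as documented for the CapacityScheduler; leave the property unset to keep the memory-only default):

    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <!-- The default, org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator,
           compares resources by memory alone. DominantResourceCalculator also
           counts vcores, so CPU requests become enforced in scheduling. -->
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>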

-Chuan

From: John Lilley [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 02, 2013 8:57 AM
To: [EMAIL PROTECTED]
Subject: Containers and CPU

I have YARN tasks that benefit from multicore scaling.  However, they don't *always* use more than one core.  I would like to allocate containers based only on memory, and let each task use as many cores as it needs, without allocating exclusive CPU "slots" in the scheduler.  For example, on an 8-core node with 16GB of memory, I'd like to be able to run 3 tasks, each consuming 4GB of memory and each using as much CPU as it likes.  Is this the default behavior if I don't specify CPU restrictions to the scheduler?
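
(For illustration, a minimal sketch of what a memory-only container request could look like against the Hadoop 2.x AMRMClient API; the class name and the 4096 MB figure are placeholders for this example, and under the default DefaultResourceCalculator the vcores value below is not enforced:)

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

    public class MemoryOnlyRequest {
      public static void main(String[] args) {
        // Ask for 4096 MB and 1 vcore; with the default calculator the
        // vcore count is ignored, so the task can burst across cores.
        Resource capability = Resource.newInstance(4096, 1);
        ContainerRequest request = new ContainerRequest(
            capability, null, null, Priority.newInstance(0));
        // ...add the request via an AMRMClient instance and allocate as usual.
      }
    }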
Thanks
John