pluggable resources
I have a proposal for improved resource scheduling:

https://issues.apache.org/jira/browse/MAPREDUCE-4256

As I see it, development seems to be going the other way; for example, in
https://issues.apache.org/jira/browse/YARN-2 every added kind of
resource requires significant rework.

Do you not see the benefit of having a framework able to handle custom
resource types? It's not all about memory and cores. You need to schedule
jobs based on other factors as well (network capacity, availability of GPU
cores, data locality).
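
To make it concrete, a pluggable resource type could be as simple as an
interface like this (rough sketch only; the names are made up and not
taken from the MAPREDUCE-4256 patch):

    // Hypothetical sketch of a pluggable resource type -- illustrative
    // names only, not the actual MAPREDUCE-4256 / YARN API.
    public interface ResourceType {

        /** Unique name of the resource, e.g. "memory", "gpu", "network-bandwidth". */
        String getName();

        /** How much of this resource a given node currently has available. */
        long getAvailable(String nodeId);

        /** Whether a request for 'amount' units can be satisfied on the node. */
        boolean canAllocate(String nodeId, long amount);

        /** Reserve 'amount' units on the node for a container. */
        void allocate(String nodeId, String containerId, long amount);

        /** Return the units when the container finishes. */
        void release(String nodeId, String containerId);
    }

New resource kinds would then be additional implementations of the same
interface instead of changes to the scheduler core.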

And every cluster might have special considerations, for example not
overloading a central SQL database. We usually have a few hundred submitted
jobs, so proper resource sharing is essential. There is no point in running
a job that needs a GPU which is already in use by another mapper; it is
better to run other jobs until the GPU becomes available again.
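
For illustration, a scheduler that understands such custom resources only
needs to do something like the following when picking the next job to run
(again just a sketch; all class and method names are hypothetical):

    import java.util.List;
    import java.util.Map;

    // Hypothetical scheduler loop: skip jobs whose custom resources -- e.g.
    // the single GPU or a pool of SQL connections -- are currently exhausted,
    // and run other queued jobs instead of blocking the cluster.
    public class CustomResourceScheduler {

        /** Units of each custom resource currently free in the cluster,
            e.g. "gpu" -> 1, "sql-connections" -> 20. */
        private final Map<String, Long> available;

        public CustomResourceScheduler(Map<String, Long> available) {
            this.available = available;
        }

        /** Pick the first queued job whose demands can all be met,
            or null if nothing is runnable right now. */
        public Job selectNext(List<Job> queue) {
            for (Job job : queue) {
                if (canRun(job)) {
                    reserve(job);
                    return job;
                }
                // Otherwise leave the job queued (it may need the busy GPU)
                // and look for something else that can make progress now.
            }
            return null;
        }

        private boolean canRun(Job job) {
            for (Map.Entry<String, Long> demand : job.getResourceDemands().entrySet()) {
                Long free = available.get(demand.getKey());
                if (free == null || free < demand.getValue()) {
                    return false;
                }
            }
            return true;
        }

        private void reserve(Job job) {
            for (Map.Entry<String, Long> demand : job.getResourceDemands().entrySet()) {
                available.put(demand.getKey(),
                              available.get(demand.getKey()) - demand.getValue());
            }
        }

        /** A job simply declares how much of each named resource it needs. */
        public interface Job {
            Map<String, Long> getResourceDemands();
        }
    }

The point is that the scheduler core never has to know what "gpu" or
"sql-connections" means; it just compares demands against availability.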