

Re: mapred.max.tracker.failures
It is a per-job config which controls the automatic job-level
blacklist: if, for a single job, a specific tracker has failed 4 (or
X) total tasks, then we stop scheduling any more of that job's tasks
on that tracker. We don't blacklist more than 25% of the available
trackers this way, though; otherwise a job whose own bad logic is
causing the failures would simply hang.
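
For reference, a minimal sketch of setting this per job from driver code
(not from the original thread; the class name, job name, and the omitted
mapper/reducer wiring are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class MaxTrackerFailuresExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Per-job threshold: once a single tracker has failed this many
            // tasks of this job, the job stops scheduling further tasks there
            // (subject to the cap of blacklisting at most ~25% of trackers).
            conf.setInt("mapred.max.tracker.failures", 4);

            Job job = Job.getInstance(conf, "example-job"); // placeholder name
            // ... set jar, mapper, reducer, input/output paths as usual ...
            // job.waitForCompletion(true);
        }
    }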

On Thu, Mar 7, 2013 at 11:21 AM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:
> I am wondering what the correct behaviour of this parameter is. If it's set
> to 4, does it mean the job should fail if it has more than 4 failures?

--
Harsh J