Job cleanup (MapReduce user mailing list)


What does the job cleanup task do?  My understanding was that it just cleans up
any intermediate/temporary files and moves the reducer output to the output
directory.  Does it do more?
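
To make sure I'm asking the right question: my mental model of the default
FileOutputCommitter's job-commit/cleanup step is roughly the sketch below.
The directory layout and the rename loop are my own simplification for
illustration, not Hadoop's actual code.

    // Rough sketch of what I think job cleanup amounts to with the default
    // FileOutputCommitter: promote files out of _temporary/ into the output
    // directory, delete the temporary tree, and write a _SUCCESS marker.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CleanupSketch {
      public static void commitJobSketch(Configuration conf, Path outputDir) throws IOException {
        FileSystem fs = outputDir.getFileSystem(conf);
        Path tmp = new Path(outputDir, "_temporary");
        if (fs.exists(tmp)) {
          // Move each committed task attempt's files up into the final output dir...
          for (FileStatus attempt : fs.listStatus(tmp)) {
            for (FileStatus file : fs.listStatus(attempt.getPath())) {
              fs.rename(file.getPath(), new Path(outputDir, file.getPath().getName()));
            }
          }
          // ...then remove the temporary tree.
          fs.delete(tmp, true);
        }
        // Mark the job as successfully committed.
        fs.create(new Path(outputDir, "_SUCCESS")).close();
      }
    }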

One of my jobs runs, all maps and reduces finish, but then the job cleanup
task never finishes.  Instead, it gets killed several times until the entire
job is killed:

Task attempt_201303272327_0772_m_000105_0 failed to report status for
600 seconds. Killing!
I suppose that, since my reducers generate around 20 GB of output, perhaps
moving it simply takes too long?
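
If that is what's happening, I assume I could buy the cleanup task more time
by raising the per-task timeout for this job. A minimal sketch, assuming the
Hadoop 1.x property name mapred.task.timeout and an arbitrary 30-minute value:

    // Raise the per-task timeout so a slow job-cleanup task isn't killed
    // after the default 600 000 ms. The 30-minute value is just an example.
    import org.apache.hadoop.mapred.JobConf;

    public class TimeoutExample {
      public static JobConf configure() {
        JobConf conf = new JobConf(TimeoutExample.class);
        conf.setLong("mapred.task.timeout", 30L * 60L * 1000L);  // 30 minutes, in ms
        // ... the usual input/output paths, formats, mapper and reducer go here ...
        return conf;
      }
    }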

Is it possible to disable speculative execution *only* for the cleanup task?
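
For what it's worth, the only switches I know of are the job-wide ones below
(Hadoop 1.x JobConf); I haven't found anything that targets the cleanup task
specifically, hence the question.

    // Job-wide speculative-execution switches I'm aware of; I don't know of
    // a cleanup-task-specific equivalent.
    import org.apache.hadoop.mapred.JobConf;

    public class SpeculativeExample {
      public static void disableSpeculation(JobConf conf) {
        conf.setMapSpeculativeExecution(false);     // mapred.map.tasks.speculative.execution
        conf.setReduceSpeculativeExecution(false);  // mapred.reduce.tasks.speculative.execution
      }
    }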