Hadoop >> mail # user >> Re: Abort a job when a counter reaches to a threshold
Re: Abort a job when a counter reaches to a threshold
Yes, there is a job-level end point upon success, via OutputCommitter:
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/OutputCommitter.html#commitJob(org.apache.hadoop.mapreduce.JobContext)
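One common way to apply this kind of threshold without a reducer is to check the aggregated counters in the driver once all mappers have finished, right after waitForCompletion returns. The sketch below assumes a hypothetical DocCounter enum that the mappers increment and a hypothetical 5% threshold; it is not the poster's actual code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;

public class Driver {

  // Hypothetical counters the mappers would increment via
  // context.getCounter(DocCounter.FAILED_DOCS).increment(1).
  public enum DocCounter { TOTAL_DOCS, FAILED_DOCS }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "doc-processing");
    // ... set mapper class, input/output paths, zero reducers, etc. ...

    boolean ok = job.waitForCompletion(true);

    // Counters are aggregated across all tasks once the job is done.
    Counters counters = job.getCounters();
    long total = counters.findCounter(DocCounter.TOTAL_DOCS).getValue();
    long failed = counters.findCounter(DocCounter.FAILED_DOCS).getValue();

    double maxFailedFraction = 0.05; // hypothetical threshold
    if (!ok || (total > 0 && (double) failed / total > maxFailedFraction)) {
      System.exit(1); // surface the whole run as failed
    }
  }
}
```

Compared with overriding commitJob in a custom OutputCommitter, the driver-side check is simpler, but it cannot stop the output from being committed; the commitJob hook runs before the output is finalized, so a check there (throwing an exception to abort) can prevent partial results from being published.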

On Fri, May 24, 2013 at 1:13 PM, abhinav gupta <[EMAIL PROTECTED]> wrote:
> Hi,
>
> While running a map-reduce job that has only mappers, I have a counter that
> counts the number of failed documents. After all the mappers are done, I
> want the job to fail if the total number of failed documents is above a
> fixed fraction. (I need to do this at the end because I don't know the total
> number of documents initially.) How can I achieve this without implementing
> a reducer just for this?
> I know that there is a task-level cleanup method. But is there any job-level
> cleanup method that can be used to perform this check after all the tasks
> are done?
>
> Thanks
> Abhinav

--
Harsh J