

Re: Disable retries
You may use the APIs directly:
http://hadoop.apache.org/common/docs/stable/api/org/apache/hadoop/mapred/JobConf.html#setMaxMapAttempts(int)
and http://hadoop.apache.org/common/docs/stable/api/org/apache/hadoop/mapred/JobConf.html#setMaxReduceAttempts(int)
to avoid the pain of dealing with the config strings directly.
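
For illustration, a minimal driver sketch using those setters could look like the following (not part of the original reply; the class name and input/output paths are placeholders):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SingleAttemptJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SingleAttemptJob.class);
    conf.setJobName("single-attempt-example");

    // Same effect as mapred.map.max.attempts=1 and mapred.reduce.max.attempts=1:
    // a failed task is not retried.
    conf.setMaxMapAttempts(1);
    conf.setMaxReduceAttempts(1);

    // Also switch off speculative execution so no duplicate attempts run in parallel.
    conf.setMapSpeculativeExecution(false);
    conf.setReduceSpeculativeExecution(false);

    // Placeholder mapper/reducer and paths; set these for a real job.
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}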

On Fri, Aug 3, 2012 at 5:59 AM, Marco Gallotta <[EMAIL PROTECTED]> wrote:
> Great, thanks!
>
> --
> Marco Gallotta | Mountain View, California
> Software Engineer, Infrastructure | Loki Studios
> fb.me/marco.gallotta | twitter.com/marcog
> [EMAIL PROTECTED] | +1 (650) 417-3313
>
> Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
>
>
> On Thursday 02 August 2012 at 5:02 PM, Bejoy KS wrote:
>
>> Hi Marco
>>
>> You can disable retries by setting
>> mapred.map.max.attempts and mapred.reduce.max.attempts to 1.
>>
>> Also, if you need to disable speculative execution, you can do so by setting
>> mapred.map.tasks.speculative.execution and mapred.reduce.tasks.speculative.execution to false.
>>
>> With these two steps you can ensure that a task is attempted only once.
>>
>> These properties can be set in mapred-site.xml or at the job level, as in the snippet below.
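>>
>> For example, a cluster-wide default in mapred-site.xml could look like this (illustrative snippet, not part of the original message):
>>
>> <property>
>>   <name>mapred.map.max.attempts</name>
>>   <value>1</value>
>> </property>
>> <property>
>>   <name>mapred.reduce.max.attempts</name>
>>   <value>1</value>
>> </property>
>> <property>
>>   <name>mapred.map.tasks.speculative.execution</name>
>>   <value>false</value>
>> </property>
>> <property>
>>   <name>mapred.reduce.tasks.speculative.execution</name>
>>   <value>false</value>
>> </property>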
>>
>>
>> Regards
>> Bejoy KS
>>
>> Sent from handheld, please excuse typos.
>>
>> -----Original Message-----
>> From: Marco Gallotta <[EMAIL PROTECTED]>
>> Date: Thu, 2 Aug 2012 16:52:00
>> To: <[EMAIL PROTECTED]>
>> Reply-To: [EMAIL PROTECTED]
>> Subject: Disable retries
>>
>> Hi there
>>
>> Is there a way to disable retries when a mapper/reducer fails? I'm writing data in my mapper and I'd rather catch the failure, recover from a backup (fairly lightweight in this case, as the output tables aren't big) and restart.
>>
>>
>>
>> --
>> Marco Gallotta | Mountain View, California
>> Software Engineer, Infrastructure | Loki Studios
>> fb.me/marco.gallotta | twitter.com/marcog
>> [EMAIL PROTECTED] | +1 (650) 417-3313
>>
>> Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
>

--
Harsh J