
MapReduce, mail # user - Fixing a failed reduce task


Re: Fixing a failed reduce task
Steve Lewis 2010-07-14, 01:57
Yes, of course, but the question is whether there is a way to do it while
the job is running rather than restarting with different parameters.
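For context, the attempt number is visible to the task itself, so a reducer can at least detect that it is a retry, even though there is no built-in way to see how far the earlier attempt got. A minimal sketch against the Hadoop 0.20 new API; the "defensive path" comment marks hypothetical logic, not an existing mechanism:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class RetryAwareReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private boolean isRetry;

    @Override
    protected void setup(Context context) {
        // getTaskAttemptID().getId() is 0 for the first attempt, 1+ for retries
        isRetry = context.getTaskAttemptID().getId() > 0;
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        if (isRetry) {
            // hypothetical defensive path: a retry could, for example,
            // cap in-memory buffering or spill partial results early here
        }
        context.write(key, new IntWritable(sum));
    }
}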

On Tue, Jul 13, 2010 at 4:51 PM, Ted Yu <[EMAIL PROTECTED]> wrote:

> A general solution for OOME is to reduce the size of the input to each (reduce)
> task so that each (reduce) task consumes less memory.
>
>
> On Tue, Jul 13, 2010 at 10:16 AM, Steve Lewis <[EMAIL PROTECTED]>wrote:
>
>> I am running a map-reduce job where a few reduce tasks fail with an
>> out-of-memory error.
>> Increasing the memory is not an option. However, if a retry had information
>> that an earlier attempt failed out of memory, and especially if it had access
>> to a few numbers describing how far the earlier attempt managed to get, it
>> could defend against the error.
>> I have seen little information about how a retried task might access the
>> error logs or other information from previous attempts - is there such a
>> mechanism?
>>
>>
>> --
>> Steven M. Lewis PhD
>> Institute for Systems Biology
>> Seattle WA
>>
>
>
--
Steven M. Lewis PhD
Institute for Systems Biology
Seattle WA
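
As a concrete form of Ted's suggestion above, the usual knob is to raise the reducer count so that each reduce task receives a smaller share of the shuffled data; this has to be set before the job is submitted, which is exactly the restriction Steve is asking about. A minimal sketch, with the job name and reducer count as illustrative placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SubmitWithMoreReducers {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "oom-prone-job");   // placeholder job name
        job.setJarByClass(SubmitWithMoreReducers.class);
        // More reducers means each reduce task handles a smaller slice of the
        // shuffled data, which is the usual defense against reduce-side OOME.
        job.setNumReduceTasks(200);                 // value is illustrative
        // ... set mapper, reducer, and input/output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}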