Re: what does it mean when a job fails at 100%?
Hi Mike,

The % reported represents the % of records read by the framework, not the % of
records processed. So, for the sake of example, let's say you have only one
record in the data: the framework will report 100% as soon as it is read, even
though you might be doing a lot of processing on that record and that
processing is still going on. Second, there can be floating point errors here,
so e.g. after reading 9991 records out of a total of 10000 for the split, the
counter may say 100% while some records are still untouched. Lastly, if you are
using the close() method, your task might be failing there, and the framework
will already have reported 100% before that. I am not an expert on counters, so
you may want to hear from others before believing what I am saying :)
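
To illustrate the close() case, here is a rough sketch using the old
org.apache.hadoop.mapred API. The class name and the helper it calls are made
up for illustration, not taken from Mike's job. The map() calls are what drive
the reported progress, so by the time close() runs the task already shows 100%,
and a failure there (for example while flushing buffered output to HDFS, as
brien suggests below) looks like a failure "at 100%".

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class FlushingMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

      private final StringBuilder buffered = new StringBuilder();

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, LongWritable> output,
                      Reporter reporter) throws IOException {
        // Each record read here advances the task's reported progress
        // toward 100%, regardless of how much work is still pending.
        buffered.append(value.toString()).append('\n');
      }

      public void close() throws IOException {
        // Runs after the last record has been read, i.e. after the
        // framework already shows 100%. If the side-effect here fails
        // (e.g. HDFS unreachable or over quota), the task fails "at 100%".
        writeBufferedDataSomewhere(buffered.toString()); // hypothetical helper
      }

      private void writeBufferedDataSomewhere(String data) throws IOException {
        // Placeholder for whatever the job does at cleanup time.
        throw new IOException("simulated failure while flushing in close()");
      }
    }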

Thanks,
Ashutosh

On Fri, Nov 13, 2009 at 17:15, brien colwell <[EMAIL PROTECTED]> wrote:

> It could be that the result can't be written to HDFS. Is there any hint in
> the log? I recently encountered this behavior when writing many files back.
>
>
>
> Mike Kendall wrote:
>
>> title says it all..  this isn't the first job i've written either.  very
>> confused.