Pig user mailing list: Task attempt failed to report status for 602 seconds. Killing!


Re: Task attempt failed to report status for 602 seconds. Killing!
+1 for increasing PARALLEL, and also try setting mapred.task.timeout in your job configuration for this particular script.
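Something along these lines should do it (the timeout value and the relation names are just placeholders, so adjust them for your job; the speculative execution line is Rekha's suggestion, in case you want to try that too):

  -- raise the timeout from the default 600000 ms (10 min) to 30 min
  set mapred.task.timeout '1800000';
  -- optionally let Hadoop speculatively re-run slow reduce attempts
  set mapred.reduce.tasks.speculative.execution 'true';
  big_rel = LOAD 'input_a' AS (id:chararray, val:chararray);
  other_rel = LOAD 'input_b' AS (id:chararray, val:chararray);
  joined = JOIN big_rel BY id, other_rel BY id PARALLEL 20;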

We've had a similar problem and it helps, but I'm not sure it will solve the issue completely, because we still get memory problems under certain conditions.
Also try optimizing your JOIN statement using hints from the Pig Cookbook.
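For instance, if one input is small enough to fit in memory, a fragment-replicate join skips the reduce phase entirely, and if a few hot keys are what's overloading single reducers, a skewed join spreads them out. Just a sketch with made-up relation names:

  -- map-side join: every relation after the first must fit in memory
  j1 = JOIN big_rel BY id, small_rel BY id USING 'replicated';
  -- skewed join: samples the keys and splits the hot ones across reducers
  j2 = JOIN big_rel BY id, other_rel BY id USING 'skewed' PARALLEL 20;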

On May 20, 2010, at 2:38 AM, Rekha Joshi wrote:

> Did you try increasing the parallelism? Tuning mapred.task.timeout also works at times. If you are doing it via Pig, some have reported good performance with speculative execution.
> Cheers,
> /R
>
> On 5/20/10 1:39 PM, "Alexander Schätzle" <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I often get this error message when executing a JOIN over big data (~160 GB):
>
> "Task attempt failed to report status for 602 seconds. Killing!"
>
> The job finally finishes but a lot of reduce tasks are killed with this error message.
> I execute the JOIN with a PARALLEL statement of 9.
> In the end, all 9 reducers succeed, but there are also, for example, 13 failed task attempts.
> This also makes the execution time very slow!
>
> Does anybody have an idea what's happening, or has anybody else seen the same problem?
>
> Thx in advance,
> Alex
>
>
>