There are several possible causes of this problem. I have listed some of them below with solutions, which might help you resolve it. If you post the logs, the exact cause can be pinned down.
Reason 1: The slave hostnames cannot be resolved. This can happen because:
The mapping in the /etc/hosts file is missing.
The DNS server is down, so hostnames cannot be resolved.
The DNS server is incorrectly configured.
Solution: Setting the slave.host.name property on the affected node can be one fix; otherwise, correct the /etc/hosts mapping or the DNS configuration, depending on which of the above applies.
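As a sketch of both fixes (the IP addresses and hostnames below are placeholders, not values from this thread), the /etc/hosts mapping kept identical on every node might look like:

```
# /etc/hosts -- same entries on all nodes (example IPs/hostnames)
192.168.1.10   master.example.com   master
192.168.1.11   slave1.example.com   slave1
```

and the per-node override in mapred-site.xml, pinning the hostname a TaskTracker reports to the JobTracker, might look like:

```xml
<!-- mapred-site.xml on slave1 (Hadoop 0.20/1.x property) -->
<property>
  <name>slave.host.name</name>
  <value>slave1.example.com</value>
</property>
```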
Reason 2: If the map outputs are large, the shuffle can fail with java.lang.OutOfMemoryError: Java heap space, and those failed fetches surface as too many fetch failures.
Solution: The java.lang.OutOfMemoryError: Java heap space error in the TaskTracker logs can be addressed by either of the following:
By decreasing the value configured for mapred.job.shuffle.input.buffer.percent.
By increasing the child JVM heap size via the mapred.child.java.opts property.
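A hedged sketch of those two knobs in mapred-site.xml (the values 0.2 and -Xmx1024m are illustrative starting points, not recommendations from this thread; tune them to your cluster):

```xml
<!-- mapred-site.xml: shuffle/heap tuning (example values) -->
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <!-- default is 0.70; lowering it shrinks the in-memory shuffle buffer -->
  <value>0.2</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <!-- raises the child JVM heap (Hadoop 1.x default is -Xmx200m) -->
  <value>-Xmx1024m</value>
</property>
```

Decreasing the shuffle buffer trades memory pressure for more spills to disk, while raising the child heap simply gives the reduce-side merge more room; either can stop the OutOfMemoryError.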
From: bharath vissapragada [[EMAIL PROTECTED]]
Sent: Monday, September 26, 2011 8:54 PM
To: [EMAIL PROTECTED]
Subject: Re: Too many fetch failures. Help!
Try configuring your cluster with hostnames instead of IPs and add
those entries to /etc/hosts and sync it across all the nodes in the
cluster. You need to restart the cluster after making these changes.
Hope this helps,
On Mon, Sep 26, 2011 at 8:46 PM, Abdelrahman Kamel <[EMAIL PROTECTED]> wrote:
> This is my first post here.
> I'm new to Hadoop.
> I've already installed Hadoop on 2 Ubuntu boxes (one is both master and
> slave and the other is only slave).
> When I run a Wordcount example on 5 small txt files, the process never
> completes and I get a "Too many fetch failures" error on my terminal.
> If you can help me, I can post my terminal's output and any log files.
> Great thanks.
> Abdelrahman Kamel