Thanks for the information. A few jobs were running in the cluster at the time.

Cheers!
Manoj.

On Fri, Feb 1, 2013 at 11:22 PM, Vijay Thakorlal <[EMAIL PROTECTED]> wrote:
> Hi Manoj,
>
> As you may be aware, this means the reducers are unable to fetch
> intermediate data from the TaskTrackers that ran the map tasks. You can try:
>
> * increasing tasktracker.http.threads so there are more threads to handle
>   fetch requests from reducers;
> * decreasing mapreduce.reduce.parallel.copies so that fewer copies/fetches
>   are performed in parallel.
>
> It could also be due to a temporary DNS issue.
>
> See slide 26 of this presentation for potential causes of this message:
> http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
>
> Not sure why you did not hit the problem before, but was it the same
> data or different data? Did you have other jobs running on your cluster?
>
> Hope that helps.
>
> Regards,
> Vijay
>
> *From:* Manoj Babu [mailto:[EMAIL PROTECTED]]
> *Sent:* 01 February 2013 15:09
> *To:* [EMAIL PROTECTED]
> *Subject:* Reg Too many fetch-failures Error
>
> Hi All,
>
> I am getting a "Too many fetch-failures" exception. What might be the
> reason for this exception? For the same size of data I didn't face this
> error earlier, and there is no change in the code. How can I avoid it?
>
> Thanks in advance.
>
> Cheers!
> Manoj.
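For reference, the two settings Vijay mentions go in mapred-site.xml. A minimal sketch follows; the values shown are illustrative assumptions, not tuning recommendations, and the second property's name varies by Hadoop version (in classic MR1 it is mapred.reduce.parallel.copies):

```xml
<!-- mapred-site.xml: illustrative values only -->
<configuration>
  <!-- More TaskTracker HTTP threads available to serve map output
       to reducers (the Hadoop 1.x default is 40) -->
  <property>
    <name>tasktracker.http.threads</name>
    <value>80</value>
  </property>
  <!-- Fewer parallel fetches per reducer (default is 5), reducing
       concurrent load on each TaskTracker's HTTP server -->
  <property>
    <name>mapreduce.reduce.parallel.copies</name>
    <value>3</value>
  </property>
</configuration>
```

A TaskTracker restart is needed for the thread-count change to take effect.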