Re: regarding hadoop
Hi,

This is most likely caused by an improper network setup in which the
reducer cannot resolve the hostnames of all available TaskTrackers to
fetch the map outputs. Check the logs of the task attempt
attempt_201304091351_0001_r_000000_0 from the web UI for more specific
information about which host it wasn't able to resolve.

On Tue, Apr 9, 2013 at 2:48 PM, Rajashree Bagal
<[EMAIL PROTECTED]> wrote:
> We are getting the following error/warning while running the wordcount program
> on a Hadoop two-node cluster with one master and one slave...
>
>
> arpit@arpit:~/hadoop-1.0.3$ bin/hadoop jar hadoop-examples-1.0.3.jar
> wordcount /Input /Output
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please
> use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties
> files.
> 13/04/09 13:51:56 INFO input.FileInputFormat: Total input paths to process :
> 3
> 13/04/09 13:51:56 INFO util.NativeCodeLoader: Loaded the native-hadoop
> library
> 13/04/09 13:51:56 WARN snappy.LoadSnappy: Snappy native library not loaded
> 13/04/09 13:51:57 INFO mapred.JobClient: Running job: job_201304091351_0001
> 13/04/09 13:51:58 INFO mapred.JobClient:  map 0% reduce 0%
> 13/04/09 13:52:13 INFO mapred.JobClient:  map 66% reduce 0%
> 13/04/09 13:52:16 INFO mapred.JobClient:  map 100% reduce 0%
> 13/04/09 13:52:22 INFO mapred.JobClient:  map 100% reduce 22%
> 13/04/09 13:59:47 INFO mapred.JobClient:  map 100% reduce 0%
> 13/04/09 13:59:52 INFO mapred.JobClient: Task Id :
> attempt_201304091351_0001_r_000000_0, Status : FAILED
> Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
> 13/04/09 13:59:52 WARN mapred.JobClient: Error reading task outputhadoop
> 13/04/09 13:59:52 WARN mapred.JobClient: Error reading task outputhadoop
> 13/04/09 14:00:05 INFO mapred.JobClient:  map 100% reduce 11%
>
> What could be a possible solution? Is it a fault in the setup, or something
> else?
> Please help.

--
Harsh J