MapReduce >> mail # dev >> TestReduceFetch fails on 64 bit java on branch 0.20  and y! hadoop 0.20.1


Re: FW: TestReduceFetch fails on 64 bit java on branch 0.20 and y! hadoop 0.20.1
It's just verifying that the reduce retains map output segments in
memory under some conditions. MAPREDUCE-433 made the test more
reliable on 0.21; it should also work for 0.20. -C
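The retention condition the test exercises can be sketched as follows. This is not the actual ReduceTask code, just a minimal restatement of how the two knobs the test sets (`-Xmx128m` and `mapred.job.reduce.input.buffer.percent`) combine into an in-memory retention budget; the class and method names are illustrative assumptions.

```java
// Hedged sketch: how a reduce-side in-memory retention limit can be
// derived from the max heap and mapred.job.reduce.input.buffer.percent.
// Names are hypothetical, not the real Hadoop internals.
public class InMemRetentionSketch {

    // Bytes of map output the reduce may keep in memory into the reduce phase.
    static long maxInMemBytes(long maxHeapBytes, float inputBufferPercent) {
        return (long) (maxHeapBytes * inputBufferPercent);
    }

    public static void main(String[] args) {
        long heap = 128L * 1024 * 1024;          // -Xmx128m, as in the test below
        long limit = maxInMemBytes(heap, 1.0f);  // percent "1.0": everything may stay in memory
        System.out.println(limit == heap);       // prints true
    }
}
```

With the percent at 1.0 the whole heap is budgeted for retained segments, which is why the test expects at least some map output never to be spilled to and re-read from local disk.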

On Wed, Aug 5, 2009 at 11:19 PM, Zheng Shao<[EMAIL PROTECTED]> wrote:
> Saw the same error on https://issues.apache.org/jira/browse/HADOOP-4302
>
> junit.framework.AssertionFailedError
>        at org.apache.hadoop.mapred.TestReduceFetch.testReduceFromPartialMem(TestReduceFetch.java:119)
>        at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
>        at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
>        at junit.extensions.TestSetup.run(TestSetup.java:23)
>
>
> Chris, can you shed some light on what this test does, and whether it matters if this test fails?
>
>
> ===========================> [zshao branch-0.20] svn annotate ./src/test/org/apache/hadoop/mapred/TestReduceFetch.java
> 694459   cdouglas   public void testReduceFromPartialMem() throws Exception {
> 694459   cdouglas     JobConf job = mrCluster.createJobConf();
> 700918   cdouglas     job.setNumMapTasks(5);
> 700918   cdouglas     job.setInt("mapred.inmem.merge.threshold", 0);
> 694459   cdouglas     job.set("mapred.job.reduce.input.buffer.percent", "1.0");
> 700918   cdouglas     job.setInt("mapred.reduce.parallel.copies", 1);
> 700918   cdouglas     job.setInt("io.sort.mb", 10);
> 700918   cdouglas     job.set("mapred.child.java.opts", "-Xmx128m");
> 700918   cdouglas     job.set("mapred.job.shuffle.input.buffer.percent", "0.14");
> 700918   cdouglas     job.setNumTasksToExecutePerJvm(1);
> 700918   cdouglas     job.set("mapred.job.shuffle.merge.percent", "1.0");
> 694459   cdouglas     Counters c = runJob(job);
> 718229       ddas     final long hdfsWritten = c.findCounter(Task.FILESYSTEM_COUNTER_GROUP,
> 718229       ddas         Task.getFileSystemCounterNames("hdfs")[1]).getCounter();
> 718229       ddas     final long localRead = c.findCounter(Task.FILESYSTEM_COUNTER_GROUP,
> 718229       ddas         Task.getFileSystemCounterNames("file")[0]).getCounter();
> 700918   cdouglas     assertTrue("Expected at least 1MB fewer bytes read from local (" +
> 700918   cdouglas         localRead + ") than written to HDFS (" + hdfsWritten + ")",
> 700918   cdouglas         hdfsWritten >= localRead + 1024 * 1024);
> 694459   cdouglas   }
>
>
> [zshao branch-0.20] svn log ./src/test/org/apache/hadoop/mapred/TestReduceFetch.java
> ...
> ------------------------------------------------------------------------
> r718229 | ddas | 2008-11-17 04:23:15 -0800 (Mon, 17 Nov 2008) | 1 line
>
> HADOOP-4188. Removes task's dependency on concrete filesystems. Contributed by Sharad Agarwal.
> ------------------------------------------------------------------------
> r700918 | cdouglas | 2008-10-01 13:57:36 -0700 (Wed, 01 Oct 2008) | 3 lines
>
> HADOOP-4302. Fix a race condition in TestReduceFetch that can yield false
> negatives.
>
> ------------------------------------------------------------------------
> r696640 | ddas | 2008-09-18 04:47:59 -0700 (Thu, 18 Sep 2008) | 1 line
>
> HADOOP-3829. Narrows down skipped records based on a user-acceptable value. Contributed by Sharad Agarwal.
> ------------------------------------------------------------------------
> r694459 | cdouglas | 2008-09-11 13:26:11 -0700 (Thu, 11 Sep 2008) | 5 lines
>
> HADOOP-3446. Keep map outputs in memory during the reduce. Remove
> fs.inmemory.size.mb and replace with properties defining in memory map
> output retention during the shuffle and reduce relative to maximum heap
> usage.
>
> ------------------------------------------------------------------------
>
> ===========================>
>
> Zheng
> -----Original Message-----
> From: Rama Ramasamy [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 04, 2009 5:19 PM
> To: [EMAIL PROTECTED]
> Subject: TestReduceFetch fails on 64 bit java on branch 0.20 and y! hadoop 0.20.1
>
>
> With JAVA_HOME set to a 64-bit JVM, "ant test -Dtestcase=TestReduceFetch" fails with the error message
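For reference, the assertion that fails at TestReduceFetch.java:119 boils down to a single inequality over the two filesystem counters: the maps must have written at least 1 MB more to HDFS than the reduce read back from local disk, i.e. some map output was fetched straight into memory. A standalone restatement (with hypothetical names) of just that check:

```java
// Minimal restatement of the failing check in testReduceFromPartialMem:
// hdfsWritten >= localRead + 1 MB means at least 1 MB of map output was
// retained in memory rather than spilled and re-read from local disk.
public class PartialMemCheck {

    static boolean reduceFetchedFromMemory(long hdfsWritten, long localRead) {
        return hdfsWritten >= localRead + 1024L * 1024;
    }

    public static void main(String[] args) {
        // e.g. 10 MB written to HDFS, only 8 MB re-read locally
        System.out.println(reduceFetchedFromMemory(10L << 20, 8L << 20)); // prints true
    }
}
```

When the inequality does not hold, everything was merged through local disk, which is exactly the race HADOOP-4302 describes as producing false negatives.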