HDFS >> mail # user >> Re: Errors about MRunit

Re: Errors about MRunit
Your problem is simple: you are mixing the mapred (old API) and mapreduce (new API) libraries. MRUnit ships implementations for both APIs.
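For reference, the two APIs live in different packages, and MRUnit's drivers are split the same way. The illustrative imports below (a fragment, not a full example) show the pairing that matters:

```java
// Old (mapred) API and its matching MRUnit driver:
import org.apache.hadoop.mapred.Mapper;        // interface-based API
import org.apache.hadoop.mrunit.MapDriver;     // old-API driver

// New (mapreduce) API and its matching MRUnit driver
// (commented out here only to avoid a name clash in this fragment):
// import org.apache.hadoop.mapreduce.Mapper;          // class-based API
// import org.apache.hadoop.mrunit.mapreduce.MapDriver;

// Pairing a Mapper from one API with a driver from the other is the
// "mixing" described above and leads to compile or runtime failures.
```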

Here's an example of a WordCountTest using the new API. Note the @Before and
@Test annotations and the runTest() calls, which are needed for the drivers
to actually execute:

package com.infoobjects.hadoop.wc;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Before;
import org.junit.Test;

public class WordCountTest {

  MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;
  ReduceDriver<Text, IntWritable, Text, IntWritable> reduceDriver;
  MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable> mapReduceDriver;

  @Before
  public void init() {
    WordMapper mapper = new WordMapper();
    WordReducer reducer = new WordReducer();
    mapDriver = MapDriver.newMapDriver(mapper);
    reduceDriver = ReduceDriver.newReduceDriver(reducer);
    mapReduceDriver = MapReduceDriver.newMapReduceDriver(mapper, reducer);
  }

  @Test
  public void testMapper() throws IOException {
    mapDriver.withInput(new LongWritable(1), new Text("foo bar"));
    mapDriver.withOutput(new Text("foo"), new IntWritable(1));
    mapDriver.withOutput(new Text("bar"), new IntWritable(1));
    mapDriver.runTest();
  }

  @Test
  public void testReducer() throws IOException {
    List<IntWritable> values = new ArrayList<IntWritable>();
    values.add(new IntWritable(1));
    values.add(new IntWritable(1));
    reduceDriver.withInput(new Text("foo"), values);
    reduceDriver.withOutput(new Text("foo"), new IntWritable(2));
    reduceDriver.runTest();
  }

  @Test
  public void testMapReduce() throws IOException {
    mapReduceDriver.withInput(new LongWritable(1), new Text("brian felix"));
    // The shuffle sorts by key, so expect outputs in key order.
    mapReduceDriver.withOutput(new Text("brian"), new IntWritable(1));
    mapReduceDriver.withOutput(new Text("felix"), new IntWritable(1));
    mapReduceDriver.runTest();
  }
}
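The test refers to WordMapper and WordReducer, which are not shown in this thread. A minimal new-API sketch (package and class names assumed to match the test) might look like:

```java
package com.infoobjects.hadoop.wc;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Emits (word, 1) for each whitespace-separated token in a line.
public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer tokens = new StringTokenizer(value.toString());
    while (tokens.hasMoreTokens()) {
      word.set(tokens.nextToken());
      context.write(word, ONE);
    }
  }
}

// In a separate source file: sums the counts for each word.
class WordReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable value : values) {
      sum += value.get();
    }
    context.write(key, new IntWritable(sum));
  }
}
```

Each public class goes in its own source file alongside the test.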

Thanks and Regards,

Rishi Yadav

(o) 408.988.2000x113 ||  (f) 408.716.2726

InfoObjects Inc || http://www.infoobjects.com *(Big Data Solutions)*

*INC 500 Fastest growing company in 2012 || 2011*

*Best Place to work in Bay Area 2012 - *SF Business Times and the Silicon
Valley / San Jose Business Journal

2041 Mission College Boulevard, #280 || Santa Clara, CA 95054
On Sat, Apr 20, 2013 at 7:14 AM, 姚吉龙 <[EMAIL PROTECTED]> wrote:

> This is what I got from my Eclipse. Why are there still errors about the
> Hadoop libraries?
> [image: Inline image 1][image: Inline image 2]
> Can anybody tell me how to use MRUnit and Maven?
> 2013/4/20 Hemanth Yamijala <[EMAIL PROTECTED]>
>> Hi,
>> If your goal is to use the new API, I am able to get it to work with the
>> following Maven configuration:
>>     <dependency>
>>       <groupId>org.apache.mrunit</groupId>
>>       <artifactId>mrunit</artifactId>
>>       <version>0.9.0-incubating</version>
>>       <classifier>hadoop1</classifier>
>>     </dependency>
>> If I switch to the hadoop2 classifier, I get the same errors you are
>> facing.
>> Thanks
>> Hemanth
>> On Sat, Apr 20, 2013 at 3:42 PM, 姚吉龙 <[EMAIL PROTECTED]> wrote:
>>> Hi Everyone
>>> I am testing my MR program with MRUnit; the version
>>> is mrunit-0.9.0-incubating-hadoop2. My Hadoop version is 1.0.4.
>>> The error trace is below:
>>> java.lang.IncompatibleClassChangeError: Found class
>>> org.apache.hadoop.mapreduce.TaskInputOutputContext, but interface was
>>> expected
>>> at
>>> org.apache.hadoop.mrunit.mapreduce.mock.MockContextWrapper.createCommon(MockContextWrapper.java:53)
>>>  at
>>> org.apache.hadoop.mrunit.mapreduce.mock.MockMapContextWrapper.create(MockMapContextWrapper.java:70)
>>> at
>>> org.apache.hadoop.mrunit.mapreduce.mock.MockMapContextWrapper.<init>(MockMapContextWrapper.java:62)