Why can't Hadoop find the Reducer when the Mapper reads data from HBase?
yonghu 2012-07-12, 11:15
Hello,

I tried the program as follows:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MRTableAccess {

    // Emits one record per KeyValue: key is "row/family/qualifier",
    // value is "type/timestamp/value".
    static class MRTableMapper extends TableMapper<Text, Text> {
        private Text rowInfor = new Text();
        private Text column = new Text();

        @Override
        public void map(ImmutableBytesWritable row, Result values, Context context)
                throws IOException, InterruptedException {
            for (KeyValue kv : values.raw()) {
                String rowKey = Bytes.toString(kv.getRow()) + "/"
                        + Bytes.toString(kv.getFamily()) + "/"
                        + Bytes.toString(kv.getQualifier());
                String opType = KeyValue.Type.codeToType(kv.getType()).toString();
                String columnValue = opType + "/" + Long.toString(kv.getTimestamp())
                        + "/" + Bytes.toString(kv.getValue());
                rowInfor.set(rowKey);
                column.set(columnValue);
                context.write(rowInfor, column);
            }
        }
    }

    // Identity reducer: writes out every value received for a key.
    static class MTableReducer extends Reducer<Text, Text, Text, Text> {

        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text value : values) {
                context.write(key, value);
            }
        }
    }

    // Only extracts the latest data version for each row.
    public static void main(String[] args) throws Exception {
        long startTime = System.currentTimeMillis();
        // System.out.println("start_time is " + startTime);

        Configuration conf = new Configuration();
        Configuration hconf = HBaseConfiguration.create(conf);
        hconf.set("hbase.zookeeper.quorum", "localhost");
        hconf.set("hbase.zookeeper.property.clientPort", "2181");
        hconf.set("fs.default.name", "hdfs://localhost:8020");

        Job job = new Job(hconf, "MRTableScan");
        job.setJarByClass(MRTableAccess.class);

        Scan scan = new Scan();
        TableMapReduceUtil.initTableMapperJob("Baseball", scan,
                MRTableMapper.class, Text.class, Text.class, job);
        job.setReducerClass(MTableReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost/MRExtraction"));

        boolean success = job.waitForCompletion(true);
        System.exit(success ? 0 : 1); // exit 0 on success, 1 on failure
    }
}
I run this program on my laptop. It works fine when only the map tasks run, but as soon as I add the reduce task, an error occurs:

java.lang.RuntimeException: java.lang.ClassNotFoundException: com.mapreducetablescan.MRTableAccess$MTableReducer;
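
For reference, a minimal sketch of one commonly suggested remedy, assuming the reducer class is simply missing from the task classpath: explicitly ship the jars containing the job's classes with HBase's TableMapReduceUtil.addDependencyJars (the method exists in org.apache.hadoop.hbase.mapreduce; whether it resolves this particular setup is an assumption):

// Hedged sketch: after the job configuration in main() above, ask HBase's
// helper to add the jars containing the job's classes (mapper, reducer,
// key/value types) to the distributed cache so the tasks can load them.
job.setReducerClass(MTableReducer.class);
TableMapReduceUtil.addDependencyJars(job);

This typically assumes the classes are packaged in a jar in the first place, e.g. when the job is submitted with "hadoop jar" rather than launched straight from an IDE.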

Does anybody know why?

Regards,

Yong