Re: Reducer not getting called
The programming error has already been pointed out: you are not actually
overriding the base class's method; you have created a new method instead.
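
For illustration only (the reducer is cut off in the quoted code below, so the exact
signature is an assumption on my part): with the new org.apache.hadoop.mapreduce API,
reduce() must take an Iterable, not a java.util.Iterator. A mismatched signature
compiles as a brand-new overload, the framework falls back to the default (identity)
reduce, and your code is never called. The @Override annotation turns that silent slip
into a compile error. A minimal sketch, with made-up class names:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer, not taken from the original code.
public class ImageCountReducer extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    // WRONG: an Iterator parameter makes this a new method that the
    // framework never calls:
    // public void reduce(Text key, java.util.Iterator<IntWritable> values,
    //         Context context) { ... }

    @Override // with the annotation, the wrong signature would not compile
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        context.write(key, new IntWritable(count));
    }
}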

Thanks,
Rahul
On Thu, Jun 13, 2013 at 11:12 AM, Omkar Joshi
<[EMAIL PROTECTED]> wrote:

> Ok but that link is broken - can you provide a working one?
>
> Regards,
> Omkar Joshi
>
>
> -----Original Message-----
> From: Harsh J [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, June 13, 2013 11:01 AM
> To: <[EMAIL PROTECTED]>
> Subject: Re: Reducer not getting called
>
> You're not using the recommended @Override annotations, and are
> hitting a classic programming mistake. Your issue is the same as this
> earlier discussion: http://search-hadoop.com/m/gqA3rAaVQ7 (and the
> ones before it).
>
> On Thu, Jun 13, 2013 at 9:52 AM, Omkar Joshi
> <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> >
> >
> > I have a SequenceFile which contains several jpeg images with (image name,
> > image bytes) as key-value pairs. My objective is to count the no. of images
> > by grouping them by the source, something like this:
> >
> > Nikon Coolpix    100
> > Sony Cybershot   251
> > N82              100
> >
> > The MR code is:
> >
> > package com.hadoop.basics;
> >
> > import java.io.BufferedInputStream;
> > import java.io.ByteArrayInputStream;
> > import java.io.IOException;
> > import java.util.Iterator;
> >
> > import org.apache.hadoop.conf.Configuration;
> > import org.apache.hadoop.conf.Configured;
> > import org.apache.hadoop.fs.Path;
> > import org.apache.hadoop.io.BytesWritable;
> > import org.apache.hadoop.io.IntWritable;
> > import org.apache.hadoop.io.Text;
> > import org.apache.hadoop.mapreduce.Job;
> > import org.apache.hadoop.mapreduce.Mapper;
> > import org.apache.hadoop.mapreduce.Reducer;
> > import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
> > import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
> > import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
> > import org.apache.hadoop.util.Tool;
> > import org.apache.hadoop.util.ToolRunner;
> >
> > import com.drew.imaging.ImageMetadataReader;
> > import com.drew.imaging.ImageProcessingException;
> > import com.drew.metadata.Directory;
> > import com.drew.metadata.Metadata;
> > import com.drew.metadata.exif.ExifIFD0Directory;
> >
> > public class ImageSummary extends Configured implements Tool {
> >
> >     public static class ImageSourceMapper extends
> >             Mapper<Text, BytesWritable, Text, IntWritable> {
> >
> >         private static int tagId = 272;
> >         private static final IntWritable one = new IntWritable(1);
> >
> >         public void map(Text imageName, BytesWritable imageBytes,
> >                 Context context) throws IOException, InterruptedException {
> >             // TODO Auto-generated method stub
> >
> >             System.out.println("In the map method, image is "
> >                     + imageName.toString());
> >
> >             byte[] imageInBytes = imageBytes.getBytes();
> >             ByteArrayInputStream bais = new ByteArrayInputStream(imageInBytes);
> >             BufferedInputStream bis = new BufferedInputStream(bais);
> >
> >             Metadata imageMD = null;
> >
> >             try {
> >                 imageMD = ImageMetadataReader.readMetadata(bis, true);
> >             } catch (ImageProcessingException e) {
> >                 // TODO Auto-generated catch block
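
The quoted listing is cut off at this point in the archive, so the reducer and the
Tool/driver portion of ImageSummary are not shown. Purely as a hypothetical sketch
based on the imports above (class names such as ImageCountReducer and the argument
layout are my assumptions, not Omkar's actual code), the driver would wire the job up
roughly like this; note that the reducer must both have the correct
reduce(Text, Iterable<IntWritable>, Context) signature and be registered via
job.setReducerClass(...) for it to be invoked at all:

    public int run(String[] args) throws Exception {
        // Hypothetical driver sketch; adjust class names and path handling as needed.
        Configuration conf = getConf();
        Job job = new Job(conf, "image source count"); // Job.getInstance(conf, ...) on newer releases

        job.setJarByClass(ImageSummary.class);
        job.setMapperClass(ImageSourceMapper.class);
        job.setReducerClass(ImageCountReducer.class); // without this, the default (identity) reducer runs

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new ImageSummary(), args));
    }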