Re: DistributedCache in NewAPI on 0.20.X branch
Hi Shi,
         My bad, the syntax I posted last time was not the right one;
sorry, it was sent from my handheld.

@Override
public void setup(Context context) {
    File file = new File("TestFile.txt");
    // ...
}
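
If the file does not show up under its bare name in the task's working
directory (behaviour on the 0.20.x branch varies with how the file was
shipped), a common fallback is to ask the framework where it localized
the cached files instead of guessing. A minimal sketch, assuming the
file was shipped with -files or DistributedCache.addCacheFile
(CacheAwareMapper is a made-up name):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void setup(Context context) throws IOException {
        // Ask the framework for the local paths of the cached files
        // rather than assuming they sit in the working directory.
        Path[] cached = DistributedCache.getLocalCacheFiles(context.getConfiguration());
        if (cached == null) {
            return; // nothing was localized for this task
        }
        for (Path p : cached) {
            if ("TestFile.txt".equals(p.getName())) {
                BufferedReader reader = new BufferedReader(new FileReader(p.toString()));
                // ... read the file here ...
                reader.close();
            }
        }
    }
}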

I didn't get a chance to debug your code, but if you are looking for a
working example of DistributedCache using the new API, please find the
files below:

DistCacheTest.java - http://pastebin.com/PkdXrDgc
DistCacheTestMapper.java - http://pastebin.com/EcE3kEQW
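
In case those pastebin links go stale: the driver side of that pattern
boils down to roughly the following. This is only a sketch under the
same assumptions as above, not the exact contents of those files;
DistCacheDriver and the HDFS path are made-up names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DistCacheDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Register the side file before constructing the Job:
        // Job copies the Configuration, so later changes to conf
        // are not seen by the job.
        DistributedCache.addCacheFile(new Path("/user/me/TestFile.txt").toUri(), conf);

        Job job = new Job(conf, "dist-cache-test");
        job.setJarByClass(DistCacheDriver.class);
        job.setMapperClass(CacheAwareMapper.class);
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}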

I had a working sample with me, just pasted your logic in there, and
tested it on my cluster. It was working fine for me.

Regards
Bejoy.K.S

On Sat, Dec 17, 2011 at 3:16 AM, Shi Yu <[EMAIL PROTECTED]> wrote:

> Following up on my previous question, I am putting the complete code
> below. I wonder whether there is any way to get this working on
> 0.20.X using the new API.
>
> The command I executed was:
>
> bin/hadoop jar myjar.jar FileTest -files textFile.txt /input/ /output/
>
> The complete code:
>
> public class FileTest extends Configured implements Tool {
>
>       private static final Logger sLogger = Logger.getLogger(FileTest.class);
>
>       public static class Map extends
>               org.apache.hadoop.mapreduce.Mapper<LongWritable, Text, Text, Text> {
>           Text word;
>
>           public void setup(
>                   org.apache.hadoop.mapreduce.Mapper<LongWritable, Text, Text, Text>.Context context) {
>               String line;
>               try {
>                   File z1_file = new File("textFile.txt");
>                   BufferedReader bf = new BufferedReader(new FileReader(z1_file));
>                   while ((line = bf.readLine()) != null) {
>                       word = new Text(line);
>                   }
>               } catch (IOException ioe) {
>                   sLogger.error(ioe.toString());
>               }
>           }
>
>           public void map(LongWritable key, Text value,
>                   org.apache.hadoop.mapreduce.Mapper.Context context)
>                   throws IOException, InterruptedException {
>               context.write(new Text("test"), word);
>           }
>       }
>
>       public int run(String[] args) throws IOException, URISyntaxException {
>           GenericOptionsParser parser = new GenericOptionsParser(args);
>           Configuration conf = parser.getConfiguration();
>           String[] otherArgs = parser.getRemainingArgs();
>           Job job = new Job(conf, "MyJob");
>
>           Path in = new Path(otherArgs[0]);
>           Path out = new Path(otherArgs[1]);
>
>           org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(job, in);
>           org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, out);
>
>           job.setJarByClass(FileTest.class);
>           job.setMapperClass(FileTest.Map.class);
>
>           job.setNumReduceTasks(0);
>           job.setMapOutputKeyClass(Text.class);
>           job.setMapOutputValueClass(Text.class);
>
>           try {
>               System.exit(job.waitForCompletion(true) ? 0 : 1);
>               return 0;
>           } catch (Throwable e) {
>               sLogger.error("Job failed ", e);
>               return -1;
>           }
>       }
>
>       public static void main(String[] args) throws Exception {
>           int exitCode = ToolRunner.run(new FileTest(), args);
>           System.exit(exitCode);
>       }
> }
>