Accumulo, mail # user - Programmatically invoking a Map/Reduce job


Mike Hugo 2013-01-16, 17:11
John Vines 2013-01-16, 17:20
Mike Hugo 2013-01-16, 20:07
Re: Programmatically invoking a Map/Reduce job
Billie Rinaldi 2013-01-16, 21:11
Your job is running in "local" mode (Running job: job_local_0001).  This
basically means that the Hadoop configuration is not present on the
classpath of the Java client kicking off the job.  If you weren't planning
to have the Hadoop config on that machine, you might be able to get away
with setting "mapred.job.tracker" and probably also "fs.default.name" on
the Configuration object.
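
A minimal sketch of what that might look like on the client side, assuming
Hadoop 1.x property names (the class name, host names, and ports below are
placeholders, not taken from this thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class RemoteSubmitExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Point the client at the cluster so the job is submitted to the
            // remote JobTracker instead of falling back to local mode.
            conf.set("fs.default.name", "hdfs://namenode-host:8020");
            conf.set("mapred.job.tracker", "jobtracker-host:8021");
            Job job = new Job(conf, "remote-submit-example");
            // ...configure the mapper, input/output formats, etc., as in the
            // run() method quoted below, then submit with job.waitForCompletion(true)
        }
    }

Alternatively, putting the cluster's core-site.xml and mapred-site.xml on the
client's classpath should have the same effect without hard-coding addresses.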

Billie
On Wed, Jan 16, 2013 at 12:07 PM, Mike Hugo <[EMAIL PROTECTED]> wrote:

> Cool, thanks for the feedback John, the examples have been helpful in
> getting up and running!
>
> Perhaps I'm not doing something quite right.  When I jar up my jobs and
> deploy the jar to the server and run it via the tool.sh command on the
> cluster, I see the job running in the jobtracker (servername:50030) and it
> runs as I would expect.
>
> 13/01/16 14:39:53 INFO mapred.JobClient: Running job: job_201301161326_0006
> 13/01/16 14:39:54 INFO mapred.JobClient:  map 0% reduce 0%
> 13/01/16 14:41:29 INFO mapred.JobClient:  map 50% reduce 0%
> 13/01/16 14:41:35 INFO mapred.JobClient:  map 100% reduce 0%
> 13/01/16 14:41:40 INFO mapred.JobClient: Job complete:
> job_201301161326_0006
> 13/01/16 14:41:40 INFO mapred.JobClient: Counters: 18
> 13/01/16 14:41:40 INFO mapred.JobClient:   Job Counters
> 13/01/16 14:41:40 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=180309
> 13/01/16 14:41:40 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> 13/01/16 14:41:40 INFO mapred.JobClient:     Total time spent by all maps
> waiting after reserving slots (ms)=0
> 13/01/16 14:41:40 INFO mapred.JobClient:     Rack-local map tasks=2
> 13/01/16 14:41:40 INFO mapred.JobClient:     Launched map tasks=2
> 13/01/16 14:41:40 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 13/01/16 14:41:40 INFO mapred.JobClient:   File Output Format Counters
> 13/01/16 14:41:40 INFO mapred.JobClient:     Bytes Written=0
> 13/01/16 14:41:40 INFO mapred.JobClient:   FileSystemCounters
> 13/01/16 14:41:40 INFO mapred.JobClient:     HDFS_BYTES_READ=248
> 13/01/16 14:41:40 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=60214
> 13/01/16 14:41:40 INFO mapred.JobClient:   File Input Format Counters
> 13/01/16 14:41:40 INFO mapred.JobClient:     Bytes Read=0
> 13/01/16 14:41:40 INFO mapred.JobClient:   Map-Reduce Framework
> 13/01/16 14:41:40 INFO mapred.JobClient:     Map input records=1036434
> 13/01/16 14:41:40 INFO mapred.JobClient:     Physical memory (bytes)
> snapshot=373760000
> 13/01/16 14:41:40 INFO mapred.JobClient:     Spilled Records=0
> 13/01/16 14:41:40 INFO mapred.JobClient:     CPU time spent (ms)=24410
> 13/01/16 14:41:40 INFO mapred.JobClient:     Total committed heap usage
> (bytes)=168394752
> 13/01/16 14:41:40 INFO mapred.JobClient:     Virtual memory (bytes)
> snapshot=2124627968
> 13/01/16 14:41:40 INFO mapred.JobClient:     Map output records=2462684
> 13/01/16 14:41:40 INFO mapred.JobClient:     SPLIT_RAW_BYTES=248
>
>
>
> When I kick off a job via a java client running on a different host, the
> job seems to run (I can see things being scanned and ingested) but I don't
> see anything via the jobtracker UI on the server.  Is that normal?  Or do I
> have something mis-configured?
>
>
>
> Here's how I'm starting things from the client:
>
>     @Override
>     public int run(String[] strings) throws Exception {
>         Job job = new Job(getConf(), getClass().getSimpleName());
>         job.setJarByClass(getClass());
>         job.setMapperClass(MyMapper.class);
>
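>         // AccumuloRowInputFormat presents each Accumulo row to the mapper as one input record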
>         job.setInputFormatClass(AccumuloRowInputFormat.class);
>
>         AccumuloRowInputFormat.setZooKeeperInstance(job.getConfiguration(), instanceName, zookeepers);
>
>         AccumuloRowInputFormat.setInputInfo(job.getConfiguration(),
>                 username,
>                 password.getBytes(),
>                 "...",
>                 new Authorizations());
>
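>         // map-only job: no reduce phase; mutations are written directly by AccumuloOutputFormat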
>         job.setNumReduceTasks(0);
>
>         job.setOutputFormatClass(AccumuloOutputFormat.class);
>         job.setOutputKeyClass(Key.class);
Mike Hugo 2013-01-17, 19:16
Billie Rinaldi 2013-01-17, 19:57
Mike Hugo 2013-01-17, 20:41