Re: Accumulo Map Reduce is not distributed
On Mon, Nov 5, 2012 at 6:46 AM, Cornish, Duane C.
<[EMAIL PROTECTED]> wrote:

> Billie,
>
> I think I just started to come to that same conclusion (I’m relatively new
> to cloud computing).  It appears that it is running in local mode.  My
> console output says “mapred.LocalJobRunner” and the job never appears on my
> Hadoop Job page.  How do I fix this problem?  I also found that the
> “JobTracker” link on my Accumulo Overview page points to
> http://0.0.0.0:50030/ instead of the actual computer name.
>

First check your accumulo-env.sh in the Accumulo conf directory.  For the
lines that look like the following, change the "/path/to/X" locations to
the actual Java, Hadoop, and Zookeeper directories.

test -z "$JAVA_HOME"             && export JAVA_HOME=/path/to/java
test -z "$HADOOP_HOME"           && export HADOOP_HOME=/path/to/hadoop
test -z "$ZOOKEEPER_HOME"        && export ZOOKEEPER_HOME=/path/to/zookeeper

You may also need to make sure that the environment in which you run the MR
job has the JAVA_HOME, HADOOP_HOME, ZOOKEEPER_HOME, and ACCUMULO_HOME
environment variables set, which can be done with export commands like the
ones above.
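As a quick sanity check before submitting the job, you can print the job tracker address that the job's configuration resolves to. This is only a minimal sketch (the class name is made up, and it assumes the Hadoop 1.x mapred API current at the time of this thread); a value of "local" means the LocalJobRunner will be used and the job will not be distributed.

import org.apache.hadoop.mapred.JobConf;

// Hypothetical helper, not from the thread: prints the resolved job tracker address.
public class JobTrackerCheck {
    public static void main(String[] args) {
        // JobConf picks up mapred-site.xml from the Hadoop conf directory on the classpath.
        JobConf conf = new JobConf();
        // Hadoop defaults this property to "local" (LocalJobRunner); a real cluster
        // should report the JobTracker address, e.g. "jobtracker-host:9001".
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker", "local"));
    }
}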

Billie

>
> Duane
>
> From: Billie Rinaldi [mailto:[EMAIL PROTECTED]]
> Sent: Monday, November 05, 2012 9:41 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Accumulo Map Reduce is not distributed
>
> On Mon, Nov 5, 2012 at 6:13 AM, John Vines <[EMAIL PROTECTED]> wrote:
>
> So it sounds like the job was correctly set to 4 mappers and your issue is
> in your MapReduce configuration. I would check the jobtracker page and
> verify the number of map slots, as well as how they're running, as print
> statements are not the most accurate in the framework.
>
>
> Also make sure your MR job isn't running in local mode.  Sometimes that
> happens if your job can't find the Hadoop configuration directory.
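If the job cannot see the Hadoop configuration directory on its classpath, one possible workaround, shown here only as a sketch with placeholder paths (not something from this thread), is to add the cluster's config files to the Configuration explicitly and pass that Configuration to ToolRunner.run. The usual fix, though, is simply to set HADOOP_HOME correctly as described above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: builds a Configuration that explicitly loads the cluster's
// config files so the job does not silently fall back to local mode.
public class ClusterConfLoader {
    public static Configuration load() {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/path/to/hadoop/conf/core-site.xml"));   // placeholder path
        conf.addResource(new Path("/path/to/hadoop/conf/mapred-site.xml")); // placeholder path
        return conf;
    }
}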
>
> Billie
>
>
> Sent from my phone, pardon the typos and brevity.
>
> On Nov 5, 2012 8:59 AM, "Cornish, Duane C." <[EMAIL PROTECTED]>
> wrote:
>
> Hi William,
>
> Thanks for helping me out and sorry I didn’t get back to you sooner, I was
> away for the weekend.  I am only calling ToolRunner.run once.
>
> public static void ExtractFeaturesFromNewImages() throws Exception {
>        String[] parameters = new String[1];
>        parameters[0] = "foo";
>        InitializeFeatureExtractor();
>        ToolRunner.run(CachedConfiguration.getInstance(), new Accumulo_FE_MR_Job(), parameters);
> }
>
>
> Another indicator that I’m only calling it once is that before I was
> pre-splitting the table, I was just getting one larger map-reduce job with
> only 1 mapper.  Based on my print statements, the job was running in
> sequence (which I guess makes sense because the table only existed on one
> node in my cluster).  Then after pre-splitting my table, I was getting one
> job that had 4 mappers.  Each was running one after the other.  I hadn’t
> changed any code (other than adding in the splits).  So, I’m only calling
> ToolRunner.run once.  Furthermore, my run function in my job class is
> provided below:
>
>        @Override
>        public int run(String[] arg0) throws Exception {
>               runOneTable();
>               return 0;
>        }
>
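Duane mentions above that pre-splitting the table is what took the job from one mapper to four; AccumuloInputFormat generally creates one map task per tablet, so the number of tablets bounds the parallelism. The thread does not show how the splits were added; a minimal sketch, with a made-up table name and split points, might look like this:

import java.util.SortedSet;
import java.util.TreeSet;

import org.apache.accumulo.core.client.Connector;
import org.apache.hadoop.io.Text;

// Hypothetical sketch: add split points so the table is hosted as several tablets,
// allowing the MapReduce job to run more than one mapper over it.
public class PreSplitExample {
    public static void addSplits(Connector connector) throws Exception {
        SortedSet<Text> splits = new TreeSet<Text>();
        splits.add(new Text("row_25"));   // made-up split points
        splits.add(new Text("row_50"));
        splits.add(new Text("row_75"));
        connector.tableOperations().addSplits("myTable", splits);  // "myTable" is a placeholder
    }
}

Even with multiple tablets, whether the mappers actually run concurrently still depends on the cluster's map slots and on the job not running under the LocalJobRunner, which runs map tasks sequentially.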
>
> Thanks,
>
> Duane
>
> From: William Slacum [mailto:[EMAIL PROTECTED]]
> Sent: Friday, November 02, 2012 8:48 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Accumulo Map Reduce is not distributed
>
> What about the main method that calls ToolRunner.run? If you have 4 jobs
> being created, then you're calling run(String[]) or runOneTable() 4 times.
>
> On Fri, Nov 2, 2012 at 5:21 PM, Cornish, Duane C. <
> [EMAIL PROTECTED]> wrote: