MapReduce, mail # user - Error running pi program


RE: Error running pi program
Kartashov, Andy 2012-11-09, 19:31
Try running "hostname -f" on each node, take note of the fully qualified host name, and replace "master" with your respective finding.
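That substitution can be sketched in a few lines of shell. This is a self-contained demo in a scratch directory; the FQDN shown is a placeholder for whatever `hostname -f` actually prints on your master, and on a real cluster you would point `sed` at the files in your Hadoop conf directory instead.

```shell
#!/bin/sh
# Demo in a scratch directory: write a minimal core-site.xml that still
# points at "master", then rewrite it with the fully qualified name.
WORKDIR="$(mktemp -d)"
cat > "$WORKDIR/core-site.xml" <<'EOF'
<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://master:9000</value>
        </property>
</configuration>
EOF

# Placeholder FQDN -- in practice, use the output of `hostname -f`
# on the master node.
FQDN="ip-10-0-0-1.ec2.internal"

# Replace every occurrence of "master" with the real host name.
sed -i "s/master/$FQDN/g" "$WORKDIR/core-site.xml"
grep "$FQDN" "$WORKDIR/core-site.xml"
```

On the cluster itself you would run the same `sed` over core-site.xml, mapred-site.xml and yarn-site.xml in the conf directory, then restart the daemons so they pick up the change.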

Here are my configuration files

core-site.xml

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/tmp</value>
        </property>
</configuration>
mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>

</configuration>

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
         <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/hduser/yarn_data/hdfs/namenode</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/hduser/yarn_data/hdfs/datanode</value>
        </property>
</configuration>
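One thing worth checking with the hdfs-site.xml above: the local directories behind dfs.namenode.name.dir and dfs.datanode.data.dir must exist and be writable by the HDFS user before the NameNode is formatted. A minimal sketch (the base path defaults to a scratch directory here so it runs anywhere; on the cluster it would be /home/hduser/yarn_data/hdfs):

```shell
#!/bin/sh
# Ensure the NameNode and DataNode storage directories exist with sane
# permissions. HDFS_DATA_BASE is a hypothetical override variable; when it
# is unset, a throwaway scratch directory is used for demonstration.
BASE="${HDFS_DATA_BASE:-$(mktemp -d)}"
for d in "$BASE/namenode" "$BASE/datanode"; do
        mkdir -p "$d"
        chmod 755 "$d"
done
ls -ld "$BASE/namenode" "$BASE/datanode"
# Once the real directories exist, format the NameNode once:
#   bin/hdfs namenode -format
```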

yarn-site.xml

<?xml version="1.0"?>
<configuration>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce.shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.nodemanager.log-aggregation-enable</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>master:8050</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>master:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>master:60400</value>
        </property>
</configuration>
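Given the addresses in these configs, a quick sanity check is to verify that the master ports (9000 for HDFS, and 8030/8050/60400 for the ResourceManager) actually accept TCP connections from each worker. A small sketch, assuming bash so that the /dev/tcp redirection is available; "master" is whatever host name your configs end up using:

```shell
#!/bin/bash
# check_port HOST PORT -> prints whether a TCP connection succeeds and
# returns 0 on success, 1 on failure. Uses bash's /dev/tcp, so no extra
# tools are needed on the node.
check_port() {
        if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
                echo "$1:$2 open"
        else
                echo "$1:$2 unreachable"
                return 1
        fi
}

# Example invocations against the ports configured above (hypothetical
# host name; substitute your own):
#   check_port master 9000
#   check_port master 8050
```

If a port reported open from the master is unreachable from a worker, the EC2 security group or the bind address (e.g. the daemon listening on 127.0.0.1 instead of the private IP) is the usual culprit.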

On Fri, Nov 9, 2012 at 9:51 AM, yinghua hu <[EMAIL PROTECTED]> wrote:
Hi, Andy

Thanks for suggestions!

I am running it on a four-node cluster on EC2. All the services started fine: NameNode, DataNode, ResourceManager, NodeManager and JobHistoryServer. Each node can ssh to all the nodes without a problem.

But the problem appears when trying to run any job.

From: Kartashov, Andy
Sent: Friday, November 09, 2012 12:37 PM
To: [EMAIL PROTECTED]
Subject: Error running pi program

Yinghua,

What mode are you running your Hadoop in: Local, Pseudo-distributed, or Fully-distributed?

Your hostname is not recognised; your configuration settings seem to be wrong.

Hi, all

Could someone help look at this problem? I am setting up a four-node cluster on EC2, and the cluster seems to be set up fine until I start testing.

I have tried password-less ssh from each node to all the nodes, and there is no problem connecting. Any advice will be greatly appreciated!

[hduser@ip-XX-XX-XXX-XXX hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.4.jar pi -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -libjars share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.4.jar 16 10000
Number of Maps  = 16
Samples per Map = 10000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
12/11/09 12:02:59 INFO input.FileInputFormat: Total input paths to process : 16
12/11/09 12:02:59 INFO mapreduce.JobSubmitter: number of splits:16
12/11/09 12:02:59 WARN conf.Configuration: mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
12/11/09 12:02:59 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
12/11/09 12:02:59 WARN conf.Configuration: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
12/11/09 12:02:59 WARN conf.Configuration: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
12/11/09 12:02:59 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
12/11/09 12:02:59 WARN conf.Configuration: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
12/11/09 12:02:59 WARN conf.Configuration: mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
12/11/09 12:02:59 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
12/11/09 12:02:59 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
12/11/09 12:02:59 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
12/11/09 12:02:59 WARN conf.Configuration: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
12/11/09 12:02:59 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
12/11/09 12:02:59 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat