MapReduce, mail # user - Error running pi program


Re: Error running pi program
yinghua hu 2012-11-09, 18:20
Here are my configuration files:

core-site.xml

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/tmp</value>
        </property>
</configuration>
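
As a side note for readers following the thread: a `*-site.xml` file like the one above is just a flat list of name/value pairs, so it can be sanity-checked for typos programmatically. A minimal sketch (not part of the original mails) using only Python's standard library, with the poster's core-site.xml values inlined as a sample string:

```python
import xml.etree.ElementTree as ET

# Sample fragment, copied from the core-site.xml quoted above.
CORE_SITE = """
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
"""

def parse_hadoop_conf(xml_text):
    """Return a dict of property name -> value for a Hadoop *-site.xml document."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

conf = parse_hadoop_conf(CORE_SITE)
print(conf["fs.default.name"])  # -> hdfs://master:9000
```

Reading the files back this way makes it easy to spot a misspelled property name before restarting the daemons.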
mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>

</configuration>

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/hduser/yarn_data/hdfs/namenode</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/hduser/yarn_data/hdfs/datanode</value>
        </property>
</configuration>
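
One quick check worth doing on any small cluster (this is a hypothetical sanity check, not something raised in the thread): `dfs.replication` must not exceed the number of live DataNodes, or blocks stay under-replicated. A trivial sketch:

```python
def replication_ok(dfs_replication, num_datanodes):
    """Return True if the configured replication factor can be satisfied
    by the number of live DataNodes."""
    return 1 <= dfs_replication <= num_datanodes

# The poster's cluster: four nodes, dfs.replication = 2.
print(replication_ok(2, 4))  # -> True
# A factor of 3 on a two-node cluster could never be satisfied.
print(replication_ok(3, 2))  # -> False
```

With dfs.replication = 2 on a four-node cluster, this setting is not the source of the error.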

yarn-site.xml

<?xml version="1.0"?>
<configuration>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce.shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.nodemanager.log-aggregation-enable</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>master:8050</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>master:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>master:60400</value>
        </property>
</configuration>
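
Every ResourceManager address above assumes the hostname `master` resolves on every node, which is exactly the kind of thing Andy's "hostname is not recognised" comment below points at. A small, hypothetical check (not in the original mails) using Python's standard library; `localhost` is used here so the sketch runs anywhere, but on the poster's cluster one would check `resolvable("master")` on each node:

```python
import socket

def resolvable(hostname):
    """Return True if the hostname resolves to an IP address on this host."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

print(resolvable("localhost"))
```

If this returns False for `master` on any node, the fix is in `/etc/hosts` (or DNS), not in the Hadoop configs.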

On Fri, Nov 9, 2012 at 9:51 AM, yinghua hu <[EMAIL PROTECTED]> wrote:

> Hi, Andy
>
> Thanks for the suggestions!
>
> I am running it on a four-node cluster on EC2. All the services started
> fine: NameNode, DataNode, ResourceManager, NodeManager and
> JobHistoryServer. Each node can ssh to all the other nodes without a
> problem.
>
> But the problem appears when trying to run any job.
>
> On Fri, Nov 9, 2012 at 9:36 AM, Kartashov, Andy <[EMAIL PROTECTED]>wrote:
>
>> Yinghua,
>>
>> What mode are you running your Hadoop in: local, pseudo-distributed or
>> fully distributed?
>>
>> Your hostname is not recognised.
>>
>> Your configuration settings seem to be wrong.
>>
>> Hi all,
>>
>> Could someone help me look at this problem? I am setting up a four-node
>> cluster on EC2, and the cluster seems to be set up fine until I start
>> testing.
>>
>> I have tried password-less ssh from each node to all the nodes and there
>> is no problem connecting. Any advice will be greatly appreciated!
>>
>> [hduser@ip-XX-XX-XXX-XXX hadoop]$ bin/hadoop jar \
>>     share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.4.jar pi \
>>     -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory \
>>     -libjars share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.4.jar \
>>     16 10000
>>
>> Number of Maps  = 16
>> Samples per Map = 10000
>> Wrote input for Map #0
>> Wrote input for Map #1
>> Wrote input for Map #2
>> Wrote input for Map #3
>> Wrote input for Map #4
>> Wrote input for Map #5
>> Wrote input for Map #6
>> Wrote input for Map #7
>> Wrote input for Map #8
>> Wrote input for Map #9
>> Wrote input for Map #10
>> Wrote input for Map #11
>> Wrote input for Map #12
>> Wrote input for Map #13
>> Wrote input for Map #14
>> Wrote input for Map #15

Regards,

Yinghua