Hadoop >> mail # user >> YARN Pi example job stuck at 0% (No MR tasks are started by ResourceManager)


Re: YARN Pi example job stuck at 0% (No MR tasks are started by ResourceManager)
Hi Harsh,

I have set yarn.nodemanager.resource.memory-mb to 1200 MB. Also, does it
matter if I run the jobs as "root" while the RM and NM services are running
as the "yarn" user? That said, I have created the /user/root directory for
the root user in HDFS.
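
For reference, a minimal sketch of how such a home directory is created (the
exact commands below are illustrative, run as the HDFS superuser, and not a
transcript of what was run on this cluster):

  # create the HDFS home directory for the "root" user and hand it over
  sudo -u hdfs hadoop fs -mkdir /user/root
  sudo -u hdfs hadoop fs -chown root /user/root

  # confirm the directory exists and is owned by root
  hadoop fs -ls /user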

Here is the yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <description>List of directories to store localized files
in.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/disk/yarn/local</value>
  </property>

  <property>
    <description>Where to store container logs.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/disk/yarn/logs</value>
  </property>

  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/var/log/hadoop-yarn/apps</value>
  </property>

  <property>
    <description>Classpath for typical applications.</description>
     <name>yarn.application.classpath</name>
     <value>
        $HADOOP_CONF_DIR,
        $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
        $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
        $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
        $YARN_HOME/*,$YARN_HOME/lib/*
     </value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>ihub-an-l1:8025</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>ihub-an-l1:8040</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>ihub-an-l1:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>ihub-an-l1:8141</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>ihub-an-l1:8088</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/disk/mapred/jobhistory/intermediate/done</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/disk/mapred/jobhistory/done</value>
  </property>

  <property>
    <name>yarn.web-proxy.address</name>
    <value>ihub-an-l1:9999</value>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>

  <property>
    <description>Amount of physical memory, in MB, that can be allocated
      for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1200</value>
  </property>

</configuration>
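
For completeness, a rough sketch of how the memory change is rolled out (the
service names below assume the CDH4 packages and are illustrative, not a
transcript from this cluster):

  # on each slave node, restart the NodeManager so the new
  # yarn.nodemanager.resource.memory-mb value is picked up
  sudo service hadoop-yarn-nodemanager restart

  # the registered nodes and their available memory can then be
  # checked in the RM web UI at http://ihub-an-l1:8088
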
On Fri, Jul 27, 2012 at 2:23 PM, Harsh J <[EMAIL PROTECTED]> wrote:

> Can you share your yarn-site.xml contents? Have you tweaked memory
> sizes in there?
>
> On Fri, Jul 27, 2012 at 11:53 PM, anil gupta <[EMAIL PROTECTED]>
> wrote:
> > Hi All,
> >
> > I have a Hadoop 2.0 alpha (CDH4) Hadoop/HBase cluster running on
> > CentOS 6.0. The cluster has 4 admin nodes and 8 data nodes. I have the RM
> > and History Server running on one machine. The RM web interface shows that 8
> > nodes are connected to it. I installed this cluster with HA capability and
> > I have already tested HA for the Namenodes, ZK, and HBase Master. I am running
> > the pi example MapReduce job as user "root" and I have created the "/user/root"
> > directory in HDFS.
> >
> > Last few lines from one of the NodeManager logs:
> > 2012-07-26 21:58:38,745 INFO org.mortbay.log: Extract jar:file:/usr/lib/hadoop-yarn/hadoop-yarn-common-2.0.0-cdh4.0.0.jar!/webapps/node to /tmp/Jetty_0_0_0_0_8042_node____19tj0x/webapp
> > 2012-07-26 21:58:38,907 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:8042
> > 2012-07-26 21:58:38,907 INFO org.apache.hadoop.yarn.webapp.WebApps: Web

Thanks & Regards,
Anil Gupta