Re: Can't find the Job Status in WEB UI
Of course.

 

mapred-site.xml

 

<configuration>

  <!-- kira 2013-01-18: if the following is not configured, jobs fail with errors -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

</configuration>
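A quick way to confirm the property is actually picked up (a minimal sketch, not from the original thread, assuming the config files are on the client classpath):

import org.apache.hadoop.conf.Configuration;

// Hypothetical check: print the framework the client-side config resolves to.
public class CheckFramework {
    public static void main(String[] args) {
        Configuration conf = new Configuration();   // loads core-site.xml from the classpath
        conf.addResource("mapred-site.xml");        // assumed to be on the classpath too
        // Prints "yarn" if the property above is in effect; "local" is the default,
        // and jobs run by the local runner never appear in the ResourceManager web UI.
        System.out.println(conf.get("mapreduce.framework.name", "local"));
    }
}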

 

yarn-site.xml

<?xml version="1.0"?>
<configuration>

  <!-- Site specific YARN configuration properties -->
  <!-- kira 2013-01-18 ref:
       http://www.cnblogs.com/scotoma/archive/2012/09/18/2689902.html -->
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master2:18040</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master2:18030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>0.0.0.0:18088</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master2:18025</value>
  </property>

  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master2:18141</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/hadoop/tmp/nm-local-dir</value>
    <description>the local directories used by the nodemanager</description>
  </property>

  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/hadoop/tmp/log</value>
    <description>the directories used by the NodeManagers as log
    directories</description>
  </property>

</configuration>
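Since the thread is about finding job status in the web UI, note that yarn.resourcemanager.webapp.address above moves the UI to port 18088 rather than the default 8088. A minimal sketch (not from the original thread) of printing the endpoints the client resolves, assuming yarn-site.xml is on the classpath:

import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hypothetical check: print the ResourceManager endpoints the client resolves.
public class CheckRmAddresses {
    public static void main(String[] args) {
        YarnConfiguration conf = new YarnConfiguration();  // loads yarn-site.xml
        System.out.println(conf.get(YarnConfiguration.RM_ADDRESS));
        // Running applications should be listed under this address,
        // e.g. http://master2:18088/ in this setup.
        System.out.println(conf.get(YarnConfiguration.RM_WEBAPP_ADDRESS));
    }
}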

 

core-site.xml

<configuration>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://master2:9000</value>
    <final>true</final>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
  </property>

  <!-- freepose, 2013/1/17, create directory: /hadoop/tmp first -->
  <!-- If the DataNode fails to start, try deleting the hadoop.tmp.dir files on the DataNode -->

</configuration>
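A minimal connectivity check against the configured default filesystem (a sketch, not from the original thread; in this Hadoop version the fs.default.name key still works, though fs.defaultFS is its newer name):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical check: verify hdfs://master2:9000 is reachable.
public class CheckDefaultFs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // loads core-site.xml
        FileSystem fs = FileSystem.get(conf);       // resolves fs.default.name
        System.out.println("Default FS: " + fs.getUri());
        System.out.println("Root exists: " + fs.exists(new Path("/")));
    }
}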

 

hdfs-site.xml

<configuration>

  <!-- kira: replication is the number of data replicas; the default is 3,
       and fewer than 3 slaves will cause errors -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication. The actual number of
    replications can be specified when the file is created. The default (3
    replications -- by kira) is used if replication is not specified at create
    time.</description>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/tmp/dfs/data,file:/opt/tmp/dfs/data</value>
    <description>Determines where on the local filesystem the DFS name node
    should store the name table (fsimage). If this is a comma-delimited list of
    directories then the name table is replicated in all of the directories, for
    redundancy.</description>
    <final>true</final>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hadoop/tmp/dfs/data,/opt/tmp/dfs/data</value>
    <description>Determines where on the local filesystem a DFS data node
    should store its blocks. If this is a comma-delimited list of directories,
    then data will be stored in all named directories, typically on different
    devices. Directories that do not exist are ignored.</description>
    <final>true</final>
  </property>

  <property>
    <name>dfs.block.access.key.update.interval</name>
    <value>600</value>
    <description>Interval in minutes at which namenode updates its access
    keys.</description>
  </property>

  <property>
    <name>dfs.block.access.token.lifetime</name>
    <value>600</value>
    <description>The lifetime of access tokens in minutes.</description>
  </property>

  <!-- kira 2012-01-19: running on yarn failed with "File does not exist hdfs://...";
       permission checking is disabled here to allow it -->
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
    <description>If "true", enable permission checking in HDFS. If "false",
    permission checking is turned off, but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.</description>
  </property>

</configuration>
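As the dfs.replication description above says, the setting is only a default; per-file replication can be chosen at create time through the FileSystem API. A sketch with illustrative paths and values (not from the original thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical example: create a file with an explicit replication factor.
public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/replication-demo.txt");  // illustrative path
        // Ask for 2 replicas of this file, overriding dfs.replication=1.
        FSDataOutputStream out = fs.create(p, (short) 2);
        out.writeBytes("hello\n");
        out.close();
        fs.setReplication(p, (short) 1);                 // or change it afterwards
    }
}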

 

 

 

From: Mohammad Tariq [mailto:[EMAIL PROTECTED]]
Sent: January 21, 2013 17:25
To: [EMAIL PROTECTED]
Subject: Re: Can't find the Job Status in WEB UI

 

Could you share your config files with us?
Warm Regards,

Tariq

https://mtariq.jux.com/

cloudfront.blogspot.com

 

On Mon, Jan 21, 2013 at 2:49 PM, kira.wang <[EMAIL PROTECTED]> wrote:

1.  Actually, the job in the picture in the last email was running in local
mode, because I had deleted mapred-site.xml in $HADOOP_HOME/etc/hadoop and
started the resourcemanager.

2.  But when I configured mapred-site.xml as below:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

 

It did not work and produced the following errors:

 

13/01/21 16:53:16 INFO mapreduce.Job:  map 0% reduce 0%
13/01/21 16:53:16 INFO mapreduce.Job: Job job_1358758352533_0001 failed with state FAILED due to: Application application_1358758352533_0001 failed 1 times due to AM Container for appattempt_1358758352533_0001_000001 exited with exitCode: 1 due to:
.Failing this attempt.. Failing the application.
13/01/21 16:53:16 INFO mapreduce.Job: Counters: 0
Job Finished in 6.192 seconds
java.io.FileNotFoundException: File does not exist: hdfs://master2:9000/user/root/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:736)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1685)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
        at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
        at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:351)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.e