Hadoop, mail # user - Re: why i can not track the job which i submitted in yarn?


Re: why i can not track the job which i submitted in yarn?
kun yan 2013-09-12, 01:17
When I run my own MapReduce job, I also cannot see the task progress. Maybe I
have the same problem.
2013/9/12 ch huang <[EMAIL PROTECTED]>

> i have already set this option in my mapred-site.xml, and all my hive
> jobs can be seen in the RM UI
>
>
>
> <property>
>         <name>mapreduce.framework.name</name>
>         <value>yarn</value>
>         <description>The runtime framework for executing MapReduce jobs.
> Can be one of local, classic or yarn</description>
> </property>
>
>
> On Wed, Sep 11, 2013 at 5:51 PM, Devaraj k <[EMAIL PROTECTED]> wrote:
>
>>  Your job is running in local mode; that is why you don't see it in the RM
>> UI / Job History.
>>
>> Can you change the 'mapreduce.framework.name' configuration value to 'yarn'?
>> Then it will show in the RM UI.
>>
>> Thanks
>>
>> Devaraj k
>>
>>
>> *From:* ch huang [mailto:[EMAIL PROTECTED]]
>> *Sent:* 11 September 2013 15:08
>> *To:* [EMAIL PROTECTED]
>> *Subject:* why i can not track the job which i submitted in yarn?
>>
>>
>> hi, all:
>>
>>      i do not know why i cannot track my job which i submitted to yarn?
>>
>>
>> # hadoop jar
>> /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.0-cdh4.3.0.jar pi
>> 20 10
>>
>> Number of Maps  = 20
>> Samples per Map = 10
>> Wrote input for Map #0
>> Wrote input for Map #1
>> Wrote input for Map #2
>> Wrote input for Map #3
>> Wrote input for Map #4
>> Wrote input for Map #5
>> Wrote input for Map #6
>> Wrote input for Map #7
>> Wrote input for Map #8
>> Wrote input for Map #9
>> Wrote input for Map #10
>> Wrote input for Map #11
>> Wrote input for Map #12
>> Wrote input for Map #13
>> Wrote input for Map #14
>> Wrote input for Map #15
>> Wrote input for Map #16
>> Wrote input for Map #17
>> Wrote input for Map #18
>> Wrote input for Map #19
>> Starting Job
>> 13/09/11 17:32:02 WARN conf.Configuration: session.id is deprecated.
>> Instead, use dfs.metrics.session-id
>> 13/09/11 17:32:02 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>> processName=JobTracker, sessionId=
>> 13/09/11 17:32:02 WARN conf.Configuration: slave.host.name is
>> deprecated. Instead, use dfs.datanode.hostname
>> 13/09/11 17:32:02 WARN mapred.JobClient: Use GenericOptionsParser for
>> parsing the arguments. Applications should implement Tool for the same.
>> 13/09/11 17:32:02 INFO mapred.FileInputFormat: Total input paths to
>> process : 20
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: OutputCommitter set in
>> config null
>> 13/09/11 17:32:03 INFO mapred.JobClient: Running job:
>> job_local854997782_0001
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: OutputCommitter is
>> org.apache.hadoop.mapred.FileOutputCommitter
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: Waiting for map tasks
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000000_0
>> 13/09/11 17:32:03 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:03 INFO util.ProcessTree: setsid exited with exit code 0
>> 13/09/11 17:32:03 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7f342545
>> 13/09/11 17:32:03 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part0:0+118
>> 13/09/11 17:32:03 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:03 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:03 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:03 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:03 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:03 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:03 INFO mapred.MapTask: Starting flush of map output
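For what it's worth, since the log warns that the examples jar goes through
GenericOptionsParser, the local-mode fallback can often be ruled out by forcing
the framework on the command line instead of relying on the client's
mapred-site.xml. A sketch, reusing the jar path and arguments quoted above
(this overrides only this one submission, not the cluster config):

```shell
# Override the client-side framework setting for a single run.
# If the job then appears in the RM UI, the client was picking up a
# config where mapreduce.framework.name defaulted to 'local'.
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.0-cdh4.3.0.jar \
  pi -Dmapreduce.framework.name=yarn 20 10
```

A job running on YARN prints an application id like `job_1378888888888_0001`
(cluster timestamp plus counter), whereas the `job_local854997782_0001` id in
the log above is the telltale sign of LocalJobRunner.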

In the Hadoop world I am just a novice exploring the entire Hadoop
ecosystem; I hope one day I can contribute my own code.

YanBit
[EMAIL PROTECTED]