Hive >> mail # user >> Number of mapreduce job and the time spent


Re: Number of mapreduce job and the time spent
I think my job ID is in this line:

12/12/12 10:43:00 INFO mapred.JobClient: Running job: job_local_0001
but I get this response when I execute:

hadoop job -status  job_local_0001
Warning: $HADOOP_HOME is deprecated.

Could not find job job_local_0001
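The `job_local_` prefix means the job ran in Hadoop's LocalJobRunner inside a single JVM; the JobTracker never sees local jobs, which is why `hadoop job -status` cannot find them. The elapsed time can still be read off the client log by diffing the `Running job` and `Job complete` timestamps. A minimal sketch in shell; the log file name and its timestamps are illustrative, not taken from this exact run:

```shell
# Illustrative JobClient output saved to a log file (hypothetical
# name and start timestamp -- only the line format matches the run).
cat > job.log <<'EOF'
12/12/12 10:18:32 INFO mapred.JobClient: Running job: job_local_0001
12/12/12 10:20:10 INFO mapred.JobClient: Job complete: job_local_0001
EOF

# Convert an HH:MM:SS timestamp to seconds since midnight.
to_secs() { IFS=: read -r h m s <<< "$1"; echo $((10#$h*3600 + 10#$m*60 + 10#$s)); }

# Pull the timestamps (field 2 of each log line) and subtract.
start=$(grep 'Running job' job.log | awk '{print $2}')
end=$(grep 'Job complete' job.log | awk '{print $2}')
elapsed=$(( $(to_secs "$end") - $(to_secs "$start") ))
echo "job ran for ${elapsed}s"
```

This only measures client-observed wall-clock time, which for a local-runner job is usually what you want anyway, since the per-task CPU counters are not populated in local mode (note `CPU time spent (ms)=0` in the excerpt below).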

2012/12/12 long <[EMAIL PROTECTED]>

> get your job ID and use this command:
> $HADOOP_HOME/bin/hadoop job -status job_xxx
>
>
>
>
> --
> Best Regards,
> longmans
>
> At 2012-12-12 17:23:39,"imen Megdiche" <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> From the output of running the MapReduce wordcount example on Hadoop, I
> want to know the number of MapReduce jobs and the time spent on the
> execution.
>
> Here is an excerpt from the output.
>
> 12/12/12 10:20:09 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0'
> done.
> 12/12/12 10:20:10 INFO mapred.JobClient:  map 100% reduce 100%
> 12/12/12 10:20:10 INFO mapred.JobClient: Job complete: job_local_0001
> 12/12/12 10:20:10 INFO mapred.JobClient: Counters: 22
> 12/12/12 10:20:10 INFO mapred.JobClient:   File Input Format Counters
> 12/12/12 10:20:10 INFO mapred.JobClient:     Bytes Read=145966941
> 12/12/12 10:20:10 INFO mapred.JobClient:   File Output Format Counters
> 12/12/12 10:20:10 INFO mapred.JobClient:     Bytes Written=50704638
> 12/12/12 10:20:10 INFO mapred.JobClient:   org.myorg.WordCount$Map$Counters
> 12/12/12 10:20:10 INFO mapred.JobClient:     INPUT_WORDS=4980060
> 12/12/12 10:20:10 INFO mapred.JobClient:   FileSystemCounters
> 12/12/12 10:20:10 INFO mapred.JobClient:     FILE_BYTES_READ=1777104865
> 12/12/12 10:20:10 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1783494521
> 12/12/12 10:20:10 INFO mapred.JobClient:   Map-Reduce Framework
> 12/12/12 10:20:10 INFO mapred.JobClient:     Map output materialized
> bytes=170854986
> 12/12/12 10:20:10 INFO mapred.JobClient:     Map input records=4980060
> 12/12/12 10:20:10 INFO mapred.JobClient:     Reduce shuffle bytes=0
> 12/12/12 10:20:10 INFO mapred.JobClient:     Spilled Records=14940180
> 12/12/12 10:20:10 INFO mapred.JobClient:     Map output bytes=160894830
> 12/12/12 10:20:10 INFO mapred.JobClient:     Total committed heap usage
> (bytes)=1185910784
> 12/12/12 10:20:10 INFO mapred.JobClient:     CPU time spent (ms)=0
> 12/12/12 10:20:10 INFO mapred.JobClient:     Map input bytes=145954650
> 12/12/12 10:20:10 INFO mapred.JobClient:     SPLIT_RAW_BYTES=614
> 12/12/12 10:20:10 INFO mapred.JobClient:     Combine input records=8426541
> 12/12/12 10:20:10 INFO mapred.JobClient:     Reduce input records=4980060
> 12/12/12 10:20:10 INFO mapred.JobClient:     Reduce input groups=1660020
> 12/12/12 10:20:10 INFO mapred.JobClient:     Combine output records=8426541
> 12/12/12 10:20:10 INFO mapred.JobClient:     Physical memory (bytes)
> snapshot=0
> 12/12/12 10:20:10 INFO mapred.JobClient:     Reduce output records=1660020
> 12/12/12 10:20:10 INFO mapred.JobClient:     Virtual memory (bytes)
> snapshot=0
> 12/12/12 10:20:10 INFO mapred.JobClient:     Map output records=4980060
>
>
> Thank you for your responses.
>
>
>
>
>
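From an excerpt like the one above, the number of jobs is simply the count of `Job complete:` lines (one here), and any counter value can be pulled out of the log with standard text tools. A minimal sketch in shell, writing a few of the lines from the excerpt to a hypothetical `job.log`:

```shell
# A few lines from the excerpt above, saved under a hypothetical name.
cat > job.log <<'EOF'
12/12/12 10:20:10 INFO mapred.JobClient: Job complete: job_local_0001
12/12/12 10:20:10 INFO mapred.JobClient:     Map input records=4980060
12/12/12 10:20:10 INFO mapred.JobClient:     Reduce output records=1660020
EOF

# One "Job complete:" line per finished job.
jobs=$(grep -c 'Job complete:' job.log)

# Strip everything up to the counter name, keeping only the value.
map_in=$(sed -n 's/.*Map input records=//p' job.log)

echo "jobs=$jobs map_input_records=$map_in"
```

The same `sed` pattern works for any counter name in the `Counters:` dump, e.g. `Reduce output records` or `Bytes Read`.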