Re: How to get the read and write time of a hadoop job?
Every job has a job-history file with a lot of stats (per-task start/finish times, counters, and so on). You can use 'rumen' to parse them; Rumen is included in every Hadoop release.
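
[Editor's note: a rough sketch of pulling per-task timing out of a Rumen trace, for illustration only. The usual flow is to first convert the history file to a JSON trace with Rumen's TraceBuilder (java org.apache.hadoop.tools.rumen.TraceBuilder <trace.json> <topology.json> <history files>), then read the trace programmatically. The class name JobTraceTimes and the argument handling below are made up for the example; the Rumen types and getters are from org.apache.hadoop.tools.rumen as best remembered, so verify them against your Hadoop version before relying on this.]

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.rumen.JobTraceReader;
import org.apache.hadoop.tools.rumen.LoggedJob;
import org.apache.hadoop.tools.rumen.LoggedTask;

public class JobTraceTimes {
  public static void main(String[] args) throws Exception {
    // args[0] = JSON job trace produced by Rumen's TraceBuilder (assumed input)
    Configuration conf = new Configuration();
    JobTraceReader reader = new JobTraceReader(new Path(args[0]), conf);
    try {
      LoggedJob job;
      // One LoggedJob per job in the trace; getNext() is expected to return null at EOF.
      while ((job = reader.getNext()) != null) {
        long wallClockMs = job.getFinishTime() - job.getSubmitTime();
        System.out.println("job wall-clock ms: " + wallClockMs);
        // Map tasks do the input reading, so their start/finish times bound the read phase
        // (they also include map compute time, not pure I/O).
        for (LoggedTask t : job.getMapTasks()) {
          System.out.println("  map task ms:    " + (t.getFinishTime() - t.getStartTime()));
        }
        // Reduce tasks write the job output; their times bound the write (and shuffle/sort) side.
        for (LoggedTask t : job.getReduceTasks()) {
          System.out.println("  reduce task ms: " + (t.getFinishTime() - t.getStartTime()));
        }
      }
    } finally {
      reader.close();
    }
  }
}

Note that task-level durations mix computation with I/O; the history/trace data gives phase boundaries and counters rather than a pure "time spent reading/writing" number.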

hth,
Arun

On Oct 7, 2013, at 7:41 AM, Yong Guo <[EMAIL PROTECTED]> wrote:

> Hi,
>
> For a Hadoop job, I can get its execution time by recording the job submission timestamp and the job end timestamp. However, I would like to know the breakdown of the execution time, such as the time actually spent reading the input files and writing the output files. How can I get the read and write times?
>
> Thanks,
> Yong
>
>
>

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/
