MapReduce >> mail # user >> How to get the HDFS I/O information


Qu Chen 2012-04-24, 21:47
Qu Chen 2012-04-24, 22:25
Re: How to get the HDFS I/O information
Qu,

Every MapReduce job has a history file that is, by default, stored under
$HADOOP_LOG_DIR/history. These "job history" files record the amount of
HDFS read/write (among many other counters) for every task.
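To extract those per-task counters programmatically, one option is to parse the history files directly. Below is a minimal sketch assuming the pre-YARN (Hadoop 1.x era) plain-text history format, where counters appear as `[(NAME)(DisplayName)(value)]` entries inside a `COUNTERS="..."` attribute; the exact layout and counter names can differ across Hadoop versions, so treat the regexes and the sample line as illustrative assumptions, not a stable format.

```python
import re

# Matches counter entries like [(HDFS_BYTES_READ)(HDFS_BYTES_READ)(1048576)]
# in the (assumed) Hadoop 1.x text history format.
COUNTER_RE = re.compile(
    r"\[\((HDFS_BYTES_READ|HDFS_BYTES_WRITTEN)\)\([^)]*\)\((\d+)\)\]"
)
TASK_RE = re.compile(r'TASKID="([^"]+)"')

def hdfs_io_per_task(history_text):
    """Return {task_id: {counter_name: byte_count}} parsed from a
    job history file's text. Lines without a TASKID are skipped."""
    stats = {}
    for line in history_text.splitlines():
        task = TASK_RE.search(line)
        if not task:
            continue
        for name, value in COUNTER_RE.findall(line):
            stats.setdefault(task.group(1), {})[name] = int(value)
    return stats

# Hypothetical sample line mimicking the assumed history format.
sample = ('Task TASKID="task_201204240001_m_000000" TASK_TYPE="MAP" '
          'COUNTERS="{(FileSystemCounters)(FileSystemCounters)'
          '[(HDFS_BYTES_READ)(HDFS_BYTES_READ)(1048576)]'
          '[(HDFS_BYTES_WRITTEN)(HDFS_BYTES_WRITTEN)(2097152)]}"')
```

Running this periodically (e.g. from cron over new files in $HADOOP_LOG_DIR/history) would give the kind of performance profile the original question asks for.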

On 2012/04/25 7:25, Qu Chen wrote:
> Let me add, I'd like to do this periodically to gather some
> performance profile information.
>
> On Tue, Apr 24, 2012 at 5:47 PM, Qu Chen <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
>     I am trying to gather the info regarding the amount of HDFS
>     read/write for each task in a given map-reduce job. How can I do that?
>
>
Devaraj k 2012-04-25, 06:31
Rajashekhar M A 2012-04-25, 07:49