Hadoop >> mail # user >> measure throughput of cluster


measure throughput of cluster
I am trying to gather statistics about my HDFS cluster in the lab. One stat
I am particularly interested in is the total throughput (gigabytes served) of the
cluster over 24 hours. I suppose I could grep for 'cmd=open' in the namenode
log, but how accurate is that? There seems to be no corresponding 'cmd=close'
entry to distinguish a full file read. Is there a better way to get this?
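For reference, a minimal sketch of the 'cmd=open' approach described above, assuming the usual shape of an HDFS namenode audit-log line (e.g. `... ugi=alice ip=/10.0.0.5 cmd=open src=/data/f1 dst=null perm=null`; the exact prefix varies by Hadoop version and log4j layout). Note the caveat in the question applies: this counts opens, not bytes served, since an open entry says nothing about how much of the file was actually read.

```python
import re
from datetime import datetime

# Assumed audit-log line shape (verify against your namenode's actual log):
# 2011-05-03 12:14:00,123 INFO FSNamesystem.audit: ugi=alice ip=/10.0.0.5
#     cmd=open src=/data/f1 dst=null perm=null
AUDIT_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d+ .*?"
    r"cmd=(?P<cmd>\S+)\s+src=(?P<src>\S+)"
)

def count_opens(lines, start, end):
    """Count cmd=open audit entries with timestamp in [start, end).

    Returns (number of opens, set of distinct paths opened).
    This bounds read activity but is NOT byte throughput: partial
    reads and fully cached clients are indistinguishable here.
    """
    opens = 0
    paths = set()
    for line in lines:
        m = AUDIT_RE.search(line)
        if not m or m.group("cmd") != "open":
            continue
        ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
        if start <= ts < end:
            opens += 1
            paths.add(m.group("src"))
    return opens, paths
```

To approximate bytes you would still have to join each opened path against its file length (e.g. from a filesystem listing) and assume full reads, which overcounts; per-datanode byte counters, where available, are the more direct measure.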

--
Get your facts first, then you can distort them as you please.
Brian Bockelman 2011-05-03, 12:14