Hadoop >> mail # user >> collecting CPU, mem, iops of hadoop jobs


Re: collecting CPU, mem, iops of hadoop jobs
Thanks for the reply, but I don't think the metrics exposed to Ganglia
are what I'm really looking for.

What I'm looking for is something like this (but not limited to):

Job_xxxx_yyyy
CPU time: 10204 sec.   <-- aggregated across all task nodes
IOPS: 2344             <-- aggregated across all datanodes
MEM: 30G               <-- aggregated

etc.

Job_aaa_bbb
CPU time:
IOPS:
MEM:

Sorry for the ambiguous question.
Thanks
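The roll-up being asked for here could be sketched as follows. The per-task records and field names below are illustrative placeholders, not Hadoop's actual counter API; in practice the inputs would come from Hadoop's per-task counters.

```python
# Sketch: roll per-task resource figures up into one job-level summary,
# in the spirit of the "Job_xxxx_yyyy" example above.
# The task records are made up for illustration.

def aggregate_job(tasks):
    """Sum per-task metrics into a single job-level summary dict."""
    summary = {"cpu_sec": 0, "iops": 0, "mem_bytes": 0}
    for t in tasks:
        summary["cpu_sec"] += t["cpu_sec"]
        summary["iops"] += t["iops"]
        summary["mem_bytes"] += t["mem_bytes"]
    return summary

# Two hypothetical task nodes whose totals match the numbers above.
tasks = [
    {"cpu_sec": 5000, "iops": 1200, "mem_bytes": 15 * 2**30},
    {"cpu_sec": 5204, "iops": 1144, "mem_bytes": 15 * 2**30},
]
job = aggregate_job(tasks)
print("CPU time: %d sec." % job["cpu_sec"])        # -> CPU time: 10204 sec.
print("IOPS: %d" % job["iops"])                    # -> IOPS: 2344
print("MEM: %dG" % (job["mem_bytes"] // 2**30))    # -> MEM: 30G
```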

On Tue, Dec 20, 2011 at 12:47 PM, He Chen <[EMAIL PROTECTED]> wrote:
> You may need Ganglia. It is cluster-monitoring software.
>
> On Tue, Dec 20, 2011 at 2:44 PM, Patai Sangbutsarakum <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Hadoopers,
>>
>> We're running Hadoop 0.20 on CentOS 5.5. I'm looking for a way to
>> collect the CPU time, memory usage, and IOPS of each Hadoop job.
>> What would be a good starting point? A document? An API?
>>
>> Thanks in advance
>> -P
>>
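One possible starting point, beyond Ganglia, is Hadoop's built-in job counters: later 0.20.x branches report per-task figures such as CPU_MILLISECONDS and PHYSICAL_MEMORY_BYTES, and `hadoop job -status <job-id>` prints a job's counters. A sketch of pulling totals out of name=value counter lines follows; the sample text is made up for illustration, not verbatim Hadoop output, and counter availability depends on the Hadoop version.

```python
# Sketch: extract job-level totals from "name=value" counter lines,
# such as might be scraped from "hadoop job -status <job-id>" output.
# SAMPLE is illustrative, not real Hadoop output.

SAMPLE = """\
CPU_MILLISECONDS=10204000
PHYSICAL_MEMORY_BYTES=32212254720
HDFS_BYTES_READ=123456789
"""

def parse_counters(text):
    """Parse name=value lines into a dict of integer counters."""
    counters = {}
    for line in text.splitlines():
        if "=" in line:
            name, value = line.split("=", 1)
            counters[name.strip()] = int(value)
    return counters

c = parse_counters(SAMPLE)
print("CPU time: %d sec." % (c["CPU_MILLISECONDS"] // 1000))  # -> CPU time: 10204 sec.
```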