Kafka >> mail # user >> Metrics: via Broker vs. Producer vs. Consumer


Re: Metrics: via Broker vs. Producer vs. Consumer
Hi,

Ah, I think I didn't ask my question clearly.  Another try:
* If I have a javaagent attached to the Kafka process, I'll be able to connect to its JMX and get all the Broker metrics for that Broker process.
* If I have another Broker process, I'll need to attach my agent to this process, too, to get all Broker metrics associated with this second Broker process.
So far OK - like you said, I can sum, average, etc.
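To make the "sum, average, etc." step concrete, here is a minimal sketch of aggregating per-broker counter readings into cluster-wide numbers. The values are hypothetical; in practice each reading would come from one broker's JMX:

```java
import java.util.Arrays;
import java.util.List;

public class BrokerMetricAggregator {
    // Sum per-broker counter readings into a cluster-wide total,
    // e.g. messages-in per second collected from each broker's JMX.
    public static long sum(List<Long> perBroker) {
        return perBroker.stream().mapToLong(Long::longValue).sum();
    }

    // Average across brokers, for metrics where a per-broker mean is more useful.
    public static double average(List<Long> perBroker) {
        return perBroker.isEmpty() ? 0.0 : (double) sum(perBroker) / perBroker.size();
    }

    public static void main(String[] args) {
        // Hypothetical readings from three brokers
        List<Long> messagesInPerSec = Arrays.asList(120L, 95L, 145L);
        System.out.println("cluster sum = " + sum(messagesInPerSec));     // 360
        System.out.println("cluster avg = " + average(messagesInPerSec)); // 120.0
    }
}
```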

But what if I want to get all Producer metrics?  What do I need to do? I *believe* I would have to attach the javaagent to whichever app is acting as a Kafka Producer and get Producer stats from the JMX associated with the JVM process running that app.
Is this correct?
Is there any way to avoid that and get all Consumer and all Producer metrics using the javaagent attached to one of the Broker processes?
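For reference, the mechanism in either case is the same: open a remote JMX connection to whichever JVM exposes the metrics (broker, producer app, or consumer app) and read MBean attributes. The sketch below is self-contained: it starts an in-process JMX connector server as a stand-in for a remote JVM and reads a standard `java.lang` MBean; against a real broker you would instead point the URL at that broker's JMX port and query its `kafka.*` MBean names (which are not shown here):

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteJmxSketch {
    // Connect to a JMX endpoint and read one attribute of one MBean.
    // This is the same call pattern you would use against a broker's
    // or producer app's JMX URL.
    public static Object fetch(String url, String mbean, String attr) throws Exception {
        try (JMXConnector c = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            return c.getMBeanServerConnection().getAttribute(new ObjectName(mbean), attr);
        }
    }

    // Demo helper: start an in-process connector server (stand-in for a
    // remote JVM started with com.sun.management.jmxremote), read a value
    // through it, then shut it down.
    public static Object demo(int port) throws Exception {
        LocateRegistry.createRegistry(port);
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:" + port + "/jmxrmi");
        JMXConnectorServer srv = JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        srv.start();
        try {
            return fetch(url.toString(), "java.lang:type=Threading", "ThreadCount");
        } finally {
            srv.stop();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ThreadCount via remote JMX = " + demo(18999));
    }
}
```

The port and MBean name here are placeholders for the demo; only the `fetch` pattern carries over to a real deployment.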

Thanks,
Otis
----
Performance Monitoring for Solr / ElasticSearch / HBase / Hadoop - http://sematext.com/spm 
>________________________________
> From: Jay Kreps <[EMAIL PROTECTED]>
>To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
>Sent: Wednesday, July 24, 2013 5:22 PM
>Subject: Re: Metrics: via Broker vs. Producer vs. Consumer
>
>
>Yeah all our monitoring is just of the local process (basically just a
>counter exposed through yammer metrics, which supports jmx and other
>outputs). If I understand correctly, instead of having a counter that
>tracks, say, produce requests per second for a single broker, you want one
>that covers the whole cluster. Obviously this would require collecting the
>local count and aggregating across all the brokers.
>
>Our assumption is that you already have a separate monitoring system which
>can slurp all these up, aggregate them, graph them, and alert off them.
>There are a number of open source thingies like this and I think most
>bigger shops have something they use. Our assumption is that trying to do a
>kafka-specific monitoring system wouldn't work for most people because they
>are wedded to their current setup and just want to integrate with that.
>
>I'm not sure how valid any of those assumptions actually are.
>
>-Jay
>
>
>On Wed, Jul 24, 2013 at 7:29 AM, Otis Gospodnetic <
>[EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> I was looking at
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/Operations#Operations-Monitoring
>> and noticed there is no information about which metrics are available
>> in which process/JVM/JMX.
>>
>> Some are available in the Broker process, but some are only available
>> from the JVM running Consumer and some only from the JVM running
>> Producer.  And yet some Producer and Consumer metrics are, I *believe*
>> available from Broker's JMX.
>>
>> Would it be possible for somebody in the know to mark the metrics in
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/Operations#Operations-Monitoring
>> so one can tell where to get them?
>>
>> Also, why is it that the Broker process doesn't have *all* metrics,
>> including Producer and Consumer ones?  Is that because there can be N
>> Brokers and each P or C talks to one Broker at a time, and thus there is
>> no single process/JMX that can know *all* stats for *all* Brokers and
>> for *all* Ps and Cs?
>>
>> Thank you!
>> Otis
>> --
>> Performance Monitoring -- http://sematext.com/spm
>> Solr & ElasticSearch Support -- http://sematext.com/
>>
>