Accumulo >> mail # dev >> Monitoring Accumulo with Ganglia


Monitoring Accumulo with Ganglia
Hey guys,

If you want to use Ganglia for historical metric gathering here is one way
to do it with Jmxtrans ( http://code.google.com/p/jmxtrans/ )

Note: Jmxtrans is modeled after the GangliaContext in Hadoop & HBase. You
can turn on Hadoop-to-Ganglia metrics in a configuration file; the settings
depend on whether your Ganglia version is < 3.1 or >= 3.1
Note: I used CentOS 6.2

1) Enable metric accumulation in accumulo-metrics.xml under the conf dir (
set everything to true )
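Step 1 can be scripted. A minimal sketch, assuming the flags in
accumulo-metrics.xml look like `<enabled type="boolean">false</enabled>`
(the layout may differ between versions), demonstrated on a stand-in copy
of the file rather than the real one:

```shell
# Stand-in copy of accumulo-metrics.xml; the real file lives under
# $ACCUMULO_HOME/conf and its exact layout may differ by version.
cat > /tmp/accumulo-metrics.xml <<'EOF'
<config>
  <tserver>
    <enabled type="boolean">false</enabled>
  </tserver>
</config>
EOF
# Flip every boolean flag to true (back up the real file first):
sed -i 's/>false</>true</g' /tmp/accumulo-metrics.xml
grep -c '>true<' /tmp/accumulo-metrics.xml
```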
2) Enable Jmx remote monitoring in accumulo-env.sh

example:

export ACCUMULO_TSERVER_OPTS="-Dcom.sun.management.jmxremote.port=9001
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false $ACCUMULO_TSERVER_OPTS"
export ACCUMULO_MASTER_OPTS="-Dcom.sun.management.jmxremote.port=9002
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false $ACCUMULO_MASTER_OPTS"
export ACCUMULO_MONITOR_OPTS="-Dcom.sun.management.jmxremote.port=9003
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false $ACCUMULO_MONITOR_OPTS"
export ACCUMULO_GC_OPTS="-Dcom.sun.management.jmxremote.port=9004
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false $ACCUMULO_GC_OPTS"
export ACCUMULO_LOGGER_OPTS="-Dcom.sun.management.jmxremote.port=9005
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false $ACCUMULO_LOGGER_OPTS"

3) restart accumulo
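Once Accumulo is back up, you can sanity-check that the JMX ports from
accumulo-env.sh are actually listening (ports as in the example above; this
uses bash's /dev/tcp, so no extra tools are needed):

```shell
# Probe each JMX port from the example above; prints open/closed per port.
for p in 9001 9002 9003 9004 9005; do
  if (exec 3<>/dev/tcp/localhost/$p) 2>/dev/null; then
    echo "port $p open"
  else
    echo "port $p closed"
  fi
done
```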
4) install Jmxtrans rpm https://github.com/lookfirst/jmxtrans/downloads

https://github.com/downloads/lookfirst/jmxtrans/jmxtrans-287e3ce6fe-0.noarch.rpm

5) Go to the install directory /usr/share/jmxtrans
6) Change the default directories for logs, config, and json to your liking
     ( defaults to /var/lib/jmxtrans for json, /var/log/jmxtrans for log,
and /etc/sysconfig/jmxtrans for conf )
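For step 6, one way to do it is via the sysconfig file rather than editing
the startup script. The variable names below are assumptions on my part;
check which ones the jmxtrans.sh shipped in your rpm actually reads:

```shell
# /etc/sysconfig/jmxtrans -- variable names are assumptions, verify them
# against your jmxtrans.sh before relying on this:
export JSON_DIR=/var/lib/jmxtrans
export LOG_DIR=/var/log/jmxtrans
```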

7) put your .json queries in the json directory

*Example:*
*Servers*: contains the servers and ports you want to query; this example
queries *localhost:9001* ( the tablet server, if you are using the Jmx remote
ports specified above )
*Queries*: contains the various MBeans you want to query; this example
queries all Accumulo Tablet Server MBeans ( *you can also query
java.lang and hotspot and other such beans* )
The *GangliaWriter* specifies that you want to send the results to Ganglia,
although you can send them anywhere
     *resultAlias* defines a prefix for all metrics returned by this
query ( this query gets them all, but you can be more specific if you want );
this is useful for viewing, for example, all nodes' ingest rates
side by side.
     *groupName* defines how Ganglia will group these metrics ( split them
up so you don't see 100 graphs at a time )
     *host* specifies where to send the results. I define Monitor in
/etc/hosts as 127.0.0.1, but you can send them to any machine running a
gmond daemon that will be polled by a gmetad daemon

If you want to query a cluster, just copy the file and replace localhost
with the IP or fully qualified domain name of each node ( make sure it's
reachable ).
If you have a custom Ganglia port you can specify it rather than 8649:
{
"servers":
[{
"host":"localhost",
"port":"9001",
"queries":
[{
"obj":"accumulo.server.metrics:service=TServerInfo,name=TabletServerMBean,instance=tserver",
"resultAlias":"Accumulo",
"outputWriters":
[{
"@class":"com.googlecode.jmxtrans.model.output.GangliaWriter",
"settings":
{
"groupName" : "TS Overall",
"host" : "Monitor",
"port" : 8649,
"typeNames" : [""]
}
}]
}, {
"obj":"accumulo.server.metrics:service=TServerInfo,name=TabletServerMinCMetricsMBean,instance=tserver",
"resultAlias":"Accumulo",
"outputWriters":
[{
"@class":"com.googlecode.jmxtrans.model.output.GangliaWriter",
"settings":
{
"groupName" : "TS Minor Compactions",
"host" : "Monitor",
"port" : 8649,
"typeNames" : [""]
}
}]
}, {
"obj":"accumulo.server.metrics:service=TServerInfo,name=TabletServerScanMetricsMBean,instance=tserver",
"resultAlias":"Accumulo",
"outputWriters":
[{
"@class":"com.googlecode.jmxtrans.model.output.GangliaWriter",
"settings":
{
"groupName" : "TS Scan Metrics",
"host" : "Monitor",
"port" : 8649,
"typeNames" : [""]
}
}]
}, {
"obj":"accumulo.server.metrics:service=TServerInfo,name=TabletServerUpdateMetricsMBean,instance=tserver",
"resultAlias":"Accumulo",
"outputWriters":
[{
"@class":"com.googlecode.jmxtrans.model.output.GangliaWriter",
"settings":
{
"groupName" : "TS Update Metrics",
"host" : "Monitor",
"port" : 8649,
"typeNames" : [""]
}
}]
}, {
"obj":"accumulo.server.metrics:service=TabletServer,name=ThriftMetricsMBean,instance=Thrift Client Server",
"resultAlias":"Accumulo",
"outputWriters":
[{
"@class":"com.googlecode.jmxtrans.model.output.GangliaWriter",
"settings":
{
"groupName" : "TS Thrift Metrics",
"host" : "Monitor",
"port" : 8649,
"typeNames" : [""]
}
}]
}]
}, {
"host":"localhost",
"port":"9005",
"queries":
[{
"obj":"accumulo.server.metrics:service=LogWriter,name=LogWriterMBean,instance=logger",
"resultAlias":"Accumulo",
"outputWriters":
[{
"@class":"com.googlecode.jmxtrans.model.output.GangliaWriter",
"settings":
{
"groupName" : "Accumulo Logger",
"host" : "Monitor",
"port" : 8649,
"typeNames" : [""]
}
}]
}]
}]
}
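The per-node copies mentioned above can be stamped out with sed. A sketch,
assuming the file above is saved as accumulo.json; the node names are
placeholders, and a stand-in file is used here so the example is runnable:

```shell
# Stand-in query file containing "localhost" (the real one is the full
# file shown above, saved into the jmxtrans json directory):
echo '{"servers":[{"host":"localhost","port":"9001"}]}' > /tmp/accumulo.json
# Stamp out one copy per node, swapping in each node's hostname:
for node in tserver1.example.com tserver2.example.com; do
  sed "s/localhost/$node/" /tmp/accumulo.json > /tmp/accumulo-$node.json
done
ls /tmp/accumulo-*.json
```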

8) install ganglia and ganglia web if it's not already installed on your
cluster ( the new ganglia web looks nicer & has more functionality )
9) ./jmxtrans.sh start
10) also remember to turn on the Ganglia metrics for Hadoop if you want to
see mapreduce / hdfs / hadoop jvm statistics
11) restart the gmond and gmetad processes
12) go to the ganglia ui
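If jmxtrans starts but no graphs appear, a first thing to check is that
every query file actually parses as JSON ( python is used here purely as a
parser; the path is the default json directory from step 6 ):

```shell
# Validate each query file in the jmxtrans json directory; a malformed
# file can keep its queries from ever running, so check explicitly.
for f in /var/lib/jmxtrans/*.json; do
  python -m json.tool "$f" > /dev/null 2>&1 && echo "$f: OK" || echo "$f: INVALID"
done
```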

That's it ;) Cheers,
Miguel

Special thanks to Eric Newton for help getting started