Flume >> mail # user >> Archive Task Logs (Stdout, Stderr, Syslogs) and JobTracker logs of a Hadoop cluster for later analysis


Hi,
I need to collect log data from our cluster.

For this, I think I need to copy the contents of:
* JobTracker: /var/log/hadoop-0.20-mapreduce/history/
* TaskTracker: /var/log/hadoop-0.20-mapreduce/userlogs/

It should also follow symlinks and copy recursively.

Is Flume the right tool to do this?

E.g. with the "Spooling Directory Source"?

Best Regards,
Christian.
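
A minimal sketch of the kind of Flume agent configuration the question describes, pairing a Spooling Directory Source with an HDFS sink. The agent name, channel and sink names, checkpoint directories, and the HDFS URL below are placeholders, not values from the thread. Note that the Spooling Directory Source of that era (Flume 1.3/1.4) does not descend into subdirectories or follow symlinks, and it expects files to be complete and immutable once they land in the spool directory, so actively written logs such as the TaskTracker userlogs would first need to be staged into a flat spool directory (for example by a periodic copy job).

  # archive-logs.conf: one agent ("agent1") watching the JobTracker history
  # directory and archiving every file that appears there to HDFS.

  agent1.sources  = src1
  agent1.channels = ch1
  agent1.sinks    = sink1

  # Spooling Directory Source: picks up completed files dropped into spoolDir
  # and renames them with a .COMPLETED suffix once they have been ingested.
  agent1.sources.src1.type       = spooldir
  agent1.sources.src1.spoolDir   = /var/log/hadoop-0.20-mapreduce/history
  agent1.sources.src1.fileHeader = true
  agent1.sources.src1.channels   = ch1

  # Durable file channel so buffered events survive an agent restart.
  agent1.channels.ch1.type          = file
  agent1.channels.ch1.checkpointDir = /var/lib/flume/checkpoint
  agent1.channels.ch1.dataDirs      = /var/lib/flume/data

  # HDFS sink: write the events out as plain text, rolling a new file
  # every 5 minutes regardless of size or event count.
  agent1.sinks.sink1.type              = hdfs
  agent1.sinks.sink1.hdfs.path         = hdfs://namenode:8020/archive/jobtracker-history
  agent1.sinks.sink1.hdfs.fileType     = DataStream
  agent1.sinks.sink1.hdfs.writeFormat  = Text
  agent1.sinks.sink1.hdfs.rollInterval = 300
  agent1.sinks.sink1.hdfs.rollSize     = 0
  agent1.sinks.sink1.hdfs.rollCount    = 0
  agent1.sinks.sink1.channel           = ch1

The agent would then be started with something like: flume-ng agent --conf conf --conf-file archive-logs.conf --name agent1. Setting fileHeader = true keeps the originating file path in the event headers, which helps when several spool directories feed the same HDFS archive.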
Replies (collapsed in the archive):
* Israel Ekpo (2013-04-08, 17:41)
* Christian Schneider (2013-04-11, 07:56)