I am glad that I could help.

In our case, we mostly followed the configuration from here:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html
(adapting it a bit to our requirements, e.g. today we run 2GB containers
instead of 3-4GB, but that might change in the future). Also make sure that
the memory allocated in mapreduce.map.java.opts is smaller than
mapreduce.map.memory.mb (and the same for the reduce tasks).
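For example, in mapred-site.xml something along these lines keeps the JVM
heap below the container size (the values here are just illustrative, not
our production settings):

    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>2048</value>            <!-- container size for map tasks -->
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx1638m</value>       <!-- JVM heap, roughly 80% of the container -->
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>2048</value>            <!-- container size for reduce tasks -->
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx1638m</value>
    </property>

Leaving that headroom between the heap and the container limit avoids YARN
killing the container for exceeding its memory allocation.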
2013/12/11 Silvina Caíno Lores <[EMAIL PROTECTED]>