Accidentally hit send too soon.


A good rule of thumb: the aggregate of all Java heaps (daemons like
DataNode, RegionServer, NodeManager, etc., plus the max allowed number of
concurrent MapReduce tasks * the per-task heap setting) should fit into
available RAM.
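
To make that concrete, a back-of-the-envelope example with made-up numbers
(your heap sizes will differ): say a slave node runs a DataNode at 1 GB, a
RegionServer at 8 GB, and a NodeManager at 1 GB, and allows up to 8
concurrent tasks at 1 GB each. That's 1 + 8 + 1 + (8 * 1) = 18 GB of heap,
and you still want headroom for the OS and page cache, so a 16 GB box is
oversubscribed before the first task even starts.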

If you don't have enough available RAM, then you need to take steps to
reduce resource consumption. Limit the allowed number of concurrent
mapreduce tasks. Reduce the heap size specified in
'mapred.child.java.opts'. Or both.
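
For example, in mapred-site.xml (these are the classic MRv1 property names;
the values below are illustrative, not recommendations):

  <!-- cap concurrent tasks per TaskTracker -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>

  <!-- and/or shrink the per-task heap -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
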
On Tue, Jul 22, 2014 at 9:12 AM, Andrew Purtell <[EMAIL PROTECTED]> wrote:
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)

 