Re: HBase vs Hadoop memory configuration.
Hey JM,

  I suspect they are referring to the DN process only. It is important in
these discussions to talk about individual component memory usage. In
my experience most HBase clusters only need 1-2 GB of heap space for the
DN process. I am not a MapReduce expert, but typically the actual TT
process only needs 1 GB of memory; you then control everything else through
max slots and child heap. What is your current block count per DN?
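
For reference, in Hadoop 1.x those knobs live in hadoop-env.sh and
mapred-site.xml. A minimal sketch (the values here are illustrative
assumptions, not recommendations for your hardware):

  # hadoop-env.sh -- cap the DN heap explicitly; the TT inherits
  # HADOOP_HEAPSIZE (1000 MB by default), which is usually plenty
  export HADOOP_DATANODE_OPTS="-Xmx2g $HADOOP_DATANODE_OPTS"

  <!-- mapred-site.xml -- everything else is slots x child heap -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1g</value>
  </property>

With that, MR memory is bounded by (map slots + reduce slots) x child
heap, 6 GB in this sketch, regardless of which jobs run.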

On Sun, Jan 27, 2013 at 9:28 AM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:

> Hi,
>
> I saw in another message that Hadoop only needs 1 GB...
>
> Today my nodes are configured with 45% of memory for HBase, 45%
> for Hadoop, and the last 10% for the OS.
>
> Should I change that to 1 GB for Hadoop, 10% for the OS, and the
> rest for HBase? Even when running MR jobs?
>
> Thanks,
>
> JM
>
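
To put rough numbers on your question: on, say, a 16 GB node running
DN + TT + RS (an assumed layout, just to make the arithmetic
concrete), the budget looks more like:

  OS and page cache               ~1.5 GB
  DataNode heap                    ~2 GB
  TaskTracker heap                  1 GB
  6 task slots x 1 GB child heap    6 GB
  RegionServer heap                 the rest, ~5.5 GB

So the fixed Hadoop daemons need far less than 45%; the part that
grows with MR jobs is slots x child heap, which you cap directly.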

--
Kevin O'Dell
Customer Operations Engineer, Cloudera