Re: HBase vs Hadoop memory configuration.
Hi Kevin,

What do you mean by "current block count per DN"? I kept the standard settings.

fsck is telling me that I have 10893 total blocks. Since I have 8
nodes, that gives me about 1361 blocks per node.

Is that what you are asking?
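For reference, the total block count quoted above comes from the fsck report, and the per-node figure is just that total divided by the number of DataNodes. A minimal sketch (the fsck command assumes a running HDFS cluster, so only the arithmetic actually executes here):

```shell
# The total block count can be read from fsck output on a live cluster:
#   hadoop fsck / | grep 'Total blocks'
# With 10893 total blocks spread across 8 DataNodes:
total_blocks=10893
datanodes=8
echo $(( total_blocks / datanodes ))   # prints 1361 (integer division)
```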


2013/1/27, Kevin O'dell <[EMAIL PROTECTED]>:
> Hey JM,
>   I suspect they are referring to the DN process only.  It is important in
> these discussions to talk about individual component memory usage.  In
> my experience most HBase clusters only need 1-2 GB of heap space for the
> DN process.  I am not a MapReduce expert, but typically the actual TT
> process only needs 1 GB of memory; you then control everything else through
> max slots and child heap.  What is your current block count per DN?
> On Sun, Jan 27, 2013 at 9:28 AM, Jean-Marc Spaggiari <
>> Hi,
>> I saw on another message that Hadoop only needs 1 GB...
>> Today, I have configured my nodes with 45% of memory for HBase, 45%
>> for Hadoop. The last 10% is for the OS.
>> Should I change that to 1 GB for Hadoop, 10% for the OS, and the rest
>> for HBase? Even when running MR jobs?
>> Thanks,
>> JM
> --
> Kevin O'Dell
> Customer Operations Engineer, Cloudera
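
Kevin's sizing advice might translate into settings like the following sketch. All of the numbers (1 GB DN/TT heaps, 4 map + 2 reduce slots, 1 GB child heap, 8 GB HBase heap) are illustrative assumptions for a hypothetical ~16 GB node, not recommendations from the thread:

```shell
# hadoop-env.sh -- per-daemon heaps (assumed values)
export HADOOP_DATANODE_OPTS="-Xmx1g $HADOOP_DATANODE_OPTS"        # DN: 1-2 GB is typically enough
export HADOOP_TASKTRACKER_OPTS="-Xmx1g $HADOOP_TASKTRACKER_OPTS"  # TT process itself: ~1 GB

# mapred-site.xml -- total MR memory is capped by slots x child heap:
#   mapred.tasktracker.map.tasks.maximum    = 4       (assumed)
#   mapred.tasktracker.reduce.tasks.maximum = 2       (assumed)
#   mapred.child.java.opts                  = -Xmx1g  (assumed)
# i.e. up to (4 + 2) x 1 GB = 6 GB for task JVMs at full load.

# hbase-env.sh -- give most of the remainder (minus ~10% for the OS) to HBase
export HBASE_HEAPSIZE=8192   # in MB; assumed for a node with ~16 GB RAM
```

With a layout like this, the fixed daemon heaps stay small and the variable MR footprint is bounded by the slot counts, which is the "control everything else through max slots and child heap" approach described above.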