HBase user mailing list: HBase vs Hadoop memory configuration


Jean-Marc Spaggiari 2013-01-27, 14:28
Kevin Odell 2013-01-27, 15:03

Re: HBase vs Hadoop memory configuration.
Hi Kevin,

What do you mean by "current block count per DN"? I kept the standard settings.

fsck is telling me that I have 10893 total blocks. Since I have 8
nodes, that works out to about 1361 blocks per node.

Is that what you are asking?

JM
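
For reference, the per-node figure above can be pulled straight from the
fsck report. A minimal Python sketch, assuming the Hadoop 1.x output label
"Total blocks (validated):" and the 8 DataNodes mentioned here:

    #!/usr/bin/env python
    # Parse `hadoop fsck /` output and estimate blocks per DataNode.
    import re
    import subprocess

    NUM_DATANODES = 8  # JM's cluster size

    report = subprocess.check_output(["hadoop", "fsck", "/"]).decode()
    m = re.search(r"Total blocks \(validated\):\s+(\d+)", report)
    total_blocks = int(m.group(1))  # 10893 on this cluster
    print("~%d blocks per DN" % (total_blocks // NUM_DATANODES))  # ~1361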

2013/1/27, Kevin O'dell <[EMAIL PROTECTED]>:
> Hey JM,
>
>   I suspect they are referring to the DN process only.  It is important in
> these discussions to talk about individual component memory usage.  In
> my experience most HBase clusters only need 1 - 2 GB of heap space for the
> DN process.  I am not a MapReduce expert, but typically the actual TT
> process only needs 1GB of memory, and then you control everything else
> through max slots and child heap.  What is your current block count per DN?
>
> On Sun, Jan 27, 2013 at 9:28 AM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> I saw in another message that Hadoop only needs 1GB...
>>
>> Today, I have configured my nodes with 45% of memory for HBase, 45%
>> for Hadoop. The last 10% is for the OS.
>>
>> Should I change that to 1GB for Hadoop, 10% for the OS, and the rest
>> for HBase? Even when running MR jobs?
>>
>> Thanks,
>>
>> JM
>>
>
>
>
> --
> Kevin O'Dell
> Customer Operations Engineer, Cloudera
>
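
Putting Kevin's rules of thumb against the 45%/45%/10% split in the quoted
question: a back-of-the-envelope Python sketch, assuming a hypothetical
16 GB node and hypothetical slot counts (neither is stated in this thread):

    # Budget a node's RAM per Kevin's advice: fixed daemon heaps plus
    # per-slot child heaps, rather than a flat percentage split.
    TOTAL_GB = 16.0                 # assumed node size (not in the thread)
    os_reserve = 0.10 * TOTAL_GB    # ~10% left to the OS
    dn_heap = 1.5                   # DataNode: 1 - 2 GB per Kevin
    tt_heap = 1.0                   # TaskTracker daemon itself
    map_slots, reduce_slots = 4, 2  # hypothetical slot counts
    child_heap_gb = 1.0             # e.g. mapred.child.java.opts -Xmx1g
    mr_children = (map_slots + reduce_slots) * child_heap_gb

    hbase_heap = TOTAL_GB - os_reserve - dn_heap - tt_heap - mr_children
    print("HBase RegionServer heap: %.1f GB" % hbase_heap)  # 5.9 GB here

On these assumed numbers HBase ends up with roughly 37% of the node rather
than 45%, and the MapReduce share floats with the slot count instead of
being pinned to a fixed percentage.
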
Kevin Odell 2013-01-27, 16:16
Jean-Marc Spaggiari 2013-01-27, 16:33
Kevin Odell 2013-01-28, 14:45
karunakar 2013-01-29, 01:46
Jean-Marc Spaggiari 2013-01-29, 20:26