Re: HBase vs Hadoop memory configuration
Kevin O'dell 2013-01-27, 16:16
That is probably correct. You can check the NN UI and confirm that
number, but it doesn't seem too far off for an HBase cluster. You will be
fine with just 1GB of heap for the DN with a block count that low.
Typically you don't need to raise the heap until you are looking at a
couple hundred thousand blocks per DN.
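For reference, this is roughly how that 1GB DN heap can be set in hadoop-env.sh on a Hadoop 1.x cluster (a minimal sketch; the values just mirror the advice above):

    # hadoop-env.sh -- default heap for all Hadoop daemons, in MB
    export HADOOP_HEAPSIZE=1000
    # Or override only the DataNode; the later -Xmx wins over HADOOP_HEAPSIZE
    export HADOOP_DATANODE_OPTS="-Xmx1g $HADOOP_DATANODE_OPTS"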
On Sun, Jan 27, 2013 at 10:43 AM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:
> Hi Kevin,
> What do you mean by "current block count per DN"? I kept the standard
> configuration, and fsck is telling me that I have 10893 total blocks.
> Since I have 8 nodes, that gives me about 1361 blocks per node.
> Is that what you are asking?
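For anyone following along, the per-DN estimate above comes straight out of fsck (a sketch; this assumes the standard fsck summary format):

    # cluster-wide block count from the fsck summary
    hadoop fsck / | grep 'Total blocks'
    #   Total blocks (validated):  10893
    # divided across the 8 DNs in this thread:
    # 10893 / 8 ~= 1361 blocks per DN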
> 2013/1/27, Kevin O'dell <[EMAIL PROTECTED]>:
> > Hey JM,
> > I suspect they are referring to the DN process only. It is important in
> > these discussions to talk about individual component memory usage. In
> > my experience, most HBase clusters only need 1 - 2 GB of heap space for
> > the DN process. I am not a MapReduce expert, but typically the actual TT
> > process only needs 1GB of memory; you then control everything else through
> > max slots and child heap. What is your current block count per DN?
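The "max slots and child heap" knobs Kevin mentions live in mapred-site.xml on MRv1; a sketch with illustrative values only:

    <!-- mapred-site.xml (MRv1) -- slot counts and per-task heap;
         the values here are examples, not recommendations -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>4</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>
    </property>
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx512m</value>
    </property>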
> > On Sun, Jan 27, 2013 at 9:28 AM, Jean-Marc Spaggiari <
> > [EMAIL PROTECTED]> wrote:
> >> Hi,
> >> I saw in another message that Hadoop only needs 1GB...
> >> Today, my nodes are configured with 45% of the memory for HBase, 45%
> >> for Hadoop, and the last 10% for the OS.
> >> Should I change that to 1GB for Hadoop, 10% for the OS, and the rest
> >> for HBase? Even when running MR jobs?
> >> Thanks,
> >> JM
> > --
> > Kevin O'Dell
> > Customer Operations Engineer, Cloudera
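Putting the advice together, a rough worked split for a hypothetical 16GB node (the node size and slot counts are assumptions, not numbers from this thread):

    # OS:                       ~1.6 GB (10%)
    # DataNode heap:             1 GB
    # TaskTracker heap:          1 GB
    # MR child tasks:            6 slots x 512MB = 3 GB
    # HBase RegionServer heap:   the remaining ~9 GB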