

Re: HBase vs Hadoop memory configuration.
From the UI:
15790 files and directories, 11292 blocks = 27082 total. Heap Size is
179.12 MB / 910.25 MB (19%)

I'm setting the memory in the hadoop-env.sh file using:
export HADOOP_HEAPSIZE=1024

I think that's fine for the datanodes, but does it also mean each task
tracker, job tracker and name node will take 1GB? So 2GB to 4GB on each
server (1 NN+JT+DN+TT and 7 DN+TT)? Or will it be 1GB in total?

And if we say 1GB for the DN, how much should we reserve for the
other daemons? I want to make sure I give HBase the maximum I can
without starving Hadoop...

JM
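
For reference, in a stock Hadoop 1.x install HADOOP_HEAPSIZE sets the default -Xmx (in MB) for every daemon launched through bin/hadoop, so a NN+JT+DN+TT node runs four JVMs of that size rather than sharing 1GB. A minimal hadoop-env.sh sketch, assuming a 1.x layout where the per-daemon *_OPTS variables are appended after the default -Xmx (the last -Xmx on the JVM command line normally wins); the sizes are illustrative, not recommendations:

# Default heap, in MB, for every daemon started via bin/hadoop
export HADOOP_HEAPSIZE=1024

# Per-daemon overrides: appended after the default -Xmx, so they take precedence
export HADOOP_NAMENODE_OPTS="-Xmx1024m $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Xmx1024m $HADOOP_DATANODE_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Xmx1024m $HADOOP_JOBTRACKER_OPTS"
export HADOOP_TASKTRACKER_OPTS="-Xmx1024m $HADOOP_TASKTRACKER_OPTS"

With the daemon heaps pinned this way, whatever remains after the DN, TT (plus its child task JVMs) and the OS is what can go to the HBase RegionServer.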

2013/1/27, Kevin O'dell <[EMAIL PROTECTED]>:
> JM,
>
>   That is probably correct.  You can check the NN UI and confirm that
> number, but it doesn't seem too far off for an HBase cluster.  You will be
> fine with just 1GB of heap for the DN with a block count that low.
>  Typically you don't need to raise the heap until you are looking at a
> couple hundred thousand blocks per DN.
>
> On Sun, Jan 27, 2013 at 10:43 AM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Kevin,
>>
>> What do you mean by "current block count per DN"? I kept the standard
>> settings.
>>
>> fsck is telling me that I have 10893 total blocks. Since I have 8
>> nodes, that gives me about 1361 blocks per node.
>>
>> Is that what you are asking?
>>
>> JM
>>
>> 2013/1/27, Kevin O'dell <[EMAIL PROTECTED]>:
>> > Hey JM,
>> >
>> >   I suspect they are referring to the DN process only.  It is important
>> > in these discussions to talk about individual component memory usage.  In
>> > my experience most HBase clusters only need 1 - 2 GB of heap space for
>> > the DN process.  I am not a Map Reduce expert, but typically the actual TT
>> > process only needs 1GB of memory, then you control everything else
>> > through max slots and child heap.  What is your current block count per DN?
>> >
>> > On Sun, Jan 27, 2013 at 9:28 AM, Jean-Marc Spaggiari <
>> > [EMAIL PROTECTED]> wrote:
>> >
>> >> Hi,
>> >>
>> >> I saw in another message that Hadoop only needs 1GB...
>> >>
>> >> Today, I have configured my nodes with 45% of the memory for HBase and
>> >> 45% for Hadoop. The last 10% is for the OS.
>> >>
>> >> Should I change that to 1GB for Hadoop, 10% for the OS and the rest
>> >> for HBase? Even when running MR jobs?
>> >>
>> >> Thanks,
>> >>
>> >> JM
>> >>
>> >
>> >
>> >
>> > --
>> > Kevin O'Dell
>> > Customer Operations Engineer, Cloudera
>> >
>>
>
>
>
> --
> Kevin O'Dell
> Customer Operations Engineer, Cloudera
>
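
On Kevin's block-count question: both the NN web UI figure quoted at the top and "hadoop fsck /" report the cluster-wide total, so dividing by the number of DataNodes gives the rough per-DN count he is asking about. A small shell sketch, assuming Hadoop 1.x fsck output that contains a "Total blocks (validated)" line and the 8-node cluster from this thread:

# Rough blocks-per-DataNode estimate; the fsck wording may differ between versions
TOTAL_BLOCKS=$(hadoop fsck / 2>/dev/null | grep -i 'Total blocks' | grep -o '[0-9]\+' | head -n 1)
DATANODES=8   # DataNodes in this cluster; adjust for yours
echo "~$(( TOTAL_BLOCKS / DATANODES )) blocks per DataNode"

The "max slots and child heap" Kevin mentions are, in MRv1 terms, presumably mapred.tasktracker.map.tasks.maximum, mapred.tasktracker.reduce.tasks.maximum and mapred.child.java.opts in mapred-site.xml; every occupied slot runs a child JVM whose heap comes out of the node's budget on top of the ~1GB TT process itself, which is worth keeping in mind when deciding how much to leave for the HBase RegionServer.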