HBase >> mail # user >> Memory distribution for Hadoop/Hbase processes
Memory distribution for Hadoop/Hbase processes
Hi,
I have configured HBase in pseudo-distributed mode with HDFS as the underlying
storage. I am not using the MapReduce framework as of now.
I have 4GB RAM.
Currently I have the following distribution of memory:

DataNode, NameNode, Secondary NameNode: 1000 MB each (the default
HADOOP_HEAPSIZE property)

HMaster - 512 MB
HRegionServer - 1536 MB
ZooKeeper - 512 MB

So the total heap allocation comes to about 5.5 GB, which seems absurd since my
total RAM is only 4 GB, yet the setup is working fine in production. :-O
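For reference, here is a quick sanity check of that total, summing the configured per-process heap maximums from above (three Hadoop daemons at the HADOOP_HEAPSIZE default, plus HMaster, HRegionServer, and ZooKeeper):

```shell
# Sum the configured per-process max heap sizes (MB):
# DataNode, NameNode, Secondary NameNode, HMaster, HRegionServer, ZooKeeper.
total=0
for h in 1000 1000 1000 512 1536 512; do
  total=$((total + h))
done
echo "${total} MB configured vs 4096 MB physical RAM"
# prints "5560 MB configured vs 4096 MB physical RAM"
```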

My questions are :
1) How is this working?
2) I have just one table, currently about 10-15 GB in size, so what would be
the ideal memory distribution?
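Regarding question 1, one thing worth checking is the difference between each JVM's resident set size (RSS, the physical RAM it actually occupies) and its virtual size (VSZ, which also counts heap space that is reserved but not yet touched). A sketch for a Linux host, assuming the daemons show up as processes named `java`:

```shell
# List each Java process with its resident (RSS) and virtual (VSZ) size in KB.
# RSS is memory actually backed by physical RAM; VSZ includes reserved-but-
# unused address space, so the summed VSZ of all JVMs can legitimately exceed
# physical memory.
ps -o pid,rss,vsz,args -C java
```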
--
Thanks and Regards,
Vimal Jain