

Re: HBase vs Hadoop memory configuration.
Hi Jean,

As far as I know, the namenode can handle roughly 1 million blocks per 1 GB of
namenode heap; the exact figure depends on your configuration. Assuming a
128 MB dfs.block.size, that works out to:

dfs.block.size * 1 million blocks = 128 MB * 1,000,000 = 128 TB of data.
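That arithmetic can be checked with a quick shell sketch (the numbers are the
rule-of-thumb figures above, not measured values):

```shell
# Rule of thumb: ~1 million blocks addressable per 1 GB of namenode heap.
blocks=1000000      # blocks per 1 GB of namenode heap
block_mb=128        # dfs.block.size in MB
# 128 MB * 1,000,000 blocks = 128,000,000 MB = 128 TB
echo "$(( blocks * block_mb / 1000000 )) TB"
```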

Setting HADOOP_HEAPSIZE changes the heap size across all the daemons (note
that it takes a value in MB, e.g. export HADOOP_HEAPSIZE=2000, not a JVM
flag). Rather than using that, use the below configurations to size each
daemon individually.

You can give the namenode, datanode, jobtracker, and tasktracker a 2 GB heap
each by adding the following lines to hadoop-env.sh. Example:

export HADOOP_NAMENODE_OPTS="-Xmx2g"
export HADOOP_DATANODE_OPTS="-Xmx2g"
export HADOOP_JOBTRACKER_OPTS="-Xmx2g"
export HADOOP_TASKTRACKER_OPTS="-Xmx2g"
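One caveat (an assumption about typical hadoop-env.sh files, not something
the lines above guarantee): many distributions already populate these
variables with other options, so it is safer to append your flag rather than
overwrite the variable outright:

```shell
# hadoop-env.sh: add the heap flag while preserving any options the
# distribution already placed in these variables.
export HADOOP_NAMENODE_OPTS="-Xmx2g ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-Xmx2g ${HADOOP_DATANODE_OPTS}"
export HADOOP_JOBTRACKER_OPTS="-Xmx2g ${HADOOP_JOBTRACKER_OPTS}"
export HADOOP_TASKTRACKER_OPTS="-Xmx2g ${HADOOP_TASKTRACKER_OPTS}"
```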

Example: if you have a 16 GB server that is concentrating more on HBase, and
you are running the datanode, tasktracker, and regionserver on the same node,
then give 4 GB to the datanode, 2-3 GB to the tasktracker [sizing the child
JVMs accordingly], and 6-8 GB to the regionserver.
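Taking the upper end of those ranges, the split can be sanity-checked against
the 16 GB total; treating the leftover ~1 GB as headroom for the OS and other
processes is my assumption, not stated above:

```shell
# Upper-end allocation on a 16 GB HBase-leaning node.
datanode_gb=4
tasktracker_gb=3    # top of the 2-3 GB range
regionserver_gb=8   # top of the 6-8 GB range
used=$(( datanode_gb + tasktracker_gb + regionserver_gb ))
echo "allocated ${used} GB of 16 GB; $(( 16 - used )) GB left over"
```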

Thanks,
karunakar.
--
View this message in context: http://apache-hbase.679495.n3.nabble.com/HBase-vs-Hadoop-memory-configuration-tp4037436p4037573.html
Sent from the HBase User mailing list archive at Nabble.com.