HBase user mailing list - HBase vs Hadoop memory configuration


Re: HBase vs Hadoop memory configuration.
Jean-Marc Spaggiari 2013-01-29, 20:26
Thanks all for this information.

I have tried to adjust my settings to make sure the memory is used efficiently.

JM

2013/1/28, karunakar <[EMAIL PROTECTED]>:
> Hi Jean,
>
> AFAIK !!
>
> The NameNode can handle roughly 1 million blocks per 1 GB of NameNode heap
> (the exact ratio depends on the configuration). Assuming a dfs.block.size of
> 128 MB, 1 million blocks correspond to about 128 TB of data.
>
> Setting export HADOOP_HEAPSIZE=2048 in hadoop-env.sh (the value is in MB)
> changes the heap size for all of the Hadoop daemons at once. Rather than
> doing that, use the configurations below to size each daemon individually.
>
> You can give the NameNode, DataNode, JobTracker and TaskTracker 2 GB of heap
> each by adding the following lines to hadoop-env.sh. Example:
>
> export HADOOP_NAMENODE_OPTS="-Xmx2g"
> export HADOOP_DATANODE_OPTS="-Xmx2g"
> export HADOOP_JOBTRACKER_OPTS="-Xmx2g"
> export HADOOP_TASKTRACKER_OPTS="-Xmx2g"
>
> For example: if you have a 16 GB server that is mostly dedicated to HBase and
> runs a DataNode, a TaskTracker and a RegionServer on the same node, give about
> 4 GB to the DataNode, 2-3 GB to the TaskTracker (most of that budget goes to
> its child task JVMs) and 6-8 GB to the RegionServer.
>
> Thanks,
> karunakar.
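
For reference, the back-of-envelope arithmetic behind the rule of thumb quoted
above (roughly 1 million blocks per GB of NameNode heap, 128 MB blocks) can be
sketched as a quick shell calculation. The blocks-per-GB ratio and the decimal
MB-to-TB conversion are assumptions chosen to match the 128 TB figure above,
not hard limits:

# rough NameNode capacity estimate
# assumptions: ~1,000,000 blocks per GB of NameNode heap, dfs.block.size = 128 MB,
# and 1 TB counted as 1,000,000 MB to match the figure quoted above
NN_HEAP_GB=2
BLOCKS=$(( NN_HEAP_GB * 1000000 ))
DATA_TB=$(( BLOCKS * 128 / 1000000 ))
echo "${NN_HEAP_GB} GB heap -> ~${BLOCKS} blocks -> ~${DATA_TB} TB of data"
# prints: 2 GB heap -> ~2000000 blocks -> ~256 TB of data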
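
To make the 16 GB example concrete, a minimal sketch of the corresponding
hadoop-env.sh and hbase-env.sh entries could look like the lines below
(Hadoop 1.x / HBase 0.94-era variable names; the numbers are only illustrative
and leave headroom for the OS and for the MapReduce child task JVMs, which are
sized separately via mapred.child.java.opts in mapred-site.xml):

# hadoop-env.sh -- 16 GB node running DataNode + TaskTracker + RegionServer
export HADOOP_DATANODE_OPTS="-Xmx4g"
export HADOOP_TASKTRACKER_OPTS="-Xmx1g"   # the TaskTracker daemon itself stays small;
                                          # most of the 2-3 GB budget goes to its child task JVMs

# hbase-env.sh -- RegionServer heap (HBASE_HEAPSIZE is in MB)
export HBASE_HEAPSIZE=8192

After changing these files, restart the affected daemons and check the -Xmx
values with jps -v (or ps) to confirm they took effect.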