Thanks all for this information.
I have tried to adjust my settings to make sure the memory is used efficiently.
2013/1/28, karunakar <[EMAIL PROTECTED]>:
> Hi Jean,
> AFAIK, the namenode can handle roughly 1 million blocks per 1 GB of namenode
> heap, though it depends on the configuration:
> dfs.block.size * 1 million blocks = 128 TB of data [assuming the 128 MB
> default block size].
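> For reference, the block size used in that calculation is set in
> hdfs-site.xml; a minimal sketch, assuming the Hadoop 1.x property name
> (the value is in bytes):
>
>   <property>
>     <name>dfs.block.size</name>
>     <value>134217728</value>  <!-- 128 MB -->
>   </property>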
> Setting export HADOOP_HEAPSIZE=2048 in hadoop-env.sh (the value is in MB)
> will change the heap across all the daemons. Rather than doing that, use the
> configurations below for individual daemons.
> You can give the namenode, datanode, jobtracker and tasktracker a 2 GB heap
> each by adding the following lines to hadoop-env.sh. Example:
> export HADOOP_NAMENODE_OPTS="-Xmx2g"
> export HADOOP_DATANODE_OPTS="-Xmx2g"
> export HADOOP_JOBTRACKER_OPTS="-Xmx2g"
> export HADOOP_TASKTRACKER_OPTS="-Xmx2g"
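> A quick way to confirm a restarted daemon actually picked up the new heap,
> assuming the standard JDK tools are on the path (<pid> is whatever jps
> reports for that daemon):
>
>   jps                 # list the running Hadoop daemons and their PIDs
>   jmap -heap <pid>    # MaxHeapSize should report roughly 2 GB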
> Ex: if you have a 16 GB server that is concentrating more on HBase, and you
> are running a datanode, tasktracker and regionserver on one node, then give
> 4 GB to the datanode, 2-3 GB to the tasktracker [including the child JVMs],
> and the remaining memory to the regionserver.
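> The child JVM heap mentioned above is configured separately from the
> tasktracker daemon itself; a minimal sketch for mapred-site.xml, assuming
> the Hadoop 1.x property name and an illustrative per-task value:
>
>   <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx512m</value>  <!-- heap per map/reduce task, not per tasktracker -->
>   </property>
>
> The regionserver heap is set in hbase-env.sh rather than hadoop-env.sh,
> e.g. export HBASE_HEAPSIZE=8192 (value in MB; illustrative).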