In general, with large hardware machines like the ones you have, you can set
dfs.namenode.handler.count = 64. The default is 10, and you can increase it
in proportion to the size of the cluster.
dfs.datanode.handler.count defaults to 3, but you can raise it to around
6 to 10. Some blog posts I have read report that increasing it beyond that
only increases memory consumption with no performance gain.
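As a sketch, both of these settings live in hdfs-site.xml; the values below are just the ones suggested above, not anything mandated by Hadoop:

```xml
<!-- hdfs-site.xml: illustrative values only -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>64</value> <!-- default is 10 -->
</property>
<property>
  <name>dfs.datanode.handler.count</name>
  <value>8</value>  <!-- default is 3; somewhere in the 6-10 range -->
</property>
```

Both daemons need a restart to pick up the new handler counts.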
mapreduce.jobtracker.handler.count is the number of server threads the
jobtracker runs. The stock Hadoop configuration documentation recommends
keeping it at roughly 4% of the number of tasktracker nodes.
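To make the sizing rules concrete, here is a small sketch applying two commonly cited rules of thumb (the 20 * ln(cluster size) heuristic for the namenode is from general Hadoop operations guidance, not from this thread; the function names are mine):

```python
import math

def namenode_handlers(datanodes):
    # Commonly cited heuristic: ~20 * ln(number of datanodes),
    # never below the stock default of 10.
    return max(10, int(20 * math.log(datanodes)))

def jobtracker_handlers(tasktrackers):
    # Roughly 4% of the number of tasktracker nodes,
    # never below the stock default of 10.
    return max(10, math.ceil(0.04 * tasktrackers))

# For the 18-datanode cluster in the question:
print(namenode_handlers(18))   # ~57, so 64 is a reasonable round-up
print(jobtracker_handlers(18)) # 4% of 18 is below 10, so the default is fine
```

So for an 18-node cluster the jobtracker default is already adequate, and 64 namenode handlers is in the right ballpark.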
mapred.child.java.opts -- I am not completely sure about this one. By
default it is -Xmx200m; these are the JVM options that tasktrackers use
when launching their child task processes, unless you override them from
the client side. I may be wrong on this.
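For reference, a sketch of what the cluster-wide override looks like in mapred-site.xml; the -Xmx512m heap below is purely illustrative (the thread does not recommend a specific value), and individual jobs can still override it at submission time:

```xml
<!-- mapred-site.xml: illustrative value only -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value> <!-- default is -Xmx200m -->
</property>
```

When picking a heap size, keep in mind that (map slots + reduce slots) per node times the child heap must fit in the node's 64 GB alongside the datanode and tasktracker daemons.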
On Sun, Dec 22, 2013 at 6:50 PM, sam liu <[EMAIL PROTECTED]> wrote:
> We have a 20-node cluster (1 namenode, 1 jobtracker, 18 datanodes). Each
> node has 20 CPU cores and 64 GB memory.
> How should we set the values for the following parameters?
> - dfs.namenode.handler.count
> - dfs.datanode.handler.count
> - mapred.child.java.opts
> Thanks very much!