I set up a cluster with 3 nodes and did not submit any jobs to it. A few
days later, however, I found the cluster was unhealthy:
- 'hadoop dfs -ls /' and 'hadoop dfsadmin -report' hung for a long time and
returned no result
- The NameNode web UI at 'http://namenode:50070' would not load
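
For what it's worth, a quick way to confirm whether the daemons were still
alive would be jps on each node (a sketch; jps and jstack ship with the JDK,
and the pid below is a placeholder):

    # On node1 I would expect NameNode, JobTracker, DataNode and TaskTracker
    # (plus possibly SecondaryNameNode); on node2/node3 only DataNode and
    # TaskTracker.
    jps
    # If a daemon is listed but unresponsive, a thread dump may show why:
    jstack <namenode-pid>
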
I did not find any useful information in the logs, but I noticed that the
available memory on the cluster nodes was very low at the time:
- node1 (NN, JT, DN, TT): 158 MB memory available
- node2 (DN, TT): 75 MB memory available
- node3 (DN, TT): 174 MB memory available
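
(These figures are roughly what free -m reported on each node:)

    # On Linux, the 'free' column of the '-/+ buffers/cache' row is the
    # memory actually available to applications, excluding page cache.
    free -m
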
I suspect the cluster problem is caused by a lack of memory, so my
questions are:
- With no jobs running, what are the minimum memory requirements for a
DataNode?
- How do I configure the minimum memory for the DataNode and NameNode? (A
sketch of my current understanding follows.)
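
My understanding (please correct me if wrong) is that each Hadoop daemon
runs in its own JVM whose maximum heap defaults to 1000 MB via
HADOOP_HEAPSIZE in conf/hadoop-env.sh, with per-daemon overrides possible.
Something like this is what I am considering for these small nodes (the
values are guesses, not recommendations):

    # conf/hadoop-env.sh
    # Default max heap in MB for all Hadoop daemons (stock default: 1000).
    export HADOOP_HEAPSIZE=256

    # Per-daemon overrides; an explicit -Xmx here should win over
    # HADOOP_HEAPSIZE because it appears later on the java command line.
    export HADOOP_NAMENODE_OPTS="-Xmx512m $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-Xmx128m $HADOOP_DATANODE_OPTS"

My reasoning for giving the NameNode the larger heap is that its memory
usage grows with the number of files and blocks in HDFS, while an idle
DataNode should need much less.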