Do you get any errors when trying to connect to the cluster, something like
'tried n times' or 'replicated 0 times'?
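Under memory pressure those symptoms usually leave traces in the daemon logs. A quick way to check is to grep for the common connection-retry and replication-failure messages; the log directory below is an assumption for a typical Hadoop 1.x install, so adjust it to your setup:

```shell
# Hypothetical log location; adjust HADOOP_LOG_DIR to your installation.
LOG_DIR=${HADOOP_LOG_DIR:-/var/log/hadoop}

# Scan the daemon logs for connection retries and failed block replication.
grep -h -E "Retrying connect to server|could only be replicated to 0 nodes" \
    "$LOG_DIR"/*.log 2>/dev/null || echo "no matching errors found"
```

If the grep turns up nothing, the fallback message is printed instead of failing silently.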
On Sun, May 12, 2013 at 7:28 PM, sam liu <[EMAIL PROTECTED]> wrote:
> I set up a cluster with 3 nodes, and after that I did not submit any job on
> it. But after a few days I found the cluster is unhealthy:
> - No result is returned for a while after issuing 'hadoop dfs -ls /' or
> 'hadoop dfsadmin -report'
> - The page at 'http://namenode:50070' could not be opened as expected...
> - ...
> I did not find any useful info in the logs, but found the available memory
> of the cluster nodes was very low at that time:
> - node1 (NN,JT,DN,TT): 158 MB of memory available
> - node2 (DN,TT): 75 MB of memory available
> - node3 (DN,TT): 174 MB of memory available
> I guess the issue with my cluster is caused by a lack of memory, and my
> questions are:
> - Without running jobs, what are the minimum memory requirements for the
> datanode and namenode?
> - How do I configure the minimum memory for the datanode and namenode?
> Sam Liu
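For reference, on Hadoop 1.x the daemon heap sizes are configured in conf/hadoop-env.sh rather than computed automatically; each daemon gets a JVM heap of HADOOP_HEAPSIZE MB (default 1000) unless overridden per daemon. A minimal sketch, where the specific values are illustrative assumptions and not recommendations:

```shell
# conf/hadoop-env.sh -- illustrative values, not tuning advice.

# Default maximum heap (in MB) for all Hadoop daemons; Hadoop 1.x
# uses 1000 MB if this is left unset.
export HADOOP_HEAPSIZE=256

# Per-daemon overrides are appended to the daemon's JVM options,
# so a later -Xmx takes precedence over HADOOP_HEAPSIZE.
export HADOOP_NAMENODE_OPTS="-Xmx512m $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Xmx256m $HADOOP_DATANODE_OPTS"
```

With three daemons per node (DN, TT, and on node1 also NN and JT), the sum of these heaps plus OS overhead has to fit in physical RAM, which is why the low free-memory figures above are a plausible cause of the hangs.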