We regularly hit this odd problem on our NameNode (Apache Hadoop 0.20.205.0), every couple of weeks or so:
The JobTracker logged this error:
2012-10-08 11:44:03,928 WARN org.apache.hadoop.hdfs.DFSClient: Problem
renewing lease for DFSClient_1416124356
java.io.IOException: Call to nn-virtual.x.y.z/188.8.131.52:8020 failed on local
exception: java.net.BindException: Cannot assign requested address
at $Proxy5.renewLease(Unknown Source)
2012-10-08 11:44:03,927 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: nn-virtual.x.y.z/184.108.40.206:8020. Already tried 9 time(s).
where "nn-virtual.x.y.z/220.127.116.11:8020" is our HDFS NameNode address.
I listed all the local addresses in use on the NN and counted roughly 24K open ports. Our ip_local_port_range is set to:
We are not at the limit, but very close. What's strange is that almost all of
the local ports are held by the NN process. There may be some gaps in the
list, but overall the NN seems to be using up nearly all the ephemeral ports
available in the range.
Right now I strongly suspect the "Cannot assign requested address" error is
due to a lack of free ports, although I'm not 100% sure, since the set of
ephemeral ports in use changes all the time.
Has anybody seen this before? Any pointers would be appreciated.
Also, we are using a virtual IP for the NN, and all the ports are opened on
the virtual IP address. Could that be related to the problem?
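For anyone wanting to check the same thing: a sketch of how to count the sockets bound to a given virtual IP (the VIP below is a placeholder, not our real address; assumes a little-endian Linux host, since /proc/net/tcp stores each address as a byte-swapped hex word):

```shell
# Hypothetical virtual IP of the NameNode -- replace with your own
VIP="192.0.2.10"

# Convert the dotted quad to the byte-swapped hex form used in /proc/net/tcp
HEXIP=$(printf '%02X%02X%02X%02X' $(echo "$VIP" | awk -F. '{print $4, $3, $2, $1}'))

# Count TCP sockets whose local address is the VIP
awk -v ip="$HEXIP" 'NR > 1 { split($2, a, ":"); if (a[1] == ip) n++ }
                    END { print n + 0 }' /proc/net/tcp
```

If that count tracks the ~24K figure, the VIP itself is where the ephemeral ports are being consumed.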
Thanks for your help,