MapReduce >> mail # user >> Hadoop on EC2 Managing Internal/External IPs

Re: Hadoop on EC2 Managing Internal/External IPs
Hi Igor,

Amazon offers a service where you can have a VPN gateway on your network
with a tunnel leading directly back to the network where your instances
live. That 10.123.x.x subnet would be connected off the VPN gateway on
your network, and you'd set up your routers/routing to push traffic for
that subnet toward the gateway.
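The routing step described above might look roughly like this on a Linux router on your side of the tunnel (a sketch only; 192.0.2.1 is a placeholder for your VPN gateway's inside address, and the subnet mask depends on how your VPC is carved up):

```
# Send traffic for the EC2-internal subnet toward the VPN gateway
# (placeholder addresses -- substitute your own gateway and prefix)
ip route add 10.123.0.0/16 via 192.0.2.1
```

With that route in place, external clients can reach the DataNodes on their private 10.123.x.x addresses, so the IPs the NameNode hands out resolve correctly.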

On Thu, Aug 23, 2012 at 12:34 PM, igor Finkelshteyn <[EMAIL PROTECTED]> wrote:

> Hi,
> I'm currently setting up a Hadoop cluster on EC2, and everything works
> just fine when accessing the cluster from inside EC2, but as soon as I try
> to do something like upload a file from an external client, I get timeout
> errors like:
> 12/08/23 12:06:16 ERROR hdfs.DFSClient: Failed to close file
> /user/some_file._COPYING_
> java.net.SocketTimeoutException: 65000 millis timeout while waiting for
> channel to be ready for connect. ch :
> java.nio.channels.SocketChannel[connection-pending remote=/10.123.x.x:50010]
> What's clearly happening is that my NameNode is resolving my DataNodes'
> IPs to their internal EC2 values instead of their external values, then
> sending the internal IP along to my external client, which is obviously
> unable to reach it. I'm thinking this must be a common problem. How do
> other people deal with it? Is there a way to just force my NameNode to
> send along my DataNodes' hostnames instead of IPs, so that the hostnames
> can be resolved properly from whatever box will be sending files?
> Eli
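For the specific "send hostnames instead of IPs" question, later Hadoop releases added client-side support for this (see HDFS-3150): setting `dfs.client.use.datanode.hostname` makes the client connect to DataNodes by hostname rather than the IP the NameNode returned, so external DNS can resolve them to public addresses. A minimal hdfs-site.xml fragment on the client might look like (assuming a version that supports the property, and DataNode hostnames that resolve externally):

```
<!-- Client resolves DataNode hostnames itself instead of using
     the internal IPs reported by the NameNode -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
```

This only helps if each DataNode's hostname resolves to its external IP from the client's network; otherwise the VPN/routing approach above is the more robust fix.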