Checked the firewall rules?
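One quick way to rule out firewalls is to probe the port from each node. A minimal sketch (the hostnames `slave1`/`slave2` and port 38226 are taken from your log; substitute the port of the current run, since it changes each time):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, and unresolvable host
        return False

# In a healthy cluster, only the node where the AM's Socket Reader log
# line appeared (slave1 here) should accept the connection.
for host in ("slave1", "slave2"):
    print(host, can_connect(host, 38226))
```

If slave1 refuses the connection from slave2 but accepts it locally, a firewall rule between the nodes is the likely culprit; if slave2's tasks are genuinely dialing slave2 instead of slave1, check that hostname resolution (e.g. /etc/hosts) is consistent on all three nodes.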
On Jan 8, 2014, at 3:22 AM, Saeed Adel Mehraban <[EMAIL PROTECTED]> wrote:
> Hi all,
> I have an installation of Hadoop on 3 nodes, namely master, slave1 and slave2. When I run a job whose ApplicationMaster lands on slave1, every map and reduce task that runs on slave2 fails with a ConnectException.
> I checked the port slave2 is trying to connect to. It differs randomly each run, but when I search for it in the slave1 logs, I see this line:
> "2014-01-08 02:14:25,206 INFO [Socket Reader #1 for port 38226] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 38226"
> So there is a process on slave1 listening on this port, but the slave2 tasks try to connect to this port on slave2 instead.
> Do you know why this is happening?