
lohit 2009-03-29, 18:55
Koji Noguchi 2009-03-30, 06:44
lohit 2009-03-30, 15:50
Koji Noguchi 2009-03-30, 16:30
Re: Socket closed Exception

Thanks Koji, Raghu.
This seemed to solve our problem; we haven't seen it happen in the past 2 days. What is the typical value of ipc.client.idlethreshold on big clusters?
Does the default value of 4000 suffice?
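
For reference, both settings can be overridden in hadoop-site.xml. The property names below match hadoop-default.xml; the values are purely illustrative and should be tuned per cluster:

```xml
<!-- Illustrative values only; tune per cluster. -->
<property>
  <name>ipc.client.idlethreshold</name>
  <value>4000</value>
</property>
<property>
  <name>ipc.client.maxidletime</name>
  <value>10000</value>
</property>
```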


----- Original Message ----
From: Koji Noguchi <[EMAIL PROTECTED]>
Sent: Monday, March 30, 2009 9:30:04 AM
Subject: RE: Socket closed Exception


You're right. We saw "java.net.SocketTimeoutException: timed out
waiting for rpc response" and not the "Socket closed" exception.

If you're getting "closed exception", then I don't remember seeing that
problem on our clusters.

Our users often report the "Socket closed" exception as a problem, but in
most cases those failures are due to jobs failing for completely
different reasons, plus a race condition between 1) the JobTracker
removing the directory/killing tasks and 2) tasks failing with the
closed exception before they get killed.


-----Original Message-----
From: lohit [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 30, 2009 8:51 AM
Subject: Re: Socket closed Exception
Thanks Koji.
If I look at the code, the NameNode (RPC server) seems to tear down idle
connections. Did you see a 'Socket closed' exception instead of 'timed out
waiting for socket'?
We seem to hit the 'Socket closed' exception where clients do not
time out, but get back a socket closed exception when they do RPC.

I will give this a try. Thanks again,

----- Original Message ----
From: Koji Noguchi <[EMAIL PROTECTED]>
Sent: Sunday, March 29, 2009 11:44:29 PM
Subject: RE: Socket closed Exception

Hi Lohit,

My initial guess would be the following.

When this happened on our 0.17 cluster, all of our (task) clients were
using the max idle time of 1 hour due to this bug instead of the
configured value of a few seconds.
Thus each client kept the connection up much longer than we expected.
(Not sure if this applies to your 0.15 cluster, but it sounds similar to
what we observed.)

This worked until the namenode started hitting the max limit of
'ipc.client.idlethreshold':

  <property>
    <name>ipc.client.idlethreshold</name>
    <description>Defines the threshold number of connections after which
                 connections will be inspected for idleness.</description>
  </property>

When inspecting for idleness, the namenode uses 'ipc.client.maxidletime':

  <property>
    <name>ipc.client.maxidletime</name>
    <description>Defines the maximum idle time for a connected client
                 after which it may be disconnected.</description>
  </property>
As a result, many connections got disconnected at once.
Clients only see the timeouts when they try to re-use those sockets the
next time and wait for 1 minute.  That's why the failures are not exactly
at the same time, but *almost* the same time.
# If this solves your problem, Raghu should get the credit.
  He spent so many hours solving this mystery for us. :)
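
The cleanup behavior described above can be sketched as follows. This is a hypothetical simplification, not the actual Hadoop IPC Server code; the two constants stand in for ipc.client.idlethreshold and ipc.client.maxidletime (the 10-second idle time is an assumption, not quoted from the thread):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the idle-connection scan described above;
// not the real org.apache.hadoop.ipc.Server implementation.
class IdleScanSketch {
    static final int IDLE_THRESHOLD = 4000;   // stands in for ipc.client.idlethreshold
    static final long MAX_IDLE_MS = 10_000L;  // stands in for ipc.client.maxidletime (assumed)

    static class Conn {
        final long lastActivityMs;
        Conn(long lastActivityMs) { this.lastActivityMs = lastActivityMs; }
    }

    // Returns the connections that survive a cleanup pass at time 'nowMs'.
    static List<Conn> cleanup(List<Conn> conns, long nowMs) {
        if (conns.size() <= IDLE_THRESHOLD) {
            return conns;                     // below the threshold: no scan at all
        }
        List<Conn> kept = new ArrayList<>();
        for (Conn c : conns) {
            if (nowMs - c.lastActivityMs <= MAX_IDLE_MS) {
                kept.add(c);                  // recently active, keep it
            }
            // Idle too long: silently dropped. The client only notices
            // the next time it tries to re-use the dead socket.
        }
        return kept;
    }
}
```

This matches the symptom in the thread: nothing happens until the connection count crosses the threshold, and then many idle connections are torn down in one pass, so many clients fail at *almost* the same time on their next call.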
-----Original Message-----
From: lohit [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 29, 2009 11:56 AM
Subject: Socket closed Exception
Recently we are seeing a lot of 'Socket closed' exceptions in our cluster.
Many tasks' open/create/getFileInfo calls get back a 'SocketException'
with the message 'Socket closed'. We see many tasks fail with the same
error around the same time. There are no warning or info messages in the
NameNode/TaskTracker/Task logs. (This is on HDFS 0.15.) Are there cases
where the NameNode closes sockets due to heavy load or contention for
resources of any kind?
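
Calls that fail this way can, in principle, be retried so that a fresh connection is set up. A hypothetical sketch of such a wrapper (this is not part of the HDFS 0.15 API; `RetryOnSocketClosed` and its method names are invented for illustration):

```java
import java.net.SocketException;
import java.util.concurrent.Callable;

// Hypothetical client-side retry wrapper for RPC-style calls that may
// fail with "Socket closed" when the server tears down an idle
// connection. Not part of the HDFS 0.15 API.
class RetryOnSocketClosed {
    static <T> T call(Callable<T> op, int maxAttempts) throws Exception {
        SocketException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();   // e.g. an open/create/getFileInfo call
            } catch (SocketException e) {
                last = e;           // retry; a new connection is made next time
            }
        }
        throw last;                 // all attempts failed
    }
}
```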

Raghu Angadi 2009-03-30, 19:08
lohit 2009-03-30, 19:26