Do you happen to see something similar to:
10/03/17 15:47:58 WARN hdfs.DFSClient: NotReplicatedYetException sleeping
es_mstore_events_fact.txt retries left 4
10/03/17 15:47:58 INFO hdfs.DFSClient:
Other people have seen the above together with the Bad connect ack error.
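If it does turn out to be the xcievers limit, the usual fix on 0.20.x is to raise dfs.datanode.max.xcievers in hdfs-site.xml on every datanode and restart them (the property name really is spelled with that typo). A minimal sketch; 4096 is a commonly used value, not a recommendation tuned to your cluster:

```xml
<!-- hdfs-site.xml on each datanode; raises the cap on concurrent
     block transfer threads (default is quite low on 0.20.x) -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```

After restarting the datanodes, you can check whether you were actually hitting the old limit by grepping the datanode logs for a message along the lines of "exceeds the limit of concurrent xcievers".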
On Fri, Jul 9, 2010 at 2:06 PM, Raymond Jennings III
> Hi Ted, thanks for your reply. That does not seem to make a difference,
> though. I put that property in the xml file, restarted everything, and tried
> transferring the file again, but the same thing occurred.
> I had my cluster working perfectly for about a year, but I recently had some
> failures, scrubbed all of my machines, reinstalled Linux (same version),
> and moved from Hadoop 0.20.1 to 0.20.2.
> ----- Original Message ----
> From: Ted Yu <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Sent: Fri, July 9, 2010 4:26:30 PM
> Subject: Re: Help with Hadoop runtime error
> Please see the description about xcievers at:
> You can confirm that you have an xcievers problem by grepping the
> datanode logs for the error message pasted in the last bullet point.
> On Fri, Jul 9, 2010 at 1:10 PM, Raymond Jennings III
> <[EMAIL PROTECTED]>wrote:
> > Does anyone know what might be causing this error? I am using Hadoop
> > version 0.20.2, and it happens when I run bin/hadoop dfs -copyFromLocal ...
> > 10/07/09 15:51:45 INFO hdfs.DFSClient: Exception in
> > java.io.IOException: Bad connect ack with firstBadLink
> > 10/07/09 15:51:45 INFO hdfs.DFSClient: Abandoning block
> > blk_2932625575574450984_1002