Hadoop, mail # user - Bad connect ack with firstBadLink


Re: Bad connect ack with firstBadLink
madhu phatak 2012-05-07, 09:37
Hi,
 Increasing the open file limit solved the issue. Thank you.
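[Editor's note: the fix reported here, raising the open file limit, is usually done with `ulimit` and `/etc/security/limits.conf`. A minimal sketch follows; the `hdfs` user name and the value 32768 are illustrative assumptions, not from the thread — use whatever user runs your datanode and a value suited to your cluster.]

```shell
# Show the current per-process open-file limit for this shell/user
ulimit -n

# Raise it for the current session (only works up to the hard limit)
ulimit -n 32768 2>/dev/null || echo "hard limit too low; raise it in limits.conf"

# For a persistent change, add lines like these to /etc/security/limits.conf,
# then log the user out and back in and restart the datanode.
# ('hdfs' is an assumed user name -- substitute the user that runs the DataNode):
#   hdfs  soft  nofile  32768
#   hdfs  hard  nofile  32768
```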

On Fri, May 4, 2012 at 9:39 PM, Mapred Learn <[EMAIL PROTECTED]> wrote:

> Check your number of blocks in the cluster.
>
> This error can also indicate that your datanodes are fuller than they should be.
>
> Try deleting unnecessary blocks.
>
> On Fri, May 4, 2012 at 7:40 AM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:
>
> > Please see:
> >
> > http://hbase.apache.org/book.html#dfs.datanode.max.xcievers
> >
> > On Fri, May 4, 2012 at 5:46 AM, madhu phatak <[EMAIL PROTECTED]> wrote:
> >
> > > Hi,
> > > We are running a three-node cluster. For the past two days, whenever we
> > > copy a file to HDFS it throws java.io.IOException: Bad connect ack with
> > > firstBadLink. I searched the net but was not able to resolve the issue.
> > > The following is the stack trace from the datanode log:
> > >
> > > 2012-05-04 18:08:08,868 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-7520371350112346377_50118 received exception java.net.SocketException: Connection reset
> > > 2012-05-04 18:08:08,869 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.23.208.17:50010, storageID=DS-1340171424-172.23.208.17-50010-1334672673051, infoPort=50075, ipcPort=50020):DataXceiver
> > > java.net.SocketException: Connection reset
> > >        at java.net.SocketInputStream.read(SocketInputStream.java:168)
> > >        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
> > >        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
> > >        at java.io.DataInputStream.read(DataInputStream.java:132)
> > >        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:262)
> > >        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:309)
> > >        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:373)
> > >        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:525)
> > >        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
> > >        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
> > >        at java.lang.Thread.run(Thread.java:662)
> > >
> > >
> > > It would be great if someone could point me in the right direction to
> > > solve this problem.
> > >
> > > --
> > > https://github.com/zinnia-phatak-dev/Nectar
> > >
> >
>

--
https://github.com/zinnia-phatak-dev/Nectar
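[Editor's note: the HBase book link quoted above concerns the datanode's `dfs.datanode.max.xcievers` setting (the property name is genuinely misspelled in Hadoop 1.x), which caps concurrent DataXceiver threads and interacts with the open-file limit discussed in this thread. A sketch of raising it in hdfs-site.xml follows; the value 4096 is an illustrative choice, not from the thread.]

```xml
<!-- hdfs-site.xml on each datanode; restart the datanodes after changing.
     4096 is an example value, not a recommendation from this thread. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```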