Re: Bad connect ack with firstBadLink
When I checked my datanode logs, I found the following:

2009-11-16 18:58:19,884 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(XXX.XXX.XXX.XXX:50010, storageID=DS-1473795645-152.3.144.235-50010-1258414119856, infoPort=50075, ipcPort=50020):DataXceiver
java.io.IOException: xceiverCount 257 exceeds the limit of concurrent xcievers 256

Also, when I run netstat I see a lot of TCP connections stuck in the CLOSE_WAIT state, and they stay there until I restart HDFS.
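
The "xceiverCount 257 exceeds the limit" message means the datanode hit its cap on concurrent DataXceiver threads (one per open block transfer). If that cap itself is the bottleneck, it can usually be raised in hdfs-site.xml. A minimal sketch, assuming a 0.2x-era release where the property still carries Hadoop's historical misspelling (later releases rename it dfs.datanode.max.transfer.threads); each datanode must be restarted to pick up the change:

  <property>
    <!-- max concurrent DataXceiver threads per datanode; the old default was 256 -->
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>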

-Harold
--- On Mon, 11/16/09, Jason Venner <[EMAIL PROTECTED]> wrote:

> From: Jason Venner <[EMAIL PROTECTED]>
> Subject: Re: Bad connect ack with firstBadLink
> To: [EMAIL PROTECTED]
> Date: Monday, November 16, 2009, 10:39 AM
> The common reason for that is that something in
> your chain of apps has run out of file descriptors: usually
> your application, but on rare occasions the datanodes.
>
> The first time I saw this, with the hadoop dfs command,
> it only happened on one machine in our cluster, when the
> copyFrom included a wildcard, and raising the per-user file
> descriptor limit resolved it (see the sketch of checking and
> raising that limit after the quoted log below).
>
>
> On Mon, Nov 16, 2009 at 3:35 PM, Harold Lim <[EMAIL PROTECTED]> wrote:
>
> Hi All,
>
> I'm trying to copy files from my local filesystem to HDFS but after a while I'm getting a java.io.IOException. Any ideas why I'm getting this?
>
> /mount/hadoop-core-0.22.0-dev/bin# ./hdfs dfs -copyFromLocal /mount/filestore /filestore
>
> 09/11/15 00:36:59 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as XXXXX:50010
> 09/11/15 00:36:59 INFO hdfs.DFSClient: Abandoning block blk_5186243442349315665_1490
> 09/11/15 00:37:05 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as XXXXX:50010
> 09/11/15 00:37:05 INFO hdfs.DFSClient: Abandoning block blk_-8119590188070910888_1490
> 09/11/15 00:37:12 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as XXXXX:50010
> 09/11/15 00:37:12 INFO hdfs.DFSClient: Abandoning block blk_8182324531276884825_1492
> 09/11/15 00:37:18 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as XXXXX:50010
> 09/11/15 00:37:18 INFO hdfs.DFSClient: Abandoning block blk_6572699021740215303_1492
> 09/11/15 00:37:24 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
> 09/11/15 00:37:24 INFO hdfs.DFSClient: Abandoning block blk_-6796814980855591976_1492
> 09/11/15 00:37:30 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as XXXXX:50010
> 09/11/15 00:37:30 INFO hdfs.DFSClient: Abandoning block blk_-7019917111201756069_1494
> 09/11/15 00:37:36 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as XXXXX:50010
> 09/11/15 00:37:36 INFO hdfs.DFSClient: Abandoning block blk_6878833767189775663_1494
> 09/11/15 00:37:42 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as XXXXX:50010
> 09/11/15 00:37:42 INFO hdfs.DFSClient: Abandoning block blk_-272412162654992144_1494
> 09/11/15 00:37:48 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
> 09/11/15 00:37:48 INFO hdfs.DFSClient: Abandoning block blk_1592504437802093135_1494
> 09/11/15 00:37:54 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
>       at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSClient.java:3086)
>       at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2667)
>
> 09/11/15 00:37:54 WARN hdfs.DFSClient: Could not get block
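
Following up on Jason's file descriptor suggestion, a minimal sketch of how one might check the limits in play and raise them on a Linux box. The "hadoop" account name is an assumption; substitute whatever user your daemons and client actually run as:

  # soft limit on open files for the current shell's user
  ulimit -n

  # descriptors actually held by a running datanode (assumes one DataNode JVM)
  ls /proc/$(pgrep -f DataNode | head -1)/fd | wc -l

  # size of the CLOSE_WAIT pile-up described above
  netstat -tan | grep -c CLOSE_WAIT

To raise the limit persistently, add lines like these to /etc/security/limits.conf and start a fresh login session:

  hadoop  soft  nofile  32768
  hadoop  hard  nofile  32768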