HDFS, mail # user - datanode error "Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_"


Re: datanode error "Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_"
ch huang 2013-07-31, 00:46
Thanks for the reply. I checked, and the block does not exist — but why would it go missing?

On Wed, Jul 31, 2013 at 2:02 AM, Jitendra Yadav
<[EMAIL PROTECTED]>wrote:

> Hi,
>
> Can you please check the existence/status of each of the mentioned blocks
> in your HDFS cluster?
>
> Command:
> hdfs fsck / -files -blocks | grep 'blk number'
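[Editor's note: the grep step above can be tried in isolation. When run with `-files -blocks`, `hdfs fsck` prints one line per block of each file, so filtering that output for the block ID shows whether the NameNode still tracks it. A minimal sketch, assuming the usual fsck block-list line format — the sample line below is illustrative, not taken from this cluster; on a real cluster you would pipe `hdfs fsck` itself into `grep`:]

```shell
# Simulated output line in the style of `hdfs fsck /path -files -blocks`
# (format assumed for illustration).
sample='0. BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861 len=134217728 repl=3'

# Extract just the block ID; an empty result means the block is not listed,
# i.e. the NameNode no longer knows about that replica.
echo "$sample" | grep -o 'blk_7796221171187533460'
```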
>
> Thanks
>
> On 7/30/13, ch huang <[EMAIL PROTECTED]> wrote:
> > I do not know how to solve this — can anyone help?
> >
> > 2013-07-30 17:28:40,953 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861
> > 2013-07-30 17:28:40,953 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: CH34:50011:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.2.209:4421 dest: /192.168.10.34:50011
> > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:353)
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:489)
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
> >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:168)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:451)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> >         at java.lang.Thread.run(Thread.java:662)
> > 2013-07-30 17:28:40,978 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863 src: /192.168.2.209:4423 dest: /192.168.10.34:50011
> > 2013-07-30 17:28:40,978 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863
> > 2013-07-30 17:28:40,978 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: CH34:50011:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.2.209:4423 dest: /192.168.10.34:50011
> > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:353)
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:489)
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
> >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:168)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:451)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
> >         at