
HBase user mailing list: Premature EOF: no length prefix available


Re: Premature EOF: no length prefix available
But that's weird. This instance is running on AWS; if there were issues with
Hadoop and AWS, I think other people would have hit them before me.

OK, I will move the discussion to the Hadoop mailing list, since it seems to
be more related to Hadoop vs. the OS.

Thanks,

JM

2013/5/2 Andrew Purtell <[EMAIL PROTECTED]>

> > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available
>
> The DataNode aborted the block transfer.
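That message comes from the client reading the varint length prefix of the
DataNode's protobuf reply: if the DataNode aborts and closes the socket, the
very first read returns end-of-stream. A minimal sketch of that failure mode
in plain Java (class and helper names are made up for illustration, not the
actual HdfsProtoUtil code):

    import java.io.ByteArrayInputStream;
    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    public class VintPrefixSketch {
        // Decode a base-128 varint length prefix, the kind the client
        // expects before it can parse the DataNode's reply.
        static int readVarint(InputStream in) throws IOException {
            int result = 0;
            int shift = 0;
            int b;
            do {
                b = in.read();
                if (b < 0) {
                    // The peer closed the stream before sending any prefix
                    // byte: this is the exact message in the DFSClient log.
                    throw new EOFException("Premature EOF: no length prefix available");
                }
                result |= (b & 0x7f) << shift;
                shift += 7;
            } while ((b & 0x80) != 0);
            return result;
        }

        public static void main(String[] args) throws IOException {
            // An empty stream stands in for a socket the DataNode closed
            // as soon as it aborted the transfer.
            readVarint(new ByteArrayInputStream(new byte[0]));
        }
    }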
>
> > 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
> > java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> >        at java.io.RandomAccessFile.open(Native Method)
> >        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>
> This looks like the native (OS-level) side of RandomAccessFile got EINVAL back from create() or open(). Go from there.
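The "(Invalid argument)" suffix in that FileNotFoundException is the OS
strerror() text for the failing open(2), passed through by the JVM, rather
than anything Java-level. A hypothetical probe, with an assumed stand-in
path, that opens a file the same way the DataNode does and prints whatever
errno string comes back:

    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class RafProbe {
        public static void main(String[] args) {
            // Assumed stand-in path under the DataNode data directory.
            String path = args.length > 0 ? args[0] : "/mnt/dfs/dn/probe.meta";
            try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
                System.out.println("open OK: " + path);
            } catch (FileNotFoundException e) {
                // On Linux the parenthesized tail of the message is the
                // strerror() text, e.g. "(Invalid argument)" for EINVAL.
                System.out.println("open failed: " + e.getMessage());
            } catch (IOException e) {
                System.out.println("close failed: " + e);
            }
        }
    }

If it reproduces, running the probe (or the DataNode itself) under strace -f
would show the exact open() arguments that draw the EINVAL.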
>
>
>
> On Thu, May 2, 2013 at 12:27 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > Any idea what can be the cause of a "Premature EOF: no length prefix available" error?
> >
> > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
> > java.io.EOFException: Premature EOF: no length prefix available
> >         at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1105)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
> > 2013-05-02 14:02:41,064 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
> > 2013-05-02 14:02:41,068 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode 10.238.38.193:50010
> >
> >
> >
> > I'm getting that on server start. Logs are split correctly, coprocessors
> > are deployed correctly, and then I'm getting this exception. It's
> > excluding the datanode, and because of that almost everything remaining
> > is failing.
> >
> > There is only one server in this "cluster"... But even so, it should be
> > working. There is one master, one RS, one NN and one DN, all on a single
> > AWS host.
> >
> > At the same time, on the Hadoop DataNode side, I'm getting this:
> >
> > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950 received exception java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> > 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
> > java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> >         at java.io.RandomAccessFile.open(Native Method)
> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >         at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.createStreams(ReplicaInPipeline.java:187)
> >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:199)
> >         at ...