HBase user mailing list - Premature EOF: no length prefix available


Jean-Marc Spaggiari 2013-05-02, 19:27
Ted Yu 2013-05-02, 19:40
Jean-Marc Spaggiari 2013-05-02, 19:57
Andrew Purtell 2013-05-02, 19:59
Jean-Marc Spaggiari 2013-05-02, 20:10
Re: Premature EOF: no length prefix available
Oh, I have faced issues with Hadoop on AWS personally. :-) But not this
one. I use instance-store aka "ephemeral" volumes for DataNode block
storage. Are you by chance using EBS?
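
A quick way to check is to list the directories the DataNode is configured to
write blocks to and compare them against the mounted devices (e.g. with df or
mount). A minimal sketch, assuming hadoop-common is on the classpath and the
site config lives under /etc/hadoop/conf (adjust the path to your layout):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ShowDataDirs {
    public static void main(String[] args) {
        // new Configuration() picks up core-site.xml/hdfs-site.xml from the
        // classpath; adding the resource explicitly covers non-standard layouts.
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml")); // assumed location
        // Hadoop 2.x property name; Hadoop 1.x used "dfs.data.dir".
        String dirs = conf.get("dfs.datanode.data.dir",
                conf.get("dfs.data.dir", "<unset>"));
        for (String d : dirs.split(",")) {
            System.out.println("DataNode block dir: " + d.trim());
        }
    }
}
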
On Thu, May 2, 2013 at 1:10 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:

> But that's weird. This instance is running on AWS. If there were issues
> with Hadoop and AWS, I think other people would have faced them before me.
>
> OK. I will move the discussion to the Hadoop mailing list since it seems to
> be more related to Hadoop vs. the OS.
>
> Thanks,
>
> JM
>
> 2013/5/2 Andrew Purtell <[EMAIL PROTECTED]>
>
> > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in
> > > createBlockOutputStream java.io.EOFException: Premature EOF: no length
> > > prefix available
> >
> > The DataNode aborted the block transfer.
> >
> > > 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> > > ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver
> > > error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
> > > java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> > >        at java.io.RandomAccessFile.open(Native Method)
> > >        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >
> > This looks like the native (OS level) side of RAF got EINVAL back from
> > create() or open(). Go from there.
> >
> >
> >
> > On Thu, May 2, 2013 at 12:27 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:
> >
> > > Hi,
> > >
> > > Any idea what can be the cause of a "Premature EOF: no length prefix
> > > available" error?
> > >
> > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
> > > java.io.EOFException: Premature EOF: no length prefix available
> > >         at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
> > >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1105)
> > >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
> > >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
> > > 2013-05-02 14:02:41,064 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
> > > 2013-05-02 14:02:41,068 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode 10.238.38.193:50010
> > >
> > > I'm getting that on a server start. Logs are split correctly,
> > > coprocessors deployed correctly, and then I'm getting this exception.
> > > It's excluding the datanode, and because of that almost everything
> > > remaining is failing.
> > >
> > > There is only one server in this "cluster"... But even so, it should be
> > > working. There is one master, one RS, one NN and one DN. On an AWS host.
> > >
> > > At the same time, on the Hadoop DataNode side, I'm getting this:
> > >
> > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950 received exception java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> > > 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
> > > java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
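
For readers landing on this thread with the same pair of log messages, a small,
self-contained sketch of the two mechanisms discussed above (simplified relative
to the real HdfsProtoUtil and DataXceiver code, and not taken from the thread):
Java surfaces the OS errno string from the native open() inside the
FileNotFoundException message, which is how "(Invalid argument)" (EINVAL) shows
up in the DataNode log, and on the client side the EOFException is thrown when
the DFSClient goes to read a varint length prefix from a connection the DataNode
has already closed.

import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.io.RandomAccessFile;

public class DataNodeErrorDemo {
    public static void main(String[] args) throws Exception {
        // 1) DataNode side: RandomAccessFile's native open() reports the OS errno
        //    text inside the FileNotFoundException message. EINVAL is hard to
        //    provoke portably, so this triggers a different errno just to show
        //    the "path (errno text)" message format seen in the log.
        try {
            new RandomAccessFile("/nonexistent-dir/blk_demo.meta", "rw");
        } catch (FileNotFoundException e) {
            // Prints e.g. "/nonexistent-dir/blk_demo.meta (No such file or directory)"
            System.out.println(e.getMessage());
        }

        // 2) Client side: after the DataNode aborts, the DFSClient tries to read
        //    a varint length prefix from a stream that is already at EOF. Reading
        //    from an empty stream models that condition.
        InputStream in = new ByteArrayInputStream(new byte[0]); // peer closed before replying
        if (in.read() == -1) {
            throw new EOFException("Premature EOF: no length prefix available");
        }
    }
}

In other words, the client-side "Premature EOF" is a symptom; the cause to chase
is whatever made the DataNode's open() of the block meta file fail with EINVAL.
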
Jean-Marc Spaggiari 2013-05-02, 20:23
Andrew Purtell 2013-05-02, 20:32
Loic Talon 2013-05-02, 20:53
Andrew Purtell 2013-05-02, 21:12
Loic Talon 2013-05-02, 21:21
Andrew Purtell 2013-05-02, 21:24
Andrew Purtell 2013-05-02, 21:18
Michael Segel 2013-05-02, 21:32
Andrew Purtell 2013-05-02, 21:47
Jean-Marc Spaggiari 2013-05-05, 11:34
Jean-Marc Spaggiari 2013-05-02, 21:15