Subject: Premature EOF: no length prefix available (HBase user mailing list)


Earlier messages in this thread:
  Jean-Marc Spaggiari 2013-05-02, 19:27
  Ted Yu 2013-05-02, 19:40
  Jean-Marc Spaggiari 2013-05-02, 19:57

Re: Premature EOF: no length prefix available
> 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in
> createBlockOutputStream java.io.EOFException: Premature EOF: no length
> prefix available

The DataNode aborted the block transfer.
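For anyone curious where the client-side message comes from, here is a rough sketch, not the actual HDFS source, of what HdfsProtoUtil.vintPrefixed (visible in the client stack trace below) is doing. The client expects the DataNode's status reply to start with a protobuf varint length prefix; if the DataNode aborts and closes the socket before writing anything, the very first read hits EOF and you get exactly this error. The class and method names below are made up for the demo.

import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical demo class: a sketch of the varint-prefix read that fails in
// HdfsProtoUtil.vintPrefixed. EOF on the very first byte means the peer
// closed the connection without replying at all.
public class VintPrefixDemo {

    static int readVarintPrefix(InputStream in) throws IOException {
        int b = in.read();
        if (b == -1) {
            // No reply bytes at all: the peer aborted before answering.
            throw new EOFException("Premature EOF: no length prefix available");
        }
        int result = b & 0x7f;
        int shift = 7;
        while ((b & 0x80) != 0) {
            // Continuation bit set: keep accumulating 7-bit groups.
            b = in.read();
            if (b == -1) {
                throw new EOFException("Truncated varint");
            }
            result |= (b & 0x7f) << shift;
            shift += 7;
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        // An empty stream stands in for a DataNode that dropped the connection.
        try {
            readVarintPrefix(new ByteArrayInputStream(new byte[0]));
        } catch (EOFException e) {
            System.out.println(e.getMessage()); // Premature EOF: no length prefix available
        }
    }
}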

> 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver
> error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
> java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
>        at java.io.RandomAccessFile.open(Native Method)
>        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)

This looks like the native (OS-level) side of RandomAccessFile got EINVAL back
from create() or open(). Go from there.
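To see how the errno ends up in the log, a small demo (the path and class name are made up): java.io.RandomAccessFile reports a failed native open() as a FileNotFoundException whose message is the path followed by the OS strerror text in parentheses. That is where "(Invalid argument)", i.e. EINVAL, comes from. The sketch below triggers a different errno (ENAMETOOLONG), since EINVAL is hard to provoke portably, but the reporting path is the same.

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical demo class: shows how RandomAccessFile surfaces the native
// open() errno as the parenthesized part of a FileNotFoundException message.
public class RafErrnoDemo {
    public static void main(String[] args) throws IOException {
        // A name longer than NAME_MAX (255 bytes on most Linux filesystems)
        // makes the native open() fail; the errno text lands in the message.
        StringBuilder name = new StringBuilder("/tmp/");
        for (int i = 0; i < 300; i++) {
            name.append('x');
        }
        try (RandomAccessFile raf = new RandomAccessFile(name.toString(), "rw")) {
            // Not reached with the over-long name above.
        } catch (FileNotFoundException e) {
            // Prints something like: /tmp/xxx...x (File name too long)
            // In the DataNode's case the parenthesized part was (Invalid argument).
            System.out.println(e.getMessage());
        }
    }
}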

On Thu, May 2, 2013 at 12:27 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:

> Hi,
>
> Any idea what can be the cause of a "Premature EOF: no length prefix
> available" error?
>
> 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
>         at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1105)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
> 2013-05-02 14:02:41,064 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
> 2013-05-02 14:02:41,068 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode 10.238.38.193:50010
>
>
>
> I'm getting that on a server start. Logs are split correctly,
> coprocessors are deployed correctly, and then I'm getting this exception. It's
> excluding the datanode, and because of that almost everything remaining is
> failing.
>
> There is only one server in this "cluster"... But even so, it should be
> working. There is one master, one RS, one NN and one DN, on an AWS host.
>
> At the same time, on the Hadoop datanode side, I'm getting this:
>
> 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock
> BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
> received exception java.io.FileNotFoundException:
> /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error
> processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
> java.io.FileNotFoundException:
> /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
>         at java.io.RandomAccessFile.open(Native Method)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>         at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.createStreams(ReplicaInPipeline.java:187)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:199)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:457)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:662)
>
>
> Does it sound more like a Hadoop issue than an HBase one?
>
> JM
>

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
Later messages in this thread:
  Jean-Marc Spaggiari 2013-05-02, 20:10
  Andrew Purtell 2013-05-02, 20:18
  Jean-Marc Spaggiari 2013-05-02, 20:23
  Andrew Purtell 2013-05-02, 20:32
  Loic Talon 2013-05-02, 20:53
  Andrew Purtell 2013-05-02, 21:12
  Loic Talon 2013-05-02, 21:21
  Andrew Purtell 2013-05-02, 21:24
  Andrew Purtell 2013-05-02, 21:18
  Michael Segel 2013-05-02, 21:32
  Andrew Purtell 2013-05-02, 21:47
  Jean-Marc Spaggiari 2013-05-05, 11:34
  Jean-Marc Spaggiari 2013-05-02, 21:15