Re: Premature EOF: no length prefix available
Every instance type except t1.micro has a certain number of instance-store
(locally attached disk) volumes available: 1, 2, or 4, depending on the
type.
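
For example, from inside an instance you can check which instance-store
volumes were mapped in by reading the block-device-mapping branch of the
EC2 instance metadata service. A minimal sketch (class name is mine; error
handling elided):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class ListEphemerals {
        public static void main(String[] args) throws Exception {
            // Instance metadata lists the block device mapping, with one
            // entry (e.g. "ephemeral0") per attached instance-store volume.
            URL url = new URL(
                    "http://169.254.169.254/latest/meta-data/block-device-mapping/");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // e.g. ami, ephemeral0, root
                }
            }
        }
    }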

As you probably know, you can use or create AMIs backed by instance-store,
in which case the OS image is constructed on locally attached disk by a
parallel fetch of slices of the root volume image stored in S3, or backed
by EBS, in which case the OS image is an EBS volume attached over the
network, like a SAN.

If you launch an instance-store backed Amazon Linux instance, the first
"ephemeral" local volume will be automatically mounted at
/media/ephemeral0. That's where the term comes from; it's a synonym for
instance-store. (You can, by the way, tell cloud-init via directives sent
in the instance user data to mount all of them; see the fragment below.)
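
For example, a cloud-config fragment like this in the user data (a minimal
sketch; the mount points are illustrative) tells cloud-init to mount the
first two ephemeral volumes:

    #cloud-config
    mounts:
      - [ ephemeral0, /media/ephemeral0 ]
      - [ ephemeral1, /media/ephemeral1 ]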

If you have an EBS-backed instance, the default is NOT to attach any of
these volumes.

If you are launching your instance with the AWS web console, you can set up
instance-store aka "ephemeral" mounts in the volume configuration step,
whether the instance is instance-store backed or EBS backed.
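
You can request the same mapping programmatically at launch time. A minimal
sketch with the AWS SDK for Java (the AMI ID, device name, and instance
type below are placeholders):

    import com.amazonaws.services.ec2.AmazonEC2Client;
    import com.amazonaws.services.ec2.model.BlockDeviceMapping;
    import com.amazonaws.services.ec2.model.RunInstancesRequest;

    public class LaunchWithEphemeral {
        public static void main(String[] args) {
            // Uses the default credential provider chain.
            AmazonEC2Client ec2 = new AmazonEC2Client();
            // Map the first instance-store ("ephemeral") volume to /dev/sdb.
            BlockDeviceMapping ephemeral = new BlockDeviceMapping()
                    .withDeviceName("/dev/sdb")      // placeholder device name
                    .withVirtualName("ephemeral0");
            RunInstancesRequest req = new RunInstancesRequest()
                    .withImageId("ami-12345678")     // placeholder AMI ID
                    .withInstanceType("m1.large")    // placeholder type
                    .withMinCount(1)
                    .withMaxCount(1)
                    .withBlockDeviceMappings(ephemeral);
            ec2.runInstances(req);
        }
    }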

Sorry I can't get into more background on this. Hope it helps.

On Thu, May 2, 2013 at 1:23 PM, Jean-Marc Spaggiari
<[EMAIL PROTECTED]> wrote:

> Hi Andrew,
>
> No, this AWS instance is configured with instance stores too.
>
> What do you mean by "ephemeral"?
>
> JM
>
> 2013/5/2 Andrew Purtell <[EMAIL PROTECTED]>
>
> > Oh, I have faced issues with Hadoop on AWS personally. :-) But not this
> > one. I use instance-store aka "ephemeral" volumes for DataNode block
> > storage. Are you by chance using EBS?
> >
> >
> > On Thu, May 2, 2013 at 1:10 PM, Jean-Marc Spaggiari
> > <[EMAIL PROTECTED]> wrote:
> >
> > > But that's weird. This instance is running on AWS. If there are issues
> > > with Hadoop and AWS, I think some other people would have faced them
> > > before me.
> > >
> > > OK, I will move the discussion to the Hadoop mailing list since it seems
> > > to be more related to Hadoop vs. the OS.
> > >
> > > Thanks,
> > >
> > > JM
> > >
> > > 2013/5/2 Andrew Purtell <[EMAIL PROTECTED]>
> > >
> > > > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient:
> > > > > Exception in createBlockOutputStream java.io.EOFException:
> > > > > Premature EOF: no length prefix available
> > > >
> > > > The DataNode aborted the block transfer.
> > > >
> > > > > 2013-05-02 14:02:41,063 ERROR
> > > > > org.apache.hadoop.hdfs.server.datanode.DataNode:
> > > > > ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver
> > > > > error processing WRITE_BLOCK operation  src: /10.238.38.193:39831
> > > > > dest: /10.238.38.193:50010 java.io.FileNotFoundException:
> > > > > /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta
> > > > > (Invalid argument)
> > > > >         at java.io.RandomAccessFile.open(Native Method)
> > > > >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> > > >
> > > > This looks like the native (OS level) side of RAF got EINVAL back from
> > > > create() or open(). Go from there.
> > > >
> > > >
> > > >
> > > > On Thu, May 2, 2013 at 12:27 PM, Jean-Marc Spaggiari
> > > > <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > Any idea what can be the cause of a "Premature EOF: no length
> > > > > prefix available" error?
> > > > >
> > > > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient:
> > > > > Exception in createBlockOutputStream
> > > > > java.io.EOFException: Premature EOF: no length prefix available
> > > > >         at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
> > > > >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1105)
> > > > >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)