HBase >> mail # user >> Premature EOF: no length prefix available


Re: Premature EOF: no length prefix available
hbase@ip-10-238-38-193:/mnt/log/hadoop-hdfs$ hbase -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
hbase@ip-10-238-38-193:/mnt/log/hadoop-hdfs$ hbase shell
13/05/02 19:44:05 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.2-cdh4.2.0, rUnknown, Fri Feb 15 11:48:32 PST 2013

hbase@ip-10-238-38-193:/mnt/log/hadoop-hdfs$ hadoop version
Hadoop 2.0.0-cdh4.2.0
Subversion file:///var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.2.0-Packaging-Hadoop-2013-02-15_10-38-54/hadoop-2.0.0+922-1.cdh4.2.0.p0.12~precise/src/hadoop-common-project/hadoop-common -r 8bce4bd28a464e0a92950c50ba01a9deb1d85686
Compiled by jenkins on Fri Feb 15 11:13:37 PST 2013
From source with checksum 3eefc211a14ac7b6e764d6ded2eeeb26

Because the datanode is not able to write this file, it gets excluded from
the write pipeline, and things go wrong for HBase after that.

The replication factor is set to 1. I tried to touch the file and it works
fine as the HDFS user. What's strange is that it sometimes works fine and
I'm able to fix the server and get everything right, but then soon after
that it goes bad again...
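The namenode errors that follow reduce to simple block-placement arithmetic: with a single datanode running and that same node on the client's excluded list, zero candidate targets remain, which is below minReplication. A minimal sketch of that check (the variable names are mine, not Hadoop's):

```shell
# Illustration of the placement math behind "could only be replicated to
# 0 nodes instead of minReplication (=1)": one datanode running, and that
# one node excluded after the failed write, leaves zero candidates.
running=1
excluded=1
min_replication=1

candidates=$((running - excluded))
if [ "$candidates" -lt "$min_replication" ]; then
  echo "cannot place block: $candidates candidate(s), need $min_replication"
fi
# -> cannot place block: 0 candidate(s), need 1
```

So with replication factor 1, a single transient write failure on the one datanode is enough to make every subsequent allocation fail until the exclusion ages out.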

Logs from the namenode:
2013-05-02 14:02:41,321 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault: Not able to place enough replicas, still in need of 1 to reach 1
For more information, please enable DEBUG log level on org.apache.commons.logging.impl.Log4JLogger
2013-05-02 14:02:41,321 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hbase (auth:SIMPLE) cause:java.io.IOException: File /hbase/events/d8215fe52cf86f91905f80b1817909df/recovered.edits/0000000000287157949.temp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
2013-05-02 14:02:41,322 INFO org.apache.hadoop.ipc.Server: IPC Server handler 11 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.238.38.193:33353: error: java.io.IOException: File /hbase/events/d8215fe52cf86f91905f80b1817909df/recovered.edits/0000000000287157949.temp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
java.io.IOException: File /hbase/events/d8215fe52cf86f91905f80b1817909df/recovered.edits/0000000000287157949.temp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
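To follow the WARN line's suggestion and see why each node is being rejected, one way is to raise the placement-policy logger to DEBUG. This is a sketch assuming a stock log4j.properties on the namenode (the logger name is taken from the WARN line above; a restart, or a live change via `hadoop daemonlog -setlevel`, is needed for it to take effect):

```properties
# log4j.properties on the namenode -- logs the per-datanode reasons a
# replica target was rejected during block placement:
log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault=DEBUG
```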

2013/5/2 Ted Yu <[EMAIL PROTECTED]>

> This seems to be a Hadoop issue.
>
> Which HBase / Hadoop versions are you using?
>
> Thanks
>
> On Thu, May 2, 2013 at 12:27 PM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > Any idea what can be the cause of a "Premature EOF: no length prefix
> > available" error?
> >
> > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
> > java.io.EOFException: Premature EOF: no length prefix available
> >         at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1105)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
> > 2013-05-02 14:02:41,064 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
> > 2013-05-02 14:02:41,068 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode 10.238.38.193:50010
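When the problem keeps coming back, it can help to pull the "Excluding datanode" lines out of the DFSClient (region server) log and count them per node, to confirm whether it is always the same datanode being kicked out. A sketch, using a fabricated stand-in log file in place of the real one:

```shell
# Stand-in log with the same shape as the DFSClient lines above; in
# practice, point grep at the actual region server / client log.
cat > /tmp/dfsclient-sample.log <<'EOF'
2013-05-02 14:02:41,064 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
2013-05-02 14:02:41,068 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode 10.238.38.193:50010
EOF

# Count how often each datanode address gets excluded:
grep -o 'Excluding datanode [0-9.]*:[0-9]*' /tmp/dfsclient-sample.log \
  | awk '{print $3}' | sort | uniq -c
```

With a single-node cluster there is only one possible address, but on a larger cluster a skewed count points at one bad datanode rather than a cluster-wide problem.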