HBase dev mailing list: HLogSplit error with hadoop-2.0.3-alpha and hbase trunk


Re: HLogSplit error with hadoop-2.0.3-alpha and hbase trunk
if (length != intBytes.length) throw new IOException("Failed read of int length " + length);

The length comes from the read call. This looks pretty suspicious: if the stream is not at EOF, why would it return fewer bytes? I will try to repro today.
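For context, a minimal sketch (not HBase code; the class and method names below are made up for illustration) of why that check is fragile: InputStream.read(byte[]) only promises to return at least one byte for a non-empty buffer, so a buffered or block-boundary-crossing stream can legitimately hand back fewer than four bytes without being at EOF. A read-fully loop, e.g. via DataInputStream.readFully, avoids the short-read case:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only; names are hypothetical, not HBase's.
public final class ShortReadExample {

  // Fragile: treats a legitimate short read as corruption.
  static int readIntFragile(InputStream in) throws IOException {
    byte[] intBytes = new byte[4];
    int length = in.read(intBytes);       // contract: returns 1..4 bytes, or -1 at EOF
    if (length != intBytes.length) {
      throw new IOException("Failed read of int length " + length);
    }
    return toInt(intBytes);
  }

  // Robust: keeps reading until the buffer is full or a real EOF is hit.
  static int readIntRobust(InputStream in) throws IOException {
    byte[] intBytes = new byte[4];
    new DataInputStream(in).readFully(intBytes);  // throws EOFException only on true EOF
    return toInt(intBytes);
  }

  private static int toInt(byte[] b) {
    return ((b[0] & 0xff) << 24) | ((b[1] & 0xff) << 16)
         | ((b[2] & 0xff) << 8)  |  (b[3] & 0xff);
  }
}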

On Wed, May 8, 2013 at 5:46 AM, ramkrishna vasudevan <
[EMAIL PROTECTED]> wrote:

> On further debugging we found that this issue happens with ProtoBufWriter and
> not with SequenceFileWriter (at least we could not reproduce it with
> different runs).
>
> We can see that the HLog has more data in it, but this error happens while
> reading one of the lines in the HLog, so we are pretty sure that it is not
> EOF.
> We verified the DFS logs but could not find any exceptions there either.
>
> We will try to figure out more on this tomorrow.
>
> Regards
> Ram
>
>
> On Wed, May 8, 2013 at 11:34 AM, ramkrishna vasudevan <
> [EMAIL PROTECTED]> wrote:
>
> > Ok, so I tried this out with hadoop 2.0.4 and also with Sergey's patch.
> >  The issue is reproducible on all versions of hadoop, but not always.
> > I am able to get errors like this:
> >
> > 2013-05-07 17:11:08,827 WARN  [SplitLogWorker-ram.sh.intel.com,60020,1367961009182] org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of .logs/ram.sh.intel.com,60020,1367960957620-splitting/ram.sh.intel.com%2C60020%2C1367960957620.1367960993389 failed, returning error
> > java.io.IOException: Error while reading 1 WAL KVs; started reading at 589822 and read up to 589824
> >     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:162)
> >     at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:88)
> >     at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:75)
> >     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:775)
> >     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:459)
> >     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:388)
> >     at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:115)
> >     at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:278)
> >     at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:199)
> >     at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:166)
> >     at java.lang.Thread.run(Thread.java:662)
> > Caused by: java.io.IOException: Failed read of int length 2
> >     at org.apache.hadoop.hbase.KeyValue.iscreate(KeyValue.java:2335)
> >     at org.apache.hadoop.hbase.codec.KeyValueCodec$KeyValueDecoder.parseCell(KeyValueCodec.java:66)
> >     at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:46)
> >     at org.apache.hadoop.hbase.regionserver.wal.WALEdit.readFromCells(WALEdit.java:199)
> >     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:143)
> >     ... 10 more
> >
> > and sometimes
> >
> > java.io.IOException: Failed read of int length 1
> >     at org.apache.hadoop.hbase.KeyValue.iscreate(KeyValue.java:2335)
> >     at org.apache.hadoop.hbase.codec.KeyValueCodec$KeyValueDecoder.parseCell(KeyValueCodec.java:66)
> >     at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:41)
> >     at org.apache.hadoop.hbase.regionserver.wal.WALEdit.readFromCells(WALEdit.java:199)
> >     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:137)
> >     at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:88)
> >     at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:75)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:2837)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2755)
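Regarding the earlier observation that the problem shows up with ProtoBufWriter but not with SequenceFileWriter: a minimal sketch of pinning the WAL implementation for a comparison run, assuming the hbase.regionserver.hlog.writer.impl / hbase.regionserver.hlog.reader.impl settings and the SequenceFileLogWriter / SequenceFileLogReader classes that were in trunk around that time:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch only: the config keys and class names below are assumed from
// the trunk of that era; verify them against the hbase-default.xml you run.
public class ForceSequenceFileWal {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Fall back to the SequenceFile-based WAL writer and reader so the
    // same workload can be replayed without the protobuf codepath.
    conf.set("hbase.regionserver.hlog.writer.impl",
        "org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter");
    conf.set("hbase.regionserver.hlog.reader.impl",
        "org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader");
    System.out.println("WAL writer: " + conf.get("hbase.regionserver.hlog.writer.impl"));
  }
}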