Accumulo, mail # user - Re: Accumulo on MapR - Compaction Test


Re: Accumulo on MapR - Compaction Test
Adam Fuchs 2012-05-02, 22:14
Keys,

There's not really a way to change the compression type used by MyMapFile
within Accumulo -- it always uses block compression with the DefaultCodec
(gzip). However, if you want to write a separate standalone test that
mimics this behavior, you can use one of the MyMapFile.Writer constructors
to specify the compression type you want. Incidentally, MyMapFile is legacy
code that we're removing in version 1.5, but it is well tested and we
wouldn't expect to see this type of problem from it.
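Adam's suggested standalone test can be sketched outside Accumulo. The real MyMapFile.Writer is internal Java code, so the following is only a minimal Python analogue of "block compression with gzip": it batches length-prefixed records into one gzip stream, writes it to the filesystem under test, and reads it back. The file name and record format here are made up for illustration.

```python
# Standalone analogue of MyMapFile's block compression: gzip-compress a
# block of records, write it to disk, read it back, and verify the round
# trip. This does NOT use the real MyMapFile API -- it only mimics
# "many records per gzip stream" to exercise the write/read path.
import gzip
import os
import tempfile

def write_block(path, records):
    # Length-prefix each record, then compress the whole block at once,
    # roughly how block compression batches records per codec stream.
    payload = b"".join(len(v).to_bytes(4, "big") + v for v in records)
    with gzip.open(path, "wb") as f:
        f.write(payload)

def read_block(path):
    with gzip.open(path, "rb") as f:
        payload = f.read()
    records, i = [], 0
    while i < len(payload):
        n = int.from_bytes(payload[i:i + 4], "big")
        records.append(payload[i + 4:i + 4 + n])
        i += 4 + n
    return records

records = [b"row%d" % i for i in range(1000)]
path = os.path.join(tempfile.mkdtemp(), "block.gz")  # hypothetical test file
write_block(path, records)
assert read_block(path) == records
```

Pointing `path` at the filesystem being evaluated (e.g. a MapR mount) and looping this test would exercise the same write-then-decompress cycle that is failing, without any Accumulo code involved.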

Cheers,
Adam

On Wed, May 2, 2012 at 1:35 PM, Keys Botzum <[EMAIL PROTECTED]> wrote:

> I need a bit more help. I really appreciate the help already provided by
> Kevin and Eric.
>
> We've been testing Accumulo 1.4.0 on additional hardware platforms and
> have hit an unexpected issue: the compaction auto test (test/system/auto)
> fails. Interestingly, it fails every time on one machine and intermittently
> on another, which makes me suspect some kind of race condition. At this
> point I can easily reproduce the problem, and when the failure occurs it is
> always in the same block of code but not always on the same file.
>
> To be clear, when I run the following test:
>
> /run.py -t compact -d
>
> I get this exception in the tserver log:
>
> 02 08:41:15,944 [tabletserver.TabletServer] WARN : exception while
> scanning tablet 1<<
> java.io.IOException: invalid distance too far back
>         at
> org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native
> Method)
>         at
> org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>         at
> org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>         at
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>         at
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:63)
> *        at java.io.DataInputStream.readInt(DataInputStream.java:370)*
> *        at org.apache.accumulo.core.data.Value.readFields(Value.java:161)
> *
> *        at
> org.apache.accumulo.core.file.map.MySequenceFile$Reader.getCurrentValue(MySequenceFile.java:1773)
> *
> *        at
> org.apache.accumulo.core.file.map.MySequenceFile$Reader.next(MySequenceFile.java:1893)
> *
>         at
> org.apache.accumulo.core.file.map.MyMapFile$Reader.next(MyMapFile.java:678)
>         at
> org.apache.accumulo.core.file.map.MyMapFile$Reader.next(MyMapFile.java:799)
>         at
> org.apache.accumulo.core.file.map.MapFileOperations$RangeIterator.next(MapFileOperations.java:111)
>         at
> org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>         at
> org.apache.accumulo.core.iterators.SkippingIterator.next(SkippingIterator.java:29)
>         at
> org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>         at
> org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:88)
>         at
> org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>         at
> org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>         at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>         at
> org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>         at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>         at
> org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>         at
> org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>         at
> org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>         at
> org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>         at
> org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
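The "invalid distance too far back" message in the trace above comes from native zlib: a DEFLATE distance code points before the start of the decompression window, which only happens when the compressed bytes themselves differ from what was written. This sketch (using Python's zlib binding to the same library) corrupts one byte of a compressed stream to show that decompression then fails with a zlib error of this family rather than returning data:

```python
# Demonstrate that a corrupted DEFLATE stream fails at decompress time.
# The exact zlib message depends on which byte is damaged ("invalid
# distance too far back", "incorrect data check", ...), but some
# zlib.error is always raised -- the data cannot silently round-trip.
import zlib

original = b"accumulo compaction test " * 200
stream = bytearray(zlib.compress(original))
stream[len(stream) // 2] ^= 0xFF  # flip one byte mid-stream

try:
    zlib.decompress(bytes(stream))
    failed = False
except zlib.error:
    failed = True

assert failed  # corruption is detected at decompress time
```

In other words, the tserver is reading back compressed bytes that are not the bytes it wrote, which points at the read/write path beneath the codec rather than at the decompressor itself.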