Custom HBase Filter : Error in readFields (HBase user mailing list)


Thread index:

  Bryan Baugher 2013-02-20, 22:05
  Ted Yu 2013-02-20, 23:32
  Viral Bajaria 2013-02-20, 23:42
  Ted Yu 2013-02-21, 00:29
  Bryan Baugher 2013-02-21, 01:58
  Ted Yu 2013-02-21, 02:48
  Bryan Baugher 2013-02-21, 03:46
  lars hofhansl 2013-02-21, 04:54

Re: Custom HBase Filter : Error in readFields
Ugh, yes you are correct. This fixed my issue. Thank you all for your help.
On Wed, Feb 20, 2013 at 10:54 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:

> You probably want your write() to be idempotent. Currently it will exhaust
> the iterator and not reset it.
> (Just guessing, though)
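A minimal sketch of the failure mode lars describes, under the assumption that the filter keeps its state in a one-shot Iterator (the class and field names below are illustrative, not the actual RowRangeFilter code from the pastebin). HBase 0.92 re-serializes the client-side filter instance for each region the scan touches, so write() must produce the same bytes every time it is called:

    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    import org.apache.hadoop.hbase.util.Bytes;

    public class IdempotentWriteSketch {

      // BUG: state held in a one-shot Iterator. The first write() drains
      // it, so when HBase serializes the same filter again for the next
      // region the count below is 0 and readFields() on the region server
      // sees truncated data.
      private Iterator<byte[]> rows;

      public void writeBroken(DataOutput out) throws IOException {
        List<byte[]> drained = new ArrayList<byte[]>();
        while (rows.hasNext()) {
          drained.add(rows.next());
        }
        out.writeInt(drained.size());  // 0 on every call after the first
        for (byte[] row : drained) {
          Bytes.writeByteArray(out, row);
        }
      }

      // FIX: keep a re-iterable collection so that every call to write()
      // produces identical output, no matter how many regions are scanned.
      private List<byte[]> rowList;

      public void writeIdempotent(DataOutput out) throws IOException {
        out.writeInt(rowList.size());
        for (byte[] row : rowList) {
          Bytes.writeByteArray(out, row);
        }
      }
    }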
>
>
>
> ________________________________
>  From: Bryan Baugher <[EMAIL PROTECTED]>
> To: user <[EMAIL PROTECTED]>
> Sent: Wednesday, February 20, 2013 7:46 PM
> Subject: Re: Custom HBase Filter : Error in readFields
>
> I updated my code to use the Bytes class for serialization and added more
> log messages. I see this[1] now. It is able to create the filter the first
> time, but when it gets to the second region (on the same region server) it
> attempts to create the filter again and the data read in from readFields
> seems corrupted.
>
> [1] - http://pastebin.com/TqNsUVSk
>
>
> On Wed, Feb 20, 2013 at 8:48 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
>
> > Can you use code similar to the following for serialization?
> >   public void readFields(DataInput in) throws IOException {
> >     this.prefix = Bytes.readByteArray(in);
> >   }
> >
> > See src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
> >
> > Thanks
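The matching write() in that same PrefixFilter.java uses the symmetric helper, which is what makes Ted's suggestion safe: writeByteArray() length-prefixes the bytes, readByteArray() reads that prefix back, and so the two sides cannot disagree about the array size:

      public void write(DataOutput out) throws IOException {
        Bytes.writeByteArray(out, this.prefix);
      }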
> >
> > On Wed, Feb 20, 2013 at 5:58 PM, Bryan Baugher <[EMAIL PROTECTED]> wrote:
> >
> > > Here[1] is the code for the filter.
> > >
> > > -Bryan
> > >
> > > [1] - http://pastebin.com/5Qjas88z
> > >
> > > > Bryan:
> > > > Looks like you may have missed adding a unit test for your filter.
> > > >
> > > > A unit test should have caught this situation much earlier.
> > > >
> > > > Cheers
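A round-trip test of the kind Ted suggests might look like the sketch below. The RowRangeFilter constructor arguments are assumed for illustration; Writables.getBytes()/getWritable() are the stock 0.92 helpers. Serializing the same instance twice also catches a non-idempotent write(), which turned out to be the problem in this thread:

    import static org.junit.Assert.assertArrayEquals;

    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.hbase.util.Writables;
    import org.junit.Test;

    public class TestRowRangeFilter {

      @Test
      public void testWriteIsIdempotentAndSymmetric() throws Exception {
        // Hypothetical constructor; substitute the filter's real one.
        RowRangeFilter filter =
            new RowRangeFilter(Bytes.toBytes("aaa"), Bytes.toBytes("zzz"));

        // Each region triggers another serialization of the same instance,
        // so two calls must produce identical bytes.
        byte[] first = Writables.getBytes(filter);
        byte[] second = Writables.getBytes(filter);
        assertArrayEquals(first, second);

        // Deserializing into a fresh instance exercises readFields();
        // the Writable contract requires the no-arg constructor used here.
        RowRangeFilter copy = new RowRangeFilter();
        Writables.getWritable(first, copy);
      }
    }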
> > > >
> > > > On Wed, Feb 20, 2013 at 3:42 PM, Viral Bajaria <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Also the readFields is your implementation of how to read the byte
> > > > > array transferred from the client. So I think there has to be some
> > > > > issue in how you write the byte array to the network and what you
> > > > > are reading out of that, i.e. the sizes of the arrays might not be
> > > > > identical.
> > > > >
> > > > > But as Ted mentioned, looking at the code will help troubleshoot it
> > > > > better.
> > > > >
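A hypothetical illustration of the asymmetry Viral describes (the field name is made up): if write() emits raw bytes with no length prefix while readFields() guesses the size, the reader's stream position drifts, and the failure only surfaces later as a low-level exception like the ArrayIndexOutOfBoundsException in the quoted log below:

      public void write(DataOutput out) throws IOException {
        out.write(this.range);         // raw bytes, no length prefix
      }

      public void readFields(DataInput in) throws IOException {
        this.range = new byte[16];     // guessed size
        in.readFully(this.range);      // wrong whenever write() sent != 16 bytes
      }

      // The fix is the symmetric pair Ted points to above:
      //   Bytes.writeByteArray(out, this.range);
      //   this.range = Bytes.readByteArray(in);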
> > > > > On Wed, Feb 20, 2013 at 3:32 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > If you show us the code for RowRangeFilter, that would help us
> > > > > > troubleshoot.
> > > > > >
> > > > > > Cheers
> > > > > >
> > > > > > On Wed, Feb 20, 2013 at 2:05 PM, Bryan Baugher <[EMAIL PROTECTED]> wrote:
> > > > > >
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > I am trying to write my own custom Filter but I have been having
> > > > > > > issues. When there is only 1 region in my table the scan works as
> > > > > > > expected, but when there are more, it attempts to create a new
> > > > > > > version of my filter and deserialize the information again, but
> > > > > > > the data seems to be gone. I am running HBase 0.92.1-cdh4.1.1.
> > > > > > >
> > > > > > > 2013-02-20 15:39:53,220 DEBUG com.cerner.kepler.filters.RowRangeFilter: Reading fields
> > > > > > > 2013-02-20 15:40:08,612 WARN org.apache.hadoop.hbase.util.Sleeper: We slept 15346ms instead of 3000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> > > > > > > 2013-02-20 15:40:09,142 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: Error in readFields
> > > > > > > java.lang.ArrayIndexOutOfBoundsException
> > > > > > >         at java.lang.System.arraycopy(Native Method)
> > > > > > >         at java.io.ByteArrayInputStream.read(ByteArrayInputStream.java:174)
> > > > > > >         at java.io.DataInputStream.readFully(DataInputStream.java:178)
> > > > > > >         at java.io.DataInputStream.readFully(DataInputStream.java:152)
> > > > > > >         at

-Bryan
Other messages in this thread:

  lars hofhansl 2013-02-21, 06:13
  Bryan Baugher 2013-02-21, 13:06
  Ted Yu 2013-02-21, 18:27
  Ted Yu 2013-02-21, 05:13
  Ted Yu 2013-02-21, 04:16
  Bryan Baugher 2013-02-21, 04:38