

Toby Lazar 2013-01-24, 12:50
Mohammad Tariq 2013-01-24, 12:52
Toby Lazar 2013-01-24, 13:14
RE: paging results filter
@Toby

You mean you need a mechanism for jumping directly to a page, say from page #1 (rows 1-20) straight to page #4 (rows 61-80). No, that is not available in PageFilter.
The normal "next page, next page" flow will work fine, because within the server the next() calls on the scanner advance that way.

-Anoop-
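Anoop's forward-only flow can be sketched without a cluster: the client remembers the last rowkey of the previous page and restarts the scan just past it, instead of asking for an absolute page number. Below is a minimal, illustrative simulation; `TreeMap` stands in for a sorted HBase table and `nextPage` is a hypothetical helper, not HBase API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class ForwardPaging {
    // Stand-in for a sorted HBase table: rowkey -> value.
    static final TreeMap<String, String> TABLE = new TreeMap<>();

    // Fetch one page: resume just past the last rowkey seen (like restarting a
    // Scan at the previous page's end key) and take at most pageSize rows.
    static List<String> nextPage(String afterRow, int pageSize) {
        NavigableMap<String, String> view =
                (afterRow == null) ? TABLE : TABLE.tailMap(afterRow, false);
        List<String> page = new ArrayList<>();
        for (String key : view.keySet()) {
            page.add(key);
            if (page.size() == pageSize) break;
        }
        return page;
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 50; i++) TABLE.put(String.format("row-%03d", i), "v" + i);
        List<String> p1 = nextPage(null, 20);                  // rows 1-20
        List<String> p2 = nextPage(p1.get(p1.size() - 1), 20); // rows 21-40
        System.out.println(p1.get(0) + " " + p1.get(19));      // row-001 row-020
        System.out.println(p2.get(0) + " " + p2.get(19));      // row-021 row-040
    }
}
```

The trade-off is that this only pages forward, one page at a time; random access to page N still requires skipping N-1 pages of rows somewhere.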
________________________________________
From: Toby Lazar [[EMAIL PROTECTED]]
Sent: Thursday, January 24, 2013 6:44 PM
To: [EMAIL PROTECTED]
Subject: Re: paging results filter

I don't see a way of specifying which page of results I want.  For example,
if I want page 3 with a page size of 20 (only results 41-60), I don't see how
PageFilter can be configured for that.  Am I missing the obvious?

Thanks,

Toby

On Thu, Jan 24, 2013 at 7:52 AM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:

> I think you need PageFilter (
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/PageFilter.html
> ).
>
> HTH
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Thu, Jan 24, 2013 at 6:20 PM, Toby Lazar <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > I need to create a client function that allows paging of scan results
> > (initially return results 1-20, then click through pages to show results
> > 21-40, 41-60, etc.) without needing to remember the start rowkey.  I
> > believe that a filter would be far more efficient than implementing the
> > logic client-side.  I couldn't find any OOTB filter for this
> > functionality, so I wrote the class below.  It seems to work fine for me,
> > but can anyone comment on whether this approach makes sense?  Is there
> > another OOTB filter that I can use instead?
> >
> > Thank you,
> >
> > Toby
> >
> >
> >
> > import java.io.DataInput;
> > import java.io.DataOutput;
> > import java.io.IOException;
> >
> > import org.apache.hadoop.hbase.filter.FilterBase;
> >
> > public class PageOffsetFilter extends FilterBase {
> >
> >   private long startRowCount;
> >   private long endRowCount;
> >   private int count = 0;
> >
> >   // Required no-arg constructor for deserialization on the region server.
> >   public PageOffsetFilter() {
> >   }
> >
> >   public PageOffsetFilter(long pageNumber, long pageSize) {
> >     if (pageNumber < 1)
> >       pageNumber = 1;
> >     // 0-based indexes of the first and last row of the requested page.
> >     startRowCount = (pageNumber - 1) * pageSize;
> >     endRowCount = (pageNumber * pageSize) - 1;
> >   }
> >
> >   // Stop the scan entirely once we are past the end of the page.
> >   @Override
> >   public boolean filterAllRemaining() {
> >     return count > endRowCount;
> >   }
> >
> >   // Returning true excludes the row: skip everything before the page starts.
> >   @Override
> >   public boolean filterRow() {
> >     count++;
> >     return count <= startRowCount;
> >   }
> >
> >   @Override
> >   public void readFields(DataInput dataInput) throws IOException {
> >     this.startRowCount = dataInput.readLong();
> >     this.endRowCount = dataInput.readLong();
> >   }
> >
> >   @Override
> >   public void write(DataOutput dataOutput) throws IOException {
> >     dataOutput.writeLong(startRowCount);
> >     dataOutput.writeLong(endRowCount);
> >   }
> > }
> >
>
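The counting logic in Toby's filter can be sanity-checked without a cluster: filterAllRemaining() stops the scan once the page is past, and filterRow() excludes rows before the page starts. Here is a standalone simulation of those two callbacks over fake row numbers; `page` and `PageOffsetCheck` are illustrative names, not HBase code.

```java
import java.util.ArrayList;
import java.util.List;

public class PageOffsetCheck {
    // Mirrors PageOffsetFilter's counters: keep rows whose 1-based scan
    // position falls in ((pageNumber-1)*pageSize, pageNumber*pageSize].
    static List<Integer> page(int pageNumber, int pageSize, int totalRows) {
        long start = (long) (pageNumber - 1) * pageSize; // rows to skip
        long end = (long) pageNumber * pageSize - 1;     // last 0-based index kept
        List<Integer> kept = new ArrayList<>();
        int count = 0;
        for (int row = 1; row <= totalRows; row++) {
            if (count > end) break;           // filterAllRemaining(): stop the scan
            count++;
            boolean exclude = count <= start; // filterRow(): true means skip
            if (!exclude) kept.add(row);
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Integer> page3 = page(3, 20, 100);
        // Page 3 with page size 20 keeps rows 41-60, as in Toby's example.
        System.out.println(page3.get(0) + ".." + page3.get(page3.size() - 1)); // 41..60
    }
}
```

One caveat worth checking before relying on this in production: like PageFilter, a row-counting filter runs independently on each region server, so the counts only line up when the scan is confined to a single region.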
ramkrishna vasudevan 2013-01-25, 04:15
Mohammad Tariq 2013-01-25, 05:15
Toby Lazar 2013-01-25, 14:23