Re: paging results filter
Hello Toby,

      Sorry for the late reply, but you have already got appropriate answers
from the pros :)

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Fri, Jan 25, 2013 at 9:45 AM, ramkrishna vasudevan <
[EMAIL PROTECTED]> wrote:

> @Toby
>
> If you wish to go to a specific page, you need to set the start row that
> comes as part of that page.
> So what I feel is: implement a custom page filter, keep doing next(),
> display only those records that belong to the page you clicked, and send
> them back to the client.  Anyway, the logic inside the filter should keep
> track of the number of records passed over until you reach the page you
> are interested in, and that count should be based on the number of records
> per page.
>
> Regards
> Ram
>
> On Fri, Jan 25, 2013 at 9:04 AM, Anoop Sam John <[EMAIL PROTECTED]>
> wrote:
>
> > @Toby
> >
> > You mean to say that you need a mechanism for directly jumping to a page.
> > Say you are on page #1 (rows 1-20) now and you want to jump to page #4
> > (rows 61-80).  Yes, this is not there in PageFilter...
> > The normal way of next page, next page will work fine, since within the
> > server the next() calls on the scanner work this way...
> >
> > -Anoop-
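
For reference, the sequential next-page approach Anoop describes is usually implemented by remembering the last row key of the current page and starting the next scan just after it, combined with a PageFilter for the page size. A minimal sketch, assuming a 0.94-era client API and a made-up table name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class SequentialPagingExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // hypothetical table name
    int pageSize = 20;
    byte[] lastRow = null;                        // last row key of the previous page

    for (int page = 1; page <= 3; page++) {
      Scan scan = new Scan();
      scan.setFilter(new PageFilter(pageSize));
      if (lastRow != null) {
        // start just after the last row already returned
        scan.setStartRow(Bytes.add(lastRow, new byte[] { 0 }));
      }
      ResultScanner scanner = table.getScanner(scan);
      int returned = 0;
      for (Result result : scanner) {
        lastRow = result.getRow();
        returned++;
        // render the row on the current page ...
        if (returned >= pageSize) {
          break;  // PageFilter is applied per region, so enforce the limit client-side too
        }
      }
      scanner.close();
    }
    table.close();
  }
}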
> > ________________________________________
> > From: Toby Lazar [[EMAIL PROTECTED]]
> > Sent: Thursday, January 24, 2013 6:44 PM
> > To: [EMAIL PROTECTED]
> > Subject: Re: paging results filter
> >
> > I don't see a way of specifying which page of results I want.  For
> > example, if I want page 3 with a page size of 20 (only results 41-60), I
> > don't see how PageFilter can be configured for that.  Am I missing the
> > obvious?
> >
> > Thanks,
> >
> > Toby
> >
> > On Thu, Jan 24, 2013 at 7:52 AM, Mohammad Tariq <[EMAIL PROTECTED]>
> > wrote:
> >
> > > I think you need PageFilter
> > > <http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/PageFilter.html>.
> > >
> > > HTH
> > >
> > > Warm Regards,
> > > Tariq
> > > https://mtariq.jux.com/
> > > cloudfront.blogspot.com
> > >
> > >
> > > On Thu, Jan 24, 2013 at 6:20 PM, Toby Lazar <[EMAIL PROTECTED]> wrote:
> > >
> > > > Hi,
> > > >
> > > > I need to create a client function that allows paging of scan results
> > > > (initially return results 1-20, then click on a page to show results
> > > > 21-40, 41-60, etc.) without needing to remember the start rowkey.  I
> > > > believe that a filter would be far more efficient than implementing the
> > > > logic client-side.  I couldn't find any OOTB filter for this
> > > > functionality, so I wrote the class below.  It seems to work fine for
> > > > me, but can anyone comment on whether this approach makes sense?  Is
> > > > there another OOTB filter that I can use instead?
> > > >
> > > > Thank you,
> > > >
> > > > Toby
> > > >
> > > >
> > > >
> > > > import java.io.DataInput;
> > > > import java.io.DataOutput;
> > > > import java.io.IOException;
> > > > import org.apache.hadoop.hbase.filter.FilterBase;
> > > > public class PageOffsetFilter extends FilterBase {
> > > >  private long startRowCount;
> > > >  private long endRowCount;
> > > >
> > > >  private int count = 0;
> > > >  public PageOffsetFilter() {
> > > >  }
> > > >
> > > >  public PageOffsetFilter(long pageNumber, long pageSize) {
> > > >
> > > >   if(pageNumber<1)
> > > >    pageNumber=1;
> > > >
> > > >   startRowCount = (pageNumber - 1) * pageSize;
> > > >   endRowCount = (pageSize * pageNumber)-1;
> > > >  }
> > > >  @Override
> > > >  public boolean filterAllRemaining() {
> > > >   return count > endRowCount;
> > > >  }
> > > >  @Override
> > > >  public boolean filterRow() {
> > > >
> > > >   count++;
> > > >   if(count <= startRowCount) {
> > > >    return true;
> > > >   } else {
> > > >    return false;
> > > >   }
> > > >
> > > >  }
> > > >
> > > >  @Override
> > > >  public void readFields(DataInput dataInput) throws IOException {
> > > >
> > > >   this.startRowCount = dataInput.readLong();
> > > >   this.endRowCount = dataInput.readLong();
> > > >  }
> > > >
> > > >  @Override
> > > >  public void write(DataOutput dataOutput) throws IOException {
> > > >
> > > >   dataOutput.writeLong(startRowCount);
> > > >   dataOutput.writeLong(endRowCount);
> > > >  }
> > > > }
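
For completeness, a minimal sketch of how a client could use the PageOffsetFilter above, assuming the filter class has been deployed on the region servers' classpath and using a made-up table name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class PageOffsetFilterExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // hypothetical table name

    // Fetch page 3 with a page size of 20, i.e. rows 41-60 of the scan.
    Scan scan = new Scan();
    scan.setFilter(new PageOffsetFilter(3, 20));

    ResultScanner scanner = table.getScanner(scan);
    for (Result result : scanner) {
      // render the row ...
    }
    scanner.close();
    table.close();
  }
}

As with PageFilter, the counting inside such a filter runs independently on each region server, so the page boundaries are only exact when the scan is served by a single region.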