HBase >> mail # user >> Pagination with HBase - getting previous page of data


Re: Pagination with HBase - getting previous page of data
Hi Anil,

The issue is that all the subsequent page starts would need to move too...

So if you want to jump directly to page n, you might be totally
shifted because of all the data inserted in the meantime...

If you want a real, complete pagination feature, you might want a
coprocessor or a MapReduce job updating another table referring to the
pages...

JM
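
A sketch of that index-table idea (illustrative only, not from the thread:
in real HBase the rebuild would be a coprocessor or MapReduce job writing
to a second table; here plain Python structures stand in for both tables):

```python
PAGE_SIZE = 3  # small for the demo

def rebuild_page_index(row_keys):
    """Walk all row keys in sorted order and record the first key of each
    page, i.e. build the page-number -> start-row-key mapping that a
    coprocessor or MapReduce job would maintain in a second table."""
    index = {}
    for i, key in enumerate(sorted(row_keys)):
        if i % PAGE_SIZE == 0:
            index[i // PAGE_SIZE] = key
    return index

rows = ["010", "020", "030", "040", "050", "060", "070"]
print(rebuild_page_index(rows))   # {0: '010', 1: '040', 2: '070'}

# After an insert the job runs again and *every* boundary is refreshed,
# which is exactly why a one-off snapshot of page starts goes stale:
rows.append("015")
print(rebuild_page_index(rows))   # {0: '010', 1: '030', 2: '060'}
```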

2013/1/25, anil gupta <[EMAIL PROTECTED]>:
> Hi Vijay,
>
> I've done paging in HBase by using Scan only (no pagination filter), as
> Mohammed has explained. However, it was just experimental stuff. It
> works, but Jean raised a very good point.
> Find my answer inline to fix the problem that Jean reported.
>
>
> On Fri, Jan 25, 2013 at 4:38 AM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Vijay,
>>
>> If, while the user is scrolling forward, you store the key of each
>> page, then you will be able to go back to a specific page, and jump
>> forward again to where he was.
>>
>> The only issue is that, if while the user is scrolling the table,
>> someone inserts a row between the last row of a page and the first row
>> of the next page, you will never see this row.
>>
>> Let's take this example.
>>
>> You have 10 items per page.
>>
>> 010 020 030 040 050 060 070 080 090 100 is the first page.
>> 110 120 130 140 150 160 170 180 190 200 is the second one.
>>
>> Now, if someone inserts 101... It will be just after 100 and before 110.
>>
> Anil: Instead of scanning from 010 to 100, scan from 010 to 110. Then we
> won't have this problem. That is, use startRow = firstRowKeyofPage(N) and
> stopRow = firstRowKeyofPage(N+1). This would fix it. In that case the
> number of results might exceed the pageSize, so you need to handle that
> in your logic.
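
A minimal sketch of that overlap-scan fix (not from the thread; a sorted
Python list stands in for the HBase table, scan() mimics the
inclusive-start / exclusive-stop semantics of an HBase Scan, and all names
are illustrative):

```python
import bisect

def scan(table, start_row, stop_row):
    """Toy stand-in for an HBase Scan over a sorted list of row keys:
    returns keys in [start_row, stop_row)."""
    lo = bisect.bisect_left(table, start_row)
    hi = bisect.bisect_left(table, stop_row)
    return table[lo:hi]

def fetch_page(table, page_start, next_page_start):
    """Scan up to the *next page's* stored start key instead of stopping
    after a fixed pageSize, so rows inserted between the two stored
    boundaries are not silently skipped. The result may therefore exceed
    pageSize, which the caller must handle."""
    return scan(table, page_start, next_page_start)

table = [f"{k:03d}" for k in range(10, 210, 10)]   # rows 010, 020, ..., 200
page1 = fetch_page(table, "010", "110")            # 10 rows: 010..100
bisect.insort(table, "101")                        # concurrent insert
page1_again = fetch_page(table, "010", "110")      # 11 rows: 101 is not lost
```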
>
>>
>> When you display 10 rows starting at 010, you will stop just before
>> 101... And for the next page you will start at 110... And 101 will
>> never be displayed...
>>
>> HTH
>>
>> JM
>>
>> 2013/1/25, Mohammad Tariq <[EMAIL PROTECTED]>:
>> > Hello sir,
>> >
>> >       While paging through, store the start key of the current page
>> > of 25 rows in a separate byte[]. Now, if you want to come back to
>> > this page when you are at the next page, do a range query where the
>> > start key is the row key you stored earlier and the end key is the
>> > start row key of the current page. You have to store just one row key
>> > each time you show a page, using which you can come back to that page
>> > when you are at the next page.
>> >
>> > However, this approach will fail in a case where your user would
>> > like to go to a particular previous page.
>> >
>> > Warm Regards,
>> > Tariq
>> > https://mtariq.jux.com/
>> > cloudfront.blogspot.com
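
Tariq's scheme (remember each page's start key while moving forward;
Previous is then an ordinary forward range scan) can be sketched like this;
a sorted Python list stands in for the table, scan() mimics
Scan + PageFilter, and all names are illustrative:

```python
import bisect

def scan(table, start_row, n):
    """Mimic Scan(startRow) + PageFilter(n): up to n rows from start_row."""
    lo = bisect.bisect_left(table, start_row)
    return table[lo:lo + n]

PAGE_SIZE = 5
table = [f"row{k:03d}" for k in range(40)]

# history[-1] is always the start key of the page currently displayed.
history = [table[0]]

def current_page():
    return scan(table, history[-1], PAGE_SIZE)

def go_next():
    # Fetch one extra row; its key is the start of the following page.
    rows = scan(table, history[-1], PAGE_SIZE + 1)
    if len(rows) > PAGE_SIZE:
        history.append(rows[PAGE_SIZE])
    return current_page()

def go_previous():
    # "Previous" is just a forward scan from the remembered start key.
    if len(history) > 1:
        history.pop()
    return current_page()

first = current_page()      # row000..row004
second = go_next()          # row005..row009
go_next()                   # row010..row014
back = go_previous()        # row005..row009 again
```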
>> >
>> >
>> > On Fri, Jan 25, 2013 at 10:28 AM, Vijay Ganesan <[EMAIL PROTECTED]>
>> > wrote:
>> >
>> >> I'm displaying rows of data from an HBase table in a data grid UI.
>> >> The grid shows 25 rows at a time, i.e. it is paginated. The user can
>> >> click on Next/Previous to paginate through the data 25 rows at a
>> >> time. I can implement Next easily by setting an HBase
>> >> org.apache.hadoop.hbase.filter.PageFilter and setting startRow on the
>> >> org.apache.hadoop.hbase.client.Scan to be the row id of the next
>> >> batch's first row, which is sent to the UI with the previous batch.
>> >> However, I can't seem to do the same with Previous. I can set the
>> >> endRow on the Scan to be the row id of the last row of the previous
>> >> batch, but since HBase scans are always in the forward direction,
>> >> there is no way to set a PageFilter that can get 25 rows ending at a
>> >> particular row. The only option seems to be to get *all* rows up to
>> >> the end row and filter out all but the last 25 in the caller, which
>> >> seems very inefficient. Any ideas on how this can be done
>> >> efficiently?
>> >>
>> >> --
>> >> -Vijay
>> >>
>> >
>>
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>
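
For reference, the Next mechanism Vijay describes and the inefficient
Previous fallback he wants to avoid can be simulated like this (a sorted
Python list stands in for the table; names are illustrative, not HBase API):

```python
import bisect

PAGE_SIZE = 25

def next_batch(table, start_row):
    """Mimic Scan(startRow) + PageFilter(PAGE_SIZE)."""
    lo = bisect.bisect_left(table, start_row)
    return table[lo:lo + PAGE_SIZE]

def previous_batch_naive(table, end_row):
    """The inefficient fallback: since an HBase scan is forward-only, read
    every row from the start of the table, keeping only the last PAGE_SIZE
    rows seen before end_row -- O(rows preceding end_row) work."""
    window = []
    for key in table:          # forward-only, like an HBase Scan
        if key >= end_row:
            break
        window.append(key)
        if len(window) > PAGE_SIZE:
            window.pop(0)
    return window

table = [f"{k:04d}" for k in range(1, 101)]        # rows 0001..0100
nxt = next_batch(table, "0026")                    # 0026..0050
prev = previous_batch_naive(table, "0026")         # 0001..0025
```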