HBase user mailing list, thread: No of rows


Re: No of rows
But when the ResultScanner executes, wouldn't it already query the servers for
all the rows matching the start key? I am trying to avoid reading all the
blocks from the file system that match the keys.
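For what it's worth, a scan only touches the row range between its start and stop keys, and the client pulls rows from the server in batches of `setCaching(n)` rows per RPC rather than everything at once. A minimal sketch against the 0.90-era client API (the table name, keys, and caching value here are made up for illustration):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class BoundedScan {
    public static void main(String[] args) throws Exception {
        // Hypothetical table and keys -- substitute your own.
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");

        // Bounding the scan with a stop row keeps the servers from
        // walking past the range you care about.
        Scan scan = new Scan(Bytes.toBytes("startkey"), Bytes.toBytes("stopkey"));

        // Rows fetched per RPC round-trip, not a total cap.
        scan.setCaching(100);

        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                // process r ...
            }
        } finally {
            scanner.close();
        }
    }
}
```

With a small caching value, rows the client never iterates over are never shipped, though the server may still have read the enclosing blocks for the rows it did return.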

On Wed, Sep 12, 2012 at 3:59 PM, Doug Meil <[EMAIL PROTECTED]>wrote:

>
> Hi there,
>
> If you're talking about stopping a scan after X rows (as opposed to the
> batching), then break out of the ResultScanner loop after X rows.
>
> http://hbase.apache.org/book.html#data_model_operations
>
> You can either add a ColumnFamily to a scan, or add specific columns
> (e.g., "cf:column") to a scan.
>
>
>
>
> On 9/12/12 6:50 PM, "Mohit Anchlia" <[EMAIL PROTECTED]> wrote:
>
> >I am using client 0.90.5 jar
> >
> >Is there a way to limit how many rows can be fetched in one scan call?
> >
> >Similarly, is there something for columns?
>
>
>
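The loop break Doug describes, combined with narrowing the scan to a family or a single column, might look like the following sketch (the table, family, column names, and the cap of 100 rows are all illustrative, not from the thread):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class LimitedScan {
    public static void main(String[] args) throws IOException {
        // Hypothetical table name.
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");

        Scan scan = new Scan();
        // Restrict the scan to one column family...
        scan.addFamily(Bytes.toBytes("cf"));
        // ...or to one specific column within it.
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"));

        ResultScanner scanner = table.getScanner(scan);
        try {
            int count = 0;
            for (Result r : scanner) {
                // process r ...
                if (++count >= 100) {
                    break; // stop after X rows, per the reply above
                }
            }
        } finally {
            scanner.close(); // release the server-side scanner
        }
    }
}
```

HBase also ships `org.apache.hadoop.hbase.filter.PageFilter` for limiting rows server-side, but since it applies per region server, the client-side break is still what enforces a hard overall cap.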