Re: Optimizing Multi Gets in hbase
i) Yes, or at least often yes.
ii) You're right. It's difficult to guess how much it would improve
performance (there is a lot of caching effect), but using a single scan
could be an interesting optimization imho.

Nicolas
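
To make the single-scan idea above concrete, here is a rough sketch (not from the thread) of what it could look like with the HTable client API current at the time (HBase 0.94). The table name, column family, and row keys are invented for illustration, and whether a bounded scan actually beats a multi-get depends on how densely the requested keys sit in the key space:

import java.io.IOException;
import java.util.TreeSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class SingleScanInsteadOfGets {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");        // hypothetical table

        // The row keys we want, kept sorted so we know the scan boundaries.
        TreeSet<String> wanted = new TreeSet<String>();
        wanted.add("row-0010");
        wanted.add("row-0042");
        wanted.add("row-0099");

        // One scan covering [first, last]; the stop row is exclusive,
        // so append a zero byte to make the last wanted key inclusive.
        Scan scan = new Scan(Bytes.toBytes(wanted.first()),
                             Bytes.toBytes(wanted.last() + "\0"));
        scan.addFamily(Bytes.toBytes("cf"));               // hypothetical family
        scan.setCaching(100);                              // fetch rows in batches

        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                // The scan also returns rows that fall between the wanted
                // keys, so filter on the client side.
                if (wanted.contains(Bytes.toString(r.getRow()))) {
                    // ... use r ...
                }
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}
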
On Mon, Feb 18, 2013 at 10:57 AM, Varun Sharma <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I am trying to do batched get(s) on a cluster. Here is the code:
>
> List<Get> gets = ...
> // Prepare my gets with the rows I need
> myHTable.get(gets);
>
> I have two questions about the above scenario:
> i) Is this the most optimal way to do this?
> ii) I have a feeling that if there are multiple gets in this case, on the
> same region, then each one of those shall instantiate separate scan(s) over
> the region even though a single scan is sufficient. Am I mistaken here?
>
> Thanks
> Varun
>
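
For readers landing on this thread from a search, a self-contained version of the batched multi-get from the original message might look as follows. The connection setup, table name, row keys, and column are filled in purely for illustration (they are not part of Varun's snippet), again assuming the 0.94-era HTable API:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedGets {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");        // hypothetical table

        // Prepare the gets with the rows we need.
        List<Get> gets = new ArrayList<Get>();
        for (String row : new String[] {"row-0010", "row-0042", "row-0099"}) {
            Get get = new Get(Bytes.toBytes(row));
            get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));  // hypothetical column
            gets.add(get);
        }

        // The client groups the gets by region server and ships each group
        // in one multi-action request; per the discussion above, each Get is
        // still served individually on the region server.
        Result[] results = table.get(gets);
        for (Result r : results) {
            // ... use r ...
        }
        table.close();
    }
}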