Re: Scanner Caching with wildly varying row widths
You can use scan.setBatch() to limit the number of columns returned. Note that it will split up a row into multiple rows from the client's perspective, and client code might need to be modified to make use of the setBatch feature.
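
A minimal sketch of what setBatch looks like on the client side, assuming the 0.94/0.96-era Java client API; the table name, batch size, and caching value below are placeholders, not from this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class BatchedScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // placeholder table name

    Scan scan = new Scan();
    scan.setCaching(250);   // number of Results fetched per RPC
    scan.setBatch(100);     // at most 100 cells per Result; a wide row comes
                            // back as several Results from the client's view

    long cells = 0;
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result partial : scanner) {
        // each Result may be only a slice of a row once setBatch is in effect
        cells += partial.size();
      }
    } finally {
      scanner.close();
      table.close();
    }
    System.out.println("cells seen: " + cells);
  }
}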
 
Regards,
Dhaval
________________________________
 From: Patrick Schless <[EMAIL PROTECTED]>
To: user <[EMAIL PROTECTED]>
Sent: Monday, 4 November 2013 6:03 PM
Subject: Scanner Caching with wildly varying row widths
 

We have an application where a row can contain anywhere between 1 and
3600000 cells (there's only 1 column family). In practice, most rows have
under 100 cells.

Now we want to run some MapReduce jobs that touch every cell within a range
(e.g. count how many cells we have).  With scanner caching set to something
like 250, the job will chug along for a long time until it hits a row with
a lot of data, and then it will die.  Setting the cache size down to 1 (row)
would presumably work, but would take forever to run.
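
For what it's worth, a rough sketch of how such a cell-count job could be wired up with batching, assuming a Hadoop 2 / HBase 0.96-era client; the table name and the caching/batch values are placeholders:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class CellCountJob {

  static class CellCountMapper extends TableMapper<NullWritable, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      // with setBatch in effect each Result is at most a slice of a row,
      // so a 3600000-cell row never has to fit into a single Result
      context.getCounter("stats", "cells").increment(value.size());
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "cell-count");
    job.setJarByClass(CellCountJob.class);

    Scan scan = new Scan();
    scan.setCaching(250);        // Results per RPC
    scan.setBatch(1000);         // cap cells per Result so wide rows can't kill the client
    scan.setCacheBlocks(false);  // usually recommended for full-table MR scans

    TableMapReduceUtil.initTableMapperJob(
        "mytable",               // placeholder table name
        scan,
        CellCountMapper.class,
        NullWritable.class,
        NullWritable.class,
        job);
    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}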

We have addressed this by writing some jobs that use coprocessors, which
allow us to pull back sets of cells instead of sets of rows, but this means
we can't use any of the built-in jobs that come with HBase (e.g. CopyTable).
Is there any way around this? Have other people had to deal with such high
variability in their row sizes?