HBase >> mail # user >> Hbase write stream blocking and any solutions?

yun peng 2013-06-09, 13:28
Re: Hbase write stream blocking and any solutions?
One thing to keep in mind is that this typically happens when you write faster than your I/O subsystem can support.
For a while HBase will absorb this by buffering in the memstore, but if you sustain the write load, something will have to slow the writers down.

Granted, this could be done a bit more gracefully.
-- Lars
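The behavior Lars describes is controlled by a couple of region-level settings in hbase-site.xml. A minimal sketch (the property names are the standard HBase ones; the values here are illustrative, not recommendations):

```xml
<!-- Sketch of the memstore knobs behind the blocking behavior described above. -->
<property>
  <!-- Flush a region's memstore to disk once it reaches this size, in bytes. -->
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value>
</property>
<property>
  <!-- If writes outrun flushing and the memstore reaches
       multiplier x flush.size, HBase blocks further updates to the region. -->
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>4</value>
</property>
```

With these (illustrative) values, updates to a region block once its memstore reaches 4 x 128 MB while flushes are still pending.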
From: yun peng <[EMAIL PROTECTED]>
Sent: Sunday, June 9, 2013 6:28 AM
Subject: Hbase write stream blocking and any solutions?
Hi, All

HBase can block online write operations when there is too much data in the
memstore (to keep the eventual compaction triggered by this flush efficient
when there are already many files on disk). This blocking effect has also
been observed by others (e.g.,

The solution proposed in the blog post above is to increase the memstore
size so there are fewer flushes, and to tolerate a larger number of files on
disk (by increasing blockingStoreFiles). This is a kind of HBase tuning
toward write-intensive workloads.
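That tuning maps onto the store-file side of the blocking behavior. A sketch of the relevant hbase-site.xml properties (real property names; the values are illustrative):

```xml
<property>
  <!-- Block updates to a region when any of its stores has more than
       this many store files, until compaction catches up. -->
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>16</value>
</property>
<property>
  <!-- Give up blocking after this many milliseconds even if the
       store-file count is still above the limit. -->
  <name>hbase.hstore.blockingWaitTime</name>
  <value>90000</value>
</property>
```

Raising blockingStoreFiles trades read amplification (more files to check per read) for fewer write stalls, which is why it suits write-heavy but not read-heavy periods.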

My target application has a dynamic workload which may change from
write-intensive to read-intensive. There are also peak hours (when blocking
is perceivable by users and should not occur) and off-peak hours (when
blocking is tolerable). I am wondering whether any more intelligent
solution (say, a clever scheduling policy that blocks only during off-peak
hours) exists in the latest HBase version that could minimize the effect of
write-stream blocking.
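As far as I know, HBase's only time-of-day awareness is for compactions (hbase.offpeak.start.hour / hbase.offpeak.end.hour, which relax the compaction ratio off-peak), not for memstore blocking. One workaround is to pace the writers in the application instead, so HBase never hits its blocking thresholds during peak hours. A hypothetical client-side sketch (all names are made up for illustration; a token-bucket rate limiter whose rate depends on the hour of day):

```python
import time

class PeakAwareThrottle:
    """Token-bucket write throttle whose rate depends on the hour of day.

    During peak hours the writer is paced gently (peak_rate ops/sec) so the
    memstore never fills; off-peak it is allowed to run much faster and let
    HBase's own blocking absorb any overload.
    """

    def __init__(self, peak_rate, offpeak_rate, peak_hours=range(8, 20)):
        self.peak_rate = float(peak_rate)        # max writes/sec during peak
        self.offpeak_rate = float(offpeak_rate)  # max writes/sec off-peak
        self.peak_hours = set(peak_hours)
        self.tokens = 0.0
        self.last = time.monotonic()

    def current_rate(self, hour):
        """Return the allowed writes/sec for the given hour of day."""
        return self.peak_rate if hour in self.peak_hours else self.offpeak_rate

    def acquire(self, hour=None):
        """Sleep as needed so calls average out to the current rate."""
        if hour is None:
            hour = time.localtime().tm_hour
        rate = self.current_rate(hour)
        now = time.monotonic()
        # Refill the bucket for the elapsed time, capped at one second's worth.
        self.tokens = min(rate, self.tokens + (now - self.last) * rate)
        self.last = now
        if self.tokens < 1.0:
            time.sleep((1.0 - self.tokens) / rate)
            self.tokens = 1.0
        self.tokens -= 1.0
```

Each write (e.g. a Put) would call `acquire()` first; the rest of the write path is unchanged. This only shifts where the slowdown happens, but it makes the slowdown schedulable.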

yun peng 2013-06-10, 02:17
lars hofhansl 2013-06-10, 17:29
Kevin Odell 2013-06-10, 13:37
lars hofhansl 2013-06-10, 17:46