Re: Possibility of using timestamp as row key in HBase
Thanks Asaf and Anoop.

You are right, data in the Memstore is already sorted, so flush() would not
block the current write stream much, since that stream goes to another Memstore...

But wait... flush() consumes disk I/O, which I think would interfere with
WAL writes. Say we have two Memstores, A and B, on one node. A is full and
starts flushing, so disk I/O is mainly dedicated to A.flush(); meanwhile,
online writes to B need to append to the WAL, which needs disk I/O as well
and should conflict a bit with A.flush()...
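To put rough, purely illustrative numbers on that concern (assumptions, not
measurements): if a disk sustains about 100 MB/s and a 128 MB Memstore flush
takes roughly 1.3 s, then for that window every WAL sync shares bandwidth
with the flush, so append latency can spike even though puts are not
logically blocked.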

In general, my goal is to avoid blocking the write stream. My assumption is
that the write stream coming from the client can be handled by a single RS
(when there is no flush or compaction going on there)... And whenever a
flush (assuming my argument above is right) or a compaction (minor/major) is
going on, I can redirect the write stream to other RSes...
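For comparison, the standard way to spread a monotonically increasing
(timestamp) row key across region servers without redirecting mid-stream is
client-side bucketing (salting). Below is a minimal sketch against the
0.94-era client API; the table name "metrics", family "d", and the bucket
count are made-up placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BucketedWriter {
  private static final int BUCKETS = 4; // assumption: roughly one bucket per RS

  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "metrics");
    long ts = System.currentTimeMillis();
    // A one-byte bucket prefix spreads consecutive timestamps over BUCKETS
    // key ranges, so no single RS has to absorb the whole write stream.
    byte[] rowKey = Bytes.add(new byte[] { (byte) (ts % BUCKETS) },
                              Bytes.toBytes(ts));
    Put put = new Put(rowKey);
    put.add(Bytes.toBytes("d"), Bytes.toBytes("v"), Bytes.toBytes("value"));
    table.put(put);
    table.close();
  }
}

The trade-off is on the read side: a time-range scan has to fan out across
all BUCKETS prefixes and merge the results.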

Yun
On Fri, Jun 21, 2013 at 1:26 AM, Asaf Mesika <[EMAIL PROTECTED]> wrote:

> On Thu, Jun 20, 2013 at 9:42 PM, yun peng <[EMAIL PROTECTED]> wrote:
>
> > Thanks Asaf, I made the response inline.
> >
> > On Thu, Jun 20, 2013 at 9:32 AM, Asaf Mesika <[EMAIL PROTECTED]>
> > wrote:
> >
> > > On Thu, Jun 20, 2013 at 12:59 AM, yun peng <[EMAIL PROTECTED]>
> > > wrote:
> > >
> > > > Thanks for the reply. The idea is interesting, but in practice, our
> > > > client doesn't know in advance how much data should be put to one RS.
> > > > The data write is redirected to the next RS only when the current RS
> > > > is initiating a flush() and begins to block the stream...
> > > >
> > > Can a single RS handle the load for the duration until HBase splits
> > > the region and load balancing kicks in and moves the region to another
> > > server?
> > >
> > Right, currently the timeseries data (i.e., with a sequential rowkey) is
> > metadata in our system and is not that heavyweight... it can be handled
> > by a single RS...
> >
> >
> >
> > > > The real problem is not about splitting an existing region, but
> > > > about adding a new region (i.e., a new key range).
> > > > In the original example, before node n3 overflows, the system is like
> > > > n1 [0,4],
> > > > n2 [5,9],
> > > > n3 [10,14]
> > > > then n3 starts to flush() (say Memstore.size = 5), which may block
> > > > the write stream to n3. We want the subsequent write stream to
> > > > redirect back to, say, n1, so now n1 is accepting 15, 16... for the
> > > > range [15,19].
> > > >
> > > Flush does not block HTable.put() or HTable.batch(), unless your
> > > system is not tuned and your flushes are slow.
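For reference, the main knobs behind "not tuned" here are real 0.94-era
hbase-site.xml properties; the values below are illustrative, not
recommendations:

<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <!-- updates to a region are blocked once its memstore reaches
       multiplier * hbase.hregion.memstore.flush.size -->
  <value>4</value>
</property>
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <!-- flushes are delayed (and writes can block) when a store has more
       than this many files awaiting compaction -->
  <value>10</value>
</property>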
> > >
> > If I understand right, flush() needs to sort data, build an index, and
> > sequentially write to disk... which I think should, if not block, at
> > least interfere a lot with the thread doing in-memory writes (plus the
> > WAL). A drop in write throughput can be expected.
> >
> I think all those phases of sorting and index building are done per
> insertion of a Put into the Memstore, so the flush only dumps the bytes
> from memory to disk (over the network). It doesn't interfere with other
> writes happening at the same time, since HBase opens a new memstore,
> directs the writes there, and asynchronously flushes the old memstore to
> disk. Writes block only if the new memstore fills up very quickly, before
> you finish flushing the first one.
> Regarding the WAL, it is written before the memstore. Writes first get an
> ack for the WAL append, then go into the memstore, and then the client
> gets its ack. I don't see any blocking here.
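A simplified, self-contained sketch of the snapshot-and-swap behaviour
described above (this is not HBase's actual code; the class and method
names are invented for illustration):

import java.util.Map;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

class MemStoreSketch {
  // Kept sorted on every insert, so a flush is a sequential dump, not a sort.
  private volatile NavigableMap<Long, byte[]> active = new ConcurrentSkipListMap<>();

  void put(long rowKey, byte[] value) {
    active.put(rowKey, value); // sorted insertion; writers never wait on flush()
  }

  void flush() {
    // Swap in an empty memstore; new writes land there immediately.
    // (The real implementation synchronizes this swap; omitted for brevity.)
    NavigableMap<Long, byte[]> snapshot = active;
    active = new ConcurrentSkipListMap<>();
    for (Map.Entry<Long, byte[]> e : snapshot.entrySet()) {
      writeToStoreFile(e.getKey(), e.getValue()); // already ordered, sequential I/O
    }
  }

  private void writeToStoreFile(long key, byte[] value) {
    // elided: in HBase this would append to an HFile on HDFS
  }
}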
>
>
>
> > >
> > > > As I understand it, the above behaviour would change HBase's normal
> > > > way of managing the region-key mapping. And we want to know how much
> > > > effort it would take to change HBase?
> > > >
> > > Well, as I understand it - you write to n3, to a specific region (say
> > > (10,inf)). Once you pass the max size, it splits into (10,14) and
> > > (15,inf). If n3's RS now has more than the average number of regions
> > > per RS, one region will move to another RS. It may be (10,14) or
> > > (15,inf).
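If the key ranges are known up front, as in the n1/n2/n3 example, the table
can also be pre-split at creation time so the ranges exist before any
writes arrive. A sketch against the 0.94-era admin API (the table name
"metrics" and family "d" are made-up placeholders):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTable {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    HTableDescriptor desc = new HTableDescriptor("metrics");
    desc.addFamily(new HColumnDescriptor("d"));
    // Region boundaries matching the example: [0,4], [5,9], [10,14], [15,inf)
    // Bytes.toBytes(long) is big-endian, so it sorts correctly for
    // non-negative keys such as timestamps.
    byte[][] splits = {
        Bytes.toBytes(5L), Bytes.toBytes(10L), Bytes.toBytes(15L)
    };
    admin.createTable(desc, splits);
  }
}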