> > 1. How long does it take for edits to be propagated to a slave cluster?
> > As far as I understand from the HBase Replication page
> > (http://hbase.apache.org/replication.html), there's a separate buffer held by
> > each region server which accumulates data (edits from the edit log which
> > should be replicated) before sending to the slave cluster's RSs. So basically
> > data are sent to the slave cluster when:
> > * the buffer is full. Is there an option to configure its size (as a way to
> > control flushing frequency)?
> > * the end of the edit log is reached by this "working thread". Does the
> > thread read the edit log periodically, or is it watching for the edit log to
> > change and acting "immediately"? If the former, what is the default interval
> > between runs? Can it be configured?
> It acts as soon as the buffer is full or it reaches an EOF. The end of
> the file is determined by when the file was reopened, *because there's
> no way to tail a file in HDFS without closing the reader, reopening
> the file and seeking to a certain position*. The end result is that
> replication can't just sit buffering for minutes before sending, because
> it gets the EOF pretty quickly. Our replication stream almost always has
> sub-second lag. Only if it reaches the end and didn't read anything
> new will it wait.
> replication.source.size.capacity, default is 64MB, but recently I saw
> some OOME issues and I'm starting to think that this is too big.
> replication.source.nb.capacity, default is 25k edits. The buffer is flushed
> when either the size limit or the edit-count limit is reached. I'm thinking
> of deleting this second config because what's really important is the size.
> replication.source.maxretriesmultiplier, default is 10, so it retries
> up to 10 times with pauses that are currentIteration times
> replication.source.sleepforretries. By default it sleeps 1 sec, 2, 3,
> 4... 9, 10, 10, 10, 10 until it's able to replicate
> replication.source.sleepforretries, default is 1 second, see above.
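Thanks, these knobs are good to know. For the archives, I guess tuning them in
hbase-site.xml on the master cluster's region servers would look something like
this (the values below are just the defaults you listed, and I'm assuming
replication.source.sleepforretries is specified in milliseconds):

<!-- hbase-site.xml on the master cluster's region servers -->
<property>
  <name>replication.source.size.capacity</name>
  <!-- buffer size in bytes; the buffer is shipped when this is reached (64MB) -->
  <value>67108864</value>
</property>
<property>
  <name>replication.source.nb.capacity</name>
  <!-- max number of buffered edits before shipping -->
  <value>25000</value>
</property>
<property>
  <name>replication.source.maxretriesmultiplier</name>
  <!-- cap on the backoff multiplier between retries -->
  <value>10</value>
</property>
<property>
  <name>replication.source.sleepforretries</name>
  <!-- base sleep between retries; assuming milliseconds here -->
  <value>1000</value>
</property>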
So does this mean that if it's unable to replicate after some number of sleeps,
like the ones you've listed above, it gives up trying to replicate?
> > 2. How reliable is replication?
> > It looks like when there are some networking issues and the slave cluster
> > can't be reached, this is handled gracefully by the replication mechanism. It
> > sounds like this should also cover the slave cluster going down for some
> > reason. Are there any possible scenarios when replication can be broken?
> The biggest issue at the moment is (from the replication
> documentation): HBASE-3130, the master cluster needs to be restarted
> if its region servers lose their session with a slave cluster
OK. So sequentially restarting each RS on the master cluster should be OK and
the replication will/should continue where it left off?
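By "sequentially restarting" I mean doing something like this on each RS in the
master cluster, one node at a time (just a sketch using the standard daemon
scripts, not a graceful-restart tool):

$ bin/hbase-daemon.sh stop regionserver
$ bin/hbase-daemon.sh start regionserver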
> Also, reliability in general in 0.90 has gone down a bit because we
> were using 0.89 for a long time and just recently started using
> 0.90.1... there are still a few bugs I'm hunting.
> > 3. Replication of existing (and possibly big) cluster after the fact.
> > What are the options to replicate all existing data to a new (& empty)
> > cluster if replication wasn't configured from the start, and keep replicating
> > from that point? It seems that because edit logs on the master cluster get
> > cleaned, this might not be possible?
> From the FAQ at the end of the replication documentation:
> Q. You need a bulk edit shipper? Something that allows you to transfer
> 64MB of edits in one go?
> A. You can use the HBase-provided utility called CopyTable from the
> package org.apache.hadoop.hbase.mapreduce in order to have a
> distcp-like tool to bulk copy data.
Right, right... http://blog.sematext.com/2011/03/11/hbase-backup-options/
OK, so if we have a *live* cluster and then one day we decide we want to start
replicating it, we need to stop the cluster first, run CopyTable for each table
(something like the command sketched below), start the slave cluster, restart
the master cluster, and replication should kick in and keep the 2 clusters in
sync.
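Just to make the CopyTable step concrete for the archives, I assume the
per-table copy looks roughly like this (host and table names are made up;
--peer.adr takes the slave's ZooKeeper quorum, client port and parent znode):

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --peer.adr=slave-zk1,slave-zk2,slave-zk3:2181:/hbase \
    mytable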
And then if the slave cluster goes down for a while one day, replication won't
be sufficient - one will need to repeat the above procedure again, right?
Aha, thanks for pointing it out.
This also means that one should really be using the latest and greatest,
about-to-be-released HBase in order to get this fix, which is good to know.
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/