Re: Setting up NxN replication
Ishan,

"Coming to Demai’s suggestion of M-M to 2 instead of 9, i still want to have
the data available from 1 to all clusters. How would I do it with your
setup?".

If I understand the requirement correctly, your setup is almost there:
C1 <-> C2 <-> C3 <-> C4  and *C4 <-> C1*
Basically, a doubly-linked list forming a cycle. This way there is no
single point of failure: a write on any cluster will eventually be
replicated to all the clusters. The good part is that, although the total
number of writes is the same as in NxN, each cluster only needs to handle
at most 2 per write. With that said, I have never set up more than 3
clusters, and have to assume no other bugs similar to HBASE-7709 (loop in
Master/Master replication) come out of this.
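
For reference, a ring like this can be wired up from the HBase shell by
giving each cluster its two neighbors as peers. A minimal sketch only; the
zk-cN quorum addresses and peer ids below are made-up placeholders:

  # on C1: add the two ring neighbors, C2 and C4, as replication peers
  add_peer '2', "zk-c2.example.com:2181:/hbase"
  add_peer '4', "zk-c4.example.com:2181:/hbase"

  # on C2: neighbors are C1 and C3
  add_peer '1', "zk-c1.example.com:2181:/hbase"
  add_peer '3', "zk-c3.example.com:2181:/hbase"

  # ...same pattern on C3 and C4, with C4 closing the cycle back to C1

Each column family you want shipped also needs REPLICATION_SCOPE => 1,
otherwise its edits stay local no matter how the peers are wired.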

Still, I don't have a good solution for "...a row should be present in only
4/10 clusters...". One approach would use more than one column family, plus
either HBASE-5002 (control replication peer per column family) or
HBASE-8751. Unfortunately, neither JIRA has been resolved yet. My 2 cents.
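
To make the multi-column-family idea a bit more concrete: replication can
already be turned on or off per column family via REPLICATION_SCOPE, so the
split itself could look like the sketch below (table and family names are
hypothetical). What HBASE-5002/HBASE-8751 would add on top is routing each
family to a specific peer, which is the part that doesn't exist yet:

  # 'shared' edits ship to every configured peer; 'local' edits never leave
  create 'mytable', {NAME => 'shared', REPLICATION_SCOPE => 1},
                    {NAME => 'local',  REPLICATION_SCOPE => 0}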

Demai
On Fri, Nov 8, 2013 at 4:38 PM, Ishan Chhabra <[EMAIL PROTECTED]> wrote:

> Demai, Ted:
>
> Thanks for the detailed answer.
>
> I should add some more context here. The underlying network is an NxN mesh.
> The "cost" for each link is the same.
>
> Coming to Demai's suggestion of M-M to 2 instead of 9, I still want to have
> the data available from 1 to all clusters. How would I do it with your
> setup?
>
> For the difference between MST and NxN:
> Consider the following example, with 4 clusters: C1, C2, C3, C4, and write
> going to C1.
>
> In NxN mesh, the write will be propagated as:
> C1 -> C2
> C1 -> C3
> C1 -> C4
>
> network cost: 3, writes to WAL: 3
>
> MST with tree as C1 <-> C2 <-> C3 <-> C4, the write will be propagated as:
> C1 -> C2
> C2 -> C3
> C3 -> C4
>
> network cost: 3, writes to WAL: 3
>
> Both approaches have the same network and WAL cost. The only difference is
> that in MST, if C2 fails, writes from C1 will not go to C3 and C4, whereas
> in the NxN case, the writes will still happen.
>
> Also, (1) and (3) are not an issue for us.
>
> Having said that, I do realize that adding more clusters increases the
> load quadratically, and that does worry me. Our actual use case is that a
> row should be present in only 4/10 clusters, but it varies based on the row
> and not on the cluster. So I cannot come up with a static replication
> configuration that will handle that. I am looking into per-row replication,
> but will start a separate discussion for that and share my ideas there.
>
> I hope this makes more sense now.
>
>
> On Fri, Nov 8, 2013 at 3:47 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
>
> > bq. how about your company have a new office in the 11th locations?
> >
> > With the minimum spanning tree approach, the increase in load wouldn't be
> > exponential.
> >
> >
> > On Fri, Nov 8, 2013 at 2:58 PM, Demai Ni <[EMAIL PROTECTED]> wrote:
> >
> > > Ishan,
> > >
> > > I have to admit that I am a bit surprised about the need to have data
> > > centers in 10 different locations. Well, I guess I shouldn't be, as
> > > every company is global now (anyone from Mars yet?)
> > >
> > > In your case, since there is only one column family, the headache is
> > > not as bad. Let's call your clusters C1, C2, ... C10.
> > >
> > > The safest way for your most critical data is still to set up M-M
> > > replication 1 to N-1. That is, every cluster adds all of the other
> > > clusters as its peers. For example, C1 will have C2, C3 ... C10 as its
> > > peers; C2 will have C1, C3 ... C10. Well, that will be a lot of data
> > > over the network. Although it is the best/fastest way to get all the
> > > clusters synced up, I don't like the idea at all (too expensive, for
> > > one).
> > >
> > > Now, let's improve it a bit. C1 will set up M-M to 2 of the remaining
> > > 9, with the distribution carefully planned so that all the clusters
> > > get an equal load. Well, a system administrator has to do it manually.
> > >