On Mon, May 20, 2013 at 12:35 AM, Varun Sharma <[EMAIL PROTECTED]> wrote:
> Hi Lars,
> Thanks for the response.
> Regarding #2 again: if RS1 fails, then the following happens...
> 1) RS2 takes over its logs...
> 2) The master renames the directory containing the logs so that its name
> ends in "-splitting"...
> 3) Does RS2 already know about the "-splitting" path?
It will look at all the possible locations. See ReplicationSource.openReader().
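Roughly, it tries each candidate path in turn until one exists. A simplified,
untested sketch (the real logic is in ReplicationSource.openReader; fs, conf
and oldLogDir are assumed to be fields of the enclosing class):

import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.Path;

// Simplified, untested sketch of finding a WAL that may have moved
// after the hosting RS died. The real logic is in
// ReplicationSource.openReader(); fs, conf and oldLogDir are assumed
// to be fields of the enclosing class.
HLog.Reader openLog(Path logDir, String logName) throws IOException {
  Path splitDir = new Path(logDir.getParent(),
                           logDir.getName() + "-splitting");
  Path[] candidates = {
      new Path(logDir, logName),    // original location
      new Path(splitDir, logName),  // after the master's rename
      new Path(oldLogDir, logName)  // archived once splitting is done
  };
  for (Path p : candidates) {
    if (fs.exists(p)) {
      return HLog.getReader(fs, p, conf);
    }
  }
  throw new FileNotFoundException(logName + " not found anywhere");
}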
> Also on a related note, was there a reason that we have all region servers
> watching all other region servers' queues of logs? Otherwise, couldn't the
> master have done the reassignment of outstanding logs to other region
> servers more fairly upon failure?
I think I did it like that because it was easier, since the region
server has to be told to grab the queue(s) anyway.
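For the record, the takeover itself is a race through ZooKeeper: when an RS's
znode disappears, the survivors each try to lock that server's replication
queues, and whoever creates the lock node first copies the queues under its
own znode. A rough sketch (the two helpers at the bottom are hypothetical;
the real implementation lives in ReplicationZookeeper):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Rough sketch of claiming a dead RS's replication queues.
void claimQueues(ZooKeeper zk, String deadRsZnode) throws Exception {
  try {
    // Every surviving RS races to create this lock node; ZooKeeper
    // guarantees exactly one creator succeeds.
    zk.create(deadRsZnode + "/lock", new byte[0],
              ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
  } catch (KeeperException.NodeExistsException e) {
    return; // another RS won the race, nothing to do here
  }
  copyQueuesUnderMyZnode(deadRsZnode); // hypothetical helper
  startRecoveredSources();             // hypothetical helper
}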
> On Sun, May 19, 2013 at 8:49 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
>> #1 yes
>> #2 no
>> Now, there are scenarios where inconsistencies can happen. The edits are
>> not necessarily shipped in order when there are failures.
>> So it is possible to have some Puts at T1 and some Deletes at T2 (T1 <
>> T2), and end up with the deletes shipped first.
>> Now imagine a compaction happens at the slave after the Deletes are
>> shipped to the slave, but before the Puts are shipped... The Puts will
>> reappear on the slave, since the compaction removed the Deletes that
>> would have masked them.
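To make that concrete, here is that sequence played out by hand against the
0.94-era client API (a sketch; slaveTable stands in for the slave cluster
connection, and the shipping/compaction steps are simulated in comments):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ReplicationDivergenceSketch {
  public static void main(String[] args) throws Exception {
    Configuration slaveConf = HBaseConfiguration.create();
    HTable slaveTable = new HTable(slaveConf, "t"); // slave cluster, table "t"
    long t1 = 1000L, t2 = 2000L; // t1 < t2

    // 1) The Delete (ts=t2) is shipped first and lays down a tombstone.
    Delete d = new Delete(Bytes.toBytes("r"));
    d.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("q"), t2);
    slaveTable.delete(d);

    // 2) A major compaction runs on the slave and purges the tombstone.

    // 3) The older Put (ts=t1) finally arrives; nothing masks it now.
    Put p = new Put(Bytes.toBytes("r"), t1);
    p.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    slaveTable.put(p);

    // Result: the slave serves "v" for r/f:q while the master does not.
    // The clusters have silently diverged.
  }
}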
>> -- Lars
>> From: Varun Sharma <[EMAIL PROTECTED]>
>> To: [EMAIL PROTECTED]
>> Sent: Sunday, May 19, 2013 12:13 PM
>> Subject: Questions about HBase replication
>> I have a couple of questions about HBase replication...
>> 1) When we ship edits to the slave cluster, do we retain the timestamps in
>> the edits? If we don't, I can imagine hitting some inconsistencies.
>> 2) When a region server fails, the master renames the directory containing
>> WAL(s). Does this impact reading of those logs for replication?