OK, I misunderstood the current direction of the merge.
On the review request:
> we don't deal with the case where OP_ADD
> contains blocks on a new file -- this is a case that doesn't happen on
> real clusters, but currently happens with synthetic logs generated
> from the CreateEditLogs tool.
I intentionally did not remove the handling of new files with blocks
(non-empty). The reason is potential backward-compatibility issues: if
any HDFS version in the past produced such transactions, the new
version must be able to read them. I thought it would be easier to
retain support for this type of transaction than to audit every past
version for whether it could have produced them.
I would not recommend restricting OP_ADD in a way that requires new
files to be empty.
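For illustration, here is a minimal sketch contrasting the lenient
handling I am arguing for with the stricter assert-empty variant
discussed below. The class and method names are hypothetical, not the
actual FSEditLogLoader code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not the real FSEditLogLoader: contrasts two
// policies for an OP_ADD transaction that creates a new file yet carries
// block entries (something old or synthetic edit logs may contain).
class OpAddSketch {
    static class INodeFile {
        final List<String> blocks = new ArrayList<>();
    }

    /** Lenient: tolerate blocks on a new file for backward compatibility. */
    static INodeFile applyLenient(INodeFile oldFile, List<String> opBlocks) {
        INodeFile file = (oldFile != null) ? oldFile : new INodeFile();
        updateBlocks(file, opBlocks); // applied whether the file is new or not
        return file;
    }

    /** Strict: assert that a new file carries no blocks; reconcile blocks
     *  only on the existing-file (reopen) path. */
    static INodeFile applyStrict(INodeFile oldFile, List<String> opBlocks) {
        if (oldFile == null) {
            if (!opBlocks.isEmpty()) {
                throw new IllegalStateException(
                    "OP_ADD for a new file should not contain blocks");
            }
            return new INodeFile();
        }
        updateBlocks(oldFile, opBlocks);
        return oldFile;
    }

    // Stand-in for a block-reconciliation helper: replace the inode's
    // block list with the one recorded in the transaction.
    static void updateBlocks(INodeFile file, List<String> opBlocks) {
        file.blocks.clear();
        file.blocks.addAll(opBlocks);
    }
}
```

The lenient path costs us nothing on well-formed logs and keeps old
logs readable; the strict path fails fast on input we believe is
impossible on real clusters.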
On Thu, Feb 9, 2012 at 10:57 AM, Todd Lipcon <[EMAIL PROTECTED]> wrote:
> Hi Konstantin,
> To be clear, this review request is a merge from the trunk branch into
> the HA branch (NOT a merge INTO trunk) - we've been doing these pretty
> much daily since the project began, so that we track trunk closely.
> The idea is to avoid unexpected integration issues when we do the
> merge the _other_ way once it's ready.
> When we propose the merge *into* trunk we'll definitely address your
> questions below. We are indeed already running cluster tests at
> 100-node scale with failovers and both MR and HBase workloads, though
> we have not done a formal performance comparison at this point.
> On Thu, Feb 9, 2012 at 10:54 AM, Konstantin Shvachko
> <[EMAIL PROTECTED]> wrote:
>> I was wondering:
>> 1. What test plan has been executed for this HA implementation,
>> besides unit tests?
>> 2. Have you done any benchmarks comparing current cluster performance
>> against the branch? It would be good to have numbers for both cases,
>> with HA off and HA on.
>> I'll post these questions to the jira as well.
>> On Wed, Feb 8, 2012 at 9:08 PM, Todd Lipcon <[EMAIL PROTECTED]> wrote:
>>> The branch developed some new conflicts due to recent changes in trunk
>>> affecting the RPC between the DN and the NN (the "StorageReport"
>>> stuff). I've done a new merge to address these conflicts here:
>>> I've also addressed Aaron's comments in the thread above.
>>> I ran the unit tests on the branch and they passed.
>>> On Fri, Feb 3, 2012 at 4:44 PM, Aaron T. Myers <[EMAIL PROTECTED]> wrote:
>>>> Hey Todd,
>>>> The merge largely looks good. I agree with the general approach you took. A
>>>> few small comments:
>>>> 1. There's a comment in the OP_ADD case block about handling OP_CLOSE. This
>>>> makes sense in 0.22/0.23/0.24, but in the HA branch the OP_ADD and OP_CLOSE
>>>> cases are completely separate case blocks. I actually find this whole
>>>> comment a little confusing, since it numbers the cases we have to handle,
>>>> but those numbers aren't referenced anywhere else.
>>>> 2. You mentioned in your message that you don't handle the (invalid) case
>>>> of OP_ADD on a new file containing updated blocks, but it looks like the
>>>> code actually does, though the code also mentions that we should add a
>>>> sanity check that this case actually can't occur. Seems like we should
>>>> clean up this inconsistency. I agree that asserting this case doesn't
>>>> occur is the right way to go.
>>>> 3. If we go with my suggestion in (2), we can also move the call to
>>>> FSEditLogLoader#updateBlocks to only the case of OP_ADD for an existing
>>>> file, and then get rid of the "INodeFile newFile = oldFile" assignment,
>>>> which I found kind of confusing at first. (Though I do see why it's correct
>>>> as-implemented.) If you don't go with my suggestion in (2), please add a
>>>> comment explaining the assignment.
>>>> Otherwise looks good. Merge away.
>>>> Aaron T. Myers
>>>> Software Engineer, Cloudera