Pull requests would be an awesome development for Bigtop - it would make it really easy to test and review patches (etc etc, I'm sure you all know the benefits :)).
So... should I get started down the road of investigating how to get pull requests enabled for Apache projects? I'm not sure what it involves - maybe some kind of two-way mirroring - but I'm sure it will be a good thing to have.
If there's agreement, I'll file a JIRA and track progress there.
Is that really illegal, and if so, are the Spark folks just given a pass to expedite things? If so... shouldn't we ask for the same pass?
I think this functionality alone could grow the community around Bigtop more than any other single action we could undertake.

On Jun 29, 2014, at 7:07 PM, Roman Shaposhnik <[EMAIL PROTECTED]> wrote:
On Mon, Jun 30, 2014 at 10:10 AM, Jay Vyas <[EMAIL PROTECTED]> wrote:
Can you elaborate on what you mean by full integration? So there's github.com/apache/spark. If I fork that and send a pull request, it will get forwarded to [EMAIL PROTECTED]. So far, exactly the same thing is happening in Bigtop as well.
Now, are you saying that a committer can click the 'Merge pull request' button in GitHub and the commit actually ends up in the ASF git repo?
If not, what *does* happen? Let's figure out what it is, first ;-)
Sorry for the lack of clarity --- I only recently learned of this process. I've since gotten up to speed on the details; here they are:
I'll walk through the way this script https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py is used:
1) Rather than actually submitting a patch, the user submits a pull request, and a branch is created by the committer automatically using the script: run_cmd("git fetch %s pull/%s/head:%s" % (PR_REMOTE_NAME, pr_num, pr_branch_name))
2) Then the script checks, via the GitHub API, whether the patch is mergeable.
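The two steps above can be sketched roughly like this. This is a simplified illustration, not code lifted from the Spark script: the branch-name convention and the sample API payload are hypothetical, though GitHub's refs/pull/N/head refs and the "mergeable" field on the pull-request API object are real.

```python
import json

# Remote assumed to point at github.com/apache/spark (name is illustrative).
PR_REMOTE_NAME = "apache"

def fetch_command(pr_num, pr_branch_name):
    # Step 1: fetch the pull request's head ref into a local branch,
    # in the same form the script passes to run_cmd().
    return "git fetch %s pull/%s/head:%s" % (PR_REMOTE_NAME, pr_num, pr_branch_name)

def is_mergeable(pr_json):
    # Step 2: the GitHub pull-request API object carries a "mergeable"
    # field (true/false, or null while GitHub is still computing it).
    return pr_json.get("mergeable") is True

# What the script would run for a hypothetical PR #42:
print(fetch_command(42, "PR_TOOL_MERGE_PR_42"))
# -> git fetch apache pull/42/head:PR_TOOL_MERGE_PR_42

# Hypothetical response body for
# GET https://api.github.com/repos/apache/spark/pulls/42
sample = json.loads('{"number": 42, "mergeable": true}')
print(is_mergeable(sample))
```

If "mergeable" comes back false (or stays null), the committer would resolve conflicts by hand instead of letting the script proceed.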
Screw Gerrit - we've already got the Jenkins Enterprise GitHub pull request builder plugin enabled on builds.apache.org. It still needs some validation, and Infra needs to set up hooks on the GitHub repos, but once that's done, you can have a job that watches for pull requests, builds them, and comments on the pull request with the build results.
As to that script - how is it being run? By hand, by Spark committers, I assume?
On Mon, Jun 30, 2014 at 1:30 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:
On Mon, Jun 30, 2014 at 10:08 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:
This makes sense now. Well, whatever automation we create to make the patch validation and acceptance experience smoother is great.
That said, personally, I'd like to maintain two constraints:
1. all non-trivial changes MUST have a JIRA ID associated with them;
2. any fixes for JIRAs that go into our code base MUST have a patch attached to the JIRA and be explicitly reviewed and +1ed.