(This follows on from the "Merging your own PRs" discussion - I've started
a new thread to keep this slightly separate and raise its visibility.)
TL;DR: I've been thinking on the points raised in the above thread for
a while now, and I want to suggest a plan of action. We've been here
before over the years, and all we do is talk hot air and then do
nothing. This time, I'd like to propose some concrete actions, and a
point a few months from now at which we review whether they've worked.
I'd like as much input as possible - even a simple +1 will help
show who's in favour. Input on the metrics, and how we could design
some even better ones, would also be helpful.
Either not enough reviewers, or blindness to certain repositories
among existing reviewers, is leading to pull requests going unreviewed
for weeks or months at a time. For existing core developers, this is
demoralising. For new contributors, it's off-putting - we may lose
them entirely (which in turn feeds the cycle of not having any new
reviewers either).
- Lack of reviews on significant pieces of work is disheartening
- Merging our own PRs rewards unmergeable code with a merge, which lowers code quality
- The reviewer pool needs to reprioritise and/or grow to match the volume of PRs
- Nagging alone doesn't often work
- Nagging with deadlines is an empty threat if we don't merge our own work
- GitHub labels are invisible outside of GitHub itself
The goal is to ensure that all (or nearly all, say 95%?) pull requests are looked at in
a timely manner. It's not easy to put an exact time on this - a 5 line
PR is clearly easier to review than some of the multi-thousand line
monsters we've had. However, it feels like 2 weeks is a fair maximum
for a PR to go without any comments at all, if we're not completely
out of reviewers.
Steps to take
- Build some review metrics
- Age of a PR:
This could be defined as the time since the last comment by the
author, on the principle that the typical workflow is: open PR >
reviewer comments > author fixes/replies > repeat. If the last comment
(by date) on the PR is not from the author, we can assume the PR is
waiting on the author to reply/fix. If the last comment is from the
author, we can assume the PR is waiting on a re-review.
- Reviews per person:
Defining a "reviewer" is hard, since not every comment is a review. I
can think of two options:
1. Assume every non-author comment is a review. This will produce
some false positives, but looking at some sample PRs, the rate doesn't
seem too high. The main downside is that it can be gamed - if people
want artificially high review metrics, they just need to comment on
everything. However, I suspect we'd notice people like that and
blacklist them from the stats.
2. Use the "assigned to" field of a GitHub PR to declaratively state
the reviewer. The downside here is that I think a committer has to set
this, so we'd be putting even more steps onto the people with merge
access.
My gut feeling is to go with (1) - a few false positives seem better
than asking committers to do more work.
Both of these should be obtainable from the GitHub API with some work
(there's a rough sketch of how below). Armed with these two metrics,
we can produce some interesting stats.
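As a rough, untested sketch (assuming the octokit gem and a
GITHUB_TOKEN with read access - the repo whitelist and output format
are just placeholders, not a final design), something like this should
get us both numbers. Note it only sees issue-level comments; inline
diff comments would need a separate API call.

    require 'octokit'

    # Placeholder whitelist of repos - see the note below about
    # single-author plugins clogging up the stats.
    REPOS = %w[theforeman/foreman theforeman/smart-proxy]

    client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
    client.auto_paginate = true

    review_counts = Hash.new(0)

    REPOS.each do |repo|
      client.pull_requests(repo, state: 'open').each do |pr|
        comments = client.issue_comments(repo, pr.number)

        # Metric 1: review-age - time since the author last commented,
        # or since the PR was opened if there are no comments at all.
        last_by_author = comments.reverse.find { |c| c.user.login == pr.user.login }
        waiting_since  = last_by_author ? last_by_author.created_at : pr.created_at
        age_days       = ((Time.now - waiting_since) / 86_400).round

        # A PR is "waiting on review" if the author spoke last (or nobody has).
        waiting = comments.empty? || comments.last.user.login == pr.user.login
        puts "#{repo}##{pr.number}: #{age_days} days, waiting on review: #{waiting}"

        # Metric 2 (option 1 above): every non-author comment counts as a review.
        comments.reject { |c| c.user.login == pr.user.login }.each do |c|
          review_counts[c.user.login] += 1
        end
      end
    end

    # Top 10 reviewers by comment count.
    puts review_counts.sort_by { |_login, count| -count }.first(10).to_h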
- Build three public web pages (redmine plugin? little sinatra app? -
see the rough sketch after this list):
- Stats on the number of open PRs, the number of PRs reviewed, and the
number of "old" PRs going over 2 weeks without review.
This enables us to know if our strategy is working over time.
- A list of open PRs sorted by review-age, across all our repos
This has the advantage of being in one place - as Lukas says, GitHub labels are
invisible by email, and only useful if you're already looking at a repo. It
also avoids repo-blindness, since it's a single list. However, we do have 102
repos under the theforeman namespace, so we may have to either whitelist or
blacklist some repos - I can foresee open PRs on single-author plugins under our
namespace clogging up the list otherwise.
It might even be worth embedding the top 5 open PRs in a sidebar on
the theforeman.org front page? More visibility should mean more reviews.
- A "leaderboard" of reviewers for the past week/month
There's a small amount of "gamification" here (ugh, I hate that word…) - the
idea being that seeing just how many reviews other people are doing might
encourage people to do some of their own. There's still the challenge of
getting people to see the stats, though. I think a monthly mail of some of these
stats (and others) would be a nice shout-out/thanks to the people doing the reviews.
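If we did go the little-Sinatra-app route (only one of the options
above), the pages themselves could be very thin. Here's a rough,
untested sketch, assuming a cron job runs the metrics script above and
dumps its output into prs.json and reviewers.json (both filenames made
up for illustration), and returning JSON rather than proper HTML for
brevity:

    require 'sinatra'
    require 'json'

    helpers do
      def load_json(file)
        JSON.parse(File.read(file))
      end
    end

    # Headline stats: open PRs and "old" PRs (no review in over 2 weeks).
    get '/' do
      content_type :json
      prs = load_json('prs.json')
      { open: prs.size, old: prs.count { |pr| pr['age_days'] > 14 } }.to_json
    end

    # Open PRs sorted by review-age, oldest first, across all repos.
    get '/prs' do
      content_type :json
      load_json('prs.json').sort_by { |pr| -pr['age_days'] }.to_json
    end

    # Reviewer "leaderboard" - comment counts per login, highest first.
    get '/leaderboard' do
      content_type :json
      load_json('reviewers.json').sort_by { |_login, count| -count }.to_json
    end

A real version would obviously render HTML, and the front-page sidebar
idea above could just consume the /prs endpoint.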
- Regularly remind existing devs to do reviews
I'm more than happy to send out reminders to the dev list - the question is:
what frequency? I don't want to become the community nagger.
Also, people with commit access are (with some exceptions) generally expected
to do some reviews. In addition to all the above, is there more we can do at
the committer level to encourage review?
- Encourage new devs to review
Reviews are a useful way to learn the code. While a commit-enabled reviewer
will still have to check the code before merging, a newer dev can easily work
through the review guidelines and take ~70% of the load off a committer.
This requires some promotion on my part, as well as better development docs -
happily, Daniel is already working on a developer guidebook. We'll need to make
some small changes to the Community tab of the site to promote the new handbook.
So, those are some steps to get us rolling with trying to improve the
situation. I'm happy to take on those actions, since metrics, stats, and
general cheerleading are my role now. However, if anyone wants to help, I'll be
very grateful - everyone is aware of my abilities around UIs, right?
I'm also proposing that we review the situation again in 3-4 months' time (say,
around about FOSDEM?) to see if things are better. I'd rather not let us get
back to a point where people are seriously upset before we notice, this time.
If things are not better, we'll need some more input and suggestions - because
I'm out of ideas now.
Dmitri, Dominic - you were the main commenters on the other thread. Will this
satisfy you both for now? Everyone else, thanks for reading this far!