[Request for Comments] Increasing code reviews, action plan

(This follows on from the "Merging your own PRs discussion"
[https://groups.google.com/forum/#!topic/foreman-dev/GbJNyZ-Go0Q] but
I've started a new thread to keep this slightly separate and raise
awareness)

TL;DR: I've been thinking on the points raised in the above thread for
a while now, and I want to suggest a plan of action. We've been here
before over the years, and all we do is talk hot-air and then do
nothing. This time, I'd like to propose some concrete action, and a
followup period.

I'd like as much input as possible - even a simple +1 will help
show who's in favour. Input on the metrics, and how we could design
some even better ones, would also be helpful.

# The problem

Either not enough reviewers, or blindness to certain repositories
among existing reviewers, is leading to Pull Requests being allowed to
go unreviewed for weeks or months at a time. For existing core
developers, this is demoralising. For new contributors, it's
off-putting - we may lose them entirely (which leads to a cycle of
not having any new reviewers either).

# Concerns heard

  • Lack of reviews on significant pieces of work is disheartening
  • Merging own PRs rewards unmergeable code with a merge - lowers code quality
  • Reviewer pool needs to reprioritise and/or grow to match volume
  • Nagging alone doesn't often work
  • Nagging with deadlines is an empty threat if we don't merge our own work
  • GitHub labels are invisible outside of GitHub itself

# The goal

Ensuring all (or nearly all, say 95%?) pull requests are looked at in
a timely manner. It's not easy to put an exact time on this - a 5 line
PR is clearly easier to review than some of the multi-thousand line
monsters we've had. However, it feels like 2 weeks is a fair maximum
for a PR to go without any comments at all, if we're not completely
out of reviewers.

# Steps to take

1) Build some review metrics

  • Age of a PR:

This could be defined as the time since the last comment by the
author, on the principle that the typical workflow is: Open PR >
Reviewer Comments > Author fixes/replies > repeat. If the last comment
(by date) on the PR is not by the author, we can assume it is waiting
for the author to reply/fix. If the last comment is by the author, we
can assume the PR is again waiting on a re-review.

  • Reviews per person:

Defining a "reviewer" is hard, since not every comment is a review. I
can think of two options:

  1. Assume every non-author comment is a review - this will produce
    some false positives, but looking at some sample PRs, it doesn't seem
    too high. Main downside is that it can be gamed - if people want
    artificially high review metrics, they just need to comment on
    everything. However, I suspect we'd notice people like that and
    blacklist them from the stats.
  2. Use the "assigned to" section of a GitHub PR to declaratively state
    the reviewer. Downside here is that I think a committer has to set
    this, thus we're putting even more steps on to people with merge
    rights.

My gut feeling is to go with (1) - a few false positives seems better
than asking committers to do more work.

Both of these should be obtainable from the GitHub API, with some
work. Armed with these two metrics, we can produce some interesting
things.
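
To make this concrete, here's a rough sketch of both metrics using the
octokit gem (just one way to do it - the repo name, the waiting-state
rules, and counting each reviewer once per PR are illustrative choices,
not decided policy):

```ruby
require 'octokit'

client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
client.auto_paginate = true
repo = 'theforeman/foreman'

review_counts = Hash.new(0)

client.pull_requests(repo, state: 'open').each do |pr|
  # Discussion comments only; inline diff comments would need
  # client.pull_request_comments(repo, pr.number) as well.
  comments = client.issue_comments(repo, pr.number)
  last = comments.last

  # Metric 1: review-age, measured from the last authorial activity.
  if last.nil? || last.user.login == pr.user.login
    since = last ? last.created_at : pr.created_at
    age = (Time.now - since) / 86_400
    puts format('#%-5d waiting on review for %.1f days', pr.number, age)
  end

  # Metric 2: reviews per person - every non-author commenter counts,
  # but only once per PR, so volume of nitpicks isn't rewarded.
  comments.map { |c| c.user.login }
          .reject { |login| login == pr.user.login }
          .uniq
          .each { |login| review_counts[login] += 1 }
end

review_counts.sort_by { |_, n| -n }.each { |login, n| puts "#{login}: #{n}" }
```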

3) Build three public web pages (redmine plugin? little sinatra app?):

  • Stats on the number of open PRs, the number of PRs reviewed, and the
    number of "old" PRs going over 2 weeks without review.

This enables us to know if our strategy is working over time.

  • A list of open PRs sorted by review-age, across all our repos

This has the advantage of being in one place - as Lukas says, GitHub labels are
invisible by email, and only useful if you're already looking at a repo. It
also avoids repo-blindness, since it's a single list. However, we do have 102
repos under theforeman namespace, so we may have to either whitelist or
blacklist some repos - I can foresee open PRs on single-author plugins under our
namespace clogging up the list otherwise.

It might even be worth embedding the top 5 open PRs in a sidebar on
the theforeman.org frontpage? More visibility should mean more reviews.
(A rough sketch of such a dashboard follows this list.)

  • A "leaderboard" of reviewers for the past week/month

This is a small amount of "gamification" here (ugh, I hate that word…) - the
idea being that seeing just how many reviews other people are doing might
encourage people to do some of their own. There's still the challenge of
getting people to see the stats though. I think a monthly mail of some of these
stats (and others) would be a nice shout-out/thanks to the people doing
reviews.
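
As promised above, here's a minimal sketch of what the dashboard itself
might look like as a little Sinatra app. It assumes some collector (e.g.
a cron job running the metric code above) dumps its results into a JSON
file; the file name and data shape are invented for illustration:

```ruby
require 'sinatra'
require 'json'

# Assumes a collector periodically writes [{"repo":..., "number":...,
# "age_days":...}, ...] to pr_ages.json - name and shape are invented.
def pr_ages
  JSON.parse(File.read('pr_ages.json'))
end

get '/oldest' do
  content_type :json
  # The top 5 PRs by review-age - the same list a frontpage sidebar
  # could fetch and embed.
  pr_ages.sort_by { |pr| -pr['age_days'] }.first(5).to_json
end

get '/stats' do
  content_type :json
  {
    open:  pr_ages.size,
    stale: pr_ages.count { |pr| pr['age_days'] > 14 } # over 2 weeks
  }.to_json
end
```

A reviewer leaderboard would just be a third route of the same shape,
over the reviews-per-person data.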

4) Regularly remind existing devs to do reviews

I'm more than happy to send out reminders to the dev list - the question is:
what frequency? I don't want to become the community nagger :wink:

Also, people with commit access are (with some exceptions) generally expected
to do some reviews. In addition to all the above, is there more we can do at
the committer level to encourage review?

5) Encourage new devs to review

Reviews are a useful way to learn the code. While a commit-enabled reviewer
will still have to check the code before merging, a newer dev can easily work
through the review guidelines and take ~70% of the load off a committer.

This requires some promotion on my part, as well as better development docs -
happily, Daniel is already working on a developer guidebook. We'll need to make
some small changes to the Community tab of the site to promote the new handbook.

# Summary

So, there's some steps there to get us rolling with trying to improve the
situation. I'm happy to take on those actions, since metrics, stats, and
general cheerleading is my role now. However, if anyone wants to help, I'll be
very grateful - everyone is aware of my abilities around UIs, right? :slight_smile:

I'm also proposing that we review the situation again in 3-4 months' time (say,
around about FOSDEM?), to see if things are better. I'd rather not let us get
back to a point where people are seriously upset before we notice, this time.
If things are not better, we'll need some more input and suggestions - because
I'm out of ideas now :wink:

Dmitri, Dominic - you were the main commenters on the other thread. Will this
satisfy you both for now? Everyone else, thanks for reading this far!

--
Greg
IRC: gwmngilfen

> (This follows on from the "Merging your own PRs discussion"
> [https://groups.google.com/forum/#!topic/foreman-dev/GbJNyZ-Go0Q] but
> I've started a new thread to keep this slightly separate and raise
> awareness)
>
> TL;DR: I've been thinking on the points raised in the above thread for
> a while now, and I want to suggest a plan of action. We've been here
> before over the years, and all we do is talk hot-air and then do
> nothing. This time, I'd like to propose some concrete action, and a
> followup period.
>
> I'd like as much input as possible - even a simple +1 will help
> show who's in favour. Input on the metrics, and how we could design
> some even better ones, would also be helpful.
>
> # The problem
>
> Either not enough reviewers, or blindness to certain repositories
> among existing reviewers, is leading to Pull Requests being allowed to
> go unreviewed for weeks or months at a time. For existing core
> developers, this is demoralising. For new contributors, it's
> off-putting - we may lose them entirely (which leads to a cycle of
> not having any new reviewers either)
>
> # Concerns heard
>
> * Lack of reviews on significant pieces of work is disheartening
> * Merging own PRs rewards unmergeable code with a merge - lowers code
> quality
> * Reviewer pool needs to reprioritise and/or grow to match volume
> * Nagging alone doesn't often work
> * Nagging with deadlines is an empty threat if we don't merge our own work
> * GitHub labels are invisible outside of GitHub itself
>
> # The goal
>
> Ensuring all (or nearly all, say 95%?) pull requests are looked at in
> a timely manner. It's not easy to put an exact time on this - a 5 line
> PR is clearly easier to review than some of the multi-thousand line
> monsters we've had. However, it feels like 2 weeks is a fair maximum
> for a PR to go without any comments at all, if we're not completely
> out of reviewers.
>
> # Steps to take
>
> 1) Build some review metrics
>
> * Age of a PR:
>
> This could be defined as the time since the last comment by the
> author, on the principle that the typical workflow is: Open PR >
> Reviewer Comments > Author fixes/replies > repeat. If the last comment
> (by date) on the PR is not the author, we can assume it is waiting for
> the author to reply/fix. If the last comment is the author, we can
> assume the PR is again waiting on a re-review.

Might be good to measure both ages and display them. +1

> * Reviews per person:
>
> Defining a "reviewer" is hard, since not every comment is a review. I
> can think of two options:
>
> 1) Assume every non-author comment is a review - this will produce
> some false positives, but looking at some sample PRs, it doesn't seem
> too high. Main downside is that it can be gamed - if people want
> artificially high review metrics, they just need to comment on
> everything. However, I suspect we'd notice people like that and
> blacklist them from the stats.
>

+1

> 2) Use the "assigned to" section of a GitHub PR to declaratively state
> the reviewer. Downside here is that I think a committer has to set
> this, thus we're putting even more steps on to people with merge
> rights.
>
> My gut feeling is to go with (1) - a few false positives seems better
> than asking committers to do more work.
>
> Both of these should be obtainable from the GitHub API, with some
> work. Armed with these two metrics, we can produce some interesting
> things.
>
> 3) Build three public web pages (redmine plugin? little sinatra app?):
>

Redmine could work but I also have a little dashboard I had started a while
back based on prprocessor work we had done that could be the starting place
for it (http://dashboard-ehelms.rhcloud.com/dashboard). More than happy to
take a crack at it.

>
> * Stats on the number of open PRs, the number of PRs reviewed, and the
> number of "old" PRs going over 2 weeks without review.

+1

>

> This enables us to know if our strategy is working over time.
>
> * A list of open PRs sorted by review-age, across all our repos
>
> This has the advantage of being in one place - as Lukas says, GitHub
> labels are invisible by email, and only useful if you're already
> looking at a repo. It also avoids repo-blindness, since it's a single
> list. However, we do have 102 repos under theforeman namespace, so we
> may have to either whitelist or blacklist some repos - I can foresee
> open PRs on single-author plugins under our namespace clogging up the
> list otherwise.
>

+1

>
> It might even be worth embedding the top 5 open PRs in a sidebar on
> theforeman.org frontpage? More visibility should mean more reviews
>
> * A "leaderboard" of reviewers for the past week/month
>
> This is a small amount of "gamification" here (ugh, I hate that word…)
> - the idea being that seeing just how many reviews other people are
> doing might encourage people to do some of their own. There's still
> the challenge of getting people to see the stats though. I think a
> monthly mail of some of these stats (and others) would be a nice
> shout-out/thanks to the people doing reviews.
>

Unless there is a "prize", this may have the inverse effect of folks
thinking reviews are covered.

>
> 4) Regularly remind existing devs to do reviews
>

Opening community demos with statistics would be great for this I think.

>
> I'm more than happy to send out reminders to the dev list - the question is:
> what frequency? I don't want to become the community nagger :wink:
>
> Also, people with commit access are (with some exceptions) generally
> expected to do some reviews. In addition to all the above, is there more
> we can do at the committer level to encourage review?
>
> 5) Encourage new devs to review
>
> Reviews are a useful way to learn the code. While a commit-enabled reviewer
> will still have to check the code before merging, a newer dev can easily work
> through the review guidelines and take ~70% of the load off a committer.
>

As Tomer mentioned, a sponsored learning process for reviews could help. We
have tried that recently with some folks in the Katello repository with
good success. The process was a committer asking the person to review a PR,
then reviewing both the review itself and the code, to point out what the
person had missed. For us, this ensured that we were instilling our
knowledge in new reviewers, as well as giving us confidence, after
iterating through multiple PRs, that the person was reviewing at an
appropriate level.



Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

> 3) Build three public web pages (redmine plugin? little sinatra app?):
>
> * Stats on the number of open PRs, the number of PRs reviewed, and the
> number of "old" PRs going over 2 weeks without review.
>
> This enables us to know if our strategy is working over time.
>
> * A list of open PRs sorted by review-age, across all our repos
>
> This has the advantage of being in one place - as Lukas says, GitHub labels are
> invisible by email, and only useful if you're already looking at a repo. It
> also avoid repo-blindness, since it's a single list. However, we do have 102
> repos under theforeman namespace, so we may have to either whitelist or
> blacklist some repos - I can forsee open PRs on single-author plugins under our
> namesapce clogging up the list otherwise.

https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Atheforeman+sort%3Acreated-asc
could be a start, but you'd want more complex sorting. I'd also want to
filter on repositories I have commit access to since that's plenty of
PRs to review for me. Since there are already teams, could we reuse
those?

> I'm also proposing that we review the situation again in 3-4 months time (say,
> around about FOSDEM?), to see if things are better. I'd rather not let us get
> back to a point where people are seriously upset before we notice, this time.
> If things are not better, we'll need some more input and suggestions - because
> I'm out of ideas now :wink:

How about a talk (short introduction into the problem, current status)
followed by a discussion at cfgmgmtcamp?

On Tue, Nov 17, 2015 at 12:21:30PM +0000, Greg Sutcliffe wrote:

> * Age of a PR:
>
> This could be defined as the time since the last comment by the
> author, on the principle that the typical workflow is: Open PR >
> Reviewer Comments > Author fixes/replies > repeat. If the last comment
> (by date) on the PR is not the author, we can assume it is waiting for
> the author to reply/fix. If the last comment is the author, we can
> assume the PR is again waiting on a re-review.
>
> * Reviews per person:
>
> Defining a "reviewer" is hard, since not every comment is a review. I
> can think of two options:
>
> 1) Assume every non-author comment is a review - this will produce
> some false positives, but looking at some sample PRs, it doesn't seem
> too high. Main downside is that it can be gamed - if people want
> artificially high review metrics, they just need to comment on
> everything. However, I suspect we'd notice people like that and
> blacklist them from the stats.
> 2) Use the "assigned to" section of a GitHub PR to declaratively state
> the reviewer. Downside here is that I think a committer has to set
> this, thus we're putting even more steps on to people with merge
> rights.
>
> My gut feeling is to go with (1) - a few false positives seems better
> than asking committers to do more work.
>
> Both of these should be obtainable from the GitHub API, with some
> work. Armed with these two metrics, we can produce some interesting
> things.

I like #1. Rename "comments" and "reviews" to "feedback" and have a
cutoff below a certain number of characters? My assumption here is that
short feedback is trivial to respond to, and it shouldn't affect the stats
much. I wouldn't worry about gaming the system, at least not at this
point - I think we can always find some way to counter abuse.
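
As a tiny illustration of that cutoff, filtering the comment list from
the metrics sketch further up the thread (the threshold is made up and
would need tuning against real PRs):

```ruby
# Dmitri's suggested cutoff: drop trivial remarks ("+1", "LGTM") from the
# feedback tally. 30 characters is an invented threshold to tune on real data.
MIN_FEEDBACK_CHARS = 30

def feedback_comments(comments)
  comments.reject { |c| c.body.strip.length < MIN_FEEDBACK_CHARS }
end
```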

>
> 3) Build three public web pages (redmine plugin? little sinatra app?):
>
> * Stats on the number of open PRs, the number of PRs reviewed, and the
> number of "old" PRs going over 2 weeks without review.

Maybe email a list of PRs that need attention (we’d need to define at
what point a PR enters “needs attention” state though) to foreman-dev?

>
> This enables us to know if our strategy is working over time.
>
> * A list of open PRs sorted by review-age, across all our repos
>
> This has the advantage of being in one place - as Lukas says, GitHub labels are
> invisible by email, and only useful if you're already looking at a repo. It
> also avoids repo-blindness, since it's a single list. However, we do have 102
> repos under theforeman namespace, so we may have to either whitelist or
> blacklist some repos - I can foresee open PRs on single-author plugins under our
> namespace clogging up the list otherwise.
>
> It might even be worth embedding the top 5 open PRs in a sidebar on
> theforeman.org frontpage? More visibility should mean more reviews

>
> * A "leaderboard" of reviewers for the past week/month
>
> This is a small amount of "gamification" here (ugh, I hate that word…) - the
> idea being that seeing just how many reviews other people are doing might
> encourage people to do some of their own. There's still the challenge of
> getting people to see the stats though. I think a monthly mail of some of these
> stats (and others) would be a nice shout-out/thanks to the people doing
> reviews.
>
> 4) Regularly remind existing devs to do reviews
>
> I'm more than happy to send out reminders to the dev list - the question is:
> what frequency? I don't want to become the community nagger :wink:
>
> Also, people with commit access are (with some exceptions) generally expected
> to do some reviews. In addition to all the above, is there more we can do at
> the committer level to encourage review?
>
> 5) Encourage new devs to review
>
> Reviews are a useful way to learn the code. While a commit-enabled reviewer
> will still have to check the code before merging, a newer dev can easily work
> through the review guidelines and take ~70% of the load off a committer.
>
> This requires some promotion on my part, as well as better development docs -
> happily, Daniel is already working on a developer guidebook. We'll need to make
> some small changes to the Community tab of the site to promote the new handbook.
>

Personally I like the idea of a code walkthrough, where the author
highlights "interesting" parts of the code and how they fit together.
Probably not worth it for smaller PRs, but it can be helpful for more
complex contributions.

> # Summary
>
> So, there's some steps there to get us rolling with trying to improve the
> situation. I'm happy to take on those actions, since metrics, stats, and
> general cheerleading is my role now. However, if anyone wants to help, I'll be
> very grateful - everyone is aware of my abilities around UIs, right? :slight_smile:
>
> I'm also proposing that we review the situation again in 3-4 months time (say,
> around about FOSDEM?), to see if things are better. I'd rather not let us get
> back to a point where people are seriously upset before we notice, this time.
> If things are not better, we'll need some more input and suggestions - because
> I'm out of ideas now :wink:
>
> Dmitri, Dominic - you were the main commenters on the other thread. Will this
> satisfy you both for now? Everyone else, thanks for reading this far!

I think this is a pretty good start. Might be useful to establish a
venue to (periodically) evaluate our progress and suggest changes, as
3-4 months is a significant amount of time to go without any feedback.

-d

First off, +1 for trying to turn this discussion into operative actions. I
added a couple of comments inline below.

> (This follows on from the "Merging your own PRs discussion"
> [https://groups.google.com/forum/#!topic/foreman-dev/GbJNyZ-Go0Q] but
> I've started a new thread to keep this slightly separate and raise
> awareness)
>
> TL;DR: I've been thinking on the points raised in the above thread for
> a while now, and I want to suggest a plan of action. We've been here
> before over the years, and all we do is talk hot-air and then do
> nothing. This time, I'd like to propose some concrete action, and a
> followup period.
>
> I'd like as much input as possible - even a simple +1 will help
> show who's in favour. Input on the metrics, and how we could design
> some even better ones, would also be helpful.
>
> # The problem
>
> Either not enough reviewers, or blindness to certain repositories
> among existing reviewers, is leading to Pull Requests being allowed to
> go unreviewed for weeks or months at a time. For existing core
> developers, this is demoralising. For new contributors, it's
> off-putting - we may lose them entirely (which leads to a cycle of
> not having any new reviewers either)
>
> # Concerns heard
>
> * Lack of reviews on significant pieces of work is disheartening
> * Merging own PRs rewards unmergeable code with a merge - lowers code
> quality
> * Reviewer pool needs to reprioritise and/or grow to match volume
> * Nagging alone doesn't often work
> * Nagging with deadlines is an empty threat if we don't merge our own work
> * GitHub labels are invisible outside of GitHub itself
>
> # The goal
>
> Ensuring all (or nearly all, say 95%?) pull requests are looked at in
> a timely manner. It's not easy to put an exact time on this - a 5 line
> PR is clearly easier to review than some of the multi-thousand line
> monsters we've had. However, it feels like 2 weeks is a fair maximum
> for a PR to go without any comments at all, if we're not completely
> out of reviewers.
>
> # Steps to take
>
> 1) Build some review metrics
>
> * Age of a PR:
>
> This could be defined as the time since the last comment by the
> author, on the principle that the typical workflow is: Open PR >
> Reviewer Comments > Author fixes/replies > repeat. If the last comment
> (by date) on the PR is not the author, we can assume it is waiting for
> the author to reply/fix. If the last comment is the author, we can
> assume the PR is again waiting on a re-review.
>

There is another, more subtle age problem hidden here. Often a PR gets
reviewed within a week or so of the last change/response by the author.
However, many times a PR enters a review->repair->repeat cycle that can
take several months.

If every time the author addresses concerns raised by the reviewer it
leads to a week of waiting for the next review, only to get a set of new
concerns raised, this quickly gets to the point of forgetting what you
did in the first place and having to spend more and more time to dive
back into the code, leading to less and less motivation on the author's
part to fix the issues. I know I ran into this quite a few times, and it
is very difficult to pick up code you started working on over a month
ago that has already gone through several rounds of comments.

I'm not sure what would be a good way to improve this, but I think maybe
we should try having "live" code reviews, where the reviewer and the
author go over the code together. That way, the author can answer any
questions the reviewer has on the spot, saving at least the time wasted
in the cycle on "could you clarify why you did this in this way" type
comments, and ending the review with concrete to-dos for the author
before the next review can start.

It would also be helpful if the author has feedback from the reviewer
saying "now you have some issues to take care of, let's go over this
again in X days".

Perhaps we should also have a metric that measures the percentage of
time a PR has been waiting for the contributor vs. waiting for
[re]review? Part of the frustration, at least for me, also has to do
with me fixing the code within a day or two of the review and then
waiting another week+ for feedback.

> * Reviews per person:
>
> Defining a "reviewer" is hard, since not every comment is a review. I
> can think of two options:
>
> 1) Assume every non-author comment is a review - this will produce
> some false positives, but looking at some sample PRs, it doesn't seem
> too high. Main downside is that it can be gamed - if people want
> artificially high review metrics, they just need to comment on
> everything. However, I suspect we'd notice people like that and
> blacklist them from the stats.
>

I don't think "gaming the system" is a big concern here. I don't see
anyone wasting their time on making useless comments just to get a good
score. If that becomes an issue we could come up with a system to avoid
it, but I doubt we need to be concerned with this at this point.

> 2) Use the "assigned to" section of a GitHub PR to declaratively state
> the reviewer. Downside here is that I think a committer has to set
> this, thus we're putting even more steps on to people with merge
> rights.
>
> My gut feeling is to go with (1) - a few false positives seems better
> than asking committers to do more work.

> Both of these should be obtainable from the GitHub API, with some
> work. Armed with these two metrics, we can produce some interesting
> things.

I think having a reviewer assigned to a PR is actually a Good Idea™.
That person would be responsible for reviewing the PR from request to
merge/reject. Sometimes it is not clear who is reviewing a certain PR,
so you don't even know who to nag when it is not being reviewed. In
certain cases I have seen one person start reviewing a PR and make some
comments, then after those were addressed a different reviewer came in
and made completely opposite comments, disregarding the previous reviews
and leading to much frustration.

I think the cost of a couple of extra clicks on behalf of a committer is
not a high price to pay for this, and it could likely even be done by the
pr processor bot (have a command like [reviewer:tbrisker] in a comment
assign the PR automatically when needed).
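
That bot command could be quite small - here's a hypothetical sketch of
how prprocessor (or any webhook handler) might implement the
[reviewer:...] idea. The handler shape and method name are invented;
only the regex and the octokit call are the point:

```ruby
require 'octokit'

# Hypothetical handler for a [reviewer:login] comment command, fed by a
# GitHub issue_comment webhook payload.
REVIEWER_COMMAND = /\[reviewer:([\w-]+)\]/

def handle_comment(client, payload)
  match = REVIEWER_COMMAND.match(payload['comment']['body'])
  return unless match

  repo   = payload['repository']['full_name']
  number = payload['issue']['number']
  # GitHub rejects assignees without access to the repository, so this is
  # naturally limited to people the project already knows.
  client.update_issue(repo, number, assignee: match[1])
end
```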

> 3) Build three public web pages (redmine plugin? little sinatra app?):
>
> * Stats on the number of open PRs, the number of PRs reviewed, and the
> number of "old" PRs going over 2 weeks without review.
>
> This enables us to know if our strategy is working over time.
>
> * A list of open PRs sorted by review-age, across all our repos
>
> This has the advantage of being in one place - as Lukas says, GitHub
> labels are invisible by email, and only useful if you're already
> looking at a repo. It also avoids repo-blindness, since it's a single
> list. However, we do have 102 repos under theforeman namespace, so we
> may have to either whitelist or blacklist some repos - I can foresee
> open PRs on single-author plugins under our namespace clogging up the
> list otherwise.
>
> It might even be worth embedding the top 5 open PRs in a sidebar on
> theforeman.org frontpage? More visibility should mean more reviews
>
> * A "leaderboard" of reviewers for the past week/month
>
> This is a small amount of "gamification" here (ugh, I hate that word…)
> - the idea being that seeing just how many reviews other people are
> doing might encourage people to do some of their own. There's still
> the challenge of getting people to see the stats though. I think a
> monthly mail of some of these stats (and others) would be a nice
> shout-out/thanks to the people doing reviews.
>
>
+1. Reviews are currently an unmeasured contribution, vs. commits, which
have very nice public stats on GitHub. Giving reviewers something to
"boast" about increases motivation for reviews.

> 4) Regularly remind existing devs to do reviews
>
> I'm more than happy to send out reminders to the dev list - the question is:
> what frequency? I don't want to become the community nagger :wink:
>
> Also, people with commit access are (with some exceptions) generally expected
> to do some reviews. In addition to all the above, is there more we can do at
> the committer level to encourage review?
>
> 5) Encourage new devs to review
>
> Reviews are a useful way to learn the code. While a commit-enabled reviewer
> will still have to check the code before merging, a newer dev can easily work
> through the review guidelines and take ~70% of the load off a committer.
>
> This requires some promotion on my part, as well as better development docs -
> happily, Daniel is already working on a developer guidebook. We'll need to make
> some small changes to the Community tab of the site to promote the new handbook.

+1. Perhaps even have a review-mentoring process where a new reviewer is
assigned to a veteran one who checks on their reviews (or even joins them
for pair-reviewing) and helps them improve.

>
> # Summary
>
> So, there's some steps there to get us rolling with trying to improve the
> situation. I'm happy to take on those actions, since metrics, stats, and
> general cheerleading is my role now. However, if anyone wants to help,
> I'll be
> very grateful - everyone is aware of my abilities around UIs, right? :slight_smile:
>
> I'm also proposing that we review the situation again in 3-4 months time
> (say, around about FOSDEM?), to see if things are better. I'd rather not
> let us get back to a point where people are seriously upset before we
> notice, this time. If things are not better, we'll need some more input
> and suggestions - because I'm out of ideas now :wink:
>
> Dmitri, Dominic - you were the main commenters on the other thread. Will
> this satisfy you both for now? Everyone else, thanks for reading this far!
> --
> Greg
> IRC: gwmngilfen

One more suggestion I have is that reviewers should not be afraid to say
"This will not be merged". Of course, this has to come with justification
as to why the PR should be rejected and a way of appealing the decision.
Our PR backlog is filled with old PRs that were abandoned for various
reasons, and I believe this is one of the reasons. I think in some cases,
instead of outright rejecting a PR, reviewers would go on finding more
and more issues in a PR to fix until it is nothing like the original,
leading to multiple iterations of reviews and rewrites and demotivating
everyone involved. A better course of action would be outright saying:
"I think this PR should not be merged because X, Y. I would like it if
you could create a new PR that only does Z, and I believe that W could
be added as a plugin but should not be part of the [core|proxy|whatever].
As to S, I think it is against best practices to do so, etc. If you
think I am mistaken, you are welcome to proceed about it in such and
such way."

On Tue, Nov 17, 2015 at 2:21 PM, Greg Sutcliffe wrote:


Have a nice day,
Tomer Brisker
Red Hat Engineering

> > 2) Use the "assigned to" section of a GitHub PR to declaratively
> state
> > the reviewer. Downside here is that I think a committer has to set
> > this, thus we're putting even more steps on to people with merge
> > rights.
> >
> > My gut feeling is to go with (1) - a few false positives seems better
> > than asking committers to do more work.
> >
> >
> > Both of these should be obtainable from the GitHub API, with some
> > work. Armed with these two metrics, we can produce some interesting
> > things.
> >
> >
> > I think having a reviewer assigned to a PR is actually a Good Idea™.
> > That person would be responsible for reviewing the PR from request to
> > merge/reject.
> > Sometimes it is not clear who is reviewing a certain PR, so you don't
> > even know who to nag when it is not being reviewed.
> > In certain cases I have seen one person who started reviewing a PR, made
> > some comments, then after those were addressed a different reviewer came
> > in and made completely opposite comments, disregarding the previous
> > reviews and leading to much frustration.
>
> Yeah, I try not to do this for this very reason. Feel free to point it
> out at the time if I do. If another reviewer comments on a PR I started
> on, I'll tend not to participate any further and leave it to them to
> reduce confusion, but I would prefer for this not to happen.
>

Let me ask this, as I have heard vague answers but never anything direct.
For the committers - across the core repository, puppet, smart proxy,
etc. - what is your preferred method for new people to get into reviewing?
I think that would help people like myself a lot who aren't sure where to
start. I ask because it's unclear if you all prefer non-committers to:

  • review only un-reviewed PRs
  • to review previously reviewed commits and +1 the PR
  • to test previously reviewed commits
  • to ask for PRs that we can review ahead of time
  • Other

Eric

On Tue, Nov 17, 2015 at 10:11 AM, Dominic Cleal wrote:

> I'm quite happy to self-assign PRs I'm reviewing if other reviewers
> abide by the same.


Dominic Cleal
dominic@cleal.org




Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

> > 3) Build three public web pages (redmine plugin? little sinatra app?):
> >
> > * Stats on the number of open PRs, the number of PRs reviewed, and the
> > number of "old" PRs going over 2 weeks without review.
> >
> > This enables us to know if our strategy is working over time.
> >
> > * A list of open PRs sorted by review-age, across all our repos
> >
> > This has the advantage of being in one place - as Lukas says, GitHub
> > labels are invisible by email, and only useful if you're already
> > looking at a repo. It also avoids repo-blindness, since it's a single
> > list. However, we do have 102 repos under theforeman namespace, so we
> > may have to either whitelist or blacklist some repos - I can foresee
> > open PRs on single-author plugins under our namespace clogging up the
> > list otherwise.
>
>
> https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Atheforeman+sort%3Acreated-asc
> could be a start, but you'd want more complex sorting. I'd also want to
> filter on repositories I have commit access to since that's plenty of
> PRs to review for me. Since there are already teams, could we reuse
> those?
>

These types of links get linked often when these discussions arise, and
experience has taught me that "power" users do use them - and those are
typically current reviewers. But those trying to become reviewers and
committers don't, in part due to knowledge gaps, and in part, in my
opinion, due to not being in the routine and forgetting about those
types of links. One suggestion would be to curate and label those links
in a singular place (e.g. the dashboard) so you only have to remember to
go to X for information and from there pick the appropriate link.

Eric



Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

Yeah, I try not to do this for this very reason. Feel free to point it
out at the time if I do. If another reviewer comments on a PR I started
on, I'll tend not to participate any further and leave it to them to
reduce confusion, but I would prefer for this not to happen.

I'm quite happy to self-assign PRs I'm reviewing if other reviewers
abide by the same.

On 17/11/15 14:46, Tomer Brisker wrote:


Dominic Cleal
dominic@cleal.org

Thanks for raising such an important topic:

> > * Age of a PR:
> >
> > This could be defined as the time since the last comment by the
> > author, on the principle that the typical workflow is: Open PR >
> > Reviewer Comments > Author fixes/replies > repeat. If the last comment
> > (by date) on the PR is not the author, we can assume it is waiting for
> > the author to reply/fix. If the last comment is the author, we can
> > assume the PR is again waiting on a re-review.

We have http://gitofdespair.herokuapp.com/theforeman. I think setting
labels automatically through prprocessor ("1 week old", green; "2 weeks
old", yellow; etc.) would be more practical. I rely on labels a lot and
that kind of warning would be nice.
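
A rough sketch of what those automatic age labels could look like,
assuming octokit (the bucket boundaries and label names are illustrative,
and the labels would need creating in each repo first):

```ruby
require 'octokit'

# Stamp each open PR with an age bucket so it is visible in list views.
AGE_LABELS = { 7 => '1 week old', 14 => '2 weeks old', 28 => '1 month old' }

client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
repo = 'theforeman/foreman'

client.pull_requests(repo, state: 'open').each do |pr|
  days  = (Time.now - pr.created_at) / 86_400
  label = AGE_LABELS.select { |min_days, _| days >= min_days }.values.last
  next unless label
  # add_labels_to_an_issue is additive; a real version would also remove
  # the previous bucket (client.remove_label) as the PR ages.
  client.add_labels_to_an_issue(repo, pr.number, [label])
end
```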

> many times a PR enters a review->repair->repeat cycle that can take
> several months.
> If every time the author addresses concerns raised by the reviewer it leads
> to a week of waiting for the next review, only to get a set new concerns
> raised, this quickly gets to the point of forgetting what you did in the
> first place and having to spend more and more time to dive back into the
> code, leading to less and less motivation on the author's part to fix the
> issues. I know I ran into this quite a few times and it is very difficult
> to pick up code you started working on over a month ago and that has
> already gone through several rounds of comments.

Not ideal, I agree, hopefully with more reviewers the cycle is shorter.
I'd worry if we don't have such a review cycle anymore.

> I'm not sure what would be a good way to improve this, but I think maybe we
> should try having "live" code reviews, where the reviewer and the author go
> over the code together. That way, the author can answer any questions the
> reviewer has on the spot, saving at least the time wasted in the cycle on
> "could you clarify why you did this in this way" type comments, ending the
> review with concrete "to-do"s for the author before the next review can
> start.

We can't do this with community PRs. In any case, 'why' comments are
often an indicator of code not being very self-explanatory or missing
comments in the code, I believe.

We sometimes have PRs with hundreds of lines of code without any
explanation of what they do, and that takes me (I don't know about other
reviewers) a long time to understand before I can make any comments.

It would help if PR authors went through the list in
http://projects.theforeman.org/projects/foreman/wiki/Reviewing_patches
(or in the Foreman handbook, theforeman/theforeman.org PR #479) after
submitting a PR. Also, writing detailed commit messages, or going through
your own PR and commenting inline, is something I started to do recently
and I believe it helps.

> It would also be helpful if the author has feedback from the reviewer
> saying "now you have some issues to take care of, let's go over this
> again in X days"

Speaking for myself, I don't know exactly how long it will take until I
can come back to review a particular PR. I try to balance already-open
reviews (that I get emails from GitHub about) with newly opened PRs.

> we should also have a metric that​ measures the percentage of time a PR
> has been waiting for contributor vs. waiting for [re]review? Part of the
> frustration, at least for me, also has to do with me fixing the code within
> a day or two of the review and then waiting another week+ for feedback.

What would we do with that metric?

> > 1) Assume every non-author comment is a review - this will produce
> > some false positives, but looking at some sample PRs, it doesn't seem
> > too high. Main downside is that it can be gamed - if people want
> > artificially high review metrics, they just need to comment on
> > everything. However, I suspect we'd notice people like that and
> > blacklist them from the stats.

I think legitimate reviews can be discerned easily. I trust commenters
with regard to testing. However, when it comes to reviewing code I
normally re-review unless I see some good comments by another
"established" reviewer. In that case I might just take a glance.

The reason is that it's hard to believe 'it's OK' when you haven't seen
any critical comments from a new reviewer.

I think it would be beneficial for current code reviewers to try to
distill what they find most useful into the Foreman handbook.

I lean towards 1 in this case.

> > 3) Build three public web pages (redmine plugin? little sinatra app?):
> >
> > * Stats on the number of open PRs, the number of PRs reviewed, and the
> > number of "old" PRs going over 2 weeks without review.
> >
> > This enables us to know if our strategy is working over time.

Ohad maintains some graphite instance with this data.

> > It might even be worth embedding the top 5 open PRs in a sidebar on
> > theforeman.org frontpage? More visibility should mean more reviews

If you can do this in a way that's not very invasive or ugly (up for a
challenge? (: ) I think it'd be useful, yes. Maybe you can just put it
somewhere under "contributing".

> > * A "leaderboard" of reviewers for the past week/month
> >
> > This is a small amount of "gamification" here (ugh, I hate that word…)
> > - the idea being that seeing just how many reviews other people are
> > doing might encourage people to do some of their own. There's still
> > the challenge of getting people to see the stats though. I think a
> > monthly mail of some of these stats (and others) would be a nice
> > shout-out/thanks to the people doing reviews.
> >
> >
> +1. Reviews are currently an unmeasured contribution, vs. commits, which
> have very nice public stats on GitHub. Giving reviewers something to
> "boast" about increases motivation for reviews.

I tried nagging GitHub employees about this but they didn't really listen -_-.

> > 4) Regularly remind existing devs to do reviews
> >
> > I'm more than happy to send out reminders to the dev list - the question
> > is: what frequency? I don't want to become the community nagger :wink:
> >
> > Also, people with commit access are (with some exceptions) generally
> > expected to do some reviews. In addition to all the above, is there more
> > we can do at the committer level to encourage review?

-1, sounds like an empty threat like you said before. I'd prefer to
leave that responsibility to PR authors.

> > 5) Encourage new devs to review
> >
> > Reviews are a useful way to learn the code. While a commit-enabled reviewer
> > will still have to check the code before merging, a newer dev can easily work
> > through the review guidelines and take ~70% of the load off a committer.
> >
> > This requires some promotion on my part, as well as better development docs -
> > happily, Daniel is already working on a developer guidebook. We'll need to
> > make some small changes to the Community tab of the site to promote the new
> > handbook.
>
> +1. Perhaps even have a review-mentoring process where a new reviewer is
> assigned to a veteran one who checks on their reviews (or even joins them
> for pair-reviewing) and helps them improve.

In the end, that's the only thing that would make a difference to the
number of involved reviewers. Rubocop, Hound, and other static analyzers
are of great help too.

One of the companies I respect the most when it comes to reviewing
standards, Thoughtbot, has laid down a simple guide to how they do it.
They also have courses, which I've been following to learn & remember
certain refactoring principles, at http://upcase.com/.

In any case, I'm at Rails Israel next week, where they have 2 speakers,
and I'll try to raise this topic with others to see how some companies
that invest in open source do it.

--
Daniel Lobato Garcia

@dLobatog

GPG: http://keys.gnupg.net/pks/lookup?op=get&search=0x7A92D6DD38D6DE30
Keybase: elobato

Just a quick note to say thanks for the replies and to keep them
coming - some really good stuff in here. I'll try and summarise on
Friday :slight_smile:

Greg

Thanks everyone for the feedback so far. Here's my replies to various
collated comments:

> ewoud: How about a talk (short introduction into the problem, current status)

I have already submitted a "Community Panel" talk, and Ohad has a
"State of Foreman" talk, which might end up with me participating -
I'll be sure to include this in one of them - thanks!

> ewoud: I'd also want to filter on repositories I have commit access to … could we reuse teams?
> ehelms: One suggestion would be to curate and label those links in a singular place

Eric, I agree entirely about your power-user comment, and this is
exactly the reason I'm suggesting some kind of simple dashboard.
Pulling the GitHub labels in with it should be possible from the API.

Ewoud: I like the idea, but sounds like it might be tricky. We could
certainly add repo filtering though, so if the reviews at the top of
the list aren't suitable, you could filter those out with a click. We
can revisit teams for version 2 :wink:

> tbrisker: many times a PR enters a review->repair->repeat cycle that can take several months.

I don't really know what to say here - I don't think we can capture
that metric easily (e.g. a PR that waited a month and then was
reviewed in a few days would also show a high percentage, even without
much cycling). However, if we improve the general state of reviews
being picked up quicker, my gut feeling is that this should improve as
well. Something to keep an eye on, certainly.

> tbrisker: I think maybe we should try having "live" code reviews

I like it, but we can't mandate it - timezone issues alone will get in
the way. We could add a section to the Developer Handbook about
requesting/offering "pair-review" time though?

> multiple people: re: gaming the comment system

Seems we all agree it's not a big issue, which makes me feel much better :wink:

> tbrisker: assigned-to could be done by pr-processor

I like that. We'd have to restrict it to team-members, obviously, but
otherwise +1. For brand-new reviewers, a committer could do it very
quickly on request.

> tbrisker: One more suggestion I have is that reviewers should not be afraid to say "This will not be merged".

I agree, it's actually more respectful in some ways, as we're not
wasting people's time. Being clear about our expectations is a good
thing. I suggest we get this added to the Developer Handbook once
it's launched.

> dcleal: I'm quite happy to self-assign PRs I'm reviewing if other reviewers abide by the same.

Seems we're leaning that way. If we can upgrade pr-processor to handle
the ones who can't assign themselves, I think it will work well.

> ehelms: I ask that because it's unclear if you all prefer non-committers to …

[1,2,3,4] I think, if we drop the word 'only' - in other words, jump
in. Testing a PR is always a welcome thing. Reviewing something that
has not yet been reviewed takes load off the committers. ACKing a
previously reviewed commit is useful, but probably the least of the
contributions. Asking what to work on never hurts, especially once
we've got the PR dashboard.

In all, sounds like yet more material for the Handbook :slight_smile:

> dlobatog: … gitofdespair.heroku.com

Sadly, GitOfDespair doesn't actually work with theforeman/foreman -
too many PRs or something. I think we want something of our own, built
on the GitHub API.

> dlobatog: setting labels for 1/2/3 week warnings

I have no objection to adding labels, but it doesn't solve the
fragmentation problem - Dmitri's original issue was that no-one was
reviewing smart-proxy code. You can add all the labels you like to the
repo, but if no one opens that repo up, it's useless. Having a central
overview helps, and of course, we can colour it however we like.

> dlobatog: It would help if PR authors go through the list in <wiki>

I agree, there's a certain amount of responsibility on authors too.
The Handbook will help clarify that.

> dlobatog: The reason is that it's hard to believe 'it's OK' when you haven't seen anything critical from a new reviewer

I agree, things will always need to be reviewed by someone with merge
rights, ultimately. However, newer reviewers can make that final
review task easier by working through the review guidelines in the
Handbook; and in the process, earn themselves commit rights as well.

> dlobatog: put the leaderboard somewhere under "contributing".

I think I agree - if people are looking to contribute, that's the place
to convince them that reviewing is contributing :wink:

> dlobatog: -1, sounds like an empty threat like you said before.

It's not about any kind of threat - there's no suggestion here of
adding minimum review requirements to commit access. I'm just asking
if we can make reviewing code more attractive to people who already
have commit access. Maybe the notes above about recognising and
thanking reviewers in various places are all we need.

> ehelms: Unless there is a "prize", this may have the inverse effect of folks thinking reviews are covered.

The only prize would be including the names on the contributor list
along with everyone else. The aim here is to recognise time spent on
decidedly unglamorous things.

Point taken though - I think so long as we always display the current
open reviews before the people who've done reviews, it should be clear
that there's more work to do :wink:

> ehelms: Opening community demos with statistics would be great for this I think.

Noted, good idea.

---

All in all, the updated action plan looks like this:

  1. Lots more material for the Developer Handbook.

Mostly this can probably wait until the first iteration goes in -
don’t want to delay the current PR any more than we have to. Once it’s
live we can add:

  • Expectations of authors
  • Self-assigning review status on a PR
  • Expectations for new reviewers (c.f. ehelms comment)
  • Reviewer-to-reviewer mentoring
  • Reviewer-to-author live review
  • Links to PR dashboard / stats
  2. Metrics

We all seem happy with my definition of PR age, but we can try to
gather some metrics on PR cycles (per tbrisker) as well and see if
it's useful.

Reviewer metrics seem to be leaning towards using the Assigned-to field,
but we can use the other method as a fallback where no reviewer is
assigned.

  3. Dashboard

Thanks to ehelms for offering to help - I’ll ping you offlist for
getting started next week :wink:

Seems we’re all happy with what to display:

  • PR stats - open / needs review / stale
  • Central sortable PR list for reviewers to use
  • Reviewer stats

Ewoud makes good points about the filtering of the list that will be
necessary to make it usable. Labels should probably be retrieved and
shown as well, possibly as a sortable field.

Display of the open PRs and reviewer stats seems to fit nicely with
some kind of display on the Contributor page. Will see what I can do
once the dashboard is up and running. Some stats during demos is also
an option.

  4. Feedback loop

Dmitri points out 3-4 months is too long, so we'll go for 2-3. That's
basically some ramp-up time and then a report at CfgMgmtCamp (thanks
Ewoud).


Next steps - I'm running out of time for today, but assuming no
game-changing feedback in the next few days… I'll write this up as a
blog post next week. That will commit me to making this happen in the
eyes of the wider community, as well as (usefully) raising the idea of
doing reviews as a way of contributing to the user community. Sneaky!
:stuck_out_tongue:

Thanks everyone!

Greg

> Reviewer metrics seem to be leaning towards using the Assigned-to field,
> but we can use the other method as a fallback where no reviewer is
> assigned.

Regarding this point - the sooner we start doing it, the better. If
existing committers don't mind starting to use it, I can start testing
the metrics based on it next week. Thanks!

> > dcleal: I'm quite happy to self-assign PRs I'm reviewing if other reviewers abide by the same.
>
> Seems we're leaning that way. If we can upgrade pr-processor to handle
> the ones who can't assign themselves, I think it will work well.

I very much like the idea of using GH assignment, but only if we have
some integration with our processor, because this is again not very
transparent for e-mail GH users.

I can imagine:

  • the first comment auto-assigns the reviewer (or alternatively a string
    match on "REVIEW" will do the same)

Nice to have:

  • when the author pushes to an assigned PR, I get an instant email
  • I get emailed once a week with a list of assigned PRs (but if we have
    a dashboard this is not necessary I guess)

> > dlobatog: setting labels for 1/2/3 week warnings
>
> I have no objection to adding labels, but it doesn't solve the
> fragmentation problem - Dmitri's original issue was that no-one was
> reviewing smart-proxy code. You can add all the labels you like to the
> repo, but if no one opens that repo up, it's useless. Having a central
> overview helps, and of course, we can colour it however we like.

I'd rather see an e-mail with a summary each week. That's not too much
noise given the amount of e-mail we need to process every single day.

Could also include some statistics, although I am not a big fan of any
kind of gamification at all in our case. Also, prizes do not look like
the best idea to me. We should automatically give away swag to our best
community contributors anyway :slight_smile:

> * Expectations of authors
> * Self-assigning review status on a PR
> * Expectations for new reviewers (c.f. ehelms comment)
> * Reviewer-to-reviewer mentoring
> * Reviewer-to-author live review
> * Links to PR dashboard / stats

  • Author live code walkthrough (on demo)
--
Later,
Lukas #lzap Zapletal

>> > dcleal: I'm quite happy to self-assign PRs I'm reviewing if other reviewers abide by the same.
>>
>> Seems we're leaning that way. If we can upgrade pr-processor to handle
>> the ones who can't assign themselves, I think it will work well.
>
> I very much like the idea of using GH assignment, but only in case we
> have some integration with our processor because this is again not too
> transparent for e-mail GH users.
>
> I can imagine:
>
> - the first comment auto assigns the reviewer (or alternatively a string
> match "REVIEW" will do the same)
>
> Nice to have:
>
> - when author pushes in assigned PR, I get an instant email
> - I keep emailed once a week with a list of assigned PRs (but if we have
> a dashboard this is not necessary I guess)

+1 - this is my plan as well. We use PrProcessor quite well already,
so I'm working with ehelms on what we can use it for in this regard :wink:
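
Roughly what I have in mind for the auto-assign piece - a sketch only,
since the handler shape is invented and whether we key off the first
comment or an explicit "REVIEW" keyword is still open:

```ruby
# Sketch: the first non-author comment (or an explicit "REVIEW" keyword)
# claims the PR for that commenter, via an issue_comment webhook.
def maybe_claim_review(client, payload)
  repo      = payload['repository']['full_name']
  number    = payload['issue']['number']
  commenter = payload['comment']['user']['login']

  return if commenter == payload['issue']['user']['login'] # author's own comment
  return if client.issue(repo, number).assignee             # already claimed

  first_comment = client.issue_comments(repo, number).size == 1
  if first_comment || payload['comment']['body'].include?('REVIEW')
    client.update_issue(repo, number, assignee: commenter)
  end
end
```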

> I'd rather see an e-mail with a summary each week. That's not too much
> noise given the amount of e-mail we need to process every single day.

I'm sure we can add an option to the dashboard we're building - I'm
not keen on making it the default, as personally I already get a ton
of email, and would worry about it getting lost.

> Could also include some statistics, although I am not a big fan of any
> kind of gamification at all in our case. Also, prizes do not look like
> the best idea to me. We should automatically give away swag to our best
> community contributors anyway :slight_smile:

Sorry, I've probably not been clear enough here. The intention is not
to give anything physical for this (you are 100% correct about swag
going to all, regardless of the type of contribution). The aim is to
correct the oversight that means reviewers currently get no recognition
at all for their efforts. It's just a mention in an appropriate place,
just like we do for translators, code contributors, docs, etc.

The "gamification" comment refers to the psychological effect of a
list of reviews / reviewers - people may want to get their name on
that list, and thus do more reviews. It's an experiment - if it
doesn't work (c.f. ehelms' point) then we'll take it away again :wink:

> * Author live code walkthrough (on demo)

+1 - please do add it to the handbook once it's up :slight_smile:

I'll work up a post to the blog on this now then :slight_smile:

On 23 November 2015 at 11:34, Lukas Zapletal wrote:

Update on this:

There's a blog post written and waiting for review which commits me
(us :P) to giving this a go - reviews welcome :stuck_out_tongue:

I'm currently building out the metrics as a JSON API so we can build a
dashboard on top of it. Looks good so far for the PR age - reviewer
contribution is not done yet. Eric is helping with the UI side
(thanks!)

If people could start using the Assign field on PRs as previously
discussed, that'd be great. I'll get a PR to pr-processor sometime
soon to help auto-set that, but for now having some sample data where
PRs have assignees would be great.

Thanks!
Greg