Standardizing Code Reviews: Checklist for reviewers

During an LSST meeting at the University of Washington, we discussed how the practice of code review can vary from one reviewer to another. For example, some reviewers treat documentation as essential, while others weight it less heavily than other aspects of the code. It would be useful to ensure that every code review meets certain minimal criteria.

Currently, our guidelines explain the need for code review and its basic concepts. It would be helpful to make this more specific, in the form of a checklist drawn from our common experience of things that every reviewer should check (and, by extension, every developer should self-check at the submission stage).

The purpose of making this a community post is to solicit ideas from the community on proposed items for such a checklist, i.e. which aspects of the code must be checked in a code review.

The review guidelines:

  • Is the code well commented, structured for clarity, and consistent with DM’s code style?
  • Is there adequate unit test coverage for the code?
  • Is the documentation augmented or updated to be consistent with the code changes?
  • Are the Git commits well organized and well annotated to help future developers understand the code development?

Code reviews should also address whether the code fulfills design and performance requirements.
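As a concrete illustration of the unit-test item, even a small utility function can carry a few focused tests. Here is a minimal sketch using Python's `unittest` (the function and tests are my own invented example, not DM code):

```python
import unittest


def clip(value, lo, hi):
    """Clamp value to the inclusive range [lo, hi]."""
    return max(lo, min(value, hi))


class ClipTestCase(unittest.TestCase):
    """Cover the in-range case and both out-of-range cases."""

    def test_within_range(self):
        self.assertEqual(clip(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clip(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clip(42, 0, 10), 10)
```

Running `python -m unittest` would pick these up; the reviewer's job is then to judge whether the cases cover the code's behaviour, not to re-derive them.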

I think this is a fine checklist, and I object to adding more overhead for the reviewer. I would suggest that variety of emphasis across reviewers is a good thing, so long as reviews are distributed appropriately.


We are hoping to remove most of this type of checking from the reviewer now that we are edging ever closer to PEP 8 style. Developers should run flake8 themselves, and work is ongoing to generate automated reports for pull requests (DM-7600).

This is why some companies require at least two reviewers to sign off on each ticket. That gives you two perspectives that might differ in review approach. The downside is that you use twice as many people.

I tend to agree with Paul. The level of code review is a function not just of the process but of the technical level of the two people involved. For example, when a senior developer reviews a junior developer's code, it becomes a "pick your battles" scenario: you don't want to demoralize the person with "here are 100 things wrong with your code".

In the particular case of "is the code documented", we are hoping to add static analysis checking once we have documentation templates in place.


I’ll jump in to say that there seemed to be lots of interest, at least during the UW group meeting, in having a more thorough review checklist. It doesn’t necessarily need to be an absolute requirement or add extra overhead for reviewers to use. If it is just a collected list of more detailed checks that reviewers can use as a reference tool, there’s no harm in having something like that available for a reviewer, especially a new one.


I can’t think of many good additions to the current checklist myself, and I’d like to see some examples of what people would like to add before forming a generic opinion about whether we need more or not. It sounds like perhaps the UW team has some things in mind?

I think I should clarify the proposal a bit: the suggestion is not to add items to the four-item list in the guidelines, but to expand the meaning of those items into a checklist.

For example, the question of whether a piece of code is well commented is open to interpretation. If SQUARE can say that documentation/comments should include the following:

  • a docstring explaining what the code does
  • parameters, with types, units, and conventions if any
  • optional parameters and their default values
  • a description of the return value and its type
  • side effects (for example, an output file is created), if any

then the developer can check that the code being submitted has these, as can the reviewer.
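For illustration, here is a docstring that satisfies all five points. The function is hypothetical, and I am assuming numpydoc-style section headings; the exact format would be for SQUARE to decide:

```python
import math


def flux_to_mag(flux, zero_point=27.0):
    """Convert a linear flux to a magnitude.

    Parameters
    ----------
    flux : float
        Source flux in linear counts; must be positive.
    zero_point : float, optional
        Photometric zero point, in magnitudes. Default is 27.0.

    Returns
    -------
    float
        The magnitude, computed as ``zero_point - 2.5 * log10(flux)``.

    Notes
    -----
    Side effects: none; no files are written and no global state changes.
    """
    if flux <= 0:
        raise ValueError("flux must be positive")
    return zero_point - 2.5 * math.log10(flux)
```

Each checklist item maps directly onto a part of the docstring, so both the developer and the reviewer can tick them off mechanically.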
I suppose that SQUARE intends reviewers to do all of this anyway, so this should not be more onerous for reviewers, but the checklist ensures that developers/reviewers are not missing an item (either due to interpretation of the rules or simply forgetfulness).
Obviously, it is impossible to have a perfect checklist for all scenarios, but it might still be helpful.

@frossie Maybe some of these things can be implemented as some kind of automated check, as you suggest … and, of course, that would be awesome. But I would guess that implementing that would involve coming up with a checklist and turning each item on the list into a test.

There’s an entire document dedicated to Python documentation practices. Your example checklist already has five items covering docstrings for functions/methods, but doesn’t include placement of the docstring, placement of the code after the docstring, indentation of the docstring, length of the docstring, delimiters of the docstring, etc. To that we would need to add similar checklists for class and module docstrings; and we haven’t even begun to talk about C++ documentation or Python and C++ coding styles, the guides for which are much more voluminous. The length of the checklist would be prohibitive.

I think the main difference between our viewpoints is that you seem to be considering the role of the reviewer as that of a policeman or judge, whose job is to prevent anything contravening the Standards from getting in. I think such a view is unrealistic because there are so many guidelines to enforce and there’s so much code wanting to get in (and let’s not forget all the code that’s in already that violates our standards). Instead, I see the role of the reviewer as a friend coming alongside and saying, “I know about and have been trying to follow these particular standards recently and I think your code would be better if you did too”; or “here’s an issue I don’t think you’ve thought about, you know, I think it would be better if you would do it differently because …”. I don’t think the main job of the reviewer is guarding the codebase, but improving each other.

It’s not essential that each and every review catch all the problems (whether in style or implementation) in a submission, but what is important is the interaction between people with different experiences. By “different experiences”, I don’t mean “different experience levels” — it’s not a matter of junior and senior. I’ve been involved with LSST for several years now, but my knowledge of the standard isn’t ironclad and I appreciate being called on things I’ve forgotten, or bad habits I’ve developed, or things I thought were in the standard but actually aren’t, or issues I just haven’t thought of because I’m in a rush.

Reviews aren’t principally concerned with making the code better (though that’s an important part), but with making us better. That’s why I encourage you to move your reviews around. Don’t just ask people at your institution, but get reviews across institutional lines and, if it makes sense, ask someone in Middleware or Database to review your Science Pipelines work — you may well learn something new that will make you better. If nothing else, you’ll learn what that person knows and cares about and his particular strengths so that when you have a review that needs those particular gifts you can take advantage of them.

Don’t be afraid to disagree with your reviewer, but do so slowly and respectfully. Respectful disagreement is helpful to everyone because then both people (and onlookers!) have to go back to first principles and think about the issue, and hopefully both will be the better for it. Those times when I’ve just wanted to strangle the reviewer have been the times that I’ve learned the most. Our standards are not verbum dei, and hence there may be some bad ideas in there, but we have a process for changing them and so if you find something that you don’t understand or agree with, start a discussion and we’ll all learn something one way or the other.

Reviews may feel like a burden, a requirement to make all your failures public (either as the submitter or the reviewer), but don’t think of them that way. They are wonderful opportunities for growth, both individually and as a team. And don’t think of our standards as some legal document that must be enforced and violators punished, but as a helpful guide to what our community views as good practices that we all want to attempt to emulate.


OK, that got away from me a bit.

The point is, you don’t need a checklist. When you’re asked to review, it’s because someone wants to learn from your particular experiences, so just share what you know and don’t worry about getting every nitty-gritty detail of the submission in perfect compliance with the standards. If you know and care about documentation, then pay attention to the documentation. Then the submitter will learn how to do documentation correctly, and when he reviews someone else he’ll correct them, and over time the whole team will be doing documentation better. You can change the world with one review, just not in the way you think.


What I can do is add some hyperlinks from the code review themes list to concrete docs on how things are done (for example, to the Python and C++ code and doc style guide pages).

(The Workflow page was one of the first things I assembled for LSST [I needed to figure out how to make contributions myself!], and so there wasn’t much supporting documentation. Now that we’re building up a constellation of topics, it is a good idea to make sure I/we are making appropriate links and de-duplicating content to make an effective information architecture.)


I like the suggestion by @timj about having two reviewers (and I already often assign multiple reviewers when the code changes are big). But perhaps it would be good practice to have one of them chosen by hand and one assigned automatically at random (by the system), to ensure varied comments? It is hard to pick at random by hand without introducing some form of bias 🙂
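A sketch of what such a random assignment could look like — the function name, arguments, and reviewer pool here are all hypothetical, and no such feature exists in our tooling as far as I know:

```python
import random


def pick_reviewers(candidates, author, chosen, seed=None):
    """Return the hand-picked reviewer plus one drawn at random.

    The author and the already-chosen reviewer are excluded from the
    random pool; `seed` makes the draw reproducible for testing.
    """
    rng = random.Random(seed)
    pool = [person for person in candidates if person not in (author, chosen)]
    return [chosen, rng.choice(pool)]
```

For example, `pick_reviewers(["alice", "bob", "carol", "dave"], author="alice", chosen="bob")` returns `"bob"` plus one of `"carol"` or `"dave"`, never the author.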


It’s hard to pick at random by machine too, because people have schedules the machine doesn’t know about.

While I’m sure having two assigned reviewers would improve the code quality, I’m not sure I would like to be involved in twice as many reviews as I currently am. And I suspect that at least some code is already getting two (or more!) reviewers because, in addition to the regularly scheduled reviewer, there are people who follow every PR and sometimes comment even when they’re not the reviewer.
