There’s an entire document dedicated to Python documentation practices. Your example checklist already has five items covering docstrings for functions and methods, but it doesn’t include placement of the docstring, placement of the code after it, its indentation, its length, its delimiters, and so on. To that we would need to add a similar checklist for class and module docstrings; and we haven’t even begun to talk about C++ documentation, or the Python and C++ coding styles, whose guides are much more voluminous. The length of the checklist would be prohibitive.
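To make that concrete, here is a minimal sketch of the kind of thing a reviewer would have to hold in their head for a single function docstring. The function and its parameters are invented for illustration, and the conventions shown follow general PEP 257-style guidance rather than quoting any particular item from our standards:

```python
import numpy as np


def estimate_background(image, bin_size=128):
    """Estimate the smooth background level of an image.

    The docstring sits on the line immediately after the ``def`` line
    (placement), uses triple double quotes (delimiters), is indented to
    match the function body (indentation), and opens with a one-line
    summary followed by a blank line (length and structure).

    Parameters
    ----------
    image : numpy.ndarray
        Pixel data to analyse.
    bin_size : int, optional
        Nominal size of the bins a real implementation would use.
    """
    # Code resumes here, directly after the closing delimiter
    # (placement of the code after the docstring).  The body is just a
    # placeholder; the docstring conventions are the point.
    return float(image.mean())


print(estimate_background(np.zeros((16, 16))))
```

That is half a dozen rules for one docstring, before we get to its content, and multiplying that across every guideline is exactly why the checklist approach doesn’t scale.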
I think the main difference between our viewpoints is that you seem to consider the reviewer a policeman or judge whose job is to prevent anything contravening the Standards from getting in. I think such a view is unrealistic because there are so many guidelines to enforce and so much code wanting to get in (and let’s not forget all the code that’s in already that violates our standards). Instead, I see the reviewer as a friend coming alongside and saying, “I know about and have been trying to follow these particular standards recently, and I think your code would be better if you did too”, or “here’s an issue I don’t think you’ve thought about; I think it would be better done differently because …”. I don’t think the main job of the reviewer is to guard the codebase, but to help us improve each other.
It’s not essential that each and every review catch all the problems (whether in style or implementation) in a submission, but what is important is the interaction between people with different experiences. By “different experiences”, I don’t mean “different experience levels” — it’s not a matter of junior and senior. I’ve been involved with LSST for several years now, but my knowledge of the standard isn’t ironclad and I appreciate being called on things I’ve forgotten, or bad habits I’ve developed, or things I thought were in the standard but actually aren’t, or issues I just haven’t thought of because I’m in a rush.
Reviews aren’t principally concerned with making the code better (though that’s an important part), but with making us better. That’s why I encourage you to move your reviews around. Don’t just ask people at your institution, but get reviews across institutional lines and, if it makes sense, ask someone in Middleware or Database to review your Science Pipelines work — you may well learn something new that will make you better. If nothing else, you’ll learn what that person knows and cares about and his particular strengths so that when you have a review that needs those particular gifts you can take advantage of them.
Don’t be afraid to disagree with your reviewer, but do so slowly and respectfully. Respectful disagreement is helpful to everyone because then both people (and onlookers!) have to go back to first principles and think about the issue, and hopefully both will be the better for it. The times when I’ve just wanted to strangle the reviewer have been the times that I’ve learned the most. Our standards are not the word of God, and hence there may be some bad ideas in there; but we have a process for changing them, so if you find something that you don’t understand or don’t agree with, start a discussion and we’ll all learn something one way or the other.
Reviews may feel like a burden, a requirement to make all your failures public (whether as submitter or reviewer), but don’t think of them that way. They are wonderful opportunities for growth, both individually and as a team. And don’t think of our standards as some legal document that must be enforced and its violators punished, but as a helpful guide to what our community views as good practices that we all want to emulate.