Merge with software inspection
- Never mind. There appears to be some differentiation. --Stevietheman 18:01, 28 Dec 2004 (UTC)
Code review bias
This page is very much pro-code-review. Whilst code review by the community is a key element of open source production, line-by-line code review is not considered an efficient method of defect removal in closed source projects. A likely reason for the disparity is that paid programmers would find this kind of process laborious and boring, whereas an interested third party looking at an open source project is likely to be more focused. DanielVale 02:54, 7 November 2006 (UTC)
- I would say instead that it's biased against code review. The "criticism" section is longer than the "examples" section. Furthermore, even if code review is not the most efficient method of finding defects, my personal experience is that it is good at finding code which is not well written, and improving the structure makes future maintenance easier and less risky. This web page lists several articles supporting the efficacy of code reviews. Once I pass my next deadline, I will take a look at the articles on both sides and see what I can do to expand and improve this article. (I will note here my personal experience that code review is an effective way of finding bugs that are unlikely to be discovered through validation, particularly in extremely complex systems where it is not practical to test all paths through the code.) Matchups 15:13, 15 February 2007 (UTC)
- The statement "line-by-line code review is not considered an efficient method of defect removal in closed source projects" begs a reference. Anecdotal accounts of inefficiency frequently find their root cause in poor execution and/or measurement - though measures rarely exist in these cases. All the objective literature I've read makes the exact opposite point: some examples are reported in detail at the SEI's site (e.g., Raytheon Electronic Systems Experience in Software Process Improvement), and others are available through subscription only at IEEE Xplore. One could reasonably argue that claims of inefficiency are usually opinion and rarely, if ever, accompanied by any objective measures of the inefficiency. RobR 03:33, 27 July 2010 (UTC)
Hello. The second paragraph of the 'Criticism' section - "Use of code analysis tools can support this activity. Especially tools that..." - doesn't belong there. We could add a 'Tooling' section for that, if you want. — Preceding unsigned comment added by 188.8.131.52 (talk) 09:19, 16 January 2013 (UTC)
This diff removed "idealistic wording" which previously stated that code review had improved certain projects and replaced it with a statement that it is claimed that code review improved the projects. The revised text claims that it is impossible to tell with certainty that code review improved the projects, as they were only implemented once and thus there is no means of comparison.
I disagree. Having performed many code reviews, I know that the changes I have proposed have improved the code, and I am confident that if any uninvolved person had looked at the before and after versions, they would agree. There's no need for an external control; these two versions of the project are adequate to demonstrate the value of code review.
However, I do agree with Derek that there was a problem with the article, as the articles on the two purported example projects do not mention "code review," much less provide any justification for the claim that code review has improved them. I believe that what should be done is to find RS's with specific documented examples of projects which have been improved by code review and use those in this article. We should also remove Blender and Linux as examples, unless somebody with knowledge of those projects is able to edit their main articles to add a discussion of code review. Matchups 16:43, 8 July 2007 (UTC)
Facts and figures from research
When visiting this article, I expected to find references to the research actually performed in the area and the key metrics that the different studies and experiments came up with. In my humble view, it's less interesting to hear that code review is either good or bad than to see, e.g., a table of the different studies' defect rates, showing that an hour spent means 15 bugs dead. To me, it's obvious that eyeballing the code means finding defects, which means a chance to remove defects, which means more stable code. The main issue to me is under what circumstances code review is the better-spent hour if I want to spend an hour making the code more stable (compared to testing, or even refactoring).
The article seems to be focused on code review as a means of finding "defects". As a developer with 25 years of experience, I would say that is only a secondary purpose of code review. Code review is largely about honing a uniform coding culture within an enterprise, about making code more readable, etc. That is to say, it is not at all a substitute for testing: it is much more tied to the art of programming than the science of programming. - Jmabel | Talk 19:35, 25 February 2008 (UTC)
While I agree that code reviews are very effective in achieving the results you suggest, the primary justification and benefit of code review is finding and fixing defects. In addition to the obvious functional defects found through review, violations of coding standards and poor readability are, in fact, also defects. If you lose sight of this, it would be easy to rationalize dropping code review in stabilized teams, and to disregard the fact that developers are still human and make mistakes. RobR (talk) 02:59, 27 July 2010 (UTC)
Merge with Code audit
I think this article needs to be combined with Code audit 20:32, 22 Aug 2008 (UTC)
- I'm not opposed to this, but I'd like to point out that there is a pretty significant distinction between the two terms (at least how I understand it): code review usually takes place incrementally, reviewing all individual changes that are made to the code base. Code audit on the other hand usually takes place on a consistent snapshot of the code page to systematically check everything. -- intgr [talk] 14:49, 23 August 2008 (UTC)
The "Types" section's discussion of "Lightweight" is too general/biased
The description of Lightweight code review (section Types) is vague and wildly generalized, and includes some errors.
First, there's no definition. The discussion seems to lump all of the review types that aren't formal (i.e., Fagan Inspection, CMMI Peer Review, etc.) under the umbrella term "Lightweight". Some example methods are suggested but not explained or referenced. This adds no value to the discussion.
It's unclear which of these methods is meant by the statement that they "can be equally effective when done properly", and there is no suggestion as to what constitutes "properly".
The reference to Walkthrough is also in error: "Some of these may also be labeled a "Walkthrough" (informal)" - see Walkthrough.
Finally, the whole section is slanted in favor of "lightweight" versus "formal" review, e.g., in the second and last paragraphs. The latter cites a book by Jason Cohen, but Amazon.com shows only 4 mediocre reviews of the book. When I look at the author's four other works, I see nothing related to software at all - I'm hard pressed to lend any trust to his credentials!
At best, I would suggest subsections, or internal references, that cover each type - Inspections, Walkthroughs, and Pair Programming are already defined, leaving "Over-the-shoulder", "Email pass-around", and "Tool-assisted code review".
This link at the bottom: Security Code Review FAQs - Is actually a link to an IBM product page and is not a Security code review FAQ —Preceding unsigned comment added by 184.108.40.206 (talk) 16:14, 12 November 2010 (UTC)