WikiProject Computing (Rated Start-class)
...copied from article, "Please expand this article. These random notes should be changed to a more coherent article." Badanedwa 22:52, May 1, 2004 (UTC)
This article is somewhat obsolete, and feels too Unix-oriented. Maybe it should be edited to move away from this Unix-oriented style, then add more present-day material. Ok, maybe I am wrong. What do other people think about the quality of this article? --Tei 15:09, 19 July 2005 (UTC)
How does Defensive Programming differ from Standards of Good Software Development? I have been a computer programmer for over 40 years and fully agree that there is an enormous volume of crud out there, as we can see from the volume of patches coming from major software suppliers to fix problems that should never have been there in the first place. But almost everything I see in this article describes standards that nearly every programmer should adhere to (with variations in how they are implemented in each programming language), yet few do. User:AlMac|(talk) 18:11, 18 January 2006 (UTC)
I do NOT adhere to these rules
I see defensive programming as a method that can be used when it is both reasonable to expect, and possible to counteract, intentional and accidental misuse of the code. The problem is that it makes it easy to hide a problem without alerting anyone about it. It is a design choice among others. Another approach, if defensive programming is not used, is to let the application crash, or in some other less drastic way to reject the invalid call. In other words, enforce the correct use of the piece of software in question.
The idea that defensive programming is simply good programming is wrong; it is one way of designing the code among others. The question is how to define the term properly. —Preceding unsigned comment added by 184.108.40.206 (talk • contribs)
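The two approaches described above, tolerating misuse defensively versus enforcing correct use and failing immediately, might be contrasted like this in C. This is a minimal sketch; both function names are made up for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Defensive style: tolerate a NULL pointer and report failure via a
 * sentinel value. The misuse is handled, but also quietly hidden. */
int length_defensive(const char *s)
{
    if (s == NULL)
        return -1;          /* invalid call is absorbed, not reported loudly */
    int n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

/* Fail-fast style: enforce the contract and abort immediately on
 * misuse, so the bug is caught at its source instead of being masked. */
int length_failfast(const char *s)
{
    assert(s != NULL);      /* invalid call crashes here, visibly */
    int n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}
```

The trade-off is exactly the one raised above: the defensive version keeps the application running, while the fail-fast version makes the incorrect call impossible to overlook.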
- You didn't specify the programming language you use. In languages where an exception would be thrown and caught, and the traceback would be logged - sure. But in lower-level languages, most notably C, ignoring error returns will often lead to hard-to-debug situations much later in the code, possibly having overwritten the stack in some situations. Of course it depends on how well thought-out your error handling policy is; I personally consider it easier to log a message and attempt to abort the current operation as gracefully as is realistic. I realize the error handling codepaths are more likely to have bugs, as they're not exercised very often, but it's not a much worse situation. -- intgr 17:14, 3 October 2006 (UTC)
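The point about checking error returns in C rather than ignoring them might be sketched like this. The function name and path are hypothetical, but `fopen` returning `NULL` on failure is standard C behavior:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Check the error return instead of ignoring it: log a message and
 * abort the current operation gracefully rather than proceed with a
 * NULL handle (which would crash much later, far from the cause). */
int read_first_line(const char *path, char *buf, size_t buf_size)
{
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        fprintf(stderr, "cannot open %s: %s\n", path, strerror(errno));
        return -1;          /* caller sees the failure immediately */
    }
    if (fgets(buf, (int)buf_size, f) == NULL)
        buf[0] = '\0';      /* empty file: return an empty string */
    fclose(f);
    return 0;
}
```

A caller that ignored the `-1` here would be in exactly the hard-to-debug situation described above; a caller that checks it can recover or report at the point of failure.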
I've removed the "Other examples" section, introduced by Beno1000 on 01:24, 29 September 2008 in this edit, since, besides being rather awkwardly tacked onto the end of the article, it actually contains some factual errors. (In particular, division by zero does not normally produce buffer overruns.) In case someone wants to try making something usable out of it, here's the content of the section as it was when I removed it:
|“||A rather common and infamous programming error is division by zero. Normally, this will cause a buffer overrun as the program tries to calculate to infinity. Some programs will detect this abnormality and quit gracefully, while others will hang or crash outright and others still will continue to run, but abnormally. However, this can be prevented by a simple if statement which will return an error message to the user, as in the pseudocode below.
If inputted data is zero then
    display an error message saying "Cannot divide by zero"
Else
    Divide the inputted data by the number stored in the number buffer (variable)
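Setting aside the quote's incorrect claim about buffer overruns, the guard it describes is sound defensive practice. A minimal C sketch of it (the function name is made up; integer division is assumed):

```c
#include <stdio.h>

/* Defensive guard against division by zero: validate the divisor
 * before dividing instead of letting the operation fault. Returns 0
 * on success and -1 on a rejected divisor. */
int safe_divide(int numerator, int divisor, int *result)
{
    if (divisor == 0) {
        fprintf(stderr, "Cannot divide by zero\n");
        return -1;          /* signal the error to the caller */
    }
    *result = numerator / divisor;
    return 0;
}
```

Note that in C, integer division by zero is undefined behavior rather than "calculating to infinity", which is another reason the check belongs before the division.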
Citing Murphy's Law as justification for defensive programming is ridiculous. It is not a physical law; it's a humorous observation on life. The case can be argued much better using terms like risk. Nczempin (talk) 16:15, 21 October 2008 (UTC)
When to error out
I consider the function 'high_quality_programming' worse than 'low_quality_programming', which is very bad too. If the second function is called with a string longer than LENGTH characters, then it will SILENTLY truncate it. I assume here that the function is supposed to work on the complete string, not just on the first characters in case the string is too long. If it is allowed to pass a string longer than LENGTH, then the first function has undefined behavior, and a crash somewhere else is hard to track down. The second function does the wrong thing, could cause even more subtle changes in your program, and with no crash or other symptom is most likely even harder to debug. A truly defensive function would error out if it is called with arguments that it cannot handle. Initializing the target is also not necessary; it would be enough to make sure the result string is null-terminated. strncpy also would have to be called with LENGTH+1 to make sure that the terminator is copied for strings of length LENGTH. The result of strncpy can be used to check whether the string was null-terminated. The function has to error out if truncating is not what the function is specified to do. Sconden (talk) 20:28, 11 October 2010 (UTC)
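The "error out instead of silently truncating" behavior argued for above might look like this. This is a sketch, not the article's function; `checked_copy` and the buffer size are hypothetical:

```c
#include <stdio.h>
#include <string.h>

/* Defensive copy that refuses input it cannot handle, instead of
 * silently truncating it: returns 0 on success, -1 if src does not
 * fit in dst (including its null terminator). */
int checked_copy(char *dst, size_t dst_size, const char *src)
{
    size_t len = strlen(src);
    if (len >= dst_size) {
        fprintf(stderr, "input too long (%zu chars, capacity %zu)\n",
                len, dst_size - 1);
        return -1;          /* error out: caller decides what to do */
    }
    memcpy(dst, src, len + 1);  /* + 1 copies the terminator too */
    return 0;
}
```

Unlike a bare strncpy, this can never hand the caller a truncated result without telling anyone, which addresses the silent-misbehavior problem described above.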
I don't mind the canonical strcpy()/strncpy() example (though I think there are much less subtle examples of defensive programming), but would someone mind explaining to me the significance of the obfuscated memset() call versus a much clearer str[LENGTH] = '\0'? strncpy already pads with null, so memset() is not only vague and obfuscating, it's redundant. Honestly, I think the null bounding could be omitted entirely since it's irrelevant to the example's main point. Jrajav (talk) 03:54, 13 December 2011 (UTC)
Vague and Off-track
I just came across this article and have some significant problems with it. Though there are some good things about it, in general it is vague and even incorrect in places. Towards the end it just becomes a succession of rambling statements, mostly irrelevant.
Take the first paragraph -- what does "ensure the continuing function" mean exactly? How can you cater for "unforeseeable usage" when by definition you can't foresee it - "unexpected" would be a better word here. The mention of Murphy's Law makes me think the rest of the article is actually a joke. Emotive words like "mischievously" and "catastrophic" are also inappropriate.
The next bit is even worse. The true definition of defensive programming is open to debate (see below), but it is definitely not about removing bugs (rather about lessening their effect), or about making code comprehensible. (Also, why mention readable and understandable after the word comprehensible - all three words mean the same thing in this context. And what do "code audits" have to do with it?) The last sentence about "making the software behave predictably" is getting closer to the meaning but not really true. Also, what is the difference between "inputs" and "user actions"?
The code example is a reasonable example of defensive programming. One confusing thing is why the buffer "str" is declared to be 1001 characters in the first snippet and only 1000 in the second. Also (as pointed out above) the memset is unnecessary since clearing the last byte is sufficient.
I think a simpler example would be a for loop. It also avoids the confusion between defensive programming and secure programming, since buffer overruns are a major aspect of secure programming but not so much in defensive programming.
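A loop-based example along the lines suggested above might look like this. It is a sketch; the function name and signature are made up for illustration:

```c
#include <stddef.h>

/* Defensive loop: clamp the caller-supplied count to the real array
 * size instead of trusting it, so a bad count cannot read past the
 * end of the buffer. */
int sum_first(const int *data, size_t size, size_t count)
{
    int sum = 0;
    if (count > size)
        count = size;       /* defend against an oversized count */
    for (size_t i = 0; i < count; i++)
        sum += data[i];
    return sum;
}
```

The defensive element is the clamp on `count`, not anything security-specific, which keeps the example about defensive programming rather than secure programming.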
However, my main problem with the article is that it does not describe the most commonly accepted definition of defensive programming, which is simply code that recovers from unexpected situations. This can be useful in "secure programming", but input validation is more useful for that purpose.
It also does not mention the problems with defensive programming which is that it tends to hide the presence of bugs. — Preceding unsigned comment added by 220.127.116.11 (talk) 02:36, 27 March 2012 (UTC)