Talk:Test-driven development

From Wikipedia, the free encyclopedia

Limitations section

The "Limitations" section says that TDD "is currently immature". I don't understand what is meant by that.

Maybe a better term would be "Premature for the guy who wrote the Limitations section". I think we can all agree it's not for everyone. Slackmaster K 18:49, 18 July 2008 (UTC) —Preceding unsigned comment added by Tsilb (talkcontribs)

It also lists "GUI" as one of the limitations. As someone who works on complex Swing applications for a living and even has written a whole book chapter about TDDing Swing GUIs, I guess I simply disagree. It would be nice if the original author could clarify why he thinks it is a limitation.

Thanks!

Ilja

I would like to hear your thoughts on testing GUIs. Are you talking about abstracting functionality away from the GUI into a business logic layer, and just using the GUI as a thin interface layer on top? If so, technically you still aren't testing the GUI. Testing GUIs and graphics in general just isn't very feasible right now. Computers get lost, not "seeing the forest for the trees". One can use tests to check pixel values on the screen, but what good is that if the test can't determine *what* the picture shows? capnmidnight 16:18, 30 April 2006 (UTC)[reply]
I partially agree. Regardless of your level of abstraction, the Event Handlers are almost always in the inline code, scripting, or code behind. IMHO "GUI testing" is a test of whether or not the GUI reacts as expected. For example, if clicking a button instantiates an object in the DAL and it results in a NullReferenceException, it could be a GUI bug (i.e. Session cleared between Postbacks) or a DAL bug (i.e. no constructor, returned null, etc). Slackmaster K 18:55, 18 July 2008 (UTC) —Preceding unsigned comment added by Tsilb (talkcontribs)
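The "thin interface layer" approach discussed above can be sketched minimally in Python. This is an illustrative sketch only, not from the article or any referenced book; all class and method names (`OrderLogic`, `OrderPanel`, `total_with_tax`) are invented for the example. The point is that the event handler only delegates, so the logic is unit-testable without instantiating any GUI toolkit.

```python
class OrderLogic:
    """Business rules, free of any GUI dependency."""
    def total_with_tax(self, subtotal, tax_rate):
        if subtotal < 0:
            raise ValueError("subtotal cannot be negative")
        return round(subtotal * (1 + tax_rate), 2)

class OrderPanel:
    """Thin GUI layer: the click handler only forwards to the logic.
    In a real Swing/WinForms app this would read widget state instead
    of taking arguments."""
    def __init__(self, logic):
        self.logic = logic
        self.display = None
    def on_calculate_clicked(self, subtotal, tax_rate):
        self.display = self.logic.total_with_tax(subtotal, tax_rate)

# The test exercises the logic directly -- no event loop needed.
logic = OrderLogic()
assert logic.total_with_tax(100.0, 0.05) == 105.0
```

As capnmidnight notes, this still isn't testing the GUI itself; it only shrinks the untested surface to the forwarding code.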

In limitation #4, mention is made of fixing tests when refactoring invalidates them. I have run into this problem myself. I have looked for information on how to handle this situation and found nothing. This may be one of the things that make TDD immature. I would very much like to see details of how one can fix tests. --NathanHoltBeader (talk) 22:23, 25 August 2009 (UTC)[reply]

The only advice I know is, "Carefully, very carefully". I have become aware how 'precious' TDD tests are, because unless you are willing to scrap the whole piece of code and start TDDing again, they can never be replaced. Not everyone you may work with will be aware of that as they maintain the module! Tools like NCover do not solve the problem - a regex test may get used by the tests, but not necessarily in every boundary case that is important. I wonder, do we need something on the joys and limitations of NCover and its brethren here? --Nigelj (talk) 19:38, 26 August 2009 (UTC)[reply]

The link to "Test-driven Development using NUnit" in the References is non-functional in my browser (Firefox 1.0, Windows 2000). I suggest this link be eliminated, along with the Jason Gorman vanity page, as non-encyclopedic. --Ghewgill 18:35, 13 Jan 2005 (UTC)

Works fine on my Linux box with Firefox. He probably didn't have a pdf viewer installed. --Nigelj 20:34, 30 August 2006 (UTC)[reply]

Merge

A merge with Test Driven Development is in order. Actually that article frames the generic issues better, though it's a short stub, and this article seems (in the section Test Driven Development Cycle) to imply that there's a proper-named methodology here.

Ummm, the Test Driven Development page is just a redirect to this page: they are the same article.--Nigelj 20:31, 30 August 2006 (UTC)[reply]

Where is "Test Driven Development" from?

Please, who is the author of the first article about "Test Driven Development"? Who defends "Test Driven Development"? Etc. I think it would be very helpful for everyone who is researching it. --200.144.39.146 17:16, 10 March 2006 (UTC)[reply]

From the article: "the book Test-Driven Development by Example [Beck, K., Addison Wesley, 2003], which many consider to be the original source text on the concept in its modern form."--Nigelj 20:31, 30 August 2006 (UTC)[reply]
It predates 2003. We used the phrase in Java Development with Ant in 2002, and took it from one of Kent's earlier XP books. I think I first saw it in 2000, associated with JUnit work by Kent and Erich Gamma. SteveLoughran 10:41, 17 April 2007 (UTC)[reply]

Merge with Tester Driven Development

I read Tester Driven Development and it seems appropriate to move it into a "criticisms" section of this page. --Chris Pickett 22:32, 30 November 2006 (UTC)[reply]

No, I think that it is a play on similar words, but as a concept it is entirely different. I've never heard what that article describes called 'tester driven development', but I have heard it called 'feature-creep', 'buggy spec' etc. It's an example of one way a project manager can begin to lose control of the project, not anything to do with the development methodology the developers may or may not be using to produce the actual code. --Nigelj 22:04, 30 November 2006 (UTC)[reply]
I realize that "Tester Driven Development" is not the same thing as TDD. But it seems to me like this anti-pattern might actually describe TDD done badly. "It refers to any software development project where the software testing phase is too long."---clearly that includes TDD, since you can't get "longer" than "since before the project begins"! :) Well, at the very least, there should be a disambiguation page or "not to be confused with" bit or something, so that people looking for Tester Driven Development when they mean TDD don't get the wrong impression. --Chris Pickett 22:32, 30 November 2006 (UTC)[reply]
Tester-Driven Development is clearly a pun on TDD, but a different concept. I think Tester-Driven-Development could be pulled into an anti-patterns in testing article, which could include other critiques of TDD, and of common mistakes in testing (like not spending any time on it). SteveLoughran 10:42, 17 April 2007 (UTC)[reply]

Limits of computer science

Automated testing may not be feasible at the limits of computer science (cryptography, robustness, artificial intelligence, computer security etc).

What's the problem with automated testing of cryptography? This sentence is weird and needs to be clarified. — ciphergoth 11:08, 19 April 2007 (UTC)

Automated testing is pretty tricky today with testing that a web site looks OK on mobile phones, especially if it's a limited-run phone in a different country/network from where the dev team is; I'd worry more about that than AI limitations. Security is a hard one because one single defect can make a system insecure; testing tries to reassure through statistics (most configurations appear to work), but can never be used to guarantee correctness of a protocol or implementation. 'Robustness' is getting easier to test with tools like VMWare and Xen...you can create virtual networks with virtual hardware and simulate network failures. SteveLoughran 21:24, 19 April 2007 (UTC)[reply]

Benefits need some citation

Test driven development is a great idea. But some of the claims in the Benefits section are unsupported. I think at the least they need to be cited, or downgrade the claims to be opinions (and reference those who hold these opinions). For example, the claim that even though more code is written, that a project can be completed more quickly. Has anyone documented an experiment with two identical teams one using TDD and the other not?

Similarly, the claim that TDD programmers hardly ever need to use a debugger sounds ludicrous to me (how about debugging the tests?). As does the claim that it's more productive to delete code that fails a test and rewrite it than it is to debug and fix it. Here's the current text from the article:

Programmers using pure TDD on new ("greenfield") projects report they only rarely feel the need to invoke a debugger. Used in conjunction with a Version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests is almost always more productive than debugging.

Mike Koss 21:57, 28 July 2007 (UTC)[reply]

I agree: the claim that "reverting" is the best approach to a test failure is, to use a UK technical term, bollocks. To roll back all changes the moment a test fails implies you cannot add new features to a program, because it is inevitable that changes break tests - regression testing is one of their values. Once a test fails, you have a number of options:
  • roll back the change, hit the person who made it with a rolled up copy of a WS-* specification
  • run the tests with extra diagnostics turned on, and try and infer why the recent change(s) are breaking it.
  • attach a debugger to the test case and see what happens when the test runs.
Debugging a test run is a fantastic way of finding out why a test fails. The test setup puts you into the right state for the problem arising, and once you've found the problem, you can use the test outside the debugger to verify it has gone away. This is another reason why you should turn every bug report into a test case - it should be the first step to tracking down the problem. The only times you wouldn't use a debugger are when logging and diagnostics are sufficient to identify the problem, or when you are in a situation where you can't debug the test run (it's remote, embedded, or the debugger affects the outcome).
I would argue strongly for removing that claimed benefit entirely. SteveLoughran 14:27, 18 October 2007 (UTC)[reply]
In fact, modern revision control systems are adding features to make this process more convenient. For example, the git revision control system contains a git-bisect command which takes you on a binary search between the current broken revision and a known working revision, driven by the results of your unit tests to find the exact commit where your test failed. --IanOsgood 19:43, 28 September 2007 (UTC)[reply]
Nice. Continuous Integration tools do a good job of catching problems early, provided they are running regularly enough, and developers check in regularly, instead of committing a week's worth of changes on a Friday. SteveLoughran 14:29, 18 October 2007 (UTC)[reply]
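The advice above to turn every bug report into a test case can be sketched minimally. This is a hypothetical example, not taken from the article: `first_n_squares` and the bug report it pins down are invented; the shape (one test named for the report, asserting the previously broken behavior) is the general technique.

```python
import unittest

# Hypothetical function that once had an off-by-one bug: it returned
# an empty list for n == 1. The regression test below pins the fix so
# the bug cannot silently return.
def first_n_squares(n):
    return [i * i for i in range(1, n + 1)]

class BugReport42Test(unittest.TestCase):
    """Regression test written from a (hypothetical) bug report:
    'first_n_squares(1) returns [] instead of [1]'."""
    def test_single_element(self):
        self.assertEqual(first_n_squares(1), [1])
    def test_typical_case(self):
        self.assertEqual(first_n_squares(3), [1, 4, 9])

# Run the regression suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(BugReport42Test)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Such a test also makes a convenient target for a debugger, per SteveLoughran's point: the test setup reproduces the reported failure state on demand.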


External links

I'm pretty unimpressed with the quality of half the external links here...they primarily point to various blog entries in .NET land. Those aren't the in-depth references that Wikipedia likes to link to. I've gone through and (a) cut out anything that wasn't very educational or required logins to read and (b) limited links to one per blog. Furthermore, I'm not convinced by the referenced articles that VS2005 is capable of test-first development, at least if you use the built-in framework, but I left the articles in.

Given that TDD originated in Smalltalk and Java, I'd expect to see some more there. Here are some options:

  • Push all product listings to the separate List of unit testing frameworks
  • Split stuff up by .NET, Java, Ruby, other platforms, and try and keep coverage of all of them.
  • Create separate pages for testing in Java, testing in .net, where we can cover the controversy (The junit4 problem, testng vs junit, NUnit versus the MS Test tools...)

If there is value in all the remaining blog entries, this content could be used as the inspiration for a better wikipedia article. —Preceding unsigned comment added by SteveLoughran (talkcontribs) 20:46, 17 January 2008 (UTC)[reply]

The links haven't improved much. Unless anyone else wants to, I'm going to take a sharp knife to the links. Those that make good articles to reference should become references. Those that don't get cut. I don't want to impose a 'no blog' rule because it is how some of the best TDD proponents get their message out. But we have to have high standards: it has to be something in the class of Martin Fowler's blog to get a mention. SteveLoughran (talk) 21:32, 25 May 2008 (UTC)[reply]
I've just moved some of the links around and purged any that weren't current or particularly good. Another iteration trimming out the .NET articles is needed. SteveLoughran (talk) 22:12, 25 May 2008 (UTC)[reply]
I cleaned out quite a lot of them. Let's start fresh and add back valuable stuff one at a time as needed (if in fact any are needed). - MrOllie (talk) 01:10, 13 June 2009 (UTC)[reply]

Code visibility

In this edit an anonymous user at 65.57.245.11 completely re-wrote the Code visibility section without comment or discussion. S/he introduced a discussion of black-box, white-box and glass-box testing. As far as I know, these concepts relate to testing, but are quite alien to test-driven development. In my experience, if you are unit-testing as you develop, you often have to test the details of code whose function is vital, but that will end up 'private' in the final deployment.

I have been re-reading some of 'TDD by example' by Kent Beck, but can't find any discussion of these issues. Have other editors got some reputable references that can help us get to the bottom of what should be in this section? —Preceding unsigned comment added by Nigelj (talkcontribs) 23:04, 25 January 2008 (UTC)[reply]

Certainly black-box and white-box are used in testing, but not usually in TDD, where the tests are written before the code. That said, there is always the issue of whether to test the internals of the code or just the public API. Internals: more detailed checking of state. Externals: guarantees stability of the public API, and provides good code examples for others. SteveLoughran (talk) 22:10, 25 May 2008 (UTC)[reply]

I think that the discussion of black-box, white-box and glass-box testing is out of place here as it has no relevance to TDD. In my experience, you have to write tests at potentially every level of the code, depending on your confidence level and therefore the size of the TDD 'steps' that you are taking at the time. If that means you have to end up making encapsulated members public just to test them, it can ruin the proper encapsulation of the actual design. That's why I originally wrote this section, as that's important, and there are tricks that help with it that are non-obvious at first. The section's fairly meaningless now, but now that every entry in WP has to be referenced, I'm afraid I don't have time at the moment to track down references for all that was there before this edit. It's a shame that the section's currently full of such irrelevant and useless tosh at the moment, though. --Nigelj (talk) 17:21, 26 May 2008 (UTC)[reply]
Having read it again, it reads like almost academic paper-style definitions of things, apparently not written by a practitioner of TDD; some projects clearly scope test cases to have access to internal state of the classes. As you say, it needs to be massively reworked. We can put the x-references in after. Overall, I'm worried about this article...it seems to have got a bit long and become a place for everyone with a test framework or blog on testing to put a link. If we trim the core text, we'd be making a start at improving it. SteveLoughran (talk) 09:51, 27 May 2008 (UTC)[reply]

Ja test-Driven Development consist of many prototypes Ja —Preceding unsigned comment added by 196.21.61.112 (talk) 07:45, 1 September 2008 (UTC)[reply]

I made this edit to the Code Visibility paragraph, which has been reverted - incorrectly, IMHO. I'd be willing to discuss it if the person who reverted it wishes to, or can cite sources which support that section. I already linked to a mailing list post by Dave Astels, who wrote the other TDD book. Gishu Pillai (talk) 16:36, 6 December 2008 (UTC)[reply]

If you think something is worth testing and yet it should be private, that is a code smell. You should consider moving this code to a new class where it is legitimately public, and creating a private instance of that class in the original place. — Preceding unsigned comment added by 93.192.8.123 (talk) 07:48, 30 May 2011 (UTC)[reply]
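The "extract class" advice in the comment above can be sketched as follows. This is a hypothetical illustration: `ReportGenerator`, `CurrencyFormatter`, and the formatting rule are invented names, not from the article. The idea is that logic which would otherwise hide in a private method moves to its own small class, where it is legitimately public and directly testable.

```python
class CurrencyFormatter:
    """Extracted collaborator: formerly a private helper method,
    now legitimately public and easy to test on its own."""
    def format(self, amount):
        # Thousands separator, two decimal places.
        return "${:,.2f}".format(amount)

class ReportGenerator:
    """Original class keeps a private instance of the extracted class,
    as the comment above suggests."""
    def __init__(self):
        self._formatter = CurrencyFormatter()
    def line_item(self, label, amount):
        return "{}: {}".format(label, self._formatter.format(amount))

# The extracted logic is tested through a public interface, so no
# encapsulation needs to be broken for the tests' benefit.
assert CurrencyFormatter().format(1234.5) == "$1,234.50"
assert ReportGenerator().line_item("Total", 99) == "Total: $99.00"
```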

What Does This Mean

I can't figure out what this sentence is supposed to mean: "To achieve some advanced design concept (such as a design pattern), tests are written that will generate that design." What does it mean for tests to generate a design concept? —Preceding unsigned comment added by 203.45.67.213 (talk) 00:52, 5 October 2009 (UTC)[reply]

Likewise, this quotation in the Development style section:

 In Test-Driven Development by Example Kent Beck also suggests the principle "Fake it till you make it".

What does that mean in the context of TDD? —Preceding unsigned comment added by 142.244.167.122 (talk) 00:18, 8 April 2011 (UTC)[reply]
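As I understand Beck's usage, "fake it till you make it" means making the new test pass with a hard-coded constant first, then letting further tests ("triangulation") force the real implementation. A hypothetical Python sketch (the `degrees_to_radians` example is invented, not from the book or article):

```python
import math

# Step 1 ("fake it"): the first test is made green with a constant.
def degrees_to_radians_fake(degrees):
    return 3.141592653589793   # fake: just enough for the first test

assert degrees_to_radians_fake(180) == 3.141592653589793   # green

# Step 2 ("make it"): a second test would expose the fake, so the
# constant is generalized into the real computation.
def degrees_to_radians(degrees):
    return degrees * math.pi / 180.0

assert abs(degrees_to_radians(180) - math.pi) < 1e-12
assert abs(degrees_to_radians(90) - math.pi / 2) < 1e-12
```

The value of the fake step is that it gets the test harness, names, and call signature settled while the bar stays green.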

Flowchart

"Test fails" and "Test succeeds" appear to be backwards in the flow chart. Rewriting a test if it succeeds, and writing production code if it fails doesn't make sense. —Preceding unsigned comment added by 134.167.1.1 (talk) 20:41, 2 December 2009 (UTC)[reply]

No, it is right. If a new test passes straight away, it isn't going to 'drive' any new code development, so you re-write the test until it needs some new code to be written to make it pass. (This is an unusual requirement, usually a new test fails straight away and you can proceed, but it can happen) Then, you have a failing test and you write new, production code until it passes. (That's the bit where the test 'drives' the development) Then refactor the code, keeping the pass. And then repeat. Red-Green-Refactor. --Nigelj (talk) 20:56, 2 December 2009 (UTC)[reply]
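Nigelj's description of the cycle can be sketched as one red-green pass in Python. This is a hypothetical fizzbuzz example, not from the article; in real TDD the test would be run and seen to fail (red) before any of the production code below it exists.

```python
# RED: the test is written first. At this point fizzbuzz() does not
# exist, so running test_fizzbuzz() fails -- that failure is what
# "drives" the next step.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(4) == "4"

# GREEN: write just enough production code to make the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# REFACTOR: clean up while keeping the test green, then repeat.
test_fizzbuzz()
```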

Criticisms

Many articles have a "Criticisms" section; why doesn't Test-Driven Development? —Preceding unsigned comment added by 8.7.228.252 (talk) 22:19, 10 June 2010 (UTC)[reply]

See WP:STRUCTURE - it is usually best to "fold[] debates into the narrative, rather than distilling them into separate sections", so any relevant, well-sourced criticisms of TDD should be mentioned, as they arise, in the normal text rather than separated out into a "Criticisms" section. --Nigelj (talk) 08:53, 11 June 2010 (UTC)[reply]

Blurry flowchart

What's with the blurry flowchart? Why is it png instead of pure svg? 69.120.152.184 (talk) 02:15, 26 September 2011 (UTC)[reply]