Continuous testing

From Wikipedia, the free encyclopedia

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[1][2]

For Continuous Testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[3]

Adoption drivers[edit]

In the 2010s, software has become a key business differentiator.[4] As a result, organizations now expect software development teams to deliver more, and more innovative, software within shorter delivery cycles.[5][6] To meet these demands, teams have turned to lean approaches, such as Agile, DevOps, and Continuous Delivery, to try to speed up the SDLC.[7] After accelerating other aspects of the delivery pipeline, teams typically find that their testing process is preventing them from achieving the expected benefits of their SDLC acceleration initiative.[8] Testing and the overall quality process remain problematic for several key reasons.[9]

  • Traditional testing processes are too slow. Iteration length has changed from months to weeks or days with the rising popularity of Agile, DevOps, and Continuous Delivery. Traditional methods of testing, which rely heavily on manual testing and automated GUI tests that require frequent updating, cannot keep pace.[8][10] At this point, organizations tend to recognize the need to extend their test automation efforts.[1][11]
  • Even after more automation is added to the existing test process, managers still lack adequate insight into the level of risk associated with an application at any given point in time.[2] Understanding these risks is critical for making the rapid go/no go decisions involved in Continuous Delivery processes.[12] If tests are developed without an understanding of what the business considers to be an acceptable level of risk, it is possible to have a release candidate that passes all the available tests, but which the business leaders would not consider to be ready for release.[13] For the test results to accurately indicate whether each release candidate meets business expectations, the approach to designing tests must be based on the business's tolerance for risks related to security, performance, reliability, and compliance.[4] In addition to having unit tests that check code at a very granular bottom-up level, there is a need for a broader suite of tests to provide a top-down assessment of the release candidate's business risk.[3]
  • Even if testing is automated and tests effectively measure the level of business risk, teams without a coordinated end-to-end quality process tend to have trouble satisfying the business expectations within today's compressed delivery cycles.[3] Trying to remove risks at the end of each iteration has been shown to be significantly slower and more resource-intensive than building quality into the product through defect prevention strategies such as development testing.[14][15]

Organizations adopt Continuous Testing because they recognize that these problems are preventing them from delivering quality software at the desired speed. They recognize the growing importance of software as well as the rising cost of software failure, and they are no longer willing to make a tradeoff between time, scope, and quality.[2][16][17]

Goals and benefits[edit]

The goal of continuous testing is to provide fast and continuous feedback regarding the level of business risk in the latest build or release candidate.[2] This information can then be used to determine if the software is ready to progress through the delivery pipeline at any given time.[1][4][12][18]

Since testing begins early and is executed continuously, application risks are exposed soon after they are introduced.[5] Development teams can then prevent those problems from progressing to the next stage of the SDLC. This reduces the time and effort that need to be spent finding and fixing defects. As a result, it is possible to increase the speed and frequency at which quality software (software that meets expectations for an acceptable level of risk) is delivered, as well as decrease technical debt.[3][9][19]

Moreover, when software quality efforts and testing are aligned with business expectations, test execution produces a prioritized list of actionable tasks (rather than a potentially overwhelming number of findings that require manual review). This helps teams focus their efforts on the quality tasks that will have the greatest impact, based on their organization's goals and priorities.[2]

Additionally, when teams are continuously executing a broad set of continuous tests throughout the SDLC, they amass metrics regarding the quality of the process as well as the state of the software. The resulting metrics can be used to re-examine and optimize the process itself, including the effectiveness of those tests. This information can be used to establish a feedback loop that helps teams incrementally improve the process.[3][9] Frequent measurement, tight feedback loops, and continuous improvement are key principles of DevOps.[20]

Scope of testing[edit]

Continuous testing includes the validation of both functional requirements and non-functional requirements.

For testing functional requirements (functional testing), Continuous Testing often involves unit tests, API testing, integration testing, and system testing. For testing non-functional requirements (non-functional testing, to determine whether the application meets expectations around performance, security, compliance, etc.), it involves practices such as static code analysis, security testing, and performance testing.[8][19] Tests should be designed to provide the earliest possible detection (or prevention) of the risks that are most critical for the business or organization that is releasing the software.[5]
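As a minimal sketch of how a single continuous test run can pair a functional check with a non-functional one, consider the hypothetical function and latency budget below (neither is from a specific tool or the article; both are illustrative assumptions):

```python
import time

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def check_functional():
    # Functional requirement: the computed value is correct.
    return apply_discount(200.0, 15) == 170.0

def check_non_functional(budget_seconds=0.01):
    # Non-functional requirement: the call stays within a latency budget.
    start = time.perf_counter()
    apply_discount(200.0, 15)
    return (time.perf_counter() - start) < budget_seconds
```

In a pipeline, both kinds of check would gate the same release candidate, so a build that is correct but too slow still fails fast.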

Teams often find that, in order to ensure that the test suite can run continuously and effectively assess the level of risk, it is necessary to shift the focus from GUI testing to API testing because 1) APIs (the "transaction layer") are considered the most stable interface to the system under test, and 2) GUI tests require considerable rework to keep pace with the frequent changes typical of accelerated release processes; tests at the API layer are less brittle and easier to maintain.[10][21][22]
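The contrast can be sketched with a toy system in which the "transaction layer" is stable and the GUI markup changes between releases; the functions and markup here are hypothetical stand-ins, not a real framework:

```python
def create_order_api(item, quantity):
    """Stable API layer (hypothetical): returns structured data."""
    if quantity <= 0:
        return {"status": "error", "reason": "invalid quantity"}
    return {"status": "ok", "item": item, "quantity": quantity}

def render_order_page(item, quantity):
    """GUI layer (hypothetical): markup that is redesigned frequently."""
    result = create_order_api(item, quantity)
    return f"<div class='order-v7'>{result['status']}: {item} x{quantity}</div>"

def api_test():
    # A test at the API layer asserts on structured data, so it
    # survives GUI redesigns (e.g., a rename of 'order-v7').
    return create_order_api("widget", 3)["status"] == "ok"
```

A GUI test that matched on the `order-v7` markup would break at every redesign; the API test does not, which is the maintenance advantage the paragraph describes.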

Tests are executed during or alongside continuous integration—at least daily.[23] For teams practicing continuous delivery, tests are commonly executed many times a day, every time the application is updated in the version control system.[8]

Ideally, all tests are executed across all non-production test environments. To ensure accuracy and consistency, testing should be performed in the most complete, production-like environment possible. Strategies for increasing test environment stability include virtualization software (for dependencies your organization can control and image), service virtualization (for dependencies beyond your scope of control or unsuitable for imaging), and test data management.[1][3][9][24]

Best practices[edit]

  • Tests should be logically componentized, incremental, and repeatable; results must be deterministic and meaningful.[1][3]
  • All tests need to be run at some point in the build pipeline, but not all tests need be run all the time.[1][8]
  • Eliminate test data and environment constraints so that tests can run constantly and consistently in production-like environments.[1][3][8]
  • To minimize false positives, minimize test maintenance, and more effectively validate use cases across modern systems with multitier architectures, teams should emphasize API testing over GUI testing.[3][10][11]
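The practice that not all tests need run all the time can be sketched as a tier-selection step in the pipeline; the test names, tiers, and stage choices below are illustrative assumptions:

```python
# Each test carries a tier tag; a pipeline stage selects which tiers run.
TESTS = [
    ("test_parse_input", "unit", lambda: True),
    ("test_checkout_api", "api", lambda: True),
    ("test_full_purchase_flow", "end_to_end", lambda: True),
]

def run_tiers(tiers):
    """Run only the tests whose tier is selected for this stage."""
    results = {}
    for name, tier, fn in TESTS:
        if tier in tiers:
            results[name] = fn()
    return results

# Every commit runs the fast tiers; a nightly stage runs everything.
per_commit = run_tiers({"unit", "api"})
nightly = run_tiers({"unit", "api", "end_to_end"})
```

Real pipelines express the same idea with test-framework markers or separate build stages rather than an in-process registry.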

Challenges/roadblocks[edit]

Since modern applications are highly distributed, test suites that exercise them typically require access to dependencies that are not readily available for testing (e.g., third-party services, mainframes that are available for testing only in limited capacity or at inconvenient times, etc.). Moreover, with the growing adoption of Agile and parallel development processes, it is common for end-to-end functional tests to require access to dependencies that are still evolving or not yet implemented. This problem can be addressed by using service virtualization to simulate the application under test's (AUT's) interactions with the missing or unavailable dependencies. Service virtualization can also be used to ensure that data, performance, and behavior are consistent across the various test runs.[1][6][9]
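A minimal sketch of the service virtualization idea, assuming a hypothetical third-party credit-check dependency (the class, customer IDs, and threshold are invented for illustration):

```python
class VirtualCreditService:
    """Stands in for an unavailable third-party service during testing."""
    CANNED_SCORES = {"good-customer": 750, "risky-customer": 480}

    def score(self, customer_id):
        # Deterministic canned data makes test runs repeatable,
        # regardless of the real service's availability or state.
        return self.CANNED_SCORES.get(customer_id, 600)

def approve_loan(credit_service, customer_id, threshold=650):
    """Application under test: depends on an external credit service."""
    return credit_service.score(customer_id) >= threshold

virtual = VirtualCreditService()
```

Dedicated service virtualization tools do this at the network protocol level rather than in-process, but the principle is the same: the AUT exercises a simulated dependency with known, consistent behavior.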

One reason teams avoid continuous testing is that their infrastructure is not scalable enough to continuously execute the test suite. This problem can be addressed by focusing the tests on the business's priorities, splitting the test base, and parallelizing the testing with application release automation tools.[23]

Continuous Testing vs automated testing[edit]

The goal of Continuous Testing is to apply "extreme automation" to stable, production-like test environments. Automation is essential for Continuous Testing.[26] But automated testing is not the same as Continuous Testing.[3]

Automated testing involves automated, CI-driven execution of whatever set of tests the team has accumulated. Moving from automated testing to continuous testing involves executing a set of tests that is specifically designed to assess the business risks associated with a release candidate, and to regularly execute these tests in the context of stable, production-like test environments. Some differences between automated and continuous testing:

  • With automated testing, a test failure may indicate anything from a critical issue to a violation of a trivial naming standard. With continuous testing, a test failure always indicates a critical business risk.
  • With continuous testing, a test failure is addressed via a clear workflow for prioritizing defects vs. business risks and addressing the most critical ones first.
  • With continuous testing, each time a risk is identified, there is a process for exposing all similar defects that might already have been introduced, as well as preventing this same problem from recurring in the future.[2][4]

Predecessors[edit]

Since the 1990s, continuous test-driven development has been used to give programmers rapid feedback on whether the code they added a) functioned properly and b) unintentionally changed or broke existing functionality. This testing, which was a key component of Extreme Programming, involves automatically executing unit tests (and sometimes acceptance tests or smoke tests) as part of the automated build, often many times a day. These tests are written prior to implementation; passing tests indicate that implementation is successful.[12][27]
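The test-first sequence described above can be sketched with a stock illustrative example (`fizzbuzz` is a common teaching exercise, not from the article): the test is written first and defines "done", then the implementation is written until it passes.

```python
# Step 1: the test is written before the implementation exists.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"
    return True

# Step 2: the implementation is written, and the automated build
# reruns the test on every change until it passes.
def fizzbuzz(n):
    out = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    return out or str(n)
```

Run as part of the automated build many times a day, such tests catch both broken new code and unintended regressions in existing behavior.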


References[edit]

  1. ^ a b c d e f g h i Part of the Pipeline: Why Continuous Testing Is Essential, by Adam Auerbach, TechWell Insights August 2015
  2. ^ a b c d e f The Relationship between Risk and Continuous Testing: An Interview with Wayne Ariola, by Cameron Philipp-Edmonds, Stickyminds December 2015
  3. ^ a b c d e f g h i j k DevOps: Are You Pushing Bugs to Clients Faster, by Wayne Ariola and Cynthia Dunlop, PNSQC October 2015
  4. ^ a b c d DevOps and QA: What’s the real cost of quality?, by Ericka Chickowski, DevOps.com June 2015
  5. ^ a b c The Importance of Shifting Right in DevOps, by Bob Aiello, CM Crossroads December 2014
  6. ^ a b Kinks persist in Continuous Workflows, by Lisa Morgan, SD Times September 2014
  7. ^ Continuous Testing: Think Different, by Ian Davis, Visual Studio Magazine September 2011
  8. ^ a b c d e f Testing in a Continuous Delivery World, by Rob Marvin, SD Times June 2014
  9. ^ a b c d e f Shift Left and Put Quality First, by Adam Auerbach, TechWell Insights October 2014
  10. ^ a b c The Forrester Wave™ Evaluation Of Functional Test Automation (FTA) Is Out And It's All About Going Beyond GUI Testing, by Diego Lo Giudice, Forrester Research April 23, 2015
  11. ^ a b Continuous Development Brings Changes for Software Testers, by Amy Reichert, SearchSoftwareQuality September 2014
  12. ^ a b c Zeichick’s Take: Forget 'Continuous Integration'—the Buzzword is now 'Continuous Testing', by Alan Zeichick, SD Times February 2014
  13. ^ Buy the Wrong Software? A Fix Can Cost $700,000: A Conversation with voke's Theresa Lanowitz, by Dom Nicastro, CMS Wire October 2014
  14. ^ Jones, Capers; Bonsignour, Olivier (2011). The Economics of Software Quality. Addison-Wesley Professional. ISBN 978-0132582209. 
  15. ^ Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 73. ISBN 0-470-04212-5. 
  16. ^ a b Theresa Lanowitz Talks Extreme Test Automation at STAREAST 2014, by Beth Romanik, TechWell Insights May 2014
  17. ^ Guest View: What’s keeping you from Continuous?, by Noel Wurst, SD Times November 2015
  18. ^ Manage the Business Risks of Application Development with Continuous Testing, by Wayne Ariola, CM Crossroads September 2014
  19. ^ a b The Power of Continuous Performance Testing, by Don Prather, Stickyminds August 2015
  20. ^ Practices for DevOps and Continuous Delivery, by Ben Linders, InfoQ July 2015
  21. ^ Produce Better Software by Using a Layered Testing Strategy, by Sean Kenefick, Gartner January 7, 2014
  22. ^ Cohn, Mike (2009). Succeeding with Agile: Software Development Using Scrum. Addison-Wesley Professional. p. 312. ISBN 978-0321579362. 
  23. ^ a b Experiences from Continuous Testing at Siemens Healthcare, by Ben Linders, InfoQ February 2015
  24. ^ DevOps - Not a Market, but a Tool-Centric Philosophy That Supports a Continuous Delivery Value Chain, by Laurie F. Wurster, Ronni J. Colville, Jim Duggan, Gartner February 2015
  25. ^ Keep your Software Healthy During Agile Development, by Adrian Bridgwater, ComputerWeekly November 2013
  26. ^ Extreme automation, meet the pre-production life cycle, by Alexandra Weber Morales, SD Times January 2014
  27. ^ Continuous Integration (original version), by Martin Fowler, DevOps.com September 2000