User:J-at-ywalters-dot-net/sandbox

[Illustration of Closed Box Testing discipline]

Closed box testing (also known as black box testing, behavioral testing, and specification-based testing[1]) describes a software validation and verification discipline that is typically performed by software testing professionals who do not directly access or contribute to the source code of a project. The testing activities within this discipline focus mainly on the objective analysis of an application's inputs and outputs against the requirements and specifications of what the project is expected to do.

Because closed box testing practitioners are well aware of what the software is supposed to do but not of how it is produced, they are considered to be interacting with "a closed box" system. Software testing professionals working within this discipline know that a particular input has an expected output but are not fully aware of how the software produces that output.[2]

Because the specified, expected functionality of the software is tested without knowledge of its internal code structure or implementation details, various cognitive biases, particularly confirmation bias, are minimized and objective, third-party analysis is enabled.

The term closed box testing is an attempt to provide a more practical description and illustration of this software testing discipline, as well as an intentional departure from the racial connotations of the long-used "black box" and "white box" terminology.

Disciplines within closed box testing


The following commonly referenced types of functional and non-functional testing activities often make up the closed box testing disciplines within organizations that follow modern software engineering practices:

Types of functional testing


Functional testing deals with the functional requirements or specifications of a program. Different actions or functions of the system are tested by exercising the application's inputs and comparing the actual results with the expected outputs. An example of functional testing would be verifying that an application's login functionality works correctly.
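
As a minimal sketch of such a check (the authenticate function and its credentials below are hypothetical, invented for the example), a functional test exercises the login behavior purely through inputs and expected outputs:

    def authenticate(username, password):
        # Hypothetical system under test: one known credential pair.
        return username == "alice" and password == "s3cret"

    def test_login_succeeds_with_valid_credentials():
        assert authenticate("alice", "s3cret") is True

    def test_login_fails_with_invalid_password():
        assert authenticate("alice", "wrong") is False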

Among the types of testing that occur within this discipline are:

Types of non-functional testing


Non-functional testing checks non-functional aspects of a program, including the software's performance, usability, and reliability. These disciplines validate and verify the readiness of a system against non-functional parameters that functional testing does not address. An example of non-functional testing would be verifying that all functionality within an application's login process consistently completes within two seconds.
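
A hedged sketch of that two-second check follows; the login function here is a hypothetical stand-in for a real login flow:

    import time

    def login(username, password):
        # Stand-in for the real login flow (hypothetical).
        time.sleep(0.1)
        return True

    def test_login_completes_within_two_seconds():
        start = time.monotonic()
        assert login("alice", "s3cret") is True
        elapsed = time.monotonic() - start
        assert elapsed < 2.0, f"login took {elapsed:.2f}s, expected under 2s"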

Among the types of testing that occur within this discipline are:

Common closed box testing techniques


Equivalence partitioning

In this technique, also known as Equivalence Class Partitioning (ECP), input values to the system are divided into different partitions or groups based on the similarity of their expected outcomes. Test cases can then be derived so that, instead of using each and every possible input value, practitioners use any one value from each partition or group to verify the expected outcome. In this way, testers can maintain appropriate test coverage while reducing time spent and rework costs.
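
As an illustrative sketch (the is_valid_age function and its 18–65 range are assumptions made for the example), one representative value is tested per partition:

    def is_valid_age(age):
        # Hypothetical system under test: ages 18 through 65 are accepted.
        return 18 <= age <= 65

    # One representative value per partition stands in for the whole group.
    def test_partition_below_valid_range():
        assert is_valid_age(10) is False    # represents all values < 18

    def test_partition_within_valid_range():
        assert is_valid_age(40) is True     # represents all values 18..65

    def test_partition_above_valid_range():
        assert is_valid_age(70) is False    # represents all values > 65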

Boundary value analysis

Tests within the boundary value analysis technique are designed to include representatives of the boundary values of a range. Both valid and invalid inputs are tested to verify that the software works as expected.

Using this technique to test an input where values from 1 to 100 are expected to work correctly, values on the minimum and maximum edges of an equivalence partition would be tested. In this example, testers would use the values 0, 1, 2, 99, 100, and 101 (1−1, 1, and 1+1 from the minimum edge, as well as 100−1, 100, and 100+1 from the maximum edge) instead of using all the values from 1 to 100.
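
A minimal sketch of that exact example follows; the accepts function is a hypothetical system under test:

    def accepts(value):
        # Hypothetical system under test: values 1 through 100 are valid.
        return 1 <= value <= 100

    def test_boundary_values():
        # The six boundary values from the example above, instead of all 100.
        expected = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
        for value, valid in expected.items():
            assert accepts(value) is valid, f"unexpected result for {value}"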

Decision table testing

The decision table technique, also known as the cause–effect table, is used for testing system behavior under different input combinations. In this systematic approach, the different input combinations and their corresponding system behaviors are captured in the form of a table.

Using a tabular form helps testers deal with different combinations of inputs and their associated expected outputs. It is considered especially helpful in test design because it focuses on logical diagramming and supports reasoning about the various effects of different input combinations.
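
One way to express such a table in an executable form is sketched below; the login rules and outcome labels are assumptions made for the example:

    # Hypothetical decision table: (valid username?, valid password?) -> outcome.
    DECISION_TABLE = [
        (True,  True,  "logged_in"),
        (True,  False, "error_password"),
        (False, True,  "error_username"),
        (False, False, "error_username"),
    ]

    def login_outcome(valid_username, valid_password):
        # Stand-in system under test implementing the expected behavior.
        if not valid_username:
            return "error_username"
        return "logged_in" if valid_password else "error_password"

    def test_every_combination_in_the_table():
        for valid_user, valid_pw, expected in DECISION_TABLE:
            assert login_outcome(valid_user, valid_pw) == expected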

State transition testing

This technique is used to verify the different states of the system under test and analyzes how changes in input conditions cause state changes or output changes in the running software. It enables testers to analyze the behavior of an application under different input conditions: testers can provide positive and negative input values and record the overall system behavior.
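
A hedged sketch follows, using a hypothetical account that locks after three failed login attempts, a classic state transition scenario:

    class Account:
        # Hypothetical system under test: locks after three failed logins.
        def __init__(self):
            self.state = "active"
            self.failures = 0

        def attempt_login(self, correct):
            if self.state == "locked":
                return self.state
            self.failures = 0 if correct else self.failures + 1
            if self.failures >= 3:
                self.state = "locked"
            return self.state

    def test_three_failures_transition_active_to_locked():
        account = Account()
        assert account.attempt_login(correct=False) == "active"
        assert account.attempt_login(correct=False) == "active"
        assert account.attempt_login(correct=False) == "locked"
        # Once locked, even a correct password does not change the state.
        assert account.attempt_login(correct=True) == "locked"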

Error guessing

Using this technique, testers use their experience and expertise with an application's behavior and functionality to guess which error-prone areas might be impacted by code changes made to the project. Many defects can be found using error guessing in the areas where developers most often make mistakes.

Common mistakes that application engineers sometimes fail to handle include the following (a test sketch covering two of them appears after the list):

  • Division by zero
  • Not properly handling null values within text fields
  • User-initiated file upload processes where no attachment is provided
  • File uploads smaller or larger than the size limits supported by the software
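
As a minimal sketch of turning two of these guesses into tests (the average function is hypothetical, and pytest is assumed as the test runner):

    import pytest

    def average(values):
        # Hypothetical system under test: mean of a list of numbers.
        return sum(values) / len(values)

    def test_error_guess_empty_input_divides_by_zero():
        with pytest.raises(ZeroDivisionError):
            average([])

    def test_error_guess_null_element_breaks_summation():
        with pytest.raises(TypeError):
            average([1, None, 3])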

Graph-based testing

Software testing professionals using this technique, which is also known as state-based testing, first build a graph model of the program under test and then try to cover certain elements of the graph model with valuable test cases. From this object graph, each object relationship is identified and test cases are written accordingly to discover potential errors.
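
A hedged sketch of deriving one test case per modeled relationship follows; the screens and transitions in the graph are assumptions made for the example:

    # Hypothetical graph model: nodes are screens, edges are user transitions.
    GRAPH = {
        "login":     ["dashboard", "login"],   # successful and failed login
        "dashboard": ["settings", "login"],    # navigate and log out
        "settings":  ["dashboard"],            # back
    }

    def derive_edge_tests(graph):
        # One test case per edge gives basic transition coverage of the model.
        return [(src, dst) for src, targets in graph.items() for dst in targets]

    def test_one_case_is_derived_per_modeled_relationship():
        cases = derive_edge_tests(GRAPH)
        assert ("login", "dashboard") in cases
        assert len(cases) == 5    # five edges in the model above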

Use case testing


The use case software testing technique is based on the identification of test cases that cover the entire system, from start to end, on a transaction-by-transaction basis. Use cases describe the interactions between users and the software application, so the technique is considered ‘user-oriented’ rather than ‘system-oriented’. Use case testing helps identify gaps in a software application that might not be found by testing individual software components.

Comprehensive use of this discipline involves both "happy path"/"sunny day" positive test cases and "unhappy path"/"rainy day" negative test cases to ensure all aspects of the software work as intended.

Happy path / sunny day use cases

  • These are the primary cases that are most likely to occur when everything goes well within the project. These positive use cases are typically given a higher priority than the other cases.

Unhappy path / rainy day use cases

  • These are often defined as the various edge cases that exist within the operation of the software. They are typically prioritized after the positive test cases.

User story testing


User story testing is based on knowing what the product's users will experience in the real world. Requirements for functionality within the software are written down as "user stories" that are typically one or two lines long. A user story is intended to be the simplest possible statement about a single function or feature to be performed within the running application. A simple example of a user story is:

As a (user role/customer), I want to (goal to be accomplished) so that I can (reason for the goal).
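
As a hedged sketch, a hypothetical story such as "As a shopper, I want to add an item to my cart so that I can buy it later" translates fairly directly into a test (the Cart class below is an assumption made for the example):

    class Cart:
        # Hypothetical feature under test, derived from the user story.
        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)

    def test_shopper_can_add_item_to_cart_for_later_purchase():
        # Given a shopper with an empty cart
        cart = Cart()
        # When the shopper adds an item
        cart.add("book")
        # Then the item is kept so it can be bought later
        assert cart.items == ["book"]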

Benefits of the discipline


Practitioners of closed box testing disciplines are typically removed from directly contributing to, reading, or having an in-depth understanding of the source code of the project they are testing. Within this regimen, testers have the freedom to assess the reliability of a project independently of any knowledge of its programming languages and source code.

Third-party, unprejudiced validation of the work of others is a long-established best practice throughout many industries.

Biases within the disciplines


There are many well-documented articles on the cognitive biases[3][4][5] that impact the software testing disciplines, especially as internet-enabled businesses move at speeds that did not exist prior to the commercialization of the global computer network.

The entirety of the Software Development Life Cycle (SDLC) is carried out by human beings, so it is inherently impossible to certify any non-trivial software project as 100% bug-free. Test automation is a tool that can provide cost efficiencies and confidence that previously developed and tested software continues to perform correctly after an addition or change to the project.

Using code to methodically and repetitively test code can be a highly effective way to exclude personal opinions and judgments, whether perceived or previously observed, about who developed the functionality under test, which part of the product is believed to contain the most challenges, or other factors that can impact the objective evaluation of a project. What to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make.[6]

Importance of discipline


When malfunctions are discovered in the later stages of a software project, it is usually significantly more expensive to fix them. Advocates of these practices understand the significant cost savings over time associated with identifying and locating as many issues as possible as early as possible.

Closed box testing disciplines look into the real-world, runtime use of a product once it has been installed and made available for software testing professionals to verify. When combined with open box testing (enabled by the shift-left approach) and the code quality disciplines that use static analysis and pre-installation verification techniques, teams are able to share overall product quality responsibility and use the underlying project to empower collaboration.

Exploratory testing


This discipline is widely used within progressive agile software development methodologies. Its practitioners are typically passionate about the aspects of discovery, investigation, learning, personal freedom, and responsibility that make up this thinking-based approach. Whereas the execution of previously scripted tests is a more repetitive, non-thinking activity of comparing actual results with expected results, exploratory testing recognizes that highly repetitive tasks and automation have distinct limits.

Among the commonly realized benefits of exploratory testing, also known as session-based testing,[8] are that the investigative process helps find more bugs than standard testing, that bugs normally missed by other testing techniques are often uncovered, that the imaginations of testers are expanded, and that it can overcome the limitations of scripted testing.[7]

Agile software practices


Within projects enabled by Agile software development, teams are asked to reduce the length of software delivery cycles while continuously improving the quality of each release of their software. At the same time, there is typically increased pressure to reduce overall testing costs.

Agile testing promotes the idea that all members of a cross-functional team, with special expertise contributed by testers, work at a sustainable pace and deliver the desired business value at frequent intervals. Investments in these disciplines provide for the long-term reliability and maintainability of the overall software project by emphasizing the "whole-team" approach to "baking quality in".

[Illustration of Open Box and Closed Box Testing disciplines]


Open box testing

[Illustration of Open Box Testing discipline]

Open box testing (also known as white box testing, clear box testing, glass box testing, transparent box testing, and structural testing) describes a software validation and verification discipline that is typically performed by the application engineers who have directly contributed to a project's source code. Individuals who have direct access to the source code are considered able to "see within the open box", and with that ability comes a significant understanding of the computer programming that makes up the project.

This term is an attempt to provide a more practical description and illustration of this software testing discipline, as well as an intentional departure from the racial connotations of the long-used "white box" and "black box" terminology.

Types of open box testing

The following commonly referenced types of testing activities often make up the open box testing disciplines within organizations that follow modern software engineering practices (a unit test sketch follows the list):

  • Unit testing: a focus on, and the ability to verify, that the smallest piece of code that can be logically isolated consistently works as designed and expected. Usually unit testing verifies that a single, cohesive function accepts the planned input and consistently produces the expected output.
  • Component testing: this discipline, also known as program or module testing, involves verifying that larger, individual pieces of a program that make use of multiple, smaller units of work are working correctly without running the entirety of the overall project. This practice is performed after the smaller and often considerably faster unit tests.
  • Integration testing: multiple, individual software modules are combined and tested as a group. This type of testing occurs after unit testing and before the application is fully built and installed for other types of testing activities. Integration testing takes as its input modules that have been unit and component tested, groups them into larger aggregates, and performs more comprehensive and extensive system verifications.
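
As a minimal sketch of the first item (the apply_discount function is a hypothetical unit, and pytest is assumed as the test runner):

    import pytest

    def apply_discount(price, percent):
        # Hypothetical unit under test: a single, logically isolated function.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_planned_input_produces_expected_output():
        assert apply_discount(100.0, 25) == 75.0

    def test_invalid_input_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)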

Biases within discipline


The ability to directly read and understand the source code enables certain activities but also creates some bias challenges.

Practitioners of open box testing disciplines are subject to various cognitive biases, particularly confirmation bias, as it is impractical for people to objectively evaluate the quality of work that they themselves have done. Third-party, unprejudiced validation of the work of others is a long-established best practice throughout many industries.

We suck at testing our own code. We suck so badly at it that it has led to entire professions like Quality Assurance analysts and test engineers. We're not necessarily bad at coding, but we're notoriously bad at finding our own bugs.
– Confirmation Bias: How your brain wants to wreck your code [9]

Importance of discipline


When malfunctions are discovered in the later stages of a software project, it is usually significantly more expensive to fix them. Advocates of these practices understand the significant cost savings over time and encourage developers to identify and locate as many issues as they can at an early stage of development, and then to automate the process of validating every subsequent change in code. Articles like "Unit Testing: Time Consuming but Product Saving"[10] illustrate the importance and potentially significant savings of early error identification.

Open box testing disciplines, when done well, can also make it significantly easier for developers to deal with a relatively unfamiliar piece of code. Code written by other programmers becomes more manageable, as highly effective open box tests will stop inadvertently introduced problems from continuing through the development process.

By writing open box tests, code creators can communicate the intent of the functionality they have created. By reading previously created tests, others get to see how the author expected the code to be used, and possibly more importantly, how it was intended not to be used.
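
A brief sketch of tests that document both intended use and intended non-use follows (the parse_port function is hypothetical, and pytest is assumed):

    import pytest

    def parse_port(text):
        # Hypothetical unit under test.
        port = int(text)
        if not 1 <= port <= 65535:
            raise ValueError("port out of range")
        return port

    def test_intended_use_numeric_strings_are_accepted():
        assert parse_port("8080") == 8080

    def test_intended_misuse_out_of_range_values_are_rejected():
        # Documents how the function is *not* meant to be used.
        with pytest.raises(ValueError):
            parse_port("99999")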

Code that is easy to test can also be easier to understand. Succinct tests can, and often do, lead to succinct code. When done well, open box tests make the software development process far more predictable and repeatable over time, but they are not the be-all and end-all of any comprehensive approach to software quality.

Shift-left testing


The open box testing disciplines are among the central tenets of the shift-left testing approach to software development. As part of this modern, "test early and often" approach, developer responsibilities are incorporated into the overall testing cycle earlier than ever before. Focusing on finding and remediating software defects as early as possible within the Software Development Life Cycle (SDLC) has profound benefits for organizations that support it, because quality is clearly acknowledged as a shared responsibility and testing is prevalent throughout the process. Significant cost savings over time are often the return on investment for teams that are highly successful at these practices, but it is important that people do not try to "boil the ocean" and pursue unreasonable levels of test coverage for their projects.

Quality assurance practices involve identifying bugs, fixing them, and ensuring that previously working functionality has not been inadvertently changed, as significant issues can dramatically damage a company's reputation. For example, many car companies have had to bear reputational and financial damages because of recalled vehicles whose parts were not properly tested. The use of open box testing practices within the shift-left testing disciplines is a proactive investment in a product's quality.

[Illustration of Closed Box Testing discipline]

Open box testing activities are typically performed prior to an official, fully integrated installation of a piece of software and before an official, objective testing cycle begins. Post-installation, a wide variety of closed box testing disciplines are performed by a variety of different types of software testing professionals.

Agile software practices


Within projects enabled by Agile software development, teams are asked to reduce the length of software delivery cycles while continuously improving the quality of each release of their software. At the same time, there is typically increased pressure to reduce overall testing costs.

Highly capable application engineers who demonstrate significant open box testing skills typically produce well-designed units of work that are well covered with meaningful, open box type tests. Investments in these types of tests provide for the long-term reliability, maintainability, and comprehensive documentation of the expected functionality within the project by ensuring that the units of work continue to function correctly over time.

Numerous surveys and studies over the past decade illustrate that software engineers frequently spend large portions of their time working with, maintaining, and needing to improve existing code.[11] Software maintenance related code changes are particularly risky when project contributors do not have the appropriate level of open box type tests in place to demonstrate that software functionality that was working correctly before a change continues to work correctly after it.

The ability to review the specifications, designs, and coding implementations that make up the internal logic and structure of the underlying application enables teams to create sustainable projects that support the addition of, or transition to, future project contributors.

All software testing disciplines involve identifying flaws and errors in the application code that must be fixed. The processes undertaken provide confidence that the functionality of the software has been analyzed and verified to be correct.

Continuous integration


In software engineering, continuous integration (CI) means the repeated application of quality control processes to every small, discrete change or addition. A fundamental tenet of this practice is that the project's automated tests are first run locally and then run on a remote system, providing fast feedback to the project's contributors and preventing the advancement of code that is known to be functioning incorrectly.
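
A minimal sketch of such a local gate follows; the test command and its options are assumptions, and a real CI system would run an equivalent step remotely:

    # Run the project's automated tests and block the change when any fail.
    import subprocess
    import sys

    def main():
        result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
        if result.returncode != 0:
            print("Tests failed; the change should not advance.")
        sys.exit(result.returncode)

    if __name__ == "__main__":
        main()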

Open box tests are the straightforward first line of defense ensuring that code changes do not introduce unintended consequences within the project. The intertwined nature of open box testing disciplines and all derivatives of continuous integration practices has led to a wide variety of articles about how essential open box type tests are. Articles like "Continuous Integration is Absurd without Unit Testing"[12] and "Unit Tests, How to Write Testable Code and Why it Matters"[13] illustrate how these two disciplines are inextricably linked.

[Illustration of Open Box and Closed Box Testing disciplines]


References

  1. ^ Jerry Gao; H.-S. J. Tsao; Ye Wu (2003). Testing and Quality Assurance for Component-based Software. Artech House. pp. 170–. ISBN 978-1-58053-735-3.
  2. ^ Patton, Ron (2005). Software Testing (2nd ed.). Indianapolis: Sams Publishing. ISBN 978-0672327988.
  3. ^ "Cognitive Bias In Software Testing: Why Do Testers Miss Bugs?". Retrieved 1 September 2020.
  4. ^ Salman, Iflaah; Turhan, Burak; Vegas, Sira. "A controlled experiment on time pressure and confirmation bias in functional software testing". Retrieved 18 December 2018.
  5. ^ Ben Salem, Malek. "Cognitive biases in software testing and quality assurance". Retrieved 26 June 2019.
  6. ^ Brian Marick. "When Should a Test Be Automated?". StickyMinds.com. Retrieved 20 August 2009.
  7. ^ "What is Exploratory Testing? Techniques with Examples".
  8. ^ Bach, Jonathan (November 2000). "Session-Based Test Management" (PDF).
  9. ^ Eland, Matt. "Confirmation Bias: How your brain wants to wreck your code". Retrieved 12 September 2019.
  10. ^ Riggins, Jennifer. "Unit Testing: Time Consuming but Product Saving". Retrieved 22 December 2017.
  11. ^ Grams, Chris. "How Much Time Do Developers Spend Actually Writing Code?". Retrieved 15 October 2019.
  12. ^ Mackay, Adam. "Continuous Integration is Absurd without Unit Testing". Retrieved 16 July 2019.
  13. ^ Kolodiy, Sergey. "Unit Tests, How to Write Testable Code and Why it Matters". Retrieved 14 January 2020.