# Software testability

Software testability is the degree to which a software artifact (e.g., a software system, software module, requirements document, or design document) supports testing in a given test context.

Testability is not an intrinsic property of a software artifact and cannot be measured directly (unlike, say, software size). Instead, testability is an extrinsic property that results from the interdependency of the software to be tested with the test goals, the test methods used, and the test resources (i.e., the test context).

A lower degree of testability results in increased test effort. In extreme cases, a lack of testability may make it impossible to test parts of the software or of the software requirements at all.

## Background

The effort and effectiveness of software tests depend on numerous factors, including:

• properties of the software requirements
• properties of the software itself (such as size, complexity and testability)
• properties of the test methods used
• properties of the development and testing processes
• qualification and motivation of the people involved in the test process

## Testability of Software Components

The testability of software components (modules, classes) is determined by factors such as:

• controllability: The degree to which it is possible to control the state of the component under test (CUT) as required for testing.
• observability: The degree to which it is possible to observe (intermediate and final) test results.
• isolateability: The degree to which the component under test (CUT) can be tested in isolation.
• separation of concerns: The degree to which the component under test has a single, well-defined responsibility.
• understandability: The degree to which the component under test is documented or self-explaining.
• automatability: The degree to which it is possible to automate testing of the component under test.
• heterogeneity: The degree to which the use of diverse technologies requires the use of diverse test methods and tools in parallel.

The testability of software components can be improved, for example, by:

• test-driven development
• design for testability (analogous to design for test in the hardware domain)
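As an illustration of how design choices affect the factors above, the following sketch uses dependency injection to improve controllability and isolateability; the `Clock` abstraction and `SessionManager` component are hypothetical names invented for this example.

```python
# Sketch (hypothetical example): injecting a time source makes the
# component's state controllable and lets it be tested in isolation.
from datetime import datetime, timezone


class Clock:
    """Production time source."""
    def now(self) -> datetime:
        return datetime.now(timezone.utc)


class FixedClock(Clock):
    """Test double: a fully controllable time source."""
    def __init__(self, fixed: datetime):
        self._fixed = fixed

    def now(self) -> datetime:
        return self._fixed


class SessionManager:
    """Component under test. The clock is injected rather than read
    directly, so tests can control the relevant state (time)."""
    def __init__(self, clock: Clock, timeout_seconds: int = 1800):
        self._clock = clock
        self._timeout = timeout_seconds

    def is_expired(self, started_at: datetime) -> bool:
        return (self._clock.now() - started_at).total_seconds() > self._timeout


# Test in isolation: no real time passes, yet both branches are reachable.
start = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
fresh = SessionManager(FixedClock(datetime(2024, 1, 1, 12, 10, tzinfo=timezone.utc)))
stale = SessionManager(FixedClock(datetime(2024, 1, 1, 13, 0, tzinfo=timezone.utc)))
print(fresh.is_expired(start))  # False: 10 minutes < 30-minute timeout
print(stale.is_expired(start))  # True: 60 minutes > 30-minute timeout
```

With the real `Clock`, the expiry branch could only be observed by waiting; with the injected test double, both outcomes are deterministic and the test is also trivially automatable.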

## Testability of Requirements

Requirements need to fulfill the following criteria in order to be testable:

• consistent
• complete
• unambiguous
• quantitative (a requirement like "fast response time" cannot be verified)
• verifiable in practice (a test is feasible not only in theory but also in practice with limited resources)
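The quantitative criterion can be made concrete: a vague requirement such as "fast response time" admits no pass/fail decision, whereas a quantified bound does. A minimal sketch, in which `handle_request` and the 200 ms bound are hypothetical stand-ins for a real system and its specification:

```python
# Sketch: turning a quantified requirement ("responds within 200 ms",
# a hypothetical bound) into an automated, decidable check.
import time


def handle_request() -> str:
    # Stand-in for the system under test.
    time.sleep(0.01)
    return "ok"


def response_time_ms(fn) -> float:
    """Measure one call's wall-clock duration in milliseconds."""
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000.0


REQUIREMENT_MS = 200.0  # quantified bound taken from the (hypothetical) spec

elapsed = response_time_ms(handle_request)
print(elapsed < REQUIREMENT_MS)  # pass/fail is now decidable
```

Because the bound is numeric, the test's verdict is unambiguous; with "fast" alone, no such check could be written.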

Treating the requirements as axioms, testability can be addressed by asserting the existence of a function $F_S$ (the software) such that an input $I_k$ generates an output $O_k$, that is, $F_S : I \to O$. The ideal software thus generates, for each input $I_k$, the tuple $(I_k,O_k)$; the set of all such tuples is the input-output set $\Sigma$, standing for the specification.

Now take a test input $I_t$, which generates the output $O_t$, i.e., the test tuple $\tau = (I_t,O_t)$. The question is whether $\tau \in \Sigma$ or $\tau \not\in \Sigma$. If the tuple is in the set, the test tuple $\tau$ passes; otherwise the system fails the test input. It is therefore essential to determine whether we can construct a function that effectively realizes the indicator function of the specification set $\Sigma$.

In this notation, $1_{\Sigma}$ is the testability function for the specification $\Sigma$. Its existence should not merely be asserted but proven rigorously. Without algebraic consistency of the specification, no such function can be found, and the specification ceases to be testable.
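For a finite specification, the construction above can be sketched directly in code. This is an illustration only: the squaring specification and the names `SIGMA`, `indicator`, and `f_s` are invented for the example.

```python
# Sketch: the specification Sigma as a finite set of (input, output)
# tuples, and the indicator 1_Sigma as an executable pass/fail oracle.
# The squaring spec is a made-up illustration.

SIGMA = {(k, k * k) for k in range(10)}  # specification set of tuples (I_k, O_k)


def indicator(tau: tuple) -> bool:
    """1_Sigma: True iff the test tuple is in the specification set."""
    return tau in SIGMA


def f_s(i: int) -> int:
    """The software under test, intended to realize F_S : I -> O."""
    return i * i


# A test input I_t yields tau = (I_t, O_t); tau passes iff tau is in Sigma.
tau = (3, f_s(3))
print(indicator(tau))      # True: the implementation meets the spec here
print(indicator((3, 10)))  # False: a wrong output fails this test input
```

When $\Sigma$ is infinite or only implicitly defined, such an oracle may not be computable, which is exactly the sense in which a specification can fail to be testable.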