Lazy systematic unit testing

Lazy Systematic Unit Testing[1] is a software unit testing method based on two notions: lazy specification, the ability to infer the evolving specification of a unit on the fly by dynamic analysis, and systematic testing, the ability to explore and test the unit's state space exhaustively to bounded depths. A testing toolkit, JWalk, exists to support lazy systematic unit testing in the Java programming language.[2]

Lazy specification

Lazy specification refers to a flexible approach to software specification, in which a specification evolves rapidly in parallel with frequently modified code.[1] The specification is inferred by a semi-automatic analysis of a prototype software unit. This can include static analysis (of the unit's interface) and dynamic analysis (of the unit's behaviour). The dynamic analysis is usually supplemented by limited interaction with the programmer.
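
For illustration, the following sketch shows one way a specification might be inferred by dynamic analysis with limited programmer interaction: each observed outcome is presented to the programmer, and only confirmed outcomes enter the evolving specification. The Counter class and the question-and-answer loop are hypothetical examples for this article, not JWalk's actual interface, which is described in the references.

    import java.lang.reflect.Method;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Scanner;

    public class SpecificationInferrer {

        // Hypothetical prototype unit whose specification is being inferred.
        public static class Counter {
            private int count = 0;
            public void increment() { count++; }
            public int value() { return count; }
        }

        public static void main(String[] args) throws Exception {
            Counter unit = new Counter();
            Map<String, Object> observations = new LinkedHashMap<>();

            // Dynamic analysis: invoke each public no-argument method and
            // record the observed outcome (null for void methods means
            // "completed without raising an exception").
            for (Method m : Counter.class.getDeclaredMethods()) {
                if (m.getParameterCount() == 0) {
                    observations.put(m.getName() + "()", m.invoke(unit));
                }
            }

            // Limited programmer interaction: each observation must be
            // confirmed before it becomes part of the specification.
            Scanner in = new Scanner(System.in);
            for (Map.Entry<String, Object> obs : observations.entrySet()) {
                System.out.printf("Observed %s -> %s. Accept? (y/n) ",
                        obs.getKey(), obs.getValue());
                if (in.nextLine().trim().equalsIgnoreCase("y")) {
                    System.out.println("Specified: " + obs.getKey()
                            + " = " + obs.getValue());
                }
            }
        }
    }

Under this scheme, the analysis can be rerun after each edit of the prototype, so the specification evolves in step with the code and only new or changed observations require fresh confirmation.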

The term lazy specification was coined by analogy with lazy evaluation in functional programming. The latter refers to the delayed evaluation of sub-expressions, which are evaluated only on demand. The analogy is with the late stabilization of the specification, which evolves in parallel with the changing code until the code is deemed stable.

Systematic testing

Systematic testing refers to a complete, conformance-based approach to software testing, in which the tested unit is shown to conform exhaustively to a specification, up to the testing assumptions.[3] This contrasts with exploratory, incomplete or random forms of testing. The aim is to provide repeatable guarantees of correctness after testing is finished.

Examples of systematic testing methods include the Stream X-Machine testing method[4] and equivalence partition testing with full boundary value analysis.
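
For illustration, the following sketch performs a bounded-depth exhaustive exploration of a unit's state space by executing every sequence of public methods up to a fixed depth and checking a simple property after each step. The Counter class, the invariant, and the depth bound of 3 are hypothetical choices for this article; the sketch does not reproduce JWalk's or the Stream X-Machine method's actual algorithms.

    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;

    public class BoundedExplorer {

        // Hypothetical unit under test.
        public static class Counter {
            private int count = 0;
            public void increment() { count++; }
            public void reset() { count = 0; }
            public int value() { return count; }
        }

        // Executes every sequence of public methods up to the depth bound,
        // checking an invariant after each step. Each sequence is replayed
        // on a fresh instance, so tests do not interfere with one another.
        static void explore(List<Method> methods, List<String> trace, int depth)
                throws Exception {
            if (depth == 0) return;
            for (Method m : methods) {
                Counter unit = new Counter();
                for (String name : trace) {
                    Counter.class.getMethod(name).invoke(unit);
                }
                m.invoke(unit);

                // Testing assumption: the counter must never be negative.
                if (unit.value() < 0) {
                    throw new AssertionError(
                            "Invariant violated after " + trace + " " + m.getName());
                }
                List<String> extended = new ArrayList<>(trace);
                extended.add(m.getName());
                System.out.println("Passed: " + extended);
                explore(methods, extended, depth - 1);
            }
        }

        public static void main(String[] args) throws Exception {
            List<Method> methods = new ArrayList<>();
            for (Method m : Counter.class.getDeclaredMethods()) {
                if (m.getParameterCount() == 0) methods.add(m);
            }
            explore(methods, new ArrayList<>(), 3);  // bound the depth at 3
        }
    }

Because the exploration is exhaustive up to the stated bound, passing it guarantees that no sequence of these methods of length three or less violates the invariant, which is the repeatable, bounded form of guarantee described above.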

References

  1. A J H Simons, JWalk: Lazy systematic unit testing of Java classes by design introspection and user interaction, Automated Software Engineering, 14 (4), December, ed. B. Nuseibeh (Boston: Springer, 2007), 369-418.
  2. The JWalk Home Page, http://www.dcs.shef.ac.uk/~ajhs/jwalk/
  3. A J H Simons, A theory of regression testing for behaviourally compatible object types, Software Testing, Verification and Reliability, 16 (3), UKTest 2005 Special Issue, September, eds. M Woodward, P McMinn, M Holcombe and R Hierons (Chichester: John Wiley, 2006), 133-156.
  4. F Ipate and W M L Holcombe, Specification and testing using generalised machines: a presentation and a case study, Software Testing, Verification and Reliability, 8 (2) (Chichester: John Wiley, 1998), 61-81.