Test oracle

From Wikipedia, the free encyclopedia

In computing, software engineering, and software testing, a test oracle, or just oracle, is a mechanism for determining whether a test has passed or failed.[1] Using an oracle involves comparing the output(s) of the system under test, for a given test-case input, to the output(s) that the oracle determines the product should have. The term "test oracle" was first introduced in a paper by William E. Howden.[2] Additional work on different kinds of oracles was explored by Elaine Weyuker.[3]

Oracles often operate separately from the system under test.[4] However, method postconditions are part of the system under test, acting as automated oracles in design-by-contract models.[5] Determining the correct output for a given input (and a set of program/system states) is known as the oracle problem or test oracle problem,[6]:507 which is a much harder problem than it may appear and involves working with problems related to controllability and observability.[7] Various methods have been proposed to alleviate the test oracle problem; a popular technique is metamorphic testing.[8][9]
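The idea behind metamorphic testing can be shown with a minimal sketch: instead of knowing the exact expected output, the oracle checks a relation between outputs for related inputs. The sketch below uses the textbook relation sin(x) = sin(π − x); the function names are illustrative, not from any cited source.

```python
import math

def violates_sine_relation(f, x, tol=1e-9):
    # Metamorphic oracle: we never need the true value of sin(x),
    # only the relation sin(x) == sin(pi - x) between two outputs.
    return abs(f(x) - f(math.pi - x)) > tol

def truncated_sin(x):
    # Deliberately faulty implementation: Taylor series cut off
    # after the cubic term, accurate only near zero.
    return x - x ** 3 / 6

# The correct implementation satisfies the relation; the faulty one
# violates it, so the oracle flags a problem without knowing the
# expected output for either input.
assert not violates_sine_relation(math.sin, 1.234)
assert violates_sine_relation(truncated_sin, 1.234)
```

This sidesteps the oracle problem for functions whose exact outputs are hard to predict, at the cost of only detecting bugs that break the chosen relation.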


Categories

A research literature survey covering 1978 to 2012[6] found several potential categorisations for test oracles.


Specified

These oracles are typically associated with formalised approaches to software modelling and software code construction. They are connected to formal specification,[10] model-based design, which may be used to generate test oracles,[11] state-transition specification, from which oracles can be derived to aid model-based testing[12] and protocol conformance testing,[13] and design by contract, for which the equivalent test oracle is an assertion.

Specified test oracles present a number of challenges. Formal specification relies on abstraction, which in turn may naturally have an element of imprecision, since no model can capture all behaviour.[6]:514
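A design-by-contract postcondition acts as a specified test oracle embedded in the code itself. The following minimal Python sketch is illustrative (the function and its contract are invented here, not taken from a cited source):

```python
def int_sqrt(n):
    """Integer square root, with its postcondition serving as the oracle."""
    if n < 0:
        raise ValueError("n must be non-negative")
    r = int(n ** 0.5)
    # Guard against floating-point rounding for large n.
    while r * r > n:
        r -= 1
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition (the specified oracle): r is the largest integer
    # whose square does not exceed n. Any implementation bug that
    # breaks this contract trips the assertion for the offending input.
    assert r * r <= n < (r + 1) * (r + 1)
    return r

assert [int_sqrt(n) for n in (0, 1, 15, 16, 17)] == [0, 1, 3, 4, 4]
```

Because the contract is checked on every call, every test input exercises the oracle automatically, which is what makes such assertions attractive as oracles.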


Derived

A derived test oracle differentiates correct from incorrect behaviour by using information derived from artefacts of the system. These may include documentation, system execution results, and characteristics of versions of the system under test.[6]:514 Regression test suites (or reports) are an example of a derived test oracle: they are built on the assumption that the result from a previous system version can be used as an aid (oracle) for a future system version. Previously measured performance characteristics may be used as an oracle for future system versions, for example to trigger a question about an observed potential performance degradation. Textual documentation from previous system versions may be used as a basis to guide expectations in future system versions.

A pseudo-oracle falls into the category of derived test oracle.[6]:515 A pseudo-oracle, as defined by Weyuker,[14] is a separately written program which can take the same input as the program/system under test, so that their outputs may be compared to determine whether there might be a problem to investigate.
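A pseudo-oracle can be sketched as follows: an independently written closed-form formula arbitrates a loop-based implementation of the same computation. Both functions here are hypothetical examples, not from a cited source.

```python
def sum_of_squares(n):
    # Program under test: iterative implementation.
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

def sum_of_squares_oracle(n):
    # Pseudo-oracle: a separately written program computing the same
    # quantity by a different route (the closed-form identity).
    return n * (n + 1) * (2 * n + 1) // 6

# Feed both programs the same inputs; any disagreement is a signal
# that one of the two deserves investigation.
for n in range(100):
    assert sum_of_squares(n) == sum_of_squares_oracle(n)
```

Note that agreement does not prove correctness: both programs could share a misunderstanding of the requirements, which is why Weyuker frames the comparison as flagging "a problem to investigate".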


Implicit

An implicit test oracle relies on implied information and assumptions.[6]:518 For example, a program crash is implicitly unwanted behaviour, so the crash itself serves as an oracle indicating that there may be a problem. There are a number of ways to search and test for such unwanted behaviour, sometimes called negative testing, with specialized subsets such as fuzzing.

Implicit test oracles have limitations, as they rely on implied conclusions and assumptions. For example, a program/process crash may not be a priority issue if the system is fault-tolerant and so operating under a form of self-healing/self-management. Implicit test oracles may also be susceptible to false positives due to environment dependencies.
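A minimal sketch of an implicit oracle: a harness that treats any unhandled exception as the failure signal, with no knowledge of expected outputs. The parser under test is a made-up example.

```python
def crash_oracle(func, inputs):
    # Implicit oracle: the only expectation is "no unhandled
    # exception"; any crash flags a potential problem.
    failures = []
    for value in inputs:
        try:
            func(value)
        except Exception as exc:
            failures.append((value, type(exc).__name__))
    return failures

def fragile_parse(text):
    # Hypothetical program under test: crashes on empty input.
    return int(text.strip())

# "42" and " 7 " pass; "" raises ValueError, an implicit-oracle failure.
assert crash_oracle(fragile_parse, ["42", " 7 ", ""]) == [("", "ValueError")]
```

This is essentially the oracle a fuzzer uses: inputs are cheap to generate precisely because no per-input expected output is needed.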


Human

When specified, derived or implicit test oracles cannot be used, human input is required to determine the test oracles. These approaches can be thought of as quantitative and qualitative.[6]:519–520

  • A quantitative approach aims to find the right amount of information to gather on a system under test (e.g. test results) for a stakeholder to be able to make decisions on fit-for-purpose / release of the software.
  • A qualitative approach aims to assess the representativeness and suitability of the input test data and the context of the output from the system under test. An example is using realistic and representative test data and judging whether the results make sense (i.e. whether they are realistic).

Both can be guided by heuristics - gut instinct, rules of thumb, checklist aids and experience - to help tailor the specific combination selected for the program/system under test.


Examples

Common oracles include:

  • specifications and documentation.[15][16] A formal specification used as input to model-based design and model-based testing would be an example of a specified test oracle. Documentation that was not a formal specification of the product would typically be a derived test oracle, e.g. a usage or installation guide, or a record of performance characteristics or minimum machine requirements for the software.
  • other products (for instance, an oracle for a software program might be a second program that uses a different algorithm to evaluate the same mathematical expression as the product under test). This is an example of a derived test oracle, a pseudo-oracle.[14]:466
  • a heuristic oracle that provides approximate results, or exact results for a small set of test inputs[17]
  • a statistical oracle that uses statistical characteristics,[18] for example with image analysis where a range of certainty/uncertainty is defined for the test oracle to pronounce a match or not. This would be an example of a human test oracle.
  • a consistency oracle that compares the results of one test execution to another for similarity.[19] This is an example of a derived test oracle.
  • a model-based oracle that uses the same model to generate and verify system behavior,[20] an example of a specified test oracle.
  • a human oracle (i.e. the correctness of the system under test is determined by manual analysis)[7]
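A statistical oracle of the kind described above might declare a match when an aggregate deviation stays within a chosen uncertainty band, rather than demanding exact equality. The following Python sketch is illustrative; the threshold and signals are invented for the example.

```python
def statistical_match(observed, reference, tolerance=0.05):
    # Statistical oracle: pass/fail is decided by a statistical
    # characteristic (mean absolute deviation), not exact equality.
    if len(observed) != len(reference):
        return False
    mad = sum(abs(o - r) for o, r in zip(observed, reference)) / len(reference)
    return mad <= tolerance

reference = [0.0, 0.5, 1.0, 0.5, 0.0]
noisy     = [0.01, 0.49, 1.02, 0.48, 0.0]  # small noise: within tolerance
shifted   = [0.5, 1.0, 0.5, 0.0, 0.0]      # shifted signal: out of tolerance

assert statistical_match(noisy, reference)
assert not statistical_match(shifted, reference)
```

Choosing the tolerance is itself a human judgment, which is why such oracles blur into the human category: someone must decide what degree of deviation still counts as correct.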


References

  1. Kaner, Cem; A Course in Black Box Software Testing, 2004
  2. Howden, W.E. (July 1978). "Theoretical and Empirical Studies of Program Testing". IEEE Transactions on Software Engineering. 4 (4): 293–298. doi:10.1109/TSE.1978.231514.
  3. Weyuker, Elaine J.; "The Oracle Assumption of Program Testing", in Proceedings of the 13th International Conference on System Sciences (ICSS), Honolulu, HI, January 1980, pp. 44–49
  4. Jalote, Pankaj; An Integrated Approach to Software Engineering, Springer/Birkhäuser, 2005, ISBN 0-387-20881-X
  5. Meyer, Bertrand; Fiva, Arno; Ciupa, Ilinca; Leitner, Andreas; Wei, Yi; Stapf, Emmanuel (September 2009). "Programs That Test Themselves". Computer. 42 (9): 46–55. doi:10.1109/MC.2009.296.
  6. Barr, Earl T.; Harman, Mark; McMinn, Phil; Shahbaz, Muzammil; Yoo, Shin (November 2014). "The Oracle Problem in Software Testing: A Survey". IEEE Transactions on Software Engineering. 41 (5): 507–525. doi:10.1109/TSE.2014.2372785.
  7. Ammann, Paul; Offutt, Jeff; Introduction to Software Testing, Cambridge University Press, 2008, ISBN 978-0-521-88038-1
  8. Segura, Sergio; Fraser, Gordon; Sanchez, Ana B.; Ruiz-Cortes, Antonio (2016). "A survey on metamorphic testing". IEEE Transactions on Software Engineering. 42 (9): 805–824. doi:10.1109/TSE.2016.2532875. hdl:11441/38271.
  9. Chen, Tsong Yueh; Kuo, Fei-Ching; Liu, Huai; Poon, Pak-Lok; Towey, Dave; Tse, T.H.; Zhou, Zhi Quan (2018). "Metamorphic testing: A review of challenges and opportunities". ACM Computing Surveys. 51 (1): 4:1–4:27. doi:10.1145/3143561.
  10. Börger, E. (1999). Hutter, D; Stephan, W; Traverso, P; Ullman, M (eds.). High Level System Design and Analysis Using Abstract State Machines. Applied Formal Methods — FM-Trends 98. Lecture Notes in Computer Science. 1641. pp. 1–43. doi:10.1007/3-540-48257-1_1. ISBN 978-3-540-66462-8.
  11. Peters, D.K. (March 1998). "Using test oracles generated from program documentation". IEEE Transactions on Software Engineering. 24 (3): 161–173. doi:10.1109/32.667877.
  12. Utting, Mark; Pretschner, Alexander; Legeard, Bruno (2012). "A taxonomy of model-based testing approaches". Software Testing, Verification and Reliability. 22 (5): 297–312. doi:10.1002/stvr.456. ISSN 1099-1689.
  13. Gaudel, Marie-Claude (2001). Craeynest, D.; Strohmeier, A. (eds.). Testing from Formal Specifications, a Generic Approach. Reliable Software Technologies — Ada-Europe 2001. Lecture Notes in Computer Science. 2043. pp. 35–48. doi:10.1007/3-540-45136-6_3. ISBN 978-3-540-42123-8.
  14. Weyuker, E.J. (November 1982). "On Testing Non-Testable Programs". The Computer Journal. 25 (4): 465–470. doi:10.1093/comjnl/25.4.465.
  15. Peters, Dennis K. (1995). Generating a Test Oracle from Program Documentation (M. Eng. thesis). McMaster University.
  16. Peters, Dennis K.; Parnas, David L. "Generating a Test Oracle from Program Documentation". Proceedings of the 1994 International Symposium on Software Testing and Analysis. ISSTA. ACM Press. pp. 58–65.
  17. Hoffman, Douglas; Heuristic Test Oracles, Software Testing & Quality Engineering Magazine, 1999
  18. Mayer, Johannes; Guderlei, Ralph (2004). "Test Oracles Using Statistical Methods". Proceedings of the First International Workshop on Software Quality, Lecture Notes in Informatics. First International Workshop on Software Quality. Springer. pp. 179–189.
  19. Hoffman, Douglas; Analysis of a Taxonomy for Test Oracles, Quality Week, 1998
  20. Robinson, Harry; Finite State Model-Based Testing on a Shoestring, STAR West 1999

