Concolic testing

From Wikipedia, the free encyclopedia

Concolic testing (a portmanteau of concrete and symbolic) is a hybrid software verification technique that performs symbolic execution, a classical technique that treats program variables as symbolic variables, along a concrete execution (testing on particular inputs) path. The term was coined in the paper "CUTE: A concolic unit testing engine for C"[1] by Koushik Sen, Darko Marinov, and Gul Agha. Symbolic execution is used in conjunction with an automated theorem prover or constraint solver based on constraint logic programming to generate new concrete inputs (test cases) with the aim of maximizing code coverage. Its main focus is finding bugs in real-world software, rather than demonstrating program correctness.

A description and discussion of the concept of concolic testing was introduced in DART by Patrice Godefroid, Nils Klarlund, and Koushik Sen.[2] CUTE[1] further extended the idea of concolic testing to data structures. Another tool, called EGT (renamed to EXE and later improved and renamed to KLEE), based on similar ideas, was independently developed by Cristian Cadar and Dawson Engler and published in 2005 and 2006.[3] PathCrawler[4][5] first proposed performing symbolic execution along a concrete execution path, but unlike concolic testing PathCrawler does not simplify complex symbolic constraints using concrete values. These tools (DART, CUTE, and EXE) applied concolic testing to unit testing of C programs, and concolic testing was originally conceived as a white-box improvement upon established random testing methodologies. The technique was later generalized to testing multithreaded Java programs with jCUTE,[6] and to unit testing programs from their executable code (the tool OSMOSE).[7] It was also combined with fuzz testing and extended to detect exploitable security issues in large-scale x86 binaries by Microsoft Research's SAGE.[8][9]

The concolic approach is also applicable to model checking. A concolic model checker traverses the states of the model representing the software being checked, while storing both a concrete state and a symbolic state. The symbolic state is used for checking properties of the software, while the concrete state is used to avoid reaching unreachable states. One such tool is ExpliSAT, by Sharon Barner, Cindy Eisner, Ziv Glazberg, Daniel Kroening and Ishai Rabinovitz.[10]

Birth of concolic testing

Implementing traditional symbolic-execution-based testing requires building a full-fledged symbolic interpreter for a programming language. Implementors of concolic testing noticed that a full-fledged symbolic execution can be avoided if the symbolic execution is piggybacked on the normal execution of a program through instrumentation. This idea of simplifying the implementation of symbolic execution gave birth to concolic testing.

Example

Consider the following simple example, written in C:

  1. void f(int x, int y) {
  2.     int z = 2*y;
  3.     if (x == 100000) {
  4.         if (x < z) {
  5.             assert(0); /* error */
  6.         }
  7.     }
  8. }
Execution path tree for this example. Three tests are generated corresponding to the three leaf nodes in the tree, and three execution paths in the program.

Simple random testing, trying random values of x and y, would require an impractically large number of tests to reproduce the failure.

We begin with an arbitrary choice for x and y, for example x = y = 1. In the concrete execution, line 2 sets z to 2, and the test in line 3 fails since 1 ≠ 100000. Concurrently, the symbolic execution follows the same path but treats x and y as symbolic variables. It sets z to the expression 2y and notes that, because the test in line 3 failed, x ≠ 100000. This inequality is called a path condition and must be true for all executions following the same execution path as the current one.

Since we'd like the program to follow a different execution path on the next run, we take the last path condition encountered, x ≠ 100000, and negate it, giving x = 100000. An automated theorem prover is then invoked to find values for the input variables x and y given the complete set of symbolic variable values and path conditions constructed during symbolic execution. In this case, a valid response from the theorem prover might be x = 100000, y = 0.

Running the program on this input allows it to reach the inner branch on line 4, which is not taken since 100000 (x) is not less than 0 (z = 2y). The path conditions are x = 100000 and x ≥ z. The latter is negated, giving x < z. The theorem prover then looks for x, y satisfying x = 100000, x < z, and z = 2y; for example, x = 100000, y = 50001. This input reaches the error.

Algorithm

Essentially, a concolic testing algorithm operates as follows:

  1. Classify a particular set of variables as input variables. These variables will be treated as symbolic variables during symbolic execution. All other variables will be treated as concrete values.
  2. Instrument the program so that each operation which may affect a symbolic variable value or a path condition is logged to a trace file, as well as any error that occurs.
  3. Choose an arbitrary input to begin with.
  4. Execute the program.
  5. Symbolically re-execute the program on the trace, generating a set of symbolic constraints (including path conditions).
  6. Negate the last path condition not already negated in order to visit a new execution path. If there is no such path condition, the algorithm terminates.
  7. Invoke an automated theorem prover to generate a new input. If there is no input satisfying the constraints, return to step 6 to try the next execution path.
  8. Return to step 4.

There are a few complications to the above procedure:

  • The algorithm performs a depth-first search over an implicit tree of possible execution paths. In practice programs may have very large or infinite path trees — a common example is testing data structures that have an unbounded size or length. To prevent spending too much time on one small area of the program, the search may be depth-limited (bounded).
  • Symbolic execution and automated theorem provers have limitations on the classes of constraints they can represent and solve. For example, a theorem prover based on linear arithmetic will be unable to cope with the nonlinear path condition x·y = 6. Any time that such constraints arise, the symbolic execution may substitute the current concrete value of one of the variables to simplify the problem. An important part of the design of a concolic testing system is selecting a symbolic representation precise enough to represent the constraints of interest.

Limitations

Concolic testing has a number of limitations:

  • If the program exhibits nondeterministic behavior, it may follow a different path than the intended one. This can lead to nontermination of the search and poor coverage.
  • Even in a deterministic program, a number of factors may lead to poor coverage, including imprecise symbolic representations, incomplete theorem proving, and failure to search the most fruitful portion of a large or infinite path tree.
  • Programs which thoroughly mix the state of their variables, such as cryptographic primitives, generate very large symbolic representations that cannot be solved in practice. For example, the condition "if (md5_hash(input) == 0xdeadbeef)" requires the theorem prover to invert MD5, which is an open problem.

Tools

Many tools, notably DART and SAGE, have not been made available to the public at large. Note, however, that SAGE, for instance, is "used daily" for internal security testing at Microsoft.[11]

References

  1. ^ a b Sen, Koushik; Darko Marinov; Gul Agha (2005). "Proceedings of the 10th European software engineering conference held jointly with 13th ACM SIGSOFT international symposium on Foundations of software engineering". New York, NY: ACM. pp. 263–272. ISBN 1-59593-014-0. Retrieved 2009-11-09.
  2. ^ Godefroid, Patrice; Nils Klarlund; Koushik Sen (2005). "Proceedings of the 2005 ACM SIGPLAN conference on Programming language design and implementation". New York, NY: ACM. pp. 213–223. ISSN 0362-1340. Retrieved 2009-11-09.
  3. ^ Cadar, Cristian; Vijay Ganesh; Peter Pawloski; David L. Dill; Dawson Engler (2006). "Proceedings of the 13th International Conference on Computer and Communications Security (CCS 2006)". Alexandria, VA, USA: ACM.
  4. ^ Williams, Nicky; Bruno Marre; Patricia Mouy (2004). "Proceedings of the 19th IEEE International Conference on Automated Software Engineering (ASE 2004), 20–25 September 2004, Linz, Austria". IEEE Computer Society. pp. 290–293. ISBN 0-7695-2131-2.
  5. ^ Williams, Nicky; Bruno Marre; Patricia Mouy; Muriel Roger (2005). "Dependable Computing – EDCC-5, 5th European Dependable Computing Conference, Budapest, Hungary, April 20–22, 2005, Proceedings". Springer. pp. 281–292. ISBN 3-540-25723-3.
  6. ^ Sen, Koushik; Gul Agha (August 2006). "Computer Aided Verification: 18th International Conference, CAV 2006, Seattle, WA, USA, August 17–20, 2006, Proceedings". Springer. pp. 419–423. ISBN 978-3-540-37406-0. Retrieved 2009-11-09.
  7. ^ Bardin, Sébastien; Philippe Herrmann (April 2008). "Proceedings of the 1st IEEE International Conference on Software Testing, Verification, and Validation (ICST 2008), Lillehammer, Norway". IEEE Computer Society. pp. 22–31. ISBN 978-0-7695-3127-4.
  8. ^ Godefroid, Patrice; Michael Y. Levin; David Molnar (2007). Automated Whitebox Fuzz Testing (Technical report). Microsoft Research. TR-2007-58.
  9. ^ Godefroid, Patrice (2007). "Proceedings of the 2nd international workshop on Random testing: co-located with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE 2007)". New York, NY: ACM. p. 1. ISBN 978-1-59593-881-7. Retrieved 2009-11-09.
  10. ^ Sharon Barner, Cindy Eisner, Ziv Glazberg, Daniel Kroening, Ishai Rabinovitz: ExpliSAT: Guiding SAT-Based Software Verification with Explicit States. Haifa Verification Conference 2006: 138–154.
  11. ^ SAGE team (2009). "Microsoft PowerPoint – SAGE-in-one-slide". Microsoft Research. Retrieved 2009-11-10.