From Wikipedia, the free encyclopedia

Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol, and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" that they are not directly rejected by the parser but still exercise interesting behaviors deeper in the program, and "invalid enough" that they might stress different corner cases and expose errors in the parser.

For the purpose of security, input that crosses a trust boundary is often the most interesting.[1] For example, it is more important to fuzz code that handles the upload of a file by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.


History

The fuzzing of programs with random inputs dates back to the 1950s, when data was still stored on punched cards.[2] Programmers would use punched cards pulled from the trash, or card decks of random numbers, as input to computer programs. If an execution revealed undesired behavior, a bug had been detected and was fixed.

The execution of random inputs is also called random testing or monkey testing.

In 1981, Duran and Ntafos formally investigated the effectiveness of testing a program with random inputs.[3][4] While random testing had been widely perceived to be the worst means of testing a program, the authors showed that it is a cost-effective alternative to more systematic testing techniques.

In 1983, Steve Capps developed "The Monkey", a tool that would generate random inputs for classic Mac OS applications, such as MacPaint.[5] The figurative "monkey" refers to the infinite monkey theorem, which states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will eventually type out the entire works of Shakespeare. In the case of testing, the monkey would eventually enter the particular sequence of inputs that triggers a crash.

The term "fuzzing" originates from a 1988 class project, taught by Barton Miller at the University of Wisconsin.[6] To fuzz test a Unix utility meant automatically generating random files and command-line parameters for the utility. The project was designed to test the reliability of Unix programs by executing a large number of random inputs in quick succession until they crashed. It also provided early debugging tools to determine the cause and category of each detected failure. To allow other researchers to conduct similar experiments with other software, the source code of the tools, the test procedures, and the raw result data were made publicly available.[7] The use of the term was later no longer limited to command-line utilities.

In 1991, the crashme tool was released, which was intended to test the robustness of Unix and Unix-like operating systems by executing random machine instructions.[8]

In 1995, a fuzzer was used to test GUI-based tools (such as the X Window System), network protocols, and system library APIs.[9]

In April 2012, Google announced ClusterFuzz, a cloud-based fuzzing infrastructure for security-critical components of the Chromium web browser.[10] Security researchers can upload their own fuzzers and collect bug bounties if ClusterFuzz finds a crash with the uploaded fuzzer.

In September 2014, Shellshock[11] was disclosed as a family of security bugs in the widely used Unix Bash shell; most vulnerabilities of Shellshock were found using the fuzzer AFL.[12] (Many Internet-facing services, such as some web server deployments, use Bash to process certain requests, allowing an attacker to cause vulnerable versions of Bash to execute arbitrary commands. This can allow an attacker to gain unauthorized access to a computer system.[13])

In April 2015, Hanno Böck showed how the fuzzer AFL could have found the 2014 Heartbleed vulnerability.[14][15] (The Heartbleed vulnerability was disclosed in April 2014. It is a serious vulnerability that allows adversaries to decipher otherwise encrypted communication, for instance, during online banking. The vulnerability was accidentally introduced into OpenSSL, which implements TLS, the protocol underlying HTTPS, and is used by the majority of servers on the Internet. Shodan reported 238,000 machines still vulnerable in April 2016[16] and 200,000 in January 2017.[17])

In August 2016, the Defense Advanced Research Projects Agency (DARPA) held the finals of the first Cyber Grand Challenge, a fully automatic capture-the-flag competition that lasted 11 hours.[18] The objective was to develop automatic defense systems that can discover, exploit, and correct software flaws in real time. Fuzzing was used as an effective offensive strategy to discover flaws in the software of the opponents. It showed tremendous potential in the automation of vulnerability detection. The winner was a system called "Mayhem"[19] developed by the team ForAllSecure led by David Brumley.

In September 2016, Microsoft announced Project Springfield, a cloud-based fuzz testing service for finding security critical bugs in software.[20]

In December 2016, Google announced OSS-Fuzz which allows for continuous fuzzing of several security-critical open-source projects.[21]

Types of fuzzers

A fuzzer can be categorized as follows:[9][1]

  1. A fuzzer can be generation-based or mutation-based depending on whether inputs are generated from scratch or by modifying existing inputs,
  2. A fuzzer can be dumb or smart depending on whether it is aware of input structure, and
  3. A fuzzer can be white-, gray-, or black-box, depending on whether it is aware of program structure.

Reuse of existing input seeds

A mutation-based fuzzer leverages an existing corpus of seed inputs during fuzzing. It generates inputs by modifying (or rather mutating) the provided seeds. For example, when fuzzing the image library libpng, the user would provide a set of valid PNG image files as seeds, while a mutation-based fuzzer would modify these seeds to produce semi-valid variants of each seed. The corpus of seed files may contain thousands of potentially similar inputs. Automated seed selection (or test suite reduction) picks the best seeds in order to maximize the total number of bugs found during a fuzz campaign.[22]
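A minimal sketch of this idea in Python (the seed, the mutation operators, and the mutation count are illustrative choices, not those of any particular fuzzer):

```python
import random

def mutate(seed: bytes, n_mutations: int = 4) -> bytes:
    """Produce a semi-valid variant of a seed by a few random byte-level edits."""
    data = bytearray(seed)
    for _ in range(n_mutations):
        if not data:                       # nothing left to mutate
            break
        pos = random.randrange(len(data))
        op = random.randrange(3)
        if op == 0:                        # flip one random bit
            data[pos] ^= 1 << random.randrange(8)
        elif op == 1:                      # overwrite with a random byte
            data[pos] = random.randrange(256)
        else:                              # delete a byte
            del data[pos]
    return bytes(data)

# A PNG-like seed: magic header plus padding. A real campaign would start
# from a corpus of actual valid PNG files.
seed = b"\x89PNG\r\n\x1a\n" + b"\x00" * 32
variant = mutate(seed)
```

Because most of a variant is still identical to its seed, it tends to pass superficial format checks while occasionally violating deeper assumptions.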

A generation-based fuzzer generates inputs from scratch. For instance, a smart generation-based fuzzer[23] takes the input model that was provided by the user to generate new inputs. Unlike mutation-based fuzzers, a generation-based fuzzer does not depend on the existence or quality of a corpus of seed inputs.

Some fuzzers have the capability to do both, to generate inputs from scratch and to generate inputs by mutation of existing seeds.[24]

Aware of input structure

Typically, fuzzers are used to generate inputs for programs that take structured inputs, such as a file, a sequence of keyboard or mouse events, or a sequence of messages. This structure distinguishes valid input that is accepted and processed by the program from invalid input that is quickly rejected by the program. What constitutes a valid input may be explicitly specified in an input model. Examples of input models are formal grammars, file formats, GUI-models, and network protocols. Even items not normally considered as input can be fuzzed, such as the contents of databases, shared memory, environment variables, or the precise interleaving of threads. An effective fuzzer generates semi-valid inputs that are "valid enough" that they are not directly rejected by the parser and "invalid enough" that they might stress corner cases and exercise interesting program behaviours.

A smart (model-based,[24] grammar-based,[23][25] or protocol-based[26]) fuzzer leverages the input model to generate a greater proportion of valid inputs. For instance, if the input can be modelled as an abstract syntax tree, then a smart mutation-based fuzzer[25] would employ random transformations to move complete subtrees from one node to another. If the input can be modelled by a formal grammar, a smart generation-based fuzzer[23] would instantiate the production rules to generate inputs that are valid w.r.t. the grammar. However, the input model must generally be explicitly provided, which is difficult when it is proprietary, unknown, or very complex. If a large corpus of valid and invalid inputs is available, a grammar induction technique, such as Angluin's L* algorithm, can generate an input model.[27][28]
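Instantiating production rules can be sketched with a toy grammar for arithmetic expressions (Python; any realistic input grammar would be far larger, and the depth cutoff is an illustrative choice):

```python
import random

# A toy grammar: nonterminals map to lists of alternative right-hand
# sides, each a tuple of symbols. Symbols absent from the grammar are terminals.
GRAMMAR = {
    "<expr>": [("<term>", "+", "<expr>"), ("<term>",)],
    "<term>": [("<digit>", "<term>"), ("<digit>",)],
    "<digit>": [(d,) for d in "0123456789"],
}

def generate(symbol: str = "<expr>", max_depth: int = 8) -> str:
    """Expand production rules at random to produce a grammar-valid string."""
    if symbol not in GRAMMAR:
        return symbol                       # terminal symbol
    alternatives = GRAMMAR[symbol]
    if max_depth <= 0:
        # near the depth limit, pick the shortest alternative to force termination
        rhs = min(alternatives, key=len)
    else:
        rhs = random.choice(alternatives)
    return "".join(generate(s, max_depth - 1) for s in rhs)
```

Every string produced this way is valid with respect to the grammar, so it passes the parser and reaches the code that actually evaluates expressions.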

A dumb fuzzer[6][29] does not require the input model and can thus be employed to fuzz a wider variety of programs. For instance, AFL is a dumb mutation-based fuzzer that modifies a seed file by flipping random bits, by substituting random bytes with "interesting" values, and by moving or deleting blocks of data. However, a dumb fuzzer might generate a lower proportion of valid inputs and stress the parser code rather than the main components of a program. The disadvantage of dumb fuzzers can be illustrated by means of the construction of a valid checksum for a cyclic redundancy check (CRC). A CRC is an error-detecting code that ensures that the integrity of the data contained in the input file is preserved during transmission. A checksum is computed over the input data and recorded in the file. When the program processes the received file and the recorded checksum does not match the re-computed checksum, then the file is rejected as invalid. Now, a fuzzer that is unaware of the CRC is unlikely to generate the correct checksum. However, there are attempts to identify and re-compute a potential checksum in the mutated input, once a dumb mutation-based fuzzer has modified the protected data.[30]
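The checksum-repair idea can be sketched as follows (Python; the file layout, a CRC-32 over the payload stored big-endian in the last four bytes, is a made-up example format, not any specific protocol):

```python
import random
import zlib

def make_file(payload: bytes) -> bytes:
    """Append a CRC-32 of the payload as the last 4 bytes, big-endian."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def is_valid(data: bytes) -> bool:
    """The receiving program's integrity check: re-compute and compare the CRC."""
    payload, recorded = data[:-4], data[-4:]
    return zlib.crc32(payload) == int.from_bytes(recorded, "big")

def checksum_aware_mutate(data: bytes) -> bytes:
    """Mutate the protected payload, then re-compute the checksum so the
    variant is not trivially rejected by the CRC check."""
    payload = bytearray(data[:-4])
    payload[random.randrange(len(payload))] ^= 0xFF   # corrupt one byte
    return make_file(bytes(payload))

original = make_file(b"hello, fuzzing")
variant = checksum_aware_mutate(original)   # mutated payload, still passes is_valid()
```

A dumb fuzzer mutating `original` directly would almost always produce a file that `is_valid` rejects, so only the CRC check, not the code behind it, would ever be exercised.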

Aware of program structure

Typically, a fuzzer is considered more effective if it achieves a higher degree of code coverage. The rationale is, if a fuzzer does not exercise certain structural elements in the program, then it is also not able to reveal bugs that are hiding in these elements. Some program elements are considered more critical than others. For instance, a division operator might cause a division by zero error, or a system call may crash the program.

A black-box fuzzer[6][25] treats the program as a black box and is unaware of internal program structure. For instance, a random testing tool that generates inputs at random is considered a black-box fuzzer. Because it spends no time on program analysis, a black-box fuzzer can execute several hundred inputs per second, can be easily parallelized, and can scale to programs of arbitrary size. However, black-box fuzzers may only scratch the surface and expose "shallow" bugs. Hence, there are attempts to develop black-box fuzzers that incrementally learn about the internal structure (and behavior) of a program during fuzzing by observing the program's output given an input. For instance, LearnLib employs active learning to generate an automaton that represents the behavior of a web application.

A white-box fuzzer[29][24] leverages program analysis to systematically increase code coverage or to reach certain critical program locations. For instance, SAGE[31] leverages symbolic execution to systematically explore different paths in the program. If the program's specification is available, a white-box fuzzer might leverage techniques from model-based testing to generate inputs and check the program outputs against the program specification. A white-box fuzzer can be very effective at exposing bugs that hide deep in the program. However, the time used for analysis (of the program or its specification) can become prohibitive. If the white-box fuzzer takes too long to generate an input, a black-box fuzzer will be more efficient.[32] Hence, there are attempts to combine the efficiency of black-box fuzzers and the effectiveness of white-box fuzzers.[33]

A gray-box fuzzer leverages instrumentation rather than program analysis to glean information about the program. For instance, AFL and libFuzzer utilize lightweight instrumentation to trace basic block transitions exercised by an input. This leads to a reasonable performance overhead but informs the fuzzer about the increase in code coverage during fuzzing, which makes gray-box fuzzers extremely efficient vulnerability detection tools.[34]
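A toy version of this coverage-feedback loop (Python; real gray-box fuzzers instrument basic-block transitions at compile time, while this sketch approximates coverage with Python's line tracer, and the target program is invented):

```python
import random
import sys

def target(data: bytes):
    """Invented program under test: a crash hides behind nested checks."""
    if len(data) > 0 and data[0] == ord("F"):
        if len(data) > 1 and data[1] == ord("U"):
            if len(data) > 2 and data[2] == ord("Z"):
                raise RuntimeError("crash!")

def run_with_coverage(data: bytes):
    """Execute the target, returning (covered line numbers, crashed?)."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line":
            lines.add(frame.f_lineno)
        return tracer
    crashed = False
    sys.settrace(tracer)
    try:
        target(data)
    except RuntimeError:
        crashed = True
    finally:
        sys.settrace(None)
    return frozenset(lines), crashed

def greybox_fuzz(seed: bytes, iterations: int = 20000):
    """Keep any mutant that covers new lines as an additional seed."""
    corpus = [seed]
    seen = set()
    for _ in range(iterations):
        data = bytearray(random.choice(corpus))
        data[random.randrange(len(data))] = random.randrange(256)
        coverage, crashed = run_with_coverage(bytes(data))
        if crashed:
            return bytes(data)           # a failure-inducing input
        if coverage not in seen:         # new coverage: promising seed
            seen.add(coverage)
            corpus.append(bytes(data))
    return None
```

The coverage feedback lets the fuzzer climb through the nested checks one byte at a time, whereas a pure black-box fuzzer would have to guess all three magic bytes at once.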


Uses

Fuzzing is used mostly as an automated technique to expose vulnerabilities in security-critical programs that might be exploited with malicious intent.[10][20][21] More generally, fuzzing is used to demonstrate the presence of bugs rather than their absence. Running a fuzzing campaign for several weeks without finding a bug does not prove the program correct.[35] After all, the program may still fail for an input that has not yet been executed; executing a program on all inputs is prohibitively expensive. If the objective is to prove a program correct for all inputs, a formal specification must exist and techniques from formal methods must be used.

Exposing bugs

In order to expose bugs, a fuzzer must be able to distinguish expected (normal) from unexpected (buggy) program behavior. However, a machine cannot always distinguish a bug from a feature. In automated software testing, this is also called the oracle problem.[36][37]

In the absence of a specification, a fuzzer typically uses a simple and objective measure: it distinguishes between crashing and non-crashing inputs. Crashes can be easily identified and might indicate potential vulnerabilities (e.g., denial of service or arbitrary code execution). However, the absence of a crash does not indicate the absence of a vulnerability. For instance, a program written in C may or may not crash when an input causes a buffer overflow; the program's behavior is undefined.
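A crash oracle of this kind can be sketched by running the target in a child process and checking how it terminated (Python; on POSIX, a negative return code means the process was killed by a signal, and the program path is whatever binary is being tested):

```python
import subprocess

def crashes(program: str, data: bytes, timeout: float = 5.0) -> bool:
    """Feed one input to the target on stdin and report whether it crashed."""
    try:
        result = subprocess.run(
            [program], input=data, timeout=timeout,
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
    except subprocess.TimeoutExpired:
        return False   # a hang, not a crash; a real harness would log it separately
    # On POSIX, returncode -11 means the child died of SIGSEGV, and so on.
    return result.returncode < 0
```

A fuzzing loop then simply calls `crashes(program, variant)` on each generated input and saves the inputs for which it returns true.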

To make a fuzzer more sensitive to failures other than crashes, sanitizers can be used to inject assertions that crash the program when a failure is detected.[38][39] There are different sanitizers for different kinds of bugs:

  • AddressSanitizer detects memory errors such as buffer overflows and use-after-free,
  • MemorySanitizer detects uses of uninitialized memory,
  • UndefinedBehaviorSanitizer detects undefined behavior such as signed integer overflow,
  • ThreadSanitizer detects data races, and
  • LeakSanitizer detects memory leaks.

Fuzzing can also be used to detect "differential" bugs if a reference implementation is available. For automated regression testing,[40] the generated inputs are executed on two versions of the same program. For automated differential testing,[41] the generated inputs are executed on two implementations of the same program (e.g., lighttpd and httpd are both implementations of a web server). If both variants produce different output for the same input, then one may be buggy and should be examined more closely.
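A sketch of differential fuzzing in Python, comparing a reference integer parser (the built-in `int()`) against a deliberately buggy second implementation; both the second parser and the input alphabet are invented for illustration:

```python
import random
import string

def parse_int_reference(s: str):
    """Reference implementation: Python's built-in parser."""
    try:
        return int(s)
    except ValueError:
        return None

def parse_int_alternative(s: str):
    """Second implementation with a subtle difference: unlike int(),
    it does not strip surrounding whitespace."""
    s2 = s[1:] if s.startswith(("+", "-")) else s
    if not s2 or not all(c in string.digits for c in s2):
        return None
    return -int(s2) if s.startswith("-") else int(s2)

def differential_fuzz(trials: int = 10000):
    """Collect inputs on which the two implementations disagree."""
    alphabet = string.digits + "+- "
    disagreements = []
    for _ in range(trials):
        s = "".join(random.choice(alphabet) for _ in range(random.randint(0, 5)))
        if parse_int_reference(s) != parse_int_alternative(s):
            disagreements.append(s)
    return disagreements
```

Running `differential_fuzz()` quickly surfaces inputs such as `" 42"`, which the reference accepts but the second implementation rejects; each disagreement points at a bug in one of the two variants.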

Validating static analysis reports

Static program analysis analyzes a program without actually executing it. This might lead to false positives, where the tool reports problems that do not actually exist. Fuzzing in combination with dynamic program analysis can be used to try to generate an input that actually witnesses the reported problem.[42]

Fuzzing toolchain

A fuzzer produces a large number of inputs in a relatively short time. For instance, in 2016 the Google OSS-Fuzz project produced around 4 trillion inputs a week.[21] Hence, many fuzzers provide a toolchain that automates otherwise manual and tedious tasks which follow the automated generation of failure-inducing inputs.

Automated bug triage

Automated bug triage is used to group a large number of failure-inducing inputs by root cause and to prioritize each individual bug by severity. A fuzzer produces a large number of inputs, and many of the failure-inducing ones may effectively expose the same software bug. Only some of these bugs are security-critical and should be patched with higher priority. For instance, the CERT Coordination Center provides the Linux triage tools, which group crashing inputs by the produced stack trace and rank each group according to its likelihood of being exploitable.[43] The Microsoft Security Research Centre (MSEC) developed the !exploitable tool, which first creates a hash for a crashing input to determine its uniqueness and then assigns an exploitability rating:[44]

  • Exploitable
  • Probably Exploitable
  • Probably Not Exploitable, or
  • Unknown.
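A common deduplication heuristic, hashing only the innermost stack frames of each crash, can be sketched like this (Python; the stack-trace format and the frame count are arbitrary illustrative choices):

```python
import hashlib
from collections import defaultdict

def bucket_key(stack_trace: str, top_n: int = 3) -> str:
    """Hash the innermost frames of a stack trace. Using only the top few
    frames groups crashes that share a root cause even when the outer
    call path differs."""
    frames = [line.strip() for line in stack_trace.splitlines() if line.strip()]
    top = "\n".join(frames[:top_n])
    return hashlib.sha256(top.encode()).hexdigest()[:12]

def triage(crashes):
    """Group (input, stack_trace) pairs into buckets by stack hash."""
    buckets = defaultdict(list)
    for data, trace in crashes:
        buckets[bucket_key(trace)].append(data)
    return buckets
```

Each bucket then corresponds to one suspected bug, so a developer examines one representative input per bucket instead of thousands of near-duplicate crashes.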

Previously unreported, triaged bugs might be automatically reported to a bug tracking system. For instance, OSS-Fuzz runs large-scale, long-running fuzzing campaigns for several security-critical software projects where each previously unreported, distinct bug is reported directly to a bug tracker.[21] The OSS-Fuzz bug tracker automatically informs the maintainer of the vulnerable software and checks in regular intervals whether the bug has been fixed in the most recent revision using the uploaded minimized failure-inducing input.

Automated input minimization

Automated input minimization (or test case reduction) is an automated debugging technique to isolate that part of the failure-inducing input that is actually inducing the failure.[45][46] If the failure-inducing input is large and mostly malformed, it might be difficult for a developer to understand what exactly is causing the bug. Given the failure-inducing input, an automated minimization tool would remove as many input bytes as possible while still reproducing the original bug. For instance, Delta Debugging is an automated input minimization technique that employs an extended binary search algorithm to find such a minimal input.[47]
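A greedy, delta-debugging-style reducer can be sketched in a few lines (Python; this is a simplification of the full ddmin algorithm, and `still_fails` stands in for whatever oracle reproduces the bug, here a made-up predicate):

```python
def minimize(data: bytes, still_fails) -> bytes:
    """Repeatedly try to delete chunks of the input, halving the chunk
    size, while the failure predicate `still_fails` keeps returning True."""
    chunk = len(data) // 2
    while chunk >= 1:
        i = 0
        while i < len(data):
            candidate = data[:i] + data[i + chunk:]
            if still_fails(candidate):
                data = candidate     # keep the smaller failing input
            else:
                i += chunk           # this chunk is needed; move on
        chunk //= 2
    return data

# Example: the hypothetical bug triggers whenever the input contains b"FUZ".
minimal = minimize(b"aaaaFUZbbbb", lambda d: b"FUZ" in d)
# minimal is b"FUZ": every remaining byte is needed to reproduce the failure.
```

Because every deletion is re-validated against the oracle, the result still reproduces the original bug while being far easier for a developer to read.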

References

  1. ^ a b John Neystadt (February 2008). "Automated Penetration Testing with White-Box Fuzzing". Microsoft. Retrieved 2009-05-14. 
  2. ^ Gerald M. Weinberg. "Fuzz Testing and Fuzz History". Retrieved 2017-02-06. 
  3. ^ Joe W. Duran; Simeon C. Ntafos (1981-03-09). "A report on random testing". Proceedings of the ACM SIGSOFT International Conference on Software Engineering (ICSE'81). 
  4. ^ Joe W. Duran; Simeon C. Ntafos (1984-07-01). "An Evaluation of Random Testing". IEEE Transactions on Software Engineering (TSE). 
  5. ^ "Macintosh Stories: Monkey Lives". Folklore.org. 1999-02-22. Retrieved 2010-05-28. 
  6. ^ a b c Barton Miller (2008). "Preface". In Ari Takanen, Jared DeMott and Charlie Miller, Fuzzing for Software Security Testing and Quality Assurance, ISBN 978-1-59693-214-2
  7. ^ "Fuzz Testing of Application Reliability". University of Wisconsin-Madison. Retrieved 2009-05-14. 
  8. ^ "crashme". CodePlex. Retrieved 2012-06-26. 
  9. ^ a b Michael Sutton; Adam Greene; Pedram Amini (2007). Fuzzing: Brute Force Vulnerability Discovery. Addison-Wesley. ISBN 0-321-44611-9. 
  10. ^ a b "Announcing ClusterFuzz". Retrieved 2017-03-09. 
  11. ^ Perlroth, Nicole (25 September 2014). "Security Experts Expect 'Shellshock' Software Bug in Bash to Be Significant". New York Times. Retrieved 25 September 2014. 
  12. ^ Zalewski, Michał (1 October 2014). "Bash bug: the other two RCEs, or how we chipped away at the original fix (CVE-2014-6277 and '78)". lcamtuf's blog. Retrieved 13 March 2017. 
  13. ^ Seltzer, Larry (29 September 2014). "Shellshock makes Heartbleed look insignificant". ZDNet. Retrieved 29 September 2014. 
  14. ^ Böck, Hanno. "Fuzzing: Wie man Heartbleed hätte finden können" [Fuzzing: How Heartbleed could have been found]. Golem.de (in German). Retrieved 13 March 2017. 
  15. ^ Böck, Hanno. "How Heartbleed could've been found". Hanno's blog. Retrieved 13 March 2017. 
  16. ^ "Search engine for the internet of things – devices still vulnerable to Heartbleed". www.shodan.io. Retrieved 13 March 2017. 
  17. ^ "Heartbleed Report (2017-01)". www.shodan.io. Retrieved 10 July 2017. 
  18. ^ Walker, Michael. "DARPA Cyber Grand Challenge". darpa.mil. Retrieved 12 March 2017. 
  19. ^ "Mayhem comes in first place at CGC". Retrieved 12 March 2017. 
  20. ^ a b "Announcing Project Springfield". Retrieved 2017-03-08. 
  21. ^ a b c d "Announcing OSS-Fuzz". Retrieved 2017-03-08. 
  22. ^ Rebert, Alexandre; Cha, Sang Kil; Avgerinos, Thanassis; Foote, Jonathan; Warren, David; Grieco, Gustavo; Brumley, David (2014). "Optimizing Seed Selection for Fuzzing" (PDF). Proceedings of the 23rd USENIX Conference on Security Symposium. USENIX Association: 861–875. 
  23. ^ a b c Patrice Godefroid; Adam Kiezun; Michael Y. Levin. "Grammar-based Whitebox Fuzzing" (PDF). Microsoft Research. 
  24. ^ a b c Van-Thuan Pham; Marcel Böhme; Abhik Roychoudhury (2016-09-07). "Model-based whitebox fuzzing for program binaries". Proceedings of Automated Software Engineering (ASE'16). 
  25. ^ a b c "Peach Fuzzer". Retrieved 2017-03-08. 
  26. ^ Greg Banks; Marco Cova; Viktoria Felmetsger; Kevin Almeroth; Richard Kemmerer; Giovanni Vigna. SNOOZE: Toward a Stateful NetwOrk prOtocol fuzZEr. Proceedings of the Information Security Conference (ISC'06). 
  27. ^ Osbert Bastani; Rahul Sharma; Alex Aiken; Percy Liang (June 2017). Synthesizing Program Input Grammars. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2017). 
  28. ^ "VDA Labs - Evolutionary Fuzzing System". 
  29. ^ a b Vijay Ganesh; Tim Leek; Martin Rinard (2009-05-16). "Taint-based directed whitebox fuzzing". Proceedings of the ACM SIGSOFT International Conference on Software Engineering (ICSE'09). 
  30. ^ Wang, T.; Wei, T.; Gu, G.; Zou, W. (May 2010). "TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection". 2010 IEEE Symposium on Security and Privacy: 497–512. doi:10.1109/SP.2010.37. 
  31. ^ Patrice Godefroid; Michael Y. Levin; David Molnar (2008-02-08). "Automated Whitebox Fuzz Testing" (PDF). Proceedings of Network and Distributed Systems Symposium (NDSS'08). 
  32. ^ Marcel Böhme; Soumya Paul (2015-10-05). "A Probabilistic Analysis of the Efficiency of Automated Software Testing". IEEE Transactions on Software Engineering (TSE). 
  33. ^ Nick Stephens; John Grosen; Christopher Salls; Andrew Dutcher; Ruoyu Wang; Jacopo Corbetta; Yan Shoshitaishvili; Christopher Kruegel; Giovanni Vigna (2016-02-24). Driller: Augmenting Fuzzing Through Selective Symbolic Execution (PDF). Proceedings of Network and Distributed Systems Symposium (NDSS'16). 
  34. ^ Marcel Böhme; Van-Thuan Pham; Abhik Roychoudhury (2016-10-28). "Coverage-based Greybox Fuzzing as a Markov Chain". Proceedings of the ACM Conference on Computer and Communications Security (CCS'16). 
  35. ^ Hamlet, Richard G.; Taylor, Ross (December 1990). "Partition testing does not inspire confidence". IEEE Transactions on Software Engineering. 16 (12): 1402–1411. doi:10.1109/32.62448. 
  36. ^ Weyuker, Elaine J. (1 November 1982). "On Testing Non-Testable Programs". The Computer Journal. 25 (4): 465–470. doi:10.1093/comjnl/25.4.465. 
  37. ^ Barr, Earl T.; Harman, Mark; McMinn, Phil; Shahbaz, Muzammil; Yoo, Shin (1 May 2015). "The Oracle Problem in Software Testing: A Survey". IEEE Transactions on Software Engineering. 41 (5): 507–525. doi:10.1109/TSE.2014.2372785. 
  38. ^ "Clang compiler documentation". clang.llvm.org. Retrieved 13 March 2017. 
  39. ^ "GNU GCC sanitizer options". gcc.gnu.org. Retrieved 13 March 2017. 
  40. ^ Orso, Alessandro; Xie, Tao (2008). "BERT: BEhavioral Regression Testing". Proceedings of the 2008 International Workshop on Dynamic Analysis (WODA 2008). ACM: 36–42. doi:10.1145/1401827.1401835. 
  41. ^ McKeeman, William M. (1998). "Differential Testing for Software" (PDF). Digital Technical Journal. 10 (1): 100–107. 
  42. ^ Babić, Domagoj; Martignoni, Lorenzo; McCamant, Stephen; Song, Dawn (2011). "Statically-directed Dynamic Automated Test Generation". Proceedings of the 2011 International Symposium on Software Testing and Analysis. ACM: 12–22. doi:10.1145/2001420.2001423. 
  43. ^ "CERT Triage Tools". CERT Division of the Software Engineering Institute (SEI) at Carnegie Mellon University (CMU). Retrieved 14 March 2017. 
  44. ^ "Microsoft !exploitable Crash Analyzer". CodePlex. Retrieved 14 March 2017. 
  45. ^ "Test Case Reduction". 2011-07-18. 
  46. ^ "IBM Test Case Reduction Techniques". 2011-07-18. 
  47. ^ Zeller, Andreas; Hildebrandt, Ralf (February 2002). "Simplifying and Isolating Failure-Inducing Input". IEEE Transactions on Software Engineering. 28 (2): 183–200. ISSN 0098-5589. doi:10.1109/32.988498. Retrieved 14 March 2017. 
