| Date | August 4, 2016 |
| Time | 9:00 am to 8:00 pm |
| Venue | Paris Hotel & Conference Center |
| Location | Las Vegas, Nevada |
The 2016 Cyber Grand Challenge (CGC) was a competition created by the Defense Advanced Research Projects Agency (DARPA) to develop automatic defense systems that can discover, prove, and correct software flaws in real time.
The event pitted machine against machine, with no human intervention, in what was billed as the "world's first automated network defense tournament."
Its structure resembled the long-standing "capture the flag" (CTF) security competitions, and the winning system indeed competed against humans in the "classic" DEF CON CTF held in the following days. The Cyber Grand Challenge, however, featured a more standardized scoring and vulnerability-proving system: all exploits and patched binaries were submitted to, and evaluated by, the referee infrastructure.
When a new vulnerability is disclosed, a race develops between attackers attempting to exploit it and analysts who must assess, remediate, test, and deploy a patch before significant damage is done. Experts follow a process of complicated reasoning and the manual creation of each security signature and software patch, technical work that can take months and considerable money. The result is a software landscape that favors attackers. Internet-connected devices such as smart televisions, wearable technologies, and high-end home appliances are not always produced with security in mind; moreover, utility systems, power grids, and traffic lights could be increasingly susceptible to attack, according to DARPA.
To help overcome these challenges, in 2014 DARPA launched the Cyber Grand Challenge: a two-year competition seeking to create automatic defensive systems capable of reasoning about flaws, formulating patches, and deploying them on a network in real time. The competition was split into two main events: an open qualification event held in 2015 and a final event in 2016, in which only the top seven teams from the qualifiers could participate. The winner of the final event would be awarded $2 million and the opportunity to play against humans in the 24th DEF CON capture-the-flag competition.
Reducing external interaction to its base components (e.g., system calls for well-defined I/O, dynamic memory allocation, and a single source of randomness) simplified both modeling the binaries and running them securely in isolation to observe their behavior.
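To illustrate the idea, such a reduced interface can be modeled in a few lines. The sketch below is a toy sandbox loosely inspired by that design; the class and method names are invented for this example and do not reflect the actual CGC ABI:

```python
import random

class MiniSandbox:
    """Toy model of a minimal syscall surface: well-defined output,
    explicit allocation, and a single seeded source of randomness.
    Illustrative only, not the real CGC execution environment."""

    def __init__(self, seed=0):
        self._rng = random.Random(seed)  # the single randomness source
        self._heap = {}                  # allocation handle -> buffer
        self._next_handle = 1
        self.transmitted = bytearray()   # everything the "binary" wrote

    def transmit(self, data: bytes) -> int:
        """Write bytes to the only output channel."""
        self.transmitted += data
        return len(data)

    def allocate(self, length: int) -> int:
        """Dynamic memory allocation, tracked by handle."""
        handle, self._next_handle = self._next_handle, self._next_handle + 1
        self._heap[handle] = bytearray(length)
        return handle

    def deallocate(self, handle: int) -> None:
        del self._heap[handle]

    def rand_bytes(self, n: int) -> bytes:
        """All randomness flows through the seeded generator."""
        return bytes(self._rng.randrange(256) for _ in range(n))

sb = MiniSandbox(seed=1)
h = sb.allocate(16)
sb.transmit(b"hello")
sb.deallocate(h)
```

Because every run is replayable given the seed, a referee or an analysis engine can reproduce a binary's observed behavior exactly.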
Internal complexity, however, was unrestricted: challenges went as far as implementing a particle physics simulator, chess, programming and scripting languages, parsers for huge amounts of markup data, vector graphics, just-in-time compilation, and virtual machines.
The challenge authors were themselves scored based on how well they distinguished the players' relative performance, encouraging challenges to exercise specific weaknesses of automatic reasoning (e.g., state explosion) while remaining solvable by well-constructed systems.
Each playing system -- a fully automated "Cyber Reasoning System" (CRS) -- had to demonstrate ability in several areas of computer security:
- Automatic vulnerability finding on previously-unknown binaries.
- Automatic patching of binaries without sacrificing performance.
- Automatic exploit generation within the framework's limitations.
- Implementing a security strategy: balancing resource assignment among the available servers (a variation of the multi-armed bandit problem), responding to competitors (e.g., analyzing their patches, reacting to exploitation), and evaluating its own actions' effect on the final score.
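The resource-assignment aspect can be sketched with a standard epsilon-greedy bandit strategy; this is a generic textbook approach, not any particular team's algorithm:

```python
import random

def epsilon_greedy(reward_fn, n_arms, rounds=1000, eps=0.1, seed=0):
    """Split a budget of `rounds` trials among `n_arms` options (e.g.,
    challenge binaries competing for analysis time).  Mostly exploit the
    best-scoring arm so far, but explore a random one with probability eps."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    for _ in range(rounds):
        if 0 in counts or rng.random() < eps:
            arm = rng.randrange(n_arms)              # explore
        else:
            arm = max(range(n_arms),                 # exploit best average
                      key=lambda a: totals[a] / counts[a])
        reward = reward_fn(arm)
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Toy payoffs: arm 1 is consistently the most rewarding,
# so it ends up receiving most of the budget.
spent = epsilon_greedy(lambda a: [0.1, 1.0, 0.4][a], n_arms=3)
```

In the CGC setting the "reward" would be an estimate of points gained per challenge, and the budget would be CPU time on the analysis cluster.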
Due to the complexity of the task, players had to combine multiple techniques, and do so in a fully unattended and time-efficient fashion. For instance, the highest attack score was reached by discovering vulnerabilities via a combination of guided fuzzing and symbolic execution -- i.e., an AFL-based fuzzer combined with the angr binary analysis framework, leveraging a QEMU-based emulation and execution-tracing system.
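In outline, the fuzzing half of such a combination is a coverage-guided loop like the didactic sketch below (not AFL or any team's actual code); the symbolic-execution engine would take over when this loop stops finding new coverage:

```python
import random

def coverage_guided_fuzz(target, seeds, rounds=2000, rng_seed=0):
    """Minimal coverage-guided fuzzing loop.  `target(data)` must return
    the set of coverage points (e.g., branch ids) the input reached;
    inputs that reach anything new are kept as future mutation bases."""
    rng = random.Random(rng_seed)
    corpus = [bytes(s) for s in seeds]
    seen = set()
    for inp in corpus:
        seen |= target(inp)
    for _ in range(rounds):
        data = bytearray(rng.choice(corpus))
        if data:
            pos = rng.randrange(len(data))
            data[pos] = rng.randrange(256)   # single random byte mutation
        new_cov = target(bytes(data)) - seen
        if new_cov:                          # progress: keep this input
            seen |= new_cov
            corpus.append(bytes(data))
    return corpus, seen

# Toy target: each matched prefix byte of b"HI!" unlocks one more branch.
def toy_target(data):
    cov = set()
    for i, want in enumerate(b"HI!"):
        if len(data) > i and data[i] == want:
            cov.add(i)
        else:
            break
    return cov

corpus, seen = coverage_guided_fuzz(toy_target, [b"\x00\x00\x00"])
```

Where random mutation gets stuck, e.g., at a multi-byte magic value guarding a branch, a symbolic executor can solve the branch condition directly and hand the fuzzer a fresh input, which is how the two techniques complement each other.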
CGC Qualification Event (CQE)
The CGC Qualification Event (CQE) was held on June 3, 2015 and lasted 24 hours. CQE had two tracks: a funded track for seven teams selected by DARPA on the basis of their proposals (with awards of up to $750,000 per team) and an open track in which any self-funded team could participate. Over 100 teams registered internationally, and 28 reached the Qualification Event. During the event, teams were given 131 different programs and were challenged to find vulnerabilities and fix them automatically while maintaining performance and functionality. Collectively, the teams identified vulnerabilities in 99 of the 131 provided programs. After collecting all submissions, DARPA ranked the teams by their patching and vulnerability-finding ability.
The top seven teams and finalists in alphabetical order were:
- CodeJitsu, a team of researchers from the University of California at Berkeley, Cyberhaven, and Syracuse (funded track).
- CSDS, a team of researchers from the University of Idaho (open track).
- Deep Red, a team of specialized engineers from Raytheon (open track).
- disekt, a computer security team that participates in various Capture the Flag security competitions hosted by other teams, universities and organizations (open track).
- ForAllSecure, a security startup composed of researchers and security experts (funded track).
- Shellphish, a hacking team from the University of California, Santa Barbara (open track).
- TECHx, a team of software analysis experts from GrammaTech, Inc. and the University of Virginia (funded track).
Upon qualification, each of the seven teams above received $750,000 in funding to prepare for the final event.
CGC Final Event (CFE)
The CGC Final Event (CFE) was held on August 4, 2016 and lasted 11 hours. During the final event, the finalists' machines faced off against one another in a fully automatic capture-the-flag competition. The seven qualifying teams competed for the top three positions, which shared almost $4 million in prize money.
The winning systems of the Cyber Grand Challenge (CGC) Final Event were:
- "Mayhem" - developed by ForAllSecure, of Pittsburgh, Pa. - $2 million
- "Xandra" - developed by team TECHx consisting of GrammaTech Inc., Ithaca, N.Y., and UVa, Charlottesville, Va. - $1 million
- "Mechanical Phish" - developed by Shellphish, UC Santa Barbara, Ca. - $750,000
The other competing systems were:
- Rubeus - developed by Raytheon's team Deep Red of Arlington, Va.
- Galactica - developed by CodeJitsu of Berkeley, Ca., Syracuse, N.Y., and Lausanne, Switzerland
- Jima - developed by CSDS of Moscow, Id.
- Crspy - system developed by disekt of Athens, Ga.
- "Cyber Grand Challenge Event Information for Finalists" (PDF). Cybergrandchallenge.com. Archived from the original (PDF) on 28 April 2017. Retrieved 17 July 2016.
- "The Cyber Grand Challenge (CGC) seeks to automate cyber defense process". Cybergrandchallenge.com. Archived from the original on 1 August 2016. Retrieved 17 July 2016.
- Walker, Michael. "a race ensues between miscreants intending to exploit the vulnerability and analysts who must assess, remediate, test, and deploy a patch before significant damage can be done". darpa.mil. Retrieved 17 July 2016.
- Uyeno, Greg (5 July 2016). "Smart Televisions, wearable technologies, utility systems, power grids, and more inclined to cyber attacks". Live Science. Retrieved 17 July 2016.
- "CRS Team Interface API". -- as opposed to classic CTF games, in which players directly attack each others and freely change their own VMs
- Chang, Kenneth (2014-06-02). "Automating Cybersecurity". The New York Times. ISSN 0362-4331. Retrieved 2016-09-06.
- Tangent, The Dark. "DEF CON® 24 Hacking Conference". defcon.org. Retrieved 2016-09-06.
- "CGC ABI".
- Dedicated special issue of the IEEE Security & Privacy journal: "Hacking Without Humans". IEEE Security & Privacy. IEEE Computer Society. 16 (2). March 2018. ISSN 1558-4046.
- Publications on individual components, such as Shellphish's Stephens N, Grosen J, Salls C, Dutcher A, Wang R, Corbetta J, Shoshitaishvili Y, Kruegel C, Vigna G (2016). Driller: Augmenting Fuzzing Through Selective Symbolic Execution (PDF). Network & Distributed System Security Symposium (NDSS). Vol. 16.
- "Mechanical Phish".
- "Cyber Grand Challenge". Archived from the original on 2016-09-11.
- "The DARPA Cyber Grand Challenge: A Competitor's Perspective".
- "Legitimate Business Syndicate: What is the Cyber Grand Challenge?". blog.legitbs.net. Retrieved 2016-09-06.
- "DARPA | Cyber Grand Challenge". www.cybergrandchallenge.com. Archived from the original on 2016-08-01. Retrieved 2016-09-06.
- "Mayhem comes in first place at CGC". August 7, 2016. Retrieved August 13, 2016.