|Active||Operational in 2008; final completion in 2009|
|Operators||National Nuclear Security Administration|
|Location||Los Alamos National Laboratory|
|Architecture||12,960 IBM PowerXCell 8i CPUs, 6,480 AMD Opteron dual-core processors, Infiniband|
|Operating system||Red Hat Enterprise Linux|
|Space||296 racks, 560 m2 (6,000 sq ft)|
|Ranking||TOP500: 10, June 2011|
|Purpose||Modeling the decay of the U.S. nuclear arsenal|
|Legacy||First TOP500 Linpack sustained 1.0 petaflops, May 25, 2008|
Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100 million Roadrunner was designed for a peak performance of 1.7 petaflops. On May 25, 2008, it achieved 1.026 petaflops, becoming the world's first system to sustain 1.0 petaflops on the TOP500 Linpack benchmark.
In November 2008, it reached a top performance of 1.456 petaflops, retaining its top spot on the TOP500 list. It was also the fourth most energy-efficient supercomputer in the world on the Green500 list, with an operational rate of 444.94 megaflops per watt. The hybrid Roadrunner design was later reused for several other energy-efficient supercomputers. Roadrunner was decommissioned by Los Alamos on March 31, 2013. In its place, Los Alamos uses a supercomputer called Cielo, which was installed in 2010; Cielo is smaller and more energy-efficient than Roadrunner, and cost $54 million.
IBM built the computer for the U.S. Department of Energy's (DOE) National Nuclear Security Administration. It was a hybrid design with 12,960 IBM PowerXCell 8i and 6,480 AMD Opteron dual-core processors in specially designed blade servers connected by Infiniband. The Roadrunner used Red Hat Enterprise Linux along with Fedora as its operating systems and was managed with xCAT distributed computing software. It also used the Open MPI Message Passing Interface implementation.
Roadrunner occupied approximately 296 server racks covering 560 square metres (6,000 sq ft) and became operational in 2008. It was decommissioned on March 31, 2013. The DOE used the computer to simulate how nuclear materials age, in order to predict whether the USA's aging arsenal of nuclear weapons is safe and reliable. Other uses for Roadrunner included applications in science and in the financial, automotive, and aerospace industries.
Roadrunner differed from other contemporary supercomputers because it was the first hybrid supercomputer. Previous supercomputers used only one processor architecture, since a homogeneous design is easier to build and program for. To realize Roadrunner's full potential, all software had to be written specially for its hybrid architecture. The design paired dual-core Opteron server processors, manufactured by AMD using the standard AMD64 architecture, with PowerXCell 8i processors, manufactured by IBM using the Power Architecture and Cell technology, one attached to each Opteron core. As a supercomputer, Roadrunner was considered an Opteron cluster with Cell accelerators: each node consisted of a Cell attached to an Opteron core, with the Opterons connected to each other.
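The division of labor in this hybrid model can be illustrated with a short, purely schematic Python sketch. All names here are hypothetical; Roadrunner's actual software was written against Cell and MPI libraries, and the offloaded kernels ran concurrently on the accelerators rather than in a loop.

```python
# Schematic sketch of the hybrid host/accelerator split (hypothetical names,
# NOT actual Roadrunner code): an Opteron "host" core handled control flow
# and communication, while its attached PowerXCell 8i did the numeric work.

def cell_offload(chunk):
    """Stand-in for a kernel offloaded to a PowerXCell 8i accelerator."""
    return [x * x for x in chunk]  # a heavy floating-point kernel went here

def opteron_host(data, n_accelerators=4):
    """Stand-in for an Opteron host: partition work, gather results."""
    size = (len(data) + n_accelerators - 1) // n_accelerators
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    results = []
    for chunk in chunks:  # on the real machine, these ran concurrently
        results.extend(cell_offload(chunk))
    return results

print(opteron_host(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The point of the sketch is the split itself: code written for a homogeneous cluster has no such host/accelerator boundary, which is why Roadrunner's software had to be rewritten.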
Roadrunner was in development from 2002 and went online in 2006. Due to its novel design and complexity, it was constructed in three phases and became fully operational in 2008. Its predecessor was a machine also developed at Los Alamos, named Dark Horse. Dark Horse was one of the earliest hybrid-architecture systems, originally based on ARM and later moved to the Cell processor. It was an entirely 3D design, integrating 3D memory, networking, processors, and a number of other technologies.
The first phase of Roadrunner was building a standard Opteron-based cluster while evaluating the feasibility of constructing and programming the future hybrid version. This Phase 1 Roadrunner reached 71 teraflops and was in full operation at Los Alamos National Laboratory in 2006.
Phase 2, known as AAIS (Advanced Architecture Initial System), involved building a small hybrid version of the finished system using an older version of the Cell processor. This phase was used to build prototype applications for the hybrid architecture. It went online in January 2007.
The goal of Phase 3 was to reach sustained performance in excess of 1 petaflops. Additional Opteron nodes and new PowerXCell processors were added to the design; these PowerXCell processors were five times as powerful as the Cell processors used in Phase 2. The system was built to full scale at IBM's Poughkeepsie, New York facility, where it broke the 1-petaflops barrier on its fourth attempt, on May 25, 2008. The complete system was moved to its permanent location in New Mexico in the summer of 2008.
Roadrunner used two different models of processor. The first was the AMD Opteron 2210, running at 1.8 GHz. Opterons were used both in the computational nodes, feeding the Cells with data, and in the system-operations and communication nodes, passing data between computing nodes and helping the operators run the system. Roadrunner had a total of 6,912 Opteron processors, with 6,480 used for computation and 432 for operation. The Opterons were connected by HyperTransport links. Each Opteron had two cores, for a total of 13,824 cores.
The second processor was the IBM PowerXCell 8i, running at 3.2 GHz. Each of these processors has one general-purpose core, the Power Processing Element (PPE), and eight Synergistic Processing Elements (SPEs) specialized for floating-point operations. Roadrunner had a total of 12,960 PowerXCell processors, with 12,960 PPE cores and 103,680 SPE cores, for a total of 116,640 cores.
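The processor and core counts in the two paragraphs above are internally consistent; a quick back-of-the-envelope check, using only figures from the text:

```python
# Opteron side: 6,480 compute chips + 432 operations chips, 2 cores each
opterons = 6480 + 432
opteron_cores = opterons * 2

# Cell side: each PowerXCell 8i = 1 PPE core + 8 SPE cores
cells = 12960
cell_cores = cells * (1 + 8)

print(opterons, opteron_cores, cell_cores)  # 6912 13824 116640
```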
TriBlade
Physically, a TriBlade consisted of one LS21 Opteron blade, an expansion blade, and two QS22 Cell blades. The LS21 had two 1.8 GHz dual-core Opterons with 16 GB of memory for the whole blade, providing 8 GB for each CPU. Each QS22 had two PowerXCell 8i CPUs running at 3.2 GHz and 8 GB of memory, which made 4 GB for each CPU. The expansion blade connected the two QS22s to the LS21 via four PCIe x8 links, two links for each QS22, and was itself connected to the Opteron blade via HyperTransport. It also provided outside connectivity via an Infiniband 4x DDR adapter. Altogether, a single TriBlade was four slots wide, so three TriBlades fit into one BladeCenter H chassis.
Connected Unit (CU)
A Connected Unit was 60 BladeCenter H chassis full of TriBlades, that is, 180 TriBlades. All TriBlades were connected to a 288-port Voltaire ISR2012 Infiniband switch. Each CU also had access to the Panasas file system through twelve System x3755 servers.
CU system information:
- 360 dual-core Opterons with 2.88 TiB RAM
- 720 PowerXCell 8i processors with 2.88 TiB RAM
- 12 System x3755 servers with dual 10-Gbit Ethernet each
- One 288-port Voltaire ISR2012 switch with 192 Infiniband 4x DDR links in use (180 TriBlades and twelve I/O nodes)
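The per-CU figures above follow directly from 180 TriBlades; a quick check (note that 180 × 16 GB = 2,880 GB, which the text rounds and quotes as 2.88 TiB):

```python
triblades = 180
opterons = triblades * 2           # 2 Opterons per TriBlade
cells = triblades * 4              # 4 PowerXCell 8i per TriBlade
opteron_ram_gb = triblades * 16    # 2,880 GB (quoted as 2.88 TiB)
cell_ram_gb = triblades * 2 * 8    # 2,880 GB (quoted as 2.88 TiB)
ports_in_use = triblades + 12      # 192 switch links: TriBlades + I/O nodes

print(opterons, cells, ports_in_use)  # 360 720 192
```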
The final cluster was made up of 18 Connected Units, linked by eight additional (second-stage) Infiniband ISR2012 switches. Each CU connected to each second-stage switch through twelve uplinks, for a total of 96 uplink connections per CU.
Overall system information:
- 6,480 Opteron processors with 51.8 TiB RAM (in 3,240 LS21 blades)
- 12,960 Cell processors with 51.8 TiB RAM (in 6,480 QS22 blades)
- 216 System x3755 I/O nodes
- 26 288-port ISR2012 Infiniband 4x DDR switches
- 296 racks
- 2.345 MW power
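Scaling the per-CU figures by the 18 Connected Units reproduces the overall totals listed above:

```python
cus = 18
opterons = cus * 360          # 6,480 Opteron processors
cells = cus * 720             # 12,960 Cell processors
ls21_blades = opterons // 2   # 3,240 (two Opterons per LS21)
qs22_blades = cells // 2      # 6,480 (two Cells per QS22)
io_nodes = cus * 12           # 216 System x3755 I/O nodes
switches = cus + 8            # 18 CU switches + 8 second-stage = 26
ram_gb = cus * 2880           # 51,840 GB per processor type (quoted ~51.8 TiB)

print(opterons, cells, ls21_blades, qs22_blades, io_nodes, switches)
```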
IBM Roadrunner was shut down on March 31, 2013. Although the supercomputer was still among the fastest in the world, its energy efficiency was relatively low: Roadrunner delivered 444 megaflops per watt, versus 886 megaflops per watt for a comparable supercomputer. Before it was dismantled, researchers spent one month performing memory and data-routing experiments to aid in designing future supercomputers.
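The quoted efficiency and power figures are mutually consistent: multiplying megaflops per watt by the machine's power draw recovers a Linpack-scale performance figure (a rough cross-check, using only numbers from this article).

```python
mflops_per_watt = 444.94
power_watts = 2.345e6                       # 2.345 MW, from the system list
flops = mflops_per_watt * 1e6 * power_watts
petaflops = flops / 1e15

print(round(petaflops, 2))  # 1.04 petaflops, in the expected range
```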
After Roadrunner was dismantled, its electronics were shredded. Los Alamos performed the majority of the supercomputer's destruction itself, citing the classified nature of its calculations. Some parts were retained for historical purposes.
Roadrunner held the title of world's most powerful supercomputer from June 2008 to November 2009.