National Center for Computational Sciences
The National Center for Computational Sciences (NCCS) is a United States Department of Energy (DOE) Leadership Computing Facility located at Oak Ridge National Laboratory (ORNL). Founded in 1992, the NCCS is a managed activity of the Advanced Scientific Computing Research program of the DOE Office of Science. The center provides resources for calculation and simulation in fields including astrophysics, materials science, and climate research, with the aim of enhancing American industrial competitiveness. The NCCS currently manages Jaguar, a 2.33-petaflop (theoretical peak) Cray XT5 supercomputer available for open research by academic and corporate researchers. Jaguar was named the world's fastest computer at SC09, a position it held until October 2010.
Jaguar addresses some of the most challenging scientific problems in areas such as climate modeling, renewable energy, materials science, fusion, and combustion. Each year, 80% of Jaguar's resources are allocated through DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, a competitively selected, peer-reviewed process open to researchers from universities, industry, government, and non-profit organizations.
Through a close, four-year partnership between ORNL and Cray, Jaguar has delivered state-of-the-art computing capability to scientists and engineers from academia, national laboratories, and industry. The system has grown through a series of upgrades since being installed as a 25-teraflop Cray XT3 in 2005. By early 2008, Jaguar was a 263-teraflop Cray XT4 capable of addressing problems that could not otherwise be solved, and later that year it was expanded with the addition of a 1.4-petaflop Cray XT5. The resulting system has more than 224,000 AMD Opteron processing cores (as of June 2010) connected internally by Cray's SeaStar2+ network. The XT4 and XT5 partitions are combined into a single system by an InfiniBand network that links each partition to the Spider file system.
Jaguar is one of the most powerful computer systems available for science, with world-leading performance, more than three times the memory of any other computer at the time of installation, and world-leading bandwidth to disks and networks. The AMD Opteron is a powerful general-purpose processor that uses the x86 instruction set, for which a rich ecosystem of applications, compilers, and tools exists. Hundreds of applications have been ported to and run on the Cray XT system, many of them scaled up to run on 25,000 to 150,000 cores.
The mass storage facility at ORNL consists of tape and disk storage components, Linux servers, and High Performance Storage System (HPSS) software. As of October 2010, more than 14 petabytes were stored in HPSS in more than 21 million files. Tape storage is provided by robotic Sun StorageTek SL8500 libraries, each of which can hold up to 10,000 cartridges. Together the libraries house thirteen 9840 drives (20-gigabyte cartridges, uncompressed), sixteen 9940B drives (200-gigabyte cartridges, uncompressed), thirty-two T10000A drives (500-gigabyte cartridges, uncompressed), and sixteen T10000B drives (1,000-gigabyte cartridges, uncompressed). The 9840 drives read and write uncompressed data at 10 megabytes per second and offer fast seek times for small-file access, making them the facility's performance drives. The 9940B drives read and write at 30 megabytes per second, and the T10000 drives store larger amounts of data per cartridge, making them the capacity drives for more voluminous data sets.
Eugene is a 27-teraflop IBM Blue Gene/P system. It consists of 2,048 quad-core 850-megahertz IBM PowerPC 450 processors, with 2 gigabytes of memory per node. Eugene has a front-end node for user logins and code compilation; users submit jobs from this front-end node and cannot log in directly to a compute node. Eugene has 64 I/O nodes, each serving a fixed block of 32 compute nodes. Because every submitted job must use at least one I/O node, each job consumes a minimum of 32 nodes per execution.
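The I/O-node constraint above fixes Eugene's job-size granularity. A minimal sketch of the arithmetic, using only the node counts quoted in this article (the function name is illustrative, not part of any IBM software):

```python
# Job-size granularity on Eugene, from the figures quoted above:
# 2,048 compute nodes served by 64 I/O nodes on the Blue Gene/P.
COMPUTE_NODES = 2048
IO_NODES = 64

# Each I/O node serves a fixed block of compute nodes, so the smallest
# schedulable unit is one such block.
nodes_per_io = COMPUTE_NODES // IO_NODES  # 32

def nodes_allocated(requested: int) -> int:
    """Round a compute-node request up to a whole number of I/O-node blocks."""
    blocks = -(-requested // nodes_per_io)  # ceiling division
    return blocks * nodes_per_io

print(nodes_allocated(1))   # 32: even a one-node job occupies a full block
print(nodes_allocated(33))  # 64: one node over a block boundary claims the next block
```

This is why the article states that every job, however small, consumes at least 32 nodes per execution.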
Smoky is an NCCS development resource for users who need a system comparable to larger resources such as Jaguar. Its current configuration is an 80-node Linux cluster with four quad-core 2.0-gigahertz AMD Opteron processors and 32 gigabytes of memory (2 gigabytes per core) per node, a gigabit Ethernet network with an InfiniBand interconnect, and access to Spider, the center's Lustre-based file system. Smoky's primary purpose is application development, specifically scaling applications toward petascale; its programming environment mirrors that of the Jaguar system. Only a limited number of NCCS users are granted access to this resource.
Lens and EVEREST
The visualization capabilities at the NCCS include a medium-sized visualization/data analysis cluster called Lens, a large PowerWall display called EVEREST, and a visualization laboratory.
Lens is a 32-node Linux cluster dedicated to data analysis and high-end visualization. Each node contains four quad-core 2.3-gigahertz AMD Opteron processors, 64 gigabytes of memory, and two NVIDIA 8800 GTX GPUs. The primary purpose of Lens is to enable analysis and visualization of simulation data generated on Jaguar, providing a conduit for large-scale scientific discovery. Members of allocated Jaguar projects are automatically given accounts on Lens.
EVEREST (Exploratory Visualization Environment for REsearch in Science and Technology) is a large-scale venue for data exploration and analysis. Its main feature is a 27-projector PowerWall measuring 30 feet long by 8 feet tall. The projectors are arranged in a 9×3 array, each providing 3,500 lumens, for a very bright display of 11,520 by 3,072 pixels (an aggregate of 35 million pixels) offering a tremendous amount of visual detail. The wall is integrated with the rest of the computing center, creating a high-bandwidth data path between large-scale high-performance computing and large-scale data visualization, and provides a premier data analysis and visualization facility within the Department of Energy's Office of Science.

EVEREST is controlled by a 14-node cluster. Each node contains four dual-core AMD Opteron processors and an NVIDIA QuadroFX 3000G graphics card connected to the projectors, providing very-high-throughput visualization capability. Scientists can use the EVEREST facility by contacting any member of the visualization team and booking a time.

The visualization lab serves as an experimental facility for developing future visualization capabilities and as a staging area for technology to be deployed in EVEREST, staff offices, and conference rooms. It houses a 12-panel tiled LCD display, test cluster nodes, interaction devices, and video equipment.
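The quoted EVEREST figures are mutually consistent, and a per-projector resolution can be inferred from them. A quick check (the 1280×1024 per-projector figure is derived here from the aggregate numbers, not stated in the article):

```python
# Cross-check of the EVEREST PowerWall figures quoted above.
cols, rows = 9, 3            # 9x3 projector array
wall_w, wall_h = 11520, 3072  # quoted wall resolution in pixels

# Implied per-projector resolution (an inference, not stated in the article)
proj_w, proj_h = wall_w // cols, wall_h // rows  # 1280 x 1024

total_pixels = wall_w * wall_h  # 35,389,440, i.e. the quoted ~35 million
print(proj_w, proj_h, total_pixels)
```

The exact total, 35,389,440 pixels, rounds to the 35 million quoted in the article.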
- Astrophysics – Understanding the mechanism behind supernova explosions reveals much about the origins of the chemical elements in the universe.
- Biology – Finding a more efficient way to convert cellulose to ethanol will help make commercially viable ethanol a reality. And understanding the behavior of proteins will help researchers overcome obstacles in countless areas.
- Chemistry – A more efficient system of developing chemical catalysts could lead to improved, less expensive industrial processes, and the simulation of molecular systems will shed light on the workings of incredibly small, complex systems.
- Climate – With Jaguar, next-generation climate models will have unprecedented resolution, giving policymakers better data with which to deal with the ramifications of climate change.
- Computer Science – Researchers are developing the tools necessary to evaluate a range of supercomputing systems, with the goals of discovering how best to use each, how to find the best fit for any given application, and how to tailor applications to get the best performance.
- Engineering – Understanding the dynamics of combustion could lead to cleaner, more efficient diesel engines. The simulation of coal gasification will lead to a new generation of advanced fossil fuel power plants.
- Fusion – Understanding the behavior of fusion plasmas and simulating various device aspects gives researchers insight into the construction of ITER, an international experimental fusion reactor.
- Materials – Research into the nature of materials promises to revolutionize many areas of modern life, from power generation and transmission to transportation to the production of faster, smaller, more versatile computers and storage devices.
- Physics – Physicists are using the NCCS's enormous computing power to reveal the nature of matter at its most elusive, from the behavior of molecules to the atoms that make up those molecules to the quarks, electrons, and other fundamental particles that make up the atoms and everything we know.