NCAR-Wyoming Supercomputing Center

From Wikipedia, the free encyclopedia
NCAR building in 2014

The NCAR-Wyoming Supercomputing Center (NWSC) is a high-performance computing (HPC) and data archival facility located in Cheyenne, Wyoming, that provides advanced computing services to researchers in the Earth system sciences.[1]

NWSC provides researchers with computing, data analysis, and scientific visualization resources, combined with powerful data management capabilities, to support finer Earth system model resolution, increased model complexity, improved statistics, greater predictive power, and longer simulation times.[2][3] The data storage and archival facility[4] at NWSC holds unique historical climate records and a wealth of other scientific data.[5] Scientists at U.S. universities and research institutions access NWSC resources remotely via the Internet from desktop or laptop computers.

History

Inside the supercomputer command center at NCAR in 2014

The NWSC data center is funded by the National Science Foundation (NSF) and the State of Wyoming, and is operated by the National Center for Atmospheric Research. It was created through a partnership[6] of the University Corporation for Atmospheric Research (UCAR), the State of Wyoming, the University of Wyoming, Cheyenne LEADS,[7] the Wyoming Business Council, and the Cheyenne Light, Fuel and Power Company (now named Black Hills Corporation). Consistent with NCAR’s mission, this supercomputing center is a leader in energy efficiency, incorporating the newest and most efficient designs and technologies available. Planning[8][9] for this data center began in 2003, groundbreaking[10] at the North Range Business Park in Cheyenne took place in June 2010, and computing operations began in October 2012.[11]

Sustainability and energy efficiency

The facility design is based on modular and expandable spaces that can be adapted for computing system upgrades. Its sustainable design makes it 89% more efficient than a typical data center and up to 10% more efficient than state-of-the-art data centers operating in 2010.[12][13] Almost 92% of the energy it uses goes directly to its core purpose of powering supercomputers[14] to enable scientific discovery. Part of its efficiency comes from the regionally integrated design that uses Wyoming’s climate to provide natural cooling during 96% of the year and local wind energy[15] that supplies at least 10% of its power.[16] The NWSC achieved LEED Gold certification[17][18] for its sustainable design and construction. In 2013 it won first place[19] for Facility Design Implementation in the Uptime Institute’s Green Enterprise IT awards.[20] This award recognizes pioneering projects and innovations that significantly improve energy productivity and resource use in information technology. In June 2013, the NWSC won the Datacenter Dynamics[21] North American ‘Green’ Data Center award[22][23] for demonstrated sustainability in the design and operation of facilities.[24]

The center currently has a total of 153,000 square feet, with 24,000 square feet of raised-floor modules for supercomputing systems in its expandable design.[25] It incorporates numerous resource conservation features to reduce its environmental impact. Water consumption at the NWSC is reduced by about 40% compared to traditional designs by using innovative technologies, specialized cooling tower equipment, and low-flow plumbing fixtures. Waste heat from the systems is recycled to pre-heat components in the power plant, to heat the office spaces, and to melt snow and ice on outdoor walkways and rooftops. Windows supply natural light; combined with room occupancy sensors, this reduces lighting and electricity use by 20–30% compared to typical office buildings. A building automation system saves energy by continuously optimizing pumps, fans, and controls that heat or cool only occupied areas of the facility. During construction, sustainable practices were used, with emphasis on recycled and locally sourced materials.[26][27]

Supercomputing resources

NSF grants for computing, data, and scientific visualization resources are allocated to researchers who investigate the Earth system through simulation. The current HPC environment includes two petascale supercomputers, data analysis and visualization servers, an operational weather forecasting system, an experimental supercomputing architecture platform, a centralized file system, a data storage resource, and an archive of historical research data.[28] All computing and support systems required for scientific workflows are attached to the shared, high-speed central file system, which improves scientific productivity and reduces costs by allowing researchers to analyze and visualize data files in place at the NWSC.[29]

Supercomputer: Yellowstone

The Yellowstone supercomputer in 2014.

In 2012, the Yellowstone supercomputer[30] was installed in the NWSC as its inaugural HPC resource. Yellowstone is an IBM iDataPlex[31] cluster consisting of 72,288 Intel Sandy Bridge EP processor cores in 4,518 16-core nodes, each with 32 gigabytes of memory.[32] All nodes are interconnected with a full fat-tree Mellanox FDR InfiniBand network.[33] Yellowstone has a peak performance of 1.504 petaflops and has demonstrated a computational capability of 1.2576 petaflops as measured by the High-Performance LINPACK (HPL) benchmark.[34] It debuted as the world’s 13th fastest computer[35] in the November 2012 ranking by the TOP500 organization. Also in November 2012, Yellowstone debuted as the 58th most energy-efficient supercomputer in the world[36] by operating at 875.34 megaflops per watt, as ranked by the Green500 organization. Yellowstone is expected to remain in production operation through the end of 2017.
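The quoted peak figure is consistent with the node count above. A back-of-the-envelope check can be sketched as follows; note that the 2.6-GHz clock rate and the 8 double-precision floating-point operations per cycle per core (the AVX rate for Sandy Bridge) are assumptions not stated in this section:

```python
# Illustrative peak-performance check for Yellowstone.
# Assumed values (not given above): 2.6 GHz clock, 8 DP flops/cycle/core.
nodes = 4_518
cores_per_node = 16
clock_hz = 2.6e9        # assumed Sandy Bridge EP clock rate
flops_per_cycle = 8     # assumed AVX double-precision rate

cores = nodes * cores_per_node
peak_flops = cores * clock_hz * flops_per_cycle
print(f"{cores:,} cores, peak = {peak_flops / 1e15:.3f} petaflops")
# 72,288 cores, peak = 1.504 petaflops
```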

Supercomputer: Cheyenne

The 5.34-petaflops Cheyenne supercomputer became operational in 2017 and provides more than three times the computational capacity of Yellowstone. Cheyenne is an SGI ICE XA system with 4,032 dual-socket scientific computation nodes, comprising 145,152 processing cores (18-core, 2.3-GHz Intel Xeon E5-2697v4 "Broadwell" processors) and 315 terabytes of memory.[37] Interconnecting these nodes is a Mellanox EDR InfiniBand network with a 9-D enhanced hypercube topology that performs with a latency of only 0.5 microsecond.[38] Cheyenne runs the SUSE Linux Enterprise Server 12 SP1 operating system.[39] Like Yellowstone, Cheyenne’s design and configuration provide balanced I/O and exceptional computational capacity for the data-intensive needs of its user community.[40] Cheyenne debuted as the world's 20th most powerful computer in the November 2016 TOP500 ranking.[41]
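The 5.34-petaflops figure follows from the core count and clock rate given above. A rough check can be sketched as follows; the 16 double-precision floating-point operations per cycle per core (the two-FMA AVX2 rate for Broadwell) is an assumption not stated in this section:

```python
# Illustrative peak-performance check for Cheyenne.
# Assumed value (not given above): 16 DP flops/cycle/core (Broadwell AVX2 FMA).
nodes = 4_032
sockets_per_node = 2
cores_per_socket = 18
clock_hz = 2.3e9
flops_per_cycle = 16    # assumed double-precision rate per core

cores = nodes * sockets_per_node * cores_per_socket
peak_flops = cores * clock_hz * flops_per_cycle
print(f"{cores:,} cores, peak = {peak_flops / 1e15:.2f} petaflops")
# 145,152 cores, peak = 5.34 petaflops
```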

Data analysis and visualization clusters: Geyser and Caldera

The Geyser and Caldera clusters are specialized data analysis and visualization resources within the data-centric Yellowstone environment. The Geyser data analysis server is a 640-core cluster of 16 nodes, each with 1 terabyte of memory. With its large per-node memory, Geyser is designed to facilitate large-scale data analysis and post-processing tasks, including 3D visualization, with applications that do not support distributed-memory parallelism.[42] The Caldera computational cluster has 256 cores in 16 nodes, each with 64 gigabytes of memory and two Graphics Processing Units (GPUs) for use as either computational processors or graphics accelerators. Caldera’s two NVIDIA Tesla GPUs[43] per node support parallel processing, visualization activities, and development and testing of general-purpose GPU (GPGPU) code.

Operational forecasting system for Antarctic weather: Erebus

The center also houses a separate, smaller IBM iDataPlex cluster named Erebus[44] to support the operational forecasts of the NSF Office of Polar Programs[45] Antarctic Mesoscale Prediction System[46] (AMPS).[47] Erebus has 84 nodes similar to Yellowstone’s, an FDR-10 InfiniBand interconnect, and a dedicated 58-terabyte file system. If needed, Yellowstone will run Erebus’ daily weather forecasts for the Antarctic continent to ensure that the worldwide community of users receives these forecasts without interruption.[48]

Experimental supercomputing architecture platform: Pronghorn

Pronghorn’s architecture has promise for meeting the Earth system sciences’ demanding requirements for data analysis, visualization, and GPU-assisted computation. As part of a partnership between Intel, IBM, and NCAR, this exploratory system is being used to evaluate the effectiveness of the Xeon Phi coprocessor’s Many Integrated Core (MIC) architecture for running climate, weather, and other environmental applications. If these coprocessors prove beneficial to key NCAR applications, they can be easily added to the standard IBM iDataPlex nodes in Yellowstone as a cost-effective way to extend its capabilities.

Pronghorn has 16 dual-socket IBM x360 nodes[49] featuring Intel’s Xeon Phi 5110P coprocessors[50] and 2.6-gigahertz Intel Sandy Bridge (Xeon E5-2670) cores.[51] The system has 64 gigabytes of DDR3-1600 memory per node (63 GB usable memory per node) and is interconnected with a full fat tree Mellanox FDR InfiniBand network.

Centralized file system: GLADE

Geyser, Caldera, and Yellowstone all mount the central file system named GLobally Accessible Data Environment[52] (GLADE), which provides work spaces common to all HPC resources at NWSC for computation, analysis, and visualization. This allows users to analyze data files in place, without sending large amounts of data across a network or creating duplicate copies in multiple locations. GLADE provides centralized high-performance file systems spanning supercomputing, data post-processing, data analysis, visualization, and HPC-based data transfer services.[53] GLADE also hosts data from NCAR’s Research Data Archive,[54] NCAR’s Community Data Portal,[55] and the Earth System Grid[56] that curates CMIP5/AR5 data. The GLADE central disk resource has a usable storage capacity of 36 petabytes as of February 2017.[57] GLADE has a sustained aggregate I/O bandwidth of more than 220 gigabytes per second.

Data archival service: HPSS

Archival data storage at the NWSC is provided by a High Performance Storage System (HPSS) with a storage capacity of 320 petabytes. This scalable, robotic archive comprises six Oracle StorageTek SL8500 tape libraries using T10000C tape drives,[58] each with an I/O rate of 240 megabytes per second.

Climate data archive: RDA

NWSC’s data-intensive computing strategy includes a full suite of community data services. NCAR develops data products and services that address the future challenges of data growth, preservation, and management. The Research Data Archive[59] (RDA) contains a large collection of meteorological and oceanographic datasets that support scientific studies in climate, weather, hydrology, Earth system modeling, and other related sciences.[60] It is an open resource used by the global research community.

Educational projects

The NWSC also serves an educational role.[61] Its public outreach program features the NWSC visitor center[62] that explains the science goals and the technology of NCAR and the University of Wyoming.[63][64] NCAR's higher education internship program[65] places two engineering interns at the NWSC each summer.

The facilities at NWSC are being used in a research collaboration[66][67] with Colorado State University, Oak Ridge National Laboratory, Lagrange Systems,[68] and NCAR to develop resilient resource management strategies for HPC environments, increase the number of researchers and scientific problems that can use HPC, and help achieve sustainable computing at extreme scales within realistic power budgets.[69]

References

  1. ^ NCAR-Wyoming Supercomputing Center commissioned, FY2011 NCAR Annual Report. Retrieved 2012-10-16.
  2. ^ Advancing the geosciences through simulation, FY2010 NCAR Annual Report. Retrieved 2012-12-20.
  3. ^ NCAR supercomputer ready for research projects, Wyoming Public Media website. Retrieved 2012-10-16.
  4. ^ Data Support for Climate and Weather Research, National Center for Atmospheric Research website. Retrieved 2013-06-12.
  5. ^ CISL Research Data Archive, NCAR-CISL Research Data Archive website. Retrieved 2013-06-12.
  6. ^ A Public-Private Success Story, Interview, A look at the coalition behind the NCAR-Wyoming Supercomputing Center. NCAR-UCAR AtmosNews website. Retrieved 2012-10-16.
  7. ^ The Cheyenne-Laramie County Corporation for Economic Development, Cheyenne LEADS website. Retrieved 2012-06-11.
  8. ^ Establishing a Petascale Collaboratory for the Geosciences: Scientific Frontiers, 2005. Ad Hoc Committee and Technical Working Group for a Petascale Collaboratory for the Geosciences. A Report to the Geosciences Community. UCAR/JOSS. 80 pp.
  9. ^ Establishing a Petascale Collaboratory for the Geosciences: Technical and Budgetary Prospectus, 2005. Technical Working Group and Ad Hoc Committee for a Petascale Collaboratory for the Geosciences. A Report to the Geosciences Community, UCAR/JOSS. 56 pp.
  10. ^ Breaking Ground on a Groundbreaking Center, NCAR-UCAR AtmosNews website. Retrieved 2012-07-20.
  11. ^ Climate change research gets petascale supercomputer, Computerworld website. Retrieved 2012-10-16.
  12. ^ Green Technology - Raising the Bar in Data Center Efficiency, NCAR-Wyoming Supercomputing Center website. Retrieved 2013-06-19. Source of these estimates is "Basis of Design," an internal construction planning document by H&L Architecture, California Data Center Design Group, Rumsey Engineering, and RMH Group Inc., dated 2010-02-25.
  13. ^ Designing a New Type of Data Center, Data Center Knowledge website. Retrieved 2013-06-19.
  14. ^ Green Technology – Raising the Bar in Data Center Efficiency, NCAR-Wyoming Supercomputing Center website. Retrieved 2013-06-10. Source of this estimate is "Basis of Design," an internal construction planning document by H&L Architecture, California Data Center Design Group, Rumsey Engineering, and RMH Group Inc., dated 2010-02-25.
  15. ^ Happy Jack Windpower, Project. Duke Energy website. Retrieved 2013-06-19.
  16. ^ NCAR Weather Supercomputer Comes Online, Data Center Knowledge website. Retrieved 2013-06-19.
  17. ^ NCAR-Wyoming Supercomputing Center Fact Sheet, NCAR-UCAR AtmosNews website. Retrieved 2012-10-15.
  18. ^ Climate Research Spurred By Green Design of Supercomputing Center, Engineering News Record (ENR) Mountain States website. Retrieved 2013-06-27.
  19. ^ World-class Supercomputing Center Designed to Achieve 1.08 PUE, 2013 Green Enterprise IT Award Winner, Facility Design-Implementation, Uptime Institute website. Retrieved 2013-06-12.
  20. ^ First Place: NCAR-Wyoming Supercomputing Center Recognized for Outstanding Design Implementation, NCAR-UCAR AtmosNews website. Retrieved 2013-04-03.
  21. ^ DatacenterDynamics North American awards ceremony, Datacenter Dynamics website. Retrieved 2013-08-02.
  22. ^ NCAR-Wyoming Supercomputing Center Wins National Data Center Award, Engineering News Record (ENR) Mountain States website. Retrieved 2013-08-02.
  23. ^ NWSC named ‘Green’ Data Center of the Year, NCAR-UCAR AtmosNews website. Retrieved 2013-08-02.
  24. ^ The ‘Green’ Data Center, Datacenter Dynamics website. Retrieved 2013-08-02.
  25. ^ NCAR-Wyoming Supercomputing Center Fact Sheet, NCAR-UCAR AtmosNews website. Retrieved 2012-10-15.
  26. ^ NCAR and the Green Supercomputing Facility of the Future, Reuters website. Retrieved 2012-09-20.
  27. ^ First Place: NCAR-Wyoming Supercomputing Center recognized for outstanding design implementation, NCAR-UCAR AtmosNews website. Retrieved 2013-04-20.
  28. ^ Statistical Analysis of Massive Data Streams: Proceedings of a Workshop, 2004. The National Academies Press website. Retrieved 2013-06-16.
  29. ^ New data service speeds the progress of research, FY2010 NCAR Annual Report. Retrieved 2012-12-20.
  30. ^ Yellowstone Supercomputer Sports Massive Xeon E5 Array, Go Parallel website at sourceforge.net. Retrieved 2012-09-01.
  31. ^ IBM System x iDataPlex dx360 M4, IBM website. Retrieved 2013-06-20.
  32. ^ Yellowstone, NCAR Computational and Information Systems Laboratory website. Retrieved 2012-11-20.
  33. ^ System overview, NCAR Computational and Information Systems Laboratory website. Retrieved 2013-06-20.
  34. ^ Yellowstone data-intensive computing environment, FY2012 CISL Annual Report. Retrieved 2012-12-20.
  35. ^ TOP500 List – November 2012, TOP500 Supercomputer Sites. Retrieved 2012-12-01.
  36. ^ The Green500 List – November 2012, The Green500 website. Retrieved 2013-06-28.
  37. ^ Cheyenne - SGI ICE XA, Xeon E5-2697v4 18C 2.3GHz, Infiniband EDR, Top500 website. Retrieved 2017-02-28.
  38. ^ Cheyenne: NCAR’s Next-Generation Data-Centric Supercomputing Environment, CISL website. Retrieved 2017-02-28.
  39. ^ Cheyenne - SGI ICE XA, Xeon E5-2697v4 18C 2.3GHz, Infiniband EDR, Top500 website. Retrieved 2017-02-28.
  40. ^ Provide Supercomputing Resources, FY2016 CISL Annual Report. Retrieved 2017-02-28.
  41. ^ NCAR Launches Five-Petaflop Supercomputer, Top500 website. Retrieved 2017-02-28.
  42. ^ System overview, NCAR Computational and Information Systems Laboratory website. Retrieved 2013-06-20.
  43. ^ NVIDIA Tesla GPUs Power World's Fastest Supercomputer, TweakTown website. Retrieved 2013-06-25.
  44. ^ New supercomputer in Wyoming aids Antarctic Safety: More-detailed weather forecasts to guide takeoffs, landings, NCAR-UCAR AtmosNews website. Retrieved 2013-06-20.
  45. ^ Polar Programs (PLR), NSF website. Retrieved 2013-02-24.
  46. ^ The Antarctic Mesoscale Prediction System (AMPS), NCAR Mesoscale and Microscale Meteorology website. Retrieved 2012-12-20.
  47. ^ A new model: Antarctic weather forecasting system switches to latest global tool, The Antarctic Sun website. Retrieved 2013-06-25.
  48. ^ Advancing Antarctic Science with AMPS, FY2010 NCAR Annual Report. Retrieved 2012-12-20.
  49. ^ IBM introduces new Opteron superclusters for HPC, HPCwire website. Retrieved 2013-06-25.
  50. ^ Intel Xeon Phi coprocessor 5110P: Highly parallel processing to power your breakthrough innovations, Intel website. Retrieved 2013-06-25.
  51. ^ Intel Xeon Processor E5-2670 (20M Cache, 2.60 GHz, 8.00 GT/s Intel QPI), Intel website. Retrieved 2013-06-25.
  52. ^ New data service speeds the progress of research, FY2010 NCAR Annual Report. Retrieved 2012-12-20.
  53. ^ Globally Accessible Data Environment, FY2012 CISL Annual Report. Retrieved 2012-12-20.
  54. ^ CISL Research Data Archive, Research Data Archive website. Retrieved 2012-09-01.
  55. ^ Community Data Portal, Gateway to Data for the Geosciences, Community Data Portal website. Retrieved 2012-09-01.
  56. ^ ESG Gateway at the National Center for Atmospheric Research, Earth System Grid website. Retrieved 2012-09-01.
  57. ^ System overview, NCAR Computational and Information Systems Laboratory website. Retrieved 2013-06-20.
  58. ^ StorageTek T10000C Tape Drive, Oracle website. Retrieved 2013-06-25.
  59. ^ Data support for climate and weather research, FY2010 NCAR Annual Report. Retrieved 2012-12-20.
  60. ^ CISL Research Data Archive, Research Data Archive website. Retrieved 2013-04-11.
  61. ^ NWSC education and outreach, FY2012 CISL Annual Report. Retrieved 2012-12-20.
  62. ^ Visitor Center, NCAR-Wyoming Supercomputing Center website. Retrieved 2013-02-20.
  63. ^ NWSC visitor center exhibits, FY2012 CISL Annual Report. Retrieved 2012-12-20.
  64. ^ Young scientists, NCAR-Wyoming Supercomputing Center website. Retrieved 2013-02-20.
  65. ^ SIParCS Home, CISL Summer Internships in Parallel Computational Science website. Retrieved 2013-06-03.
  66. ^ Grant helps CSU professors design green supercomputers, The Fort Collins Coloradoan website. Retrieved 2013-06-26.
  67. ^ Colorado State University Engineering Professors Receive Award to Design Green Supercomputers, Colorado State University Department of Public Relations News & Information website. Retrieved 2013-06-26.
  68. ^ Reliable, Scalable, Critical: A New Traffic Optimization Solution, Lagrange Systems website. Retrieved 2013-07-01.
  69. ^ Research Spending & Results, Award Detail, Sudeep Pasricha, PI. Research.gov grants management website for the National Science Foundation. Retrieved 2013-06-25.

External links

Coordinates: 41°07′44″N 104°53′51″W