Texas Advanced Computing Center

The Texas Advanced Computing Center (TACC) at the University of Texas at Austin, United States, is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the United States. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis and storage systems, software, research and development, and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable the computational research activities of faculty, staff, and students at UT Austin. TACC also provides consulting, technical documentation, and training to support researchers who use these resources. TACC staff members conduct research and development in applications and algorithms, computing systems design and architecture, and programming tools and environments.

Founded in 2001, TACC is one of the centers of computational excellence in the United States. Through the National Science Foundation (NSF) XSEDE project, TACC’s resources and services are made available to the national academic research community. TACC is located on UT's J.J. Pickle Research Campus.

TACC collaborators include researchers in other UT Austin departments and centers, at Texas universities in the High Performance Computing Across Texas Consortium,[1] and at other U.S. universities and government laboratories.


TACC research and development activities are supported by several federal programs, including:

NSF XSEDE (formerly TeraGrid) Program

Funded by the National Science Foundation (NSF), the Extreme Science and Engineering Discovery Environment (XSEDE) is a virtual system that scientists can use to interactively share computing resources, data, and expertise. XSEDE is the most powerful and robust collection of integrated advanced digital resources and services in the world. TACC is one of the leading partners in the XSEDE project, contributing more than one petaflop of computing capability and more than 30 petabytes of online and archival data storage. As part of the project, TACC provides access to Ranger, Lonestar, Longhorn, Spur, and Ranch through quarterly XSEDE allocations. TACC staff members support XSEDE researchers nationwide and perform research and development to make XSEDE more effective and impactful. The XSEDE partnership is led by the University of Illinois's National Center for Supercomputing Applications and also includes Carnegie Mellon University/University of Pittsburgh, the University of Texas at Austin, the University of Tennessee, Knoxville, the University of Virginia, the Shodor Education Foundation, the Southeastern Universities Research Association, the University of Chicago, the University of California, San Diego, Indiana University, the Jülich Supercomputing Centre, Purdue University, Cornell University, Ohio State University, the University of California, Berkeley, Rice University, and the National Center for Atmospheric Research.

University of Texas Research Cyberinfrastructure (UTRC) Project

The UT System Research Cyberinfrastructure (UTRC) project is an initiative that gives researchers at all 15 UT System institutions access to advanced computing research infrastructure. As part of the UTRC, UT System researchers have unique access to TACC resources, including Lonestar, a national XSEDE resource, and Corral, a high-performance storage system for all types of digital data.

iPlant Collaborative

The iPlant Collaborative is a five-year, $50 million NSF project that applies new computational science and cyberinfrastructure solutions to challenges in the plant sciences. iPlant integrates high-performance petascale storage, federated identity management, on-demand virtualization, and distributed computing across XSEDE sites behind a set of REST APIs. These APIs underpin extensible, rich web clients through which the plant science community performs sophisticated bioinformatics analyses across a variety of conceptual domains.
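As a rough sketch of how such an API-driven architecture is typically used (the endpoint URL, token, application id, and JSON fields below are hypothetical illustrations, not iPlant's documented interface), a client might submit an analysis job with a single authenticated REST call:

    import requests

    # Hypothetical job-submission endpoint; the URL, token, application id,
    # and field names are illustrative only, not iPlant's documented API.
    API_BASE = "https://api.example.org/jobs"
    TOKEN = "example-oauth-token"

    job = {
        "appId": "blast-example",                      # a registered analysis app
        "inputs": {"query": "/data/sequences.fasta"},  # file on shared storage
        "archive": True,                               # save results back to storage
    }

    resp = requests.post(API_BASE, json=job,
                         headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    print(resp.json())  # job metadata, including an id to poll for status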

STAR Partners Program

The Science and Technology Affiliates for Research (STAR) program offers companies opportunities to increase their effectiveness by using TACC's computing technologies. Current STAR partners include BP, Chevron, Dell, Green Revolution Cooling, Intel, and Technip.

Supercomputer Clusters


Stampede

Stampede is one of the most powerful machines in the world for open science research. Funded by National Science Foundation grant ACI-1134872 and built in partnership with Intel, Dell, and Mellanox, Stampede entered production on January 7, 2013. The system comprises 6,400 nodes with 102,400 CPU cores, 205 TB of total memory, 14 PB of total storage, and 1.6 PB of local storage. The bulk of the cluster consists of 160 racks of primary compute nodes, each node with dual 8-core Xeon E5-2680 processors, a Xeon Phi coprocessor, and 32 GB of RAM.[2] The cluster also contains 16 nodes with 32 cores and 1 TB of RAM each, 128 "standard" compute nodes with Nvidia Kepler K20 GPUs, and additional nodes for I/O (to a Lustre filesystem), login, and cluster management.[3] Stampede has a peak performance of 9.6 quadrillion floating point operations per second (9.6 petaflops).
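The aggregate figures are consistent with the per-node specification; a quick sanity check, using only the numbers quoted above:

    # Sanity-check Stampede's aggregate figures from its per-node specification.
    nodes = 6400
    cores_per_node = 2 * 8        # dual 8-core Xeon E5-2680 per node
    mem_per_node_gb = 32

    print(nodes * cores_per_node)          # 102400 CPU cores
    print(nodes * mem_per_node_gb / 1000)  # 204.8 TB, i.e. ~205 TB total memory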

A pre-production configuration of Stampede[4] was listed as the 7th fastest supercomputer on the November 2012 Top500 list with a delivered performance of 2660 TFlops. Because the system was still being assembled, the submitted benchmark was run using 1875 nodes with Xeon Phi coprocessors and 3900 nodes without Xeon Phi coprocessors.[5] For the June 2013 Top500 list, the benchmark was re-run using 6006 nodes (all with Xeon Phi coprocessors), delivering 5168 TFlops and moving the system up to 6th place. The benchmark was not re-run for the November 2013 Top500 list and Stampede dropped back to the 7th position.

In its first year of production, Stampede completed 2,196,848 jobs submitted by 3,400 researchers, performing more than 75,000 years of scientific computation.


Maverick

Maverick, TACC's latest addition to its suite of advanced computing systems, combines capacity for interactive advanced visualization and large-scale data analytics with traditional high performance computing. Recent exponential growth in the size and quantity of digital datasets necessitates systems such as Maverick that are capable of fast data movement and advanced statistical analysis. Maverick debuts the NVIDIA Tesla K40 GPU for remote visualization and GPU computing to the national community.


Maverick's configuration includes:

  • 132 NVIDIA Tesla K40 GPUs
  • 132 nodes with 1/4 TB (256 GB) of memory each
  • Connection to a 20 PB file system
  • Mellanox FDR InfiniBand interconnect
  • TACC-developed remote visualization software: ScoreVIS, DisplayCluster, GLuRay, and more
  • Visualization software stack: ParaView, VisIt, EnSight, Amira, and more
  • Comprehensive software, including MATLAB, Parallel R, and more


Lonestar

Lonestar is the name of a series of HPC cluster systems at TACC that serve as a powerful, multi-use cyberinfrastructure resource for high performance computing and remote visualization.

The first Lonestar system was built by Dell and integrated by Cray, using Dell PowerEdge 1750 servers and Myrinet interconnects, with a peak performance of 3,672 gigaflops. An upgrade in 2004 increased the number of processors to 1,024 and the peak performance to 6,338 gigaflops. The second iteration (Lonestar 2), deployed in 2006, used Dell PowerEdge 1855 servers and InfiniBand (1,300 processors, 2,000 GB of memory, and a peak performance of 8,320 gigaflops). Later that year, the cluster's third iteration was built from Dell PowerEdge 1955 servers, comprising 5,200 processors and 10.4 TB of memory. Lonestar 3 entered the Top500 list in November 2006 as the 12th fastest supercomputer, with a 55.5 TFlops peak.[6]

In April 2011, TACC announced another upgrade of the Lonestar cluster. The $12 million Lonestar 4 cluster replaced its predecessor with 1,888 Dell PowerEdge M610 blade servers, each with two six-core Intel Xeon 5600 processors (22,656 total cores). The system storage includes a 1,000 TB parallel (SCRATCH) Lustre file system and 276 TB of local compute-node disk space (146 GB/node). Lonestar also provides access to five large-memory (1 TB) nodes and eight nodes containing two NVIDIA GPUs each, giving users access to high-throughput computing and remote visualization capabilities, respectively. Lonestar 4[7] entered the Top500 list in June 2011 as the 28th fastest supercomputer, with a 301.8 TFlops peak.
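Here too the totals follow from the per-blade numbers; a short check using only the figures quoted above:

    # Sanity-check Lonestar 4's totals from the per-blade specification.
    blades = 1888
    cores_per_blade = 2 * 6       # two six-core Xeon 5600 processors per blade
    local_disk_gb = 146           # local disk per node

    print(blades * cores_per_blade)       # 22656 total cores
    print(blades * local_disk_gb / 1000)  # 275.6 TB, i.e. ~276 TB local disk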

The Top500 rankings of various iterations of the Lonestar cluster are listed in TACC's submissions to the Top500.[8]


Ranch

TACC's long-term mass storage solution is an Oracle StorageTek Modular Library System named Ranch. Ranch uses Oracle's Sun Storage Archive Manager Filesystem (SAM-FS) to migrate files to and from a tape archival system with a current offline storage capacity of 40 PB. Ranch's disk cache is built on Oracle Sun ST6540 and DataDirect Networks 9550 disk arrays containing approximately 110 TB of usable spinning disk storage. These disk arrays are controlled by an Oracle Sun x4600 SAM-FS metadata server with 16 CPUs and 32 GB of RAM.


Corral

Deployed in April 2009 by the Texas Advanced Computing Center to support data-centric science at the University of Texas, Corral consists of 6 petabytes of online disk and a number of servers providing high-performance storage for all types of digital data. It supports MySQL and PostgreSQL databases, a high-performance parallel file system, and web-based and other network protocols for storing and retrieving data from sophisticated instruments, HPC simulations, and visualization laboratories.
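As a minimal sketch of how a project might store instrument data in one of Corral's hosted databases (the hostname, database, table, and credentials below are hypothetical; actual access is provisioned per project by TACC):

    import psycopg2  # standard PostgreSQL client library

    # Hypothetical connection details for a project database hosted on Corral.
    conn = psycopg2.connect(
        host="corral-db.example.edu",
        dbname="instrument_data",
        user="researcher",
        password="not-a-real-password",
    )

    with conn, conn.cursor() as cur:
        # Record one instrument reading; the table and columns are illustrative.
        cur.execute(
            "INSERT INTO readings (sensor_id, value) VALUES (%s, %s)",
            ("spectrometer-01", 42.0),
        )
    # The `with conn` block commits the transaction on success.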

Visualization Resources

To support research performed on its high performance computing systems, TACC provides advanced visualization resources and consulting services, accessible both in person and remotely. These resources encompass both hardware and software and include Stallion, among the highest resolution tiled displays in the world; Longhorn, the largest hardware-accelerated, remote, interactive visualization cluster; and the Longhorn Visualization Portal, an internet gateway to the Longhorn cluster and an easy-to-use interface for scientific visualization.

Visualization Laboratory

The TACC Visualization Laboratory (Vislab), located in POB 2.404a, is open to all UT Austin faculty, students, and staff, as well as UT System users. The Vislab includes:

  • 'Stallion', one of the highest resolution tiled displays in the world (see below)
  • 'Lasso', a 12.4 megapixel collaborative multi-touch display
  • 'Bronco', a Sony 4K SRX-S105 overhead projector and flat screen area that gives users a 20 ft. x 11 ft., 4096 x 2160 resolution display, driven by a high-end Dell workstation and ideal for ultra-high-resolution visualizations and presentations
  • 'Horseshoes', four high-end Dell Precision systems equipped with Intel multi-core processors and NVIDIA graphics technology for graphics production, visualization, and video editing
  • 'Saddle', a conference and small meeting room equipped with commercial audio and video capabilities for full HD videoconferencing
  • 'Mustang' and 'Silver', stereoscopic visualization displays that use Samsung's 240 Hz stereo output modes with 55" LED display panels to render depth through the parallax generated by active and passive stereoscopic technologies
  • Mellanox FDR InfiniBand networking connecting these systems at high speeds

The Vislab also serves as a research hub for human-computer interaction, tiled display software development, and visualization consulting.


Stallion

Stallion is a 328 megapixel tiled display system. With over 150 times the resolution of a standard HD display, it is among the highest pixel count displays in the world. The cluster lets users display high-resolution visualizations on a large 16x5 tiled wall of 30-inch Dell monitors, allowing visualizations to be explored at an extremely high level of detail and quality compared with a typical moderate pixel count projector. The cluster gives users access to over 82 GB of graphics memory and 240 processing cores, enabling the processing of massive datasets and the interactive visualization of substantial geometries. A 36 TB shared file system is available for storing terascale datasets.
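The headline figures follow directly from the tile layout, assuming each 30-inch Dell panel runs at its native 2560x1600 resolution (an assumption; the per-panel resolution is not stated above):

    # Derive Stallion's aggregate resolution from its tile layout.
    tiles = 16 * 5                   # 16x5 wall of 30-inch monitors
    pixels_per_tile = 2560 * 1600    # assumed native panel resolution

    total_pixels = tiles * pixels_per_tile
    print(total_pixels / 1e6)            # 327.68 -> ~328 megapixels
    print(total_pixels / (1920 * 1080))  # ~158x a standard 1920x1080 HD display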


External links