Texas Advanced Computing Center
The Texas Advanced Computing Center (TACC) at the University of Texas at Austin, United States, is a research center for advanced computational science, engineering and technology. TACC is located on UT's J.J. Pickle Research Campus.
TACC provides comprehensive advanced computing resources and support services to researchers in Texas and across the United States. TACC also conducts research and development in applications and algorithms, computing systems design and architecture, and programming tools and environments.
TACC deploys and operates advanced computational infrastructure to enable computational research activities of faculty, staff, and students of UT Austin. TACC also provides consulting, technical documentation, and training to support users of these resources. Through the National Science Foundation (NSF) TeraGrid project, these resources and services are also made available to the national academic research community.
TACC collaborators include researchers in other UT Austin departments and centers, at Texas universities in the High Performance Computing Across Texas Consortium, and at other U.S. universities and government laboratories.
TACC research and development activities are supported by several federal programs, including:
- the NSF TeraGrid program
- the Computational Chemistry Grid project
- the NSF Information Technology Research program
- the NSF National Middleware Initiative Testbed
- the Department of Defense High Performance Computing Modernization Office User Productivity Enhancement, Technology Transfer and Training program
- the Department of Energy Scientific Discovery through Advanced Computing (SciDAC) program
- the National Aeronautics and Space Administration Information Power Grid program
TACC's original Lonestar cluster, built by Dell and Cray, used PowerEdge 1750 servers with a Myrinet interconnect and reached a peak performance of 3,672 gigaflops. An upgrade in 2004 increased the processor count to 1,024 and the peak rate to 6,338 gigaflops. The second iteration, Lonestar 2, was deployed in 2006 with Dell PowerEdge 1855 servers and an InfiniBand interconnect, comprising 1,300 processors with 2,000 gigabytes of memory and a peak performance of 8,320 gigaflops. Later that year, the cluster's third iteration was built from Dell PowerEdge 1955 servers, with 5,200 processors and 10.4 TB of memory. Lonestar 3 entered the Top500 list in November 2006 as the 12th-fastest supercomputer, with a peak of 55.5 TFlops.
In April 2011, TACC announced another upgrade of the Lonestar cluster. The $12 million cluster replaced its predecessor with 1,888 Dell PowerEdge M610 blade servers, each with two six-core Intel Xeon 5600 processors, for 22,656 cores in total. The cluster contained 44.3 TB of memory and 1.2 PB of storage. Lonestar 4 entered the Top500 list in June 2011 as the 28th-fastest supercomputer, with a peak of 301.8 TFlops.
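The Lonestar 4 figures quoted above are internally consistent, as a quick arithmetic sketch shows. The 3.33 GHz clock and 4 double-precision flops per cycle per core are assumptions (not stated in the text) chosen to be consistent with the quoted peak; only the node, socket, and core counts come from the text.

```python
# Back-of-the-envelope check of the Lonestar 4 figures.
nodes = 1888
sockets_per_node = 2       # two Xeon 5600 processors per blade
cores_per_socket = 6       # six-core processors

cores = nodes * sockets_per_node * cores_per_socket
print(cores)  # 22656, matching the quoted total

ghz = 3.33                 # assumed Xeon 5600 clock (not in the text)
flops_per_cycle = 4        # assumed SSE double-precision rate per core
peak_tflops = cores * ghz * flops_per_cycle / 1000
print(round(peak_tflops, 1))  # 301.8, matching the quoted peak in TFlops
```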
The Top500 rankings of each Lonestar iteration are listed in TACC's Top500 submissions.
In September 2006, the NSF granted TACC a $59 million award to purchase and deploy a new supercomputer. The system, dubbed "Ranger" and built in partnership with Sun Microsystems, went into production on February 4, 2008.
Ranger was the first Sun Constellation System in production. It included 62,976 processor cores (15,744 quad-core Opteron processors in 3,936 quad-socket Sun Blade server nodes) running the CentOS Linux distribution, and originally had a peak performance of 504 TFlops, with 123 TB of memory and 1.73 PB of storage. It was upgraded to faster AMD Opteron processors in June 2008, raising peak performance to 580 TFlops. As of March 2013, Ranger had been decommissioned, according to TACC's website.
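Ranger's core count and peak figures can likewise be cross-checked. The 2.0 GHz and 2.3 GHz clocks and 4 flops per cycle per core are assumptions (the text gives no clock speeds) chosen to be consistent with the quoted 504 and 580 TFlops peaks; only the node, socket, and core counts come from the text.

```python
# Sanity check of the Ranger figures quoted above.
nodes = 3936
sockets_per_node = 4       # quad-socket Sun Blade nodes
cores_per_socket = 4       # quad-core Opterons

cores = nodes * sockets_per_node * cores_per_socket
print(cores)  # 62976, matching the quoted core count

flops_per_cycle = 4        # assumed double-precision rate per core
for ghz in (2.0, 2.3):     # assumed pre- and post-upgrade clocks
    peak_tflops = cores * ghz * flops_per_cycle / 1000
    print(round(peak_tflops, 1))  # 503.8 and 579.4, close to the
                                  # quoted 504 and 580 TFlops
```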
On January 7, 2013, TACC's next cluster, "Stampede", went into production. Stampede comprised 6,400 nodes with 102,400 CPU cores, 205 TB of total memory, and 14 PB of shared plus 1.6 PB of local storage. The bulk of the cluster consisted of 160 racks of primary compute nodes, each node with two eight-core Xeon E5-2680 processors, a Xeon Phi coprocessor, and 32 GB of RAM. The cluster also contained 16 nodes with 32 cores and 1 TB of RAM each, 128 "standard" compute nodes with Nvidia Kepler K20 GPUs, and additional nodes for I/O (to a Lustre filesystem), login, and cluster management.
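The headline Stampede totals follow from the per-node figures. Treating all 6,400 nodes as 32 GB primary compute nodes is a simplification, since the text also describes large-memory and GPU nodes, so the memory total is only approximate.

```python
# Rough cross-check of the Stampede totals quoted above, treating all
# 6,400 nodes as primary compute nodes (two 8-core Xeon E5-2680s, 32 GB);
# this ignores the large-memory and GPU nodes.
nodes = 6400
cores_per_node = 2 * 8     # dual 8-core Xeon E5-2680
gb_per_node = 32

print(nodes * cores_per_node)      # 102400 CPU cores, as quoted
print(nodes * gb_per_node / 1000)  # 204.8 TB, close to the quoted 205 TB
```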
Stampede was the 7th-fastest supercomputer on the November 2012 Top500 list, which reported the system as having roughly twice as many cores (204,900), likely due to hyperthreaded cores being counted. The November 2012 ranking also listed Stampede with a peak performance of 3,959 TFlops, although the cluster was not operational until January 2013; a November 2012 Tom's Hardware article notes that the Xeon Phi coprocessors were still being installed, which may explain the discrepancy.
- "Sun Constellation Linux Cluster". Texas Advanced Computing Center. Retrieved 2007-06-09.
- Schwartz, Jonathan (2008-03-03). "The World's Largest Supercomputing Cloud". Retrieved 2007-06-09.
- "Ranger Processor Upgrade and Extended Maintenance". Texas Advanced Computing Center. 2008-06-11. Retrieved 2008-06-26.