Original author(s): Kazushige Goto
Type: Linear algebra library; implementation of BLAS
In scientific computing, GotoBLAS, GotoBLAS2 and OpenBLAS are related open source implementations of the Basic Linear Algebra Subprograms (BLAS) API with many hand-crafted optimizations for specific processor types. GotoBLAS was developed by Kazushige Goto at the Texas Advanced Computing Center. As of 2003, it was used in seven of the world's ten fastest supercomputers.
GotoBLAS remains available, but development ceased with a final version optimized for Intel's Nehalem architecture (current in 2008). OpenBLAS is a successor library, developed at the Lab of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences (ISCAS).
GotoBLAS was written by Goto during a sabbatical from the Japan Patent Office in 2002. It was initially optimized for the Pentium 4 processor and immediately boosted the performance of a supercomputer based on that CPU from 1.5 TFLOPS to 2 TFLOPS. As of 2005, the library was available at no cost for noncommercial use; a later version was released as open source under the terms of the BSD license.
GotoBLAS's matrix-matrix multiplication routine, called GEMM in BLAS terms, is highly tuned for the x86 and AMD64 processor architectures by means of handcrafted assembly code. It follows the same decomposition into smaller "kernel" routines that other BLAS implementations use, but where earlier implementations streamed data from the L1 processor cache, GotoBLAS keeps its working set resident in the larger L2 cache. The kernel used for GEMM is a routine called GEBP, for "general block-times-panel multiply", which was experimentally found to be "inherently superior" to several other kernels considered in the design.
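The blocking scheme can be sketched in plain C. This is an illustrative outline of the block-times-panel structure only, not GotoBLAS's actual code: the real library packs the blocks into contiguous buffers and computes the innermost loops in hand-written assembly, and the block sizes below (`MC`, `KC`) are made-up small values rather than tuned cache parameters.

```c
#include <assert.h>

/* Illustrative block sizes; the real library derives these from cache sizes. */
enum { MC = 4, KC = 4 };

/* GEBP sketch: multiply one mc x kc block of A (intended to stay in the L2
   cache) by a kc x n panel of B, accumulating into an mc x n slice of C.
   All matrices are row-major with the given leading dimensions. */
static void gebp(int mc, int n, int kc,
                 const double *A, int lda,
                 const double *B, int ldb,
                 double *C, int ldc)
{
    for (int i = 0; i < mc; i++)
        for (int p = 0; p < kc; p++) {
            double a = A[i * lda + p];   /* one element of A reused across a row of B */
            for (int j = 0; j < n; j++)
                C[i * ldc + j] += a * B[p * ldb + j];
        }
}

/* GEMM driven by GEBP: C (m x n) += A (m x k) * B (k x n).
   The outer loops carve A into MC x KC blocks and B into KC x n panels. */
void gemm_blocked(int m, int n, int k,
                  const double *A, const double *B, double *C)
{
    for (int p = 0; p < k; p += KC) {
        int kc = (k - p < KC) ? k - p : KC;
        for (int i = 0; i < m; i += MC) {
            int mc = (m - i < MC) ? m - i : MC;
            gebp(mc, n, kc, &A[i * k + p], k, &B[p * n], n, &C[i * n], n);
        }
    }
}
```

The point of the structure is that the A block, once loaded, is reused against every column of the B panel before being evicted, which amortizes the cost of bringing it into cache.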
Several other BLAS routines are, as is customary in BLAS libraries, implemented in terms of GEMM.
OpenBLAS is a continuation of GotoBLAS development. It adds optimized implementations of linear algebra kernels for several processor architectures, including Intel Sandy Bridge and Loongson. It claims to achieve performance comparable to the Intel MKL.
- John Markoff (28 November 2005). "Writing the fastest code, by hand, for fun". New York Times.
- "GotoBlas2". Retrieved 28 August 2013.
- Goto, Kazushige; van de Geijn, Robert A. (2008). "Anatomy of High-Performance Matrix Multiplication". ACM Transactions on Mathematical Software 34 (3): Article 12, 25 pages. doi:10.1145/1356052.1356053.
- Goto, Kazushige; van de Geijn, Robert A. (2008). "High-performance implementation of the level-3 BLAS". ACM Transactions on Mathematical Software 35 (1).
- Wang Qian; Zhang Xianyi; Zhang Yunquan; Qing Yi (2013). "AUGEM: Automatically Generate High Performance Dense Linear Algebra Kernels on x86 CPUs". Int'l Conf. on High Performance Computing, Networking, Storage and Analysis.
- Zhang Xianyi; Wang Qian; Zhang Yunquan (2012). "Model-driven Level 3 BLAS Performance Optimization on Loongson 3A Processor". IEEE 18th Int'l Conf. on Parallel and Distributed Systems (ICPADS).