Jack Dongarra

From Wikipedia, the free encyclopedia
Jack Dongarra
Born July 18, 1950
Nationality American
Alma mater University of New Mexico
Known for EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK,[1][2] Netlib, PVM, MPI,[3] NetSolve,[4] Top500, ATLAS,[5] and PAPI.[6]
Awards Fellow of the Association for Computing Machinery (2001)[7]
IEEE Computer Society Charles Babbage Award (2011)
Website www.netlib.org/utk/people/JackDongarra
Scientific career
Fields Computer Science
Computational science
Parallel computing[8]
Institutions University of Tennessee
University of New Mexico
Argonne National Laboratory
Oak Ridge National Laboratory
University of Manchester
Texas A&M University
Thesis Improving the Accuracy of Computed Matrix Eigenvalues (1980)
Doctoral advisor Cleve Moler[9]
Doctoral students Thara Angskun, Henri Casanova, Zizhong Chen, Camille Coti, Erika Fuentes, Youngbae Kim, Lorie Liebrock, Piotr Luszczek, Antoine Petitet, Jelena Pjesivac-Grbovic, Zhiao Shi, Mohammad Sidani, Fengguang Song, Sathish Vadhiyar, James White, Haihang You[9]

Jack J. Dongarra (born July 18, 1950) is an American University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department[10] at the University of Tennessee. He is a Distinguished Research Staff Member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory and an Adjunct Professor in the Computer Science Department at Rice University. He holds the Turing Fellowship in the schools of Computer Science and Mathematics at the University of Manchester and is a Faculty Fellow at Texas A&M University's Hagler Institute for Advanced Study.[11] Dongarra is the founding director of the Innovative Computing Laboratory.[8][12][13][14][15][16]


Dongarra received a Bachelor of Science degree in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Doctor of Philosophy in Applied Mathematics from the University of New Mexico in 1980 under the supervision of Cleve Moler.[9] He then worked at Argonne National Laboratory until 1989, rising to senior scientist.


He specializes in numerical algorithms for linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high-quality mathematical software. He has contributed to the design and implementation of the following open-source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK,[1][2] Netlib, PVM, MPI,[3] NetSolve,[4] TOP500, ATLAS,[5] HPCG[17][18] and PAPI.[6] With Eric Grosse, he pioneered the open-source distribution of numerical source code via email through Netlib. He has published approximately 300 articles, papers, reports and technical memoranda, and he is a coauthor of several books. He received the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high-performance computers using innovative approaches. In 2008 he was the first recipient of the IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he received the IEEE Computer Society Charles Babbage Award; and in 2013 he received the ACM/IEEE Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high-performance computing. He is a Fellow of the AAAS, ACM, SIAM, and IEEE, a foreign member of the Russian Academy of Sciences, and a member of the US National Academy of Engineering.
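Among the interfaces listed above, the BLAS define a small set of standard building blocks for dense linear algebra; the Level-3 routine dgemm, for example, computes C ← αAB + βC. A minimal pure-Python sketch of that semantics (for illustration only; the real BLAS are heavily optimized Fortran/C libraries):

```python
def dgemm(alpha, A, B, beta, C):
    """Sketch of the Level-3 BLAS operation: C <- alpha*A*B + beta*C, in place.

    A is m x k, B is k x n, C is m x n; matrices are lists of row lists.
    """
    m, k = len(A), len(A[0])
    n = len(B[0])
    for i in range(m):
        for j in range(n):
            # Inner product of row i of A with column j of B.
            acc = sum(A[i][p] * B[p][j] for p in range(k))
            C[i][j] = alpha * acc + beta * C[i][j]
    return C

A = [[1.0, 2.0],
     [3.0, 4.0]]
B = [[5.0, 6.0],
     [7.0, 8.0]]
C = [[0.0, 0.0],
     [0.0, 0.0]]
dgemm(1.0, A, B, 0.0, C)  # with alpha=1, beta=0 this is plain C = A*B
```

The alpha/beta scaling and the in-place update of C mirror the actual dgemm calling convention, which lets callers fuse a multiply and an accumulate into one pass over the data.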


  1. ^ a b Choi, J.; Dongarra, J. J.; Pozo, R.; Walker, D. W. (1992). "ScaLAPACK: a scalable linear algebra library for distributed memory concurrent computers". Proceedings of the Fourth Symposium on the Frontiers of Massively Parallel Computation. p. 120. ISBN 0-8186-2772-7. doi:10.1109/FMPC.1992.234898. 
  2. ^ a b "ScaLAPACK — Scalable Linear Algebra PACKage". Netlib.org. Retrieved 2012-07-14. 
  3. ^ a b Gabriel, E.; Fagg, G. E.; Bosilca, G.; Angskun, T.; Dongarra, J. J.; Squyres, J. M.; Sahay, V.; Kambadur, P.; Barrett, B.; Lumsdaine, A.; Castain, R. H.; Daniel, D. J.; Graham, R. L.; Woodall, T. S. (2004). "Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation". Recent Advances in Parallel Virtual Machine and Message Passing Interface. Lecture Notes in Computer Science. 3241. p. 97. ISBN 978-3-540-23163-9. doi:10.1007/978-3-540-30218-6_19. 
  4. ^ a b "NetSolve". Icl.cs.utk.edu. Retrieved 2012-07-14. 
  5. ^ a b Clint Whaley, R.; Petitet, A.; Dongarra, J. J. (2001). "Automated empirical optimizations of software and the ATLAS project". Parallel Computing. 27: 3. doi:10.1016/S0167-8191(00)00087-9. 
  6. ^ a b "PAPI". Icl.cs.utk.edu. Retrieved 2012-07-14. 
  7. ^ http://awards.acm.org/award_winners/dongarra_3406337.cfm
  8. ^ a b Jack Dongarra's publications indexed by Google Scholar
  9. ^ a b c Jack Dongarra at the Mathematics Genealogy Project
  10. ^ eecs.utk.edu
  11. ^ "Dr. Jack Dongarra — Hagler Institute for Advanced Study at Texas A&M University". Retrieved 2017-09-20. 
  12. ^ "Innovative Computing Laboratory – Academic Research in Enabling Technology and High Performance Computing". Icl.cs.utk.edu. Retrieved 2012-07-14. 
  13. ^ Jack J. Dongarra at DBLP Bibliography Server
  14. ^ List of publications from Microsoft Academic Search
  15. ^ Jack Dongarra's publications indexed by the Scopus bibliographic database, a service provided by Elsevier. (subscription required)
  16. ^ Jack Dongarra author profile page at the ACM Digital Library
  17. ^ Hemsoth, Nicole (June 26, 2014). "New HPC Benchmark Delivers Promising Results". HPCWire. Retrieved 2014-09-08. 
  18. ^ Dongarra, Jack; Heroux, Michael (June 2013). "Toward a New Metric for Ranking High Performance Computing Systems" (PDF). Sandia National Laboratory. Retrieved 2016-07-04. 

External links