HPX

From Wikipedia, the free encyclopedia
High Performance ParalleX
Developer(s): The STE||AR Group, LSU Center for Computation and Technology
Initial release: 2008
Stable release: 1.2.1 / 18 February 2019
Repository: github.com/STEllAR-GROUP/hpx
Written in: C++
Operating system: Microsoft Windows, Linux, macOS
Type: Runtime system
License: Boost Software License[1]
Website: stellar.cct.lsu.edu

High Performance ParalleX (HPX) is an environment for high-performance computing. It is currently under active development by the STE||AR group at Louisiana State University.[2] Focused on scientific computing, it provides an alternative execution model to conventional approaches such as MPI. HPX aims to overcome the challenges MPI faces on increasingly large supercomputers by using asynchronous communication between nodes and lightweight control objects instead of global barriers, allowing application developers to exploit fine-grained parallelism.[3]

HPX is developed in idiomatic C++ and released as open source under the Boost Software License, which allows usage in commercial applications.

Applications

Though designed as a general-purpose environment for high-performance computing, HPX has primarily been used in:

  • Astrophysics simulations, including the N-body problem[4], neutron star evolution[5], and the merging of stars[6]
    • Octo-Tiger[7][8], an astrophysics application simulating the evolution of star systems
  • LibGeoDecomp[9][10][11], a library for geometric decomposition codes
  • Simulation of cracks and fractures using peridynamics
  • Phylanx[12][13][14], a library for distributed array processing

References

  1. ^ "License", Boost Software License - Version 1.0, boost.org, retrieved 2012-07-30
  2. ^ "About the STE||AR Group". Retrieved 17 April 2019.
  3. ^ Kaiser, Hartmut; Brodowicz, Maciek; Sterling, Thomas (2009). "ParalleX An Advanced Parallel Execution Model for Scaling-Impaired Applications". 2009 International Conference on Parallel Processing Workshops. pp. 394–401. doi:10.1109/icppw.2009.14. ISBN 978-1-4244-4923-1.
  4. ^ C. Dekate, M. Anderson, M. Brodowicz, H. Kaiser, B. Adelstein-Lelbach and T. Sterling (2012). "Improving the Scalability of Parallel N-body Applications with an Event-driven Constraint-based Execution Model". International Journal of High Performance Computing Applications. 26 (3): 319–332. arXiv:1109.5190. doi:10.1177/1094342012440585.
  5. ^ M. Anderson, T. Sterling, H. Kaiser and D. Neilsen (2011). "Neutron Star Evolutions using Tabulated Equations of State with a New Execution Model" (PDF). American Physical Society April 2012 Meeting.
  6. ^ D. Pfander, G. Daiß, D. Marcello, H. Kaiser, D. Pflüger (2018). "Accelerating Octo-Tiger: Stellar Mergers on Intel Knights Landing with HPX". DHPCC++ Conference 2018 Hosted by IWOCL. doi:10.1145/3204919.3204938.
  7. ^ STEllAR-GROUP/octotiger Repository on GitHub, The STE||AR Group, 2019-04-17, retrieved 2019-04-17
  8. ^ Heller, Thomas; Lelbach, Bryce Adelstein; Huck, Kevin A; Biddiscombe, John; Grubel, Patricia; Koniges, Alice E; Kretz, Matthias; Marcello, Dominic; Pfander, David (2019-02-14). "Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars". The International Journal of High Performance Computing Applications: 109434201881974. doi:10.1177/1094342018819744. ISSN 1094-3420.
  9. ^ "LibGeoDecomp - Petascale Computer Simulations". www.libgeodecomp.org. Retrieved 2019-04-17.
  10. ^ A library for C++/Fortran computer simulations (e.g. stencil codes, mesh-free, unstructured grids, n-body & particle methods). Scales from smartphones to petascale supercomputers (e.g. Titan, T.., The STE||AR Group, 2019-04-06, retrieved 2019-04-17
  11. ^ A. Schäfer, D. Fey (2008). "LibGeoDecomp: A Grid-Enabled Library for Geometric Decomposition Codes". Proceedings of the 15th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface. Lecture Notes in Computer Science. 5205: 285–294. doi:10.1007/978-3-540-87475-1_39. ISBN 978-3-540-87474-4.
  12. ^ "Phylanx – A Distributed Array Toolkit". Retrieved 2019-04-17.
  13. ^ An Asynchronous Distributed C++ Array Processing Toolkit: STEllAR-GROUP/phylanx, The STE||AR Group, 2019-04-16, retrieved 2019-04-17
  14. ^ Tohid, R.; Wagle, Bibek; Shirzad, Shahrzad; Diehl, Patrick; Serio, Adrian; Kheirkhahan, Alireza; Amini, Parsa; Williams, Katy; Isaacs, Kate (2018). "Asynchronous Execution of Python Code on Task-Based Runtime Systems". 2018 IEEE/ACM 4th International Workshop on Extreme Scale Programming Models and Middleware (ESPM2). Dallas, TX, USA: IEEE: 37–45. doi:10.1109/ESPM2.2018.00009. ISBN 9781728101781.
