OpenACC

From Wikipedia, the free encyclopedia
Stable release 2.0
Written in C, C++, and Fortran
Operating system Cross-platform
Platform Cross-platform
Type API
Website http://www.openacc.org/

OpenACC (for Open Accelerators) is a programming standard for parallel computing developed by Cray, CAPS, Nvidia and PGI. The standard is designed to simplify parallel programming of heterogeneous CPU/GPU systems.[1]

As in OpenMP, the programmer can annotate C, C++ and Fortran source code to identify the regions that should be accelerated, using compiler directives and additional functions.[2] Unlike OpenMP before version 4.0, OpenACC code can be started not only on the CPU but also on the GPU; OpenMP 4.0 introduced comparable support for accelerators.

OpenACC members have worked within the OpenMP standards group to create a common specification that extends OpenMP to support accelerators in a future release of OpenMP.[3][4] These efforts produced a technical report[5] released for comment and discussion in time for the annual Supercomputing Conference (November 2012, Salt Lake City), with the aim of addressing support for non-Nvidia accelerators with input from hardware vendors that participate in OpenMP.[6]

At ISC’12, OpenACC was demonstrated running on Nvidia, AMD and Intel accelerators, though without performance data.[7]

On November 12, 2012, at the SC12 conference, a draft of the OpenACC version 2.0 specification was presented.[8] Newly proposed capabilities include additional controls over data movement (such as better handling of unstructured data and improved support for non-contiguous memory), as well as support for explicit function calls and separate compilation (allowing the creation and reuse of libraries of accelerated code).

Compiler support

Support for OpenACC is available in commercial compilers from PGI (from version 12.6), Cray, and CAPS.[7][9][10] OpenUH[11] is an Open64-based open-source OpenACC compiler developed by the HPCTools group at the University of Houston. Another open-source compiler, accULL, is developed by the University of La Laguna (C language only).[12] Support for OpenACC is also being added to GNU GCC.[13]

Usage

In a way similar to OpenMP 3.x on homogeneous systems, or the earlier OpenHMPP,[14] the primary mode of programming in OpenACC is through directives.[15] The specifications also include a runtime library that defines several support functions. To use them, the programmer should include "openacc.h" in C or "openacc_lib.h" in Fortran,[16] and then call the acc_init() function.
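
As a minimal sketch in C (not taken from the specification, and assuming a system with a default accelerator device), initialising and shutting down the runtime might look like this:

 #include <openacc.h>
 
 int main(void)
 {
     /* Initialise the runtime for the default accelerator device type. */
     acc_init(acc_device_default);
 
     /* ... accelerated regions would go here ... */
 
     /* Release the device and the runtime's resources. */
     acc_shutdown(acc_device_default);
     return 0;
 }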

Directives

OpenACC defines an extensive list of pragmas (directives),[2] for example:

 #pragma acc parallel
 #pragma acc kernels

Both are used to define parallel computation kernels to be executed on the accelerator, though with distinct semantics.[17][18]
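
For illustration, a sketch of both forms applied to a simple element-wise loop (the arrays a, b, c and the length n are assumptions of this example, declared and initialised elsewhere):

 /* parallel: the programmer asserts that the loop is safe to parallelise. */
 #pragma acc parallel loop
 for (int i = 0; i < n; ++i)
     c[i] = a[i] + b[i];
 
 /* kernels: the compiler analyses the region and decides how to
    parallelise the loops it contains. */
 #pragma acc kernels
 for (int i = 0; i < n; ++i)
     c[i] = a[i] + b[i];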

 #pragma acc data

This is the main directive for defining data and copying it to and from the accelerator.
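
For example, in the following sketch (with assumed arrays a, b, c of length n), copyin transfers the inputs to the accelerator on entry to the region and copyout transfers the result back on exit:

 #pragma acc data copyin(a[0:n], b[0:n]) copyout(c[0:n])
 {
     #pragma acc parallel loop
     for (int i = 0; i < n; ++i)
         c[i] = a[i] + b[i];    /* runs on the accelerator */
 }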

 #pragma acc loop

This directive is used to define the type of parallelism in a parallel or kernels region.
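
A sketch of mapping nested loops to different levels of parallelism follows; the dimensions n and m and the arrays a and b are assumptions of this example, taken to be already present on the device (for instance via an enclosing data region):

 #pragma acc parallel loop gang
 for (int i = 0; i < n; ++i) {
     /* The inner loop is spread across vector lanes within each gang. */
     #pragma acc loop vector
     for (int j = 0; j < m; ++j)
         b[i * m + j] = 2.0 * a[i * m + j];
 }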

 #pragma acc cache
 #pragma acc update
 #pragma acc declare
 #pragma acc wait
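
Of these, update synchronises data between host and device inside a data region, and wait blocks until asynchronously launched device work has finished. A sketch combining the two (the array a, its length n and the async queue number 1 are assumptions of this example):

 #pragma acc data copy(a[0:n])
 {
     /* Launch the loop asynchronously on queue 1. */
     #pragma acc parallel loop async(1)
     for (int i = 0; i < n; ++i)
         a[i] *= 2.0;
 
     /* Block until the asynchronous work on queue 1 has completed. */
     #pragma acc wait(1)
 
     /* Refresh the host copy of a from the device. */
     #pragma acc update host(a[0:n])
 }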

Runtime API

A number of runtime API functions are also defined: acc_get_num_devices(), acc_set_device_type(), acc_get_device_type(), acc_set_device_num(), acc_get_device_num(), acc_async_test(), acc_async_test_all(), acc_async_wait(), acc_async_wait_all(), acc_init(), acc_shutdown(), acc_on_device(), acc_malloc(), acc_free().
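
For example, a sketch of querying, selecting and initialising a device through the runtime API (the choice of acc_device_not_host as the device type is an assumption of this example):

 #include <openacc.h>
 #include <stdio.h>
 
 int main(void)
 {
     int n = acc_get_num_devices(acc_device_not_host);
     printf("non-host devices available: %d\n", n);
 
     if (n > 0) {
         acc_set_device_num(0, acc_device_not_host);  /* pick device 0 */
         acc_init(acc_device_not_host);
         /* ... accelerated regions would go here ... */
         acc_shutdown(acc_device_not_host);
     }
     return 0;
 }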

OpenACC generally takes care of organising work for the target device; however, this can be overridden through the use of gangs and workers. A gang consists of workers and operates over a number of processing elements (much like a workgroup in OpenCL).
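
A sketch of overriding the defaults with explicit gang and worker counts (the values 32 and 8, and the array a of length n, are illustrative assumptions only):

 #pragma acc parallel num_gangs(32) num_workers(8)
 {
     /* Distribute iterations across gangs and, within each gang, across workers. */
     #pragma acc loop gang worker
     for (int i = 0; i < n; ++i)
         a[i] = 2.0 * a[i];
 }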

See also

References

  1. ^ "Nvidia, Cray, PGI, and CAPS launch ‘OpenACC’ programming standard for parallel computing". The Inquirer. 4 November 2011. 
  2. ^ a b "OpenACC standard version 2.0". OpenACC.org. Retrieved 14 January 2014. 
  3. ^ "How does the OpenACC API relate to the OpenMP API?". OpenACC.org. Retrieved 14 January 2014. 
  4. ^ "How did the OpenACC specifications originate?". OpenACC.org. Retrieved 14 January 2014. 
  5. ^ "The OpenMP Consortium Releases First Technical Report". OpenMP.org. 5 November 2012. Retrieved 14 January 2014. 
  6. ^ "OpenMP at SC12". OpenMP.org. 29 August 2012. Retrieved 14 January 2014. 
  7. ^ a b "OpenACC Group Reports Expanding Support for Accelerator Programming Standard". HPCwire. 20 June 2012. Retrieved 14 January 2014. 
  8. ^ "OpenACC Version 2.0 Posted for Comment". OpenACC.org. 12 November 2012. Retrieved 14 January 2014. 
  9. ^ "OpenACC Standard to Help Developers to Take Advantage of GPU Compute Accelerators". Xbit laboratories. 16 November 2011. Retrieved 14 January 2014. 
  10. ^ "CAPS Announcing Full Support for OpenACC 2.0 in its Compilers". HPCwire. 14 November 2014. Retrieved 14 January 2014. 
  11. ^ "OpenUH Compiler". Retrieved 4 March 2014. 
  12. ^ "accULL The OpenACC research implementation". Retrieved 14 January 2014. 
  13. ^ Gavrin, Evgeny (26 September 2013). "OpenACC branch [openacc-1_0-branch]". gcc mailing list. http://gcc.gnu.org/ml/gcc/2013-09/msg00235.html. Retrieved 14 January 2014. 
  14. ^ Dolbeau, Romain; Bihan, Stéphane; Bodin, François (4 October 2007). "HMPP: A Hybrid Multi-core Parallel Programming Environment". Workshop on General Purpose Processing on Graphics Processing Units. Retrieved 14 January 2014. 
  15. ^ "Easy GPU Parallelism with OpenACC". Dr.Dobb's. 11 June 2012. Retrieved 14 January 2014. 
  16. ^ "OpenACC API QuickReference Card, version 1.0". NVidia. November 2011. Retrieved 14 January 2014. 
  17. ^ "OpenACC Kernels and Parallel Constructs". PGI insider. August 2012. Retrieved 14 January 2014. 
  18. ^ "OpenACC parallel section VS kernels". CAPS entreprise Knowledge Base. 3 January 2013. Retrieved 14 January 2014. 

External links