Charm++

From Wikipedia, the free encyclopedia
Charm++
Paradigm: Message-driven parallel programming, migratable objects, object-oriented, asynchronous many-tasking
Designed by: Laxmikant Kale
Developer: Parallel Programming Laboratory
First appeared: late 1980s
Stable release: 6.7.1 / April 20, 2016
Implementation language: C++
Platform: Cray XC, XK, XE, XT; IBM Blue Gene L/P/Q; InfiniBand; TCP, UDP, MPI
OS: Linux, Windows, OS X
Website: http://charmplusplus.org

Charm++ is a parallel object-oriented programming language based on C++ and developed in the Parallel Programming Laboratory at the University of Illinois. Charm++ is designed with the goal of enhancing programmer productivity by providing a high-level abstraction of a parallel program while at the same time delivering good performance on a wide variety of underlying hardware platforms. Programs written in Charm++ are decomposed into a number of cooperating message-driven objects called chares. When a programmer invokes a method on an object, the Charm++ runtime system sends a message to the invoked object, which may reside on the local processor or on a remote processor in a parallel computation. This message triggers the execution of code within the chare to handle the message asynchronously.

Chares may be organized into indexed collections called chare arrays, and messages may be sent either to an individual chare within a chare array or to the entire chare array at once.
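These two kinds of sends can be sketched as follows. This is an illustrative Charm++ fragment only: compiling it requires the Charm++ toolchain (charmc) and the headers generated from a matching interface file, and the CProxy_Hello name assumes a 1D chare array of Hello elements like the one in the example below.

// Illustrative Charm++ fragment (requires charmc and generated decl/def headers).
// Given a proxy to a 1D chare array of Hello elements:
CProxy_Hello helloArray = CProxy_Hello::ckNew(16);  // create 16 array elements

helloArray[3].sayHi(0);   // point-to-point: message to the element with index 3
helloArray.sayHi(0);      // broadcast: message delivered to every element

Both calls return immediately; the runtime system delivers the messages asynchronously to wherever the target elements currently reside.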

The chares in a program are mapped to physical processors by an adaptive runtime system. The mapping of chares to processors is transparent to the programmer, and this transparency permits the runtime system to dynamically change the assignment of chares to processors during program execution to support capabilities such as measurement-based load balancing, fault tolerance, automatic checkpointing, and the ability to shrink and expand the set of processors used by a parallel program.

Applications implemented using Charm++ include NAMD and OpenAtom (molecular dynamics), ChaNGa and SpECTRE (astronomy), EpiSimdemics (epidemiology), Cello/Enzo-P (adaptive mesh refinement), and ROSS (parallel discrete event simulation). All of these applications have scaled up to a hundred thousand cores or more on petascale systems.

Adaptive MPI (AMPI)[1] is an implementation of the Message Passing Interface standard on top of the Charm++ runtime system, providing the capabilities of Charm++ within a more traditional MPI programming model. AMPI encapsulates each MPI process in a user-level migratable thread that is bound to a Charm++ object. By binding each thread to a chare, AMPI programs can automatically take advantage of the features of the Charm++ runtime system with little or no change to the underlying MPI program.
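Because AMPI implements the MPI standard, an ordinary MPI program such as the following can run on it; this is a sketch that requires an MPI or AMPI toolchain to build (AMPI provides the ampicc compiler wrapper), and under AMPI each rank becomes a migratable user-level thread rather than an operating-system process.

// Minimal standard MPI program (C); runs unchanged on AMPI.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // under AMPI, a user-level thread
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  printf("Hello from rank %d of %d\n", rank, size);

  MPI_Finalize();
  return 0;
}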

Example

Here is some Charm++ code for demonstration purposes:[2]

Header file (hello.h)
class Hello : public CBase_Hello {
 public:
  Hello(); // C++ constructor

  void sayHi(int from); // Remotely invocable "entry method"
};
Charm++ Interface file (hello.ci)
module hello {
  array [1D] Hello {
    entry Hello();
    entry void sayHi(int);
  };
};
Source file (hello.C)
#include "hello.decl.h"
#include "hello.h"

extern CProxy_Main mainProxy;
extern int numElements;

Hello::Hello() {
  // No member variables to initialize in this example
}

void Hello::sayHi(int from) {

  // Have this chare object say hello to the user.
  CkPrintf("Hello from chare # %d on processor %d (told by %d).\n",
             thisIndex, CkMyPe(), from);

  // Tell the next chare object in this array of chare objects
  // to also say hello. If this is the last chare object in
  // the array of chare objects, then tell the main chare
  // object to exit the program.
  if (thisIndex < (numElements - 1))
    thisProxy[thisIndex + 1].sayHi(thisIndex);
  else
    mainProxy.done();
}

#include "hello.def.h"
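The code above references mainProxy and numElements, which must be defined in a main module. A plausible companion, modeled on the standard Charm++ hello tutorial and shown here only as a sketch (it likewise requires charmc and the generated decl/def headers), looks like this:

// Interface file (main.ci) -- sketch of a companion main module
mainmodule main {
  extern module hello;
  readonly CProxy_Main mainProxy;
  readonly int numElements;

  mainchare Main {
    entry Main(CkArgMsg* m);
    entry void done();
  };
};

// Source file (main.C)
#include "main.decl.h"
#include "hello.decl.h"

/* readonly */ CProxy_Main mainProxy;
/* readonly */ int numElements;

class Main : public CBase_Main {
 public:
  Main(CkArgMsg* m) {
    numElements = 5;        // number of chares in the Hello array
    mainProxy = thisProxy;  // make this chare reachable from Hello::sayHi
    delete m;

    // Create the chare array and start the chain of hellos.
    CProxy_Hello helloArray = CProxy_Hello::ckNew(numElements);
    helloArray[0].sayHi(-1);
  }

  void done() {
    CkExit();               // terminate the parallel program
  }
};

#include "main.def.h"

The done() entry method, invoked by the last Hello element, ends the run.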
