Portals network programming interface
Developer(s): Sandia National Laboratories, University of New Mexico
Portals is a low-level network API for high-performance networking on high-performance computing systems developed by Sandia National Laboratories and the University of New Mexico. Portals is currently the lowest-level network programming interface on the commercially successful XT line of supercomputers from Cray.
Portals is based on the concept of elementary building blocks that can be combined to support a wide variety of upper-level network transport semantics. Portals provides one-sided data movement operations, but unlike other one-sided programming interfaces, the target of a remote operation is not a virtual address. Instead, the ultimate destination in memory of an incoming message is determined at the receiver by comparing contents of the message header with the contents of structures at the destination. This flexibility allows for efficient implementations of both one-sided and two-sided communications. In particular, Portals is aimed at providing the fundamental operations necessary to support a high-performance and scalable implementation of the Message Passing Interface (MPI) standard. It was also used as the initial network transport layer for the Lustre file system.
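The contrast with address-based one-sided interfaces can be illustrated with a short sketch. This is a simplified Python model with invented names (`portal_put`, `MatchEntry`, `PortalTable` are not part of the real interface, which is a C API): in an RDMA-style put the initiator supplies a remote virtual address, whereas in a Portals-style put it supplies only a portal table index and match bits, and the receiver's posted structures decide where the bytes land.

```python
# Illustrative model only: these names are invented for this sketch;
# the actual Portals interface is a C API.

class MatchEntry:
    """A receiver-posted structure: match criteria plus a destination buffer."""
    def __init__(self, match_bits, buffer):
        self.match_bits = match_bits
        self.buffer = buffer

class PortalTable:
    """Per-process table; each portal index holds a list of match entries."""
    def __init__(self):
        self.entries = {}          # portal index -> list of MatchEntry

    def post(self, pt_index, entry):
        self.entries.setdefault(pt_index, []).append(entry)

def portal_put(table, pt_index, match_bits, data):
    """One-sided put: the initiator names no virtual address.
    The destination buffer is chosen at the target by matching."""
    for entry in table.entries.get(pt_index, []):
        if entry.match_bits == match_bits:
            entry.buffer[:len(data)] = data     # deliver into the matched buffer
            return True
    return False                                # no entry matched

# The receiver posts a buffer tagged with match bits 0x2A at portal index 0 ...
table = PortalTable()
buf = bytearray(8)
table.post(0, MatchEntry(0x2A, buf))

# ... and the sender targets (portal index, match bits), not an address.
portal_put(table, 0, 0x2A, b"hello")
```

Because delivery is resolved at the receiver, the same mechanism can express both one-sided semantics (a pre-posted buffer any matching put lands in) and two-sided semantics (entries that mirror posted MPI receives).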
Portals began in the early 1990s as an extension to the nX message passing system used in the SUNMOS and Puma operating systems. It was first implemented for the Intel Paragon at Sandia and later ported to the Intel TeraFLOPS machine, ASCI Red. The first version of Portals had four building blocks: the single block, the dynamic block, the independent block, and the combined block. All incoming messages first passed through a match list that allowed individual portals to respond to specific groups, ranks, and a set of user-specified match bits.
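The match-list idea can be sketched as follows. This is an illustrative Python model with invented names (the original Portals was implemented in C inside the kernel): each entry names an accepted group, rank, and set of match bits, any of which may be a wildcard, and an incoming message header is walked down the list until some entry accepts it.

```python
# Illustrative sketch with invented names; not the actual Portals code.
WILDCARD = None   # accept any value for this field

class MatchListEntry:
    def __init__(self, group, rank, match_bits):
        self.group, self.rank, self.match_bits = group, rank, match_bits

    def accepts(self, hdr):
        """Compare the incoming message header against this entry's criteria."""
        return all(
            want is WILDCARD or want == got
            for want, got in ((self.group, hdr["group"]),
                              (self.rank, hdr["rank"]),
                              (self.match_bits, hdr["match_bits"]))
        )

def match(match_list, hdr):
    """Walk the list in order; the first accepting entry wins."""
    for i, entry in enumerate(match_list):
        if entry.accepts(hdr):
            return i
    return -1   # no entry matched

# One entry pinned to (group 0, rank 3, bits 7), followed by a catch-all.
ml = [MatchListEntry(0, 3, 7), MatchListEntry(WILDCARD, WILDCARD, WILDCARD)]
match(ml, {"group": 0, "rank": 3, "match_bits": 7})   # -> 0 (specific entry)
match(ml, {"group": 1, "rank": 9, "match_bits": 0})   # -> 1 (catch-all)
```

Ordered traversal with wildcards is what lets a single mechanism serve both tightly targeted receives and catch-all buffers, much as MPI tag/source matching requires.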
The Portals concept continued to evolve over successive generations of lightweight kernels and massively parallel systems. In 1999, Portals was given a functional programming interface so that it could be implemented on intelligent or programmable network interfaces outside of a lightweight kernel environment. This standard was designed for systems where the work required to prepare, transmit, and deliver a message takes longer than a round trip to the Portals data structures; in modern systems, for example, this work is dominated by the round trip across the I/O bus to the network interface. The standard has been revised since the initial release to make it better suited to modern high-performance, massively parallel computers. The MPI library was ported from the retronymed Portals 2 to the new Portals 3.0.
In light of emerging partitioned global address space (PGAS) languages, several new features have been added to the Portals API as part of Portals 4. Portals 4 also made several changes to improve the interaction between the processor and the network interface (NIC) for implementations that provide offload. Finally, an option to support a form of flow control was added to Portals 4.
- Ron Brightwell, et al. (June 1996). "Design and Implementation of MPI on Puma Portals". Proceedings of the Second MPI Developer's Conference. CiteSeerX 10.1.1.54.3830.
- Ron Brightwell, et al. (December 1999). "The Portals 3.0 Message Passing Interface Revision 1.0". Sandia National Laboratories.
- Rolf Riesen, et al. (April 2006). "The Portals 3.3 Message Passing Interface Document Revision 2.1". Sandia National Laboratories. Retrieved 2009-10-02.
- "Design and Implementation of MPI on Portals 3.0". Lecture Notes in Computer Science (Springer). 2002.
- Neil Pundit. "CPlant: The Largest Linux Cluster". IEEE Technical Committee on Scalable Computing. Retrieved 2009-10-02.
- Kevin Pedretti, et al. (2005-09-27). "Implementation and Performance of Portals 3.3 on the Cray XT3". IEEE International Conference on Cluster Computing.
- Rolf Riesen, et al. (2008-04-30). "The Portals 4.0 Message Passing Interface". Sandia National Laboratories. Retrieved 2009-12-21.