RapidIO - the unified fabric for Performance Critical Computing
- Width in bits: port widths of 1, 2, 4, 8, and 16 lanes
- No. of devices: network sizes of 256; 65,536; or 4,294,967,296 devices
- Speed: per lane (each direction)
- External interface: yes; chip-to-chip, board-to-board (backplane), and chassis-to-chassis
The RapidIO architecture is a high-performance packet-switched interconnect technology. RapidIO supports messaging, read/write, and cache coherency semantics. RapidIO fabrics guarantee in-order packet delivery, enabling power- and area-efficient protocol implementation in hardware. Based on industry-standard electrical specifications such as those for Ethernet, RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect. The protocol is marketed as "RapidIO - the unified fabric for Performance Critical Computing" and is used in applications such as data centers and HPC, communications infrastructure, industrial automation, and military and aerospace systems that are constrained by at least one of size, weight, and power (SWaP).
RapidIO has its roots in energy-efficient, high-performance computing. The protocol was originally designed by Mercury Computer Systems and Motorola (Freescale) as a replacement for Mercury's RACEway proprietary bus and Freescale's PowerPC bus. The RapidIO Trade Association was formed in February 2000, and included telecommunications and storage OEMs as well as FPGA, processor, and switch companies. The protocol was designed to meet the following objectives:
- Low latency
- Guaranteed, in-order packet delivery
- Support for messaging and read/write semantics
- Suitability for systems with fault-tolerance/high-availability requirements
- Flow control mechanisms to manage short-term (less than 10 microseconds), medium-term (tens of microseconds) and long-term (hundreds of microseconds to milliseconds) congestion
- Efficient protocol implementation in hardware
- Low system power
- Scales from two to thousands of nodes
The RapidIO Specification Revision 1.1 (3xN Gen1), released in March 2001, defined a wide, parallel bus. This specification did not achieve extensive commercial adoption.
The RapidIO Specification Revision 1.2, released in June 2002, defined a serial interconnect based on the XAUI physical layer. Devices based on this specification achieved significant commercial success within wireless baseband, imaging and military compute. 
The RapidIO Specification Revision 1.3 was released in June 2005.
The RapidIO Specification Revision 2.0 (6xN Gen2), released in March 2008, added more port widths (2×, 8×, and 16×) and increased the maximum lane speed to 6.25 GBd / 5 Gbit/s. Revision 2.1 repeated and expanded the commercial success of the 1.2 specification.
The RapidIO Specification Revision 2.1 was released in September 2009.
The RapidIO Specification Revision 2.2 was released in May 2011.
The RapidIO Specification Revision 3.0 (10xN Gen3), released in October 2013, has the following changes and improvements compared to the 2.x specifications:
- Based on industry-standard Ethernet 10GBASE-KR electrical specifications for short (20 cm + connector) and long (1 m + 2 connector) reach applications
- Directly leverages the Ethernet 10GBASE-KR DME training scheme for long-reach signal quality optimization
- Defines a 64b/67b encoding scheme (similar to the Interlaken standard) to support both copper and optical interconnects and to improve bandwidth efficiency
- Dynamic asymmetric links to save power (for example, 4× in one direction, 1× in the other)
- Addition of a time synchronization capability similar to IEEE 1588, but much less expensive to implement
- Support for 32-bit device IDs, increasing maximum system size and enabling innovative hardware virtualization support
- Revised routing table programming model simplifies network management software
- Packet exchange protocol optimizations
The RapidIO Specification Revision 3.1, released in October 2014, was developed through a collaboration between the RapidIO Trade Association and NGSIS. Revision 3.1 has the following enhancements compared to the 3.0 specification:
- MECS Time Synchronization protocol for smaller embedded systems. MECS Time Synchronization supports redundant time sources. This protocol is lower cost than the Timestamp Synchronization Protocol introduced in revision 3.0
- PRBS test facilities and standard register interface.
- Structurally Asymmetric Link behavioral definition and standard register interface. Structurally Asymmetric Links carry much more data in one direction than the other, for applications such as sensors or processing pipelines. Unlike dynamic asymmetric links, Structurally Asymmetric Links allow implementers to remove lanes on boards and in silicon, saving size, weight, and power. Structurally asymmetric links also allow the use of alternative lanes in the case of a hardware failure on a multi-lane port.
- Extended error log to capture a series of errors for diagnostic purposes
- Space device profiles for endpoints and switches, which define what it means to be a space-compliant RapidIO device.
The RapidIO Specification Revision 3.2 was released in February 2016.
The RapidIO Specification Revision 4.0 (25xN Gen4), released in June 2016, has the following changes and improvements compared to the 3.x specifications:
- Support for a 25 GBd lane rate and physical layer specification, with associated programming model changes
- IDLE3 usable with any Baud Rate Class, with specified IDLE sequence negotiation
- Maximum packet size increased to 284 bytes in anticipation of a Cache Coherency specification
- Support for 16 physical layer priorities
- Support for "Error Free Transmission" for high-throughput isochronous information transfer
The RapidIO Specification Revision 4.1 was released in July 2017.
RapidIO used in wireless infrastructure
RapidIO fabrics are widely deployed in 3G, 4G, and LTE cellular infrastructure, with millions of RapidIO ports shipped in wireless base stations worldwide. RapidIO fabrics were originally designed to connect different types of processors from different manufacturers in a single system. This flexibility has driven the widespread use of RapidIO in wireless infrastructure equipment, where there is a need to combine heterogeneous DSP, FPGA, and communication processors in a tightly coupled system with low latency and high reliability.
RapidIO used in data center / HPC analytics
Data center and HPC analytics systems have been deployed using a RapidIO 2D torus mesh fabric, which provides a high-speed, general-purpose interface among the system cartridges for applications that benefit from high-bandwidth, low-latency node-to-node communication. The RapidIO 2D torus fabric is routed as a torus ring configuration connecting up to 45 server cartridges, each with 5 Gbit/s-per-lane connections in each direction to its north, south, east, and west neighbors. This allows the system to serve HPC applications where efficient localized traffic is needed.
Also, using an open modular data center and compute platform, a heterogeneous HPC system has showcased the low latency attribute of RapidIO to enable real-time analytics. In March 2015 a top-of-rack switch was announced to drive RapidIO into mainstream data center applications.
RapidIO in aerospace
The interconnect or "bus" is one of the critical technologies in the design and development of spacecraft avionic systems, dictating the system's architecture and level of complexity. A host of existing architectures remain in use given their level of maturity, and these are sufficient for their original requirements. For next-generation missions, however, a more capable avionics architecture is desired, one well beyond the capabilities offered by existing architectures. A viable option for the design and development of these next-generation architectures is to leverage existing commercial protocols capable of accommodating high rates of data transfer.
In 2012, RapidIO was selected by the Next Generation Spacecraft Interconnect Standard (NGSIS) working group to serve as the foundation for standard communication interconnects to be used in spacecraft. The NGSIS is an umbrella standards effort that includes RapidIO Version 3.1 development and a box-level hardware standards effort under VITA 78 called SpaceVPX or High Reliability VPX. The NGSIS requirements committee developed extensive requirements criteria with 47 different elements for the NGSIS interconnect. Independent trade study results by NGSIS member companies demonstrated the superiority of RapidIO over other existing commercial protocols, such as InfiniBand, Fibre Channel, and 10G Ethernet. As a result, the group decided that RapidIO offered the best overall interconnect for the needs of next-generation spacecraft.
The RapidIO roadmap aligns with Ethernet PHY development. RapidIO specifications for 50 GBd and higher links are under investigation.
- Link Partner – One end of a RapidIO link.
- Endpoint – A device that can originate and/or terminate RapidIO packets.
- Processing Element – A device that has at least one RapidIO port.
- Switch – A device that can route RapidIO packets.
The RapidIO protocol is defined in a 3-layered specification:
- Physical: Electrical specifications, PCS/PMA, link-level protocol for reliable packet exchange
- Transport: Routing, multicast, and programming model
- Logical: Logical I/O, messaging, global shared memory (CC-NUMA), flow control, data streaming
System specifications include:
- System Initialization
- Error Management/Hot Swap
The RapidIO electrical specifications are based on industry-standard Ethernet and Optical Interconnect Forum standards:
- XAUI for lane speeds of 1.25, 2.5, and 3.125 GBd (1, 2, and 2.5 Gbit/s)
- OIF CEI 6+ Gbit/s for lane speeds of 5.0 and 6.25 GBd (4 and 5 Gbit/s)
- 10GBASE-KR 802.3ap (long reach) and 802.3ba (short reach) for lane speeds of 10.3125 GBd (9.85 Gbit/s)
The RapidIO PCS/PMA layer supports two forms of encoding/framing:
- 8b/10b for lane speeds up to 6.25 GBd
- 64b/67b, similar to that used by Interlaken for lane speeds over 6.25 GBd
Every RapidIO processing element transmits and receives three kinds of information: Packets, control symbols, and an idle sequence.
Every packet has two values that control the physical layer exchange of that packet. The first is an acknowledge ID (ackID), which is the link-specific, unique, 5-, 6-, or 12-bit value that is used to track packets exchanged on a link. Packets are transmitted with serially increasing ackID values. Because the ackID is specific to a link, the ackID is not covered by CRC, but by protocol. This allows the ackID to change with each link it passes over, while the packet CRC can remain a constant end-to-end integrity check of the packet. When a packet is successfully received, it is acknowledged using the ackID of the packet. A transmitter must retain a packet until it has been successfully acknowledged by the link partner.
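The retention-until-acknowledgement rule above can be sketched in Python. This is an illustrative model only, not spec-defined code; the class and method names are hypothetical.

```python
from collections import deque

class LinkTransmitter:
    """Sketch of ackID handling: a transmitter retains each packet
    until the link partner acknowledges it by ackID."""

    def __init__(self, ackid_bits=5):
        # ackIDs are link-specific values of 5, 6, or 12 bits,
        # assigned serially and wrapping at the field width.
        self.modulus = 1 << ackid_bits
        self.next_ackid = 0
        self.unacked = deque()  # packets awaiting acknowledgement

    def transmit(self, packet):
        ackid = self.next_ackid
        self.next_ackid = (self.next_ackid + 1) % self.modulus
        self.unacked.append((ackid, packet))  # retain until acknowledged
        return ackid

    def acknowledge(self, ackid):
        # In-order delivery means acks release the oldest retained packet.
        oldest, packet = self.unacked.popleft()
        assert oldest == ackid, "acknowledgement out of sequence"
        return packet
```

Because the ackID lives outside the packet CRC, a real switch can rewrite it per link while leaving the CRC untouched, which is what the model's per-link counter reflects.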
The second value is the packet's physical priority. The physical priority is composed of the Virtual Channel (VC) identifier bit, the Priority bits, and the Critical Request Flow (CRF) bit. The VC bit determines if the Priority and CRF bits identify a Virtual Channel from 1 to 8, or are used as the priority within Virtual Channel 0. Virtual Channels are assigned guaranteed minimum bandwidths. Within Virtual Channel 0, packets of higher priority can pass packets of lower priority. Response packets must have a physical priority higher than requests in order to avoid deadlock.
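The VC/Priority/CRF interpretation described above can be illustrated with a small decoder. The exact bit packing here is an assumption for illustration (2 priority bits plus the CRF bit); the function is hypothetical, not part of the specification.

```python
def decode_physical_priority(vc_bit, prio, crf):
    """Interpret the physical-priority fields of a packet (illustrative).

    vc_bit: 1 if Priority+CRF select a Virtual Channel 1..8,
            0 if they act as a priority level within Virtual Channel 0.
    prio:   2-bit Priority field (0..3).
    crf:    1-bit Critical Request Flow field.
    """
    combined = (prio << 1) | crf  # assumed packing of the three bits
    if vc_bit:
        return ("VC", combined + 1)       # Virtual Channels 1..8
    return ("VC0 priority", combined)     # priority within VC 0
```

Within VC 0 a higher decoded value may pass a lower one, which is why responses are required to carry a higher physical priority than requests.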
The physical layer contribution to RapidIO packets is a 2-byte header at the beginning of each packet that includes the ackID and physical priority, and a final 2-byte CRC value to check the integrity of the packet. Packets larger than 80 bytes also have an intermediate CRC after the first 80 bytes. With one exception, a packet's CRC values act as an end-to-end integrity check of the packet.
RapidIO control symbols can be sent at any time, including within a packet. This gives RapidIO the lowest possible in-band control path latency, enabling the protocol to achieve high throughput with smaller buffers than other protocols.
Control symbols are used to delimit packets (Start of Packet, End of Packet, Stomp), to acknowledge packets (Packet Acknowledge, Packet Not Acknowledged), reset (Reset Device, Reset Port) and to distribute events within the RapidIO system (Multicast Event Control Symbol). Control symbols are also used for flow control (Retry, Buffer Status, Virtual Output Queue Backpressure) and for error recovery.
The error recovery procedure is very fast. When a receiver detects a transmission error in the received data stream, the receiver causes its associated transmitter to send a Packet Not Accepted control symbol. When the link partner receives a Packet Not Accepted control symbol, it stops transmitting new packets and sends a Link Request/Port Status control symbol. The Link Response control symbol indicates the ackID that should be used for the next packet transmitted. Packet transmission then resumes.
The IDLE sequence is used during link initialization for signal quality optimization. It is also transmitted when the link does not have any control symbols or packets to send.
Every RapidIO endpoint is uniquely identified by a Device Identifier (deviceID). Each RapidIO packet contains two device IDs. The first is the destination ID (destID), which indicates where the packet should be routed. The second is the source ID (srcID), which indicates where the packet originated. When an endpoint receives a RapidIO request packet that requires a response, the response packet is composed by swapping the srcID and destID of the request.
RapidIO switches use the destID of received packets to determine the output port or ports that should forward the packet. Typically, the destID is used to index into an array of control values. The indexing operation is fast and low cost to implement. RapidIO switches support a standard programming model for the routing table, which simplifies system control.
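The destID-indexed forwarding just described amounts to a single array lookup. The sketch below is a minimal illustration; the table size and port numbers are hypothetical (a 256-entry table corresponds to 8-bit device IDs).

```python
# One routing-table entry per 8-bit destID; each entry names an output port.
routing_table = [0] * 256
routing_table[42] = 3  # e.g. packets destined for endpoint 42 exit via port 3

def forward_port(dest_id):
    """Return the output port for a received packet's destID.
    A single array index: fast and cheap to implement in hardware."""
    return routing_table[dest_id]
```

Because the standard programming model covers how this table is written, management software can configure switches from different vendors the same way.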
The RapidIO transport layer supports any network topology, from simple trees and meshes to n-dimensional hypercubes, multi-dimensional toroids, and more esoteric architectures such as entangled networks.
The RapidIO transport layer enables hardware virtualization (for example, a RapidIO endpoint can support multiple device IDs). Portions of the destination ID of each packet can be used to identify specific pieces of virtual hardware within the endpoint.
The RapidIO logical layer is composed of several specifications, each providing packet formats and protocols for different transaction semantics.
The logical I/O layer defines packet formats for read, write, write-with-response, and various atomic transactions. Examples of atomic transactions are set, clear, increment, decrement, swap, test-and-swap, and compare-and-swap.
The Messaging specification defines Doorbells and Messages. Doorbells communicate a 16-bit event code. Messages transfer up to 4 KiB of data, segmented into as many as 16 packets, each with a maximum payload of 256 bytes. A response packet must be sent for each Doorbell and Message request. The response packet's status value indicates done, error, or retry. A retry status asks the originator to send the packet again. This logical-level retry response allows multiple senders to share a small number of reception resources, yielding high throughput with low power.
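The segmentation rule above (at most 16 segments of at most 256 bytes, hence 4 KiB per message) can be sketched as follows; the helper function is hypothetical.

```python
def segment_message(data, max_payload=256, max_segments=16):
    """Split a message into Message-packet payloads (illustrative).

    Enforces the limits described above: each segment carries at most
    max_payload bytes, and a message spans at most max_segments packets.
    """
    if len(data) > max_payload * max_segments:  # 4 KiB with the defaults
        raise ValueError("message exceeds the 4 KiB Messaging limit")
    return [data[i:i + max_payload] for i in range(0, len(data), max_payload)]
```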
The Flow Control specification defines packet formats and protocols for simple XON/XOFF flow control operations. Flow control packets can be originated by switches and endpoints. Reception of a XOFF flow control packet halts transmission of a flow or flows until an XON flow control packet is received or a timeout occurs. Flow Control packets can also be used as a generic mechanism for managing system resources.
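The XON/XOFF behavior above reduces to a per-flow gate: XOFF halts transmission of the affected flow, and XON (or a timeout, omitted here) re-enables it. This class is an illustrative model, not a spec-defined interface.

```python
class FlowGate:
    """Minimal XON/XOFF gate for one flow (sketch)."""

    def __init__(self):
        self.enabled = True  # flows start enabled

    def receive(self, symbol):
        # symbol is the payload of a received flow control packet.
        if symbol == "XOFF":
            self.enabled = False   # halt transmission of this flow
        elif symbol == "XON":
            self.enabled = True    # resume transmission

    def can_send(self):
        return self.enabled
```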
The Globally Shared Memory specification defines packet formats and protocols for operating a cache coherent shared memory system over a RapidIO network.
The Data Streaming specification supports messaging with different packet formats and semantics than the Messaging specification. Data Streaming packet formats support the transfer of up to 64 KiB of data, segmented over multiple packets. Each transfer is associated with a Class of Service and a Stream Identifier, enabling thousands of unique flows between endpoints.
The Data Streaming specification also defines Extended Header flow control packet formats and semantics to manage performance within a client-server system. Each client uses extended header flow control packets to inform the server of the amount of work that could be sent to the server. The server responds with extended header flow control packets that use XON/XOFF, rate, or credit based protocols to control how quickly and how much work the client sends to the server.
Systems with a known topology can be initialized in a system specific manner without affecting interoperability. The RapidIO system initialization specification supports system initialization when system topology is unknown or dynamic. System initialization algorithms support the presence of redundant hosts, so system initialization need not have a single point of failure.
Each system host recursively enumerates the RapidIO fabric, seizing ownership of devices, allocating device IDs to endpoints and updating switch routing tables. When a conflict for ownership occurs, the system host with the larger deviceID wins. The "losing" host releases ownership of its devices and retreats, waiting for the "winning" host. The winning host completes enumeration, including seizing ownership of the losing host. Once enumeration is complete, the winning host releases ownership of the losing host. The losing host then discovers the system by reading the switch routing tables and registers on each endpoint to learn the system configuration. If the winning host does not complete enumeration in a known time period, the losing host determines that the winning host has failed and completes enumeration.
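The ownership contest at the heart of the enumeration algorithm above is a simple deviceID comparison: the host with the larger deviceID wins, and the loser releases its claim and retreats. A minimal sketch (the function is hypothetical, not part of the specification):

```python
def resolve_contention(host_a_id, host_b_id):
    """Decide an ownership conflict between two enumerating hosts.

    Per the rule above, the host with the larger deviceID wins;
    the losing host releases its devices and waits for the winner.
    Returns (winner_id, loser_id).
    """
    winner = max(host_a_id, host_b_id)
    loser = min(host_a_id, host_b_id)
    return winner, loser
```

The timeout fallback in the text (the loser taking over if the winner never finishes) is what removes the single point of failure from this scheme.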
System enumeration is supported in Linux by the RapidIO subsystem.
RapidIO supports high availability, fault tolerant system design, including hot swap. The error conditions that require detection, and standard registers to communicate status and error information, are defined. A configurable isolation mechanism is also defined so that when it is not possible to exchange packets on a link, packets can be discarded to avoid congestion and enable diagnosis and recovery activities. In-band (port-write packet) and out-of-band (interrupt) notification mechanisms are defined.
The RapidIO specification does not discuss form factors and connectors, leaving these to specific application-focused communities. RapidIO is supported by several industry-standard form factors.
Processor-agnostic RapidIO support is found in the Linux kernel.
The RapidIO interconnect is used extensively in the following applications:
- Wireless base stations
- Aerospace and Military single-board computers, as well as radar, acoustic and image processing systems
- Medical imaging
- Industrial control and data path applications
RapidIO is expanding into supercomputing, server, and storage applications.
PCI Express is targeted at the host-to-peripheral market rather than embedded systems, and it is well suited to host-to-peripheral communication. Unlike RapidIO, however, PCIe is not optimized for peer-to-peer multiprocessor networks: its basic assumption of a single "root complex" creates fault-tolerance and system-management issues, so PCIe does not scale as well in large multiprocessor peer-to-peer systems.
Another alternative interconnect technology is Ethernet. Ethernet is a robust approach to linking computers over large geographic areas, where network topology may change unexpectedly, the protocols used are in flux, and link latencies are large. To meet these challenges, systems based on Ethernet require significant amounts of processing power, software and memory throughout the network to implement protocols for flow control, data transfer, and packet routing. RapidIO is optimized for energy efficient, low latency, processor-to-processor communication in fault tolerant embedded systems that span geographic areas of less than one kilometre.
SpaceFibre is a competing technology for space applications.
Time-Triggered Ethernet is a competing technology for more complex backplane (VPX) and backbone applications for space (launchers and human-rated integrated avionics).