|Width in bits||Port widths of 1, 2, 4, 8, and 16 lanes|
|Number of devices||Sizes of 256, 64K, and 4 Gig|
|External interface||Yes, Chip-Chip, Board-Board (Backplane), Chassis-Chassis|
The RapidIO architecture is a high-performance, packet-switched interconnect technology. RapidIO supports both messaging and read/write semantics. RapidIO fabrics guarantee in-order packet delivery, enabling power- and area-efficient protocol implementation in hardware. Based on industry-standard electrical specifications such as those for Ethernet, RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect. The protocol is marketed as "The Embedded Fabric of Choice" and is used in many applications that are constrained by at least one of size, weight, and power (SWaP).
- 1 History
- 2 Terminology
- 3 Protocol Overview
- 4 Form Factors
- 5 Software
- 6 Applications
- 7 Competing Protocols
- 8 See also
- 9 References
- 10 External links
RapidIO has its roots in energy-efficient, high-performance computing. The protocol was originally designed by Mercury Computer Systems and Motorola (Freescale) as a replacement for Mercury’s RACEway proprietary bus and Freescale's PowerPC bus. The RapidIO Trade Association was formed in February 2000, and included telecommunications and storage OEMs as well as FPGA, processor, and switch companies. The protocol was designed to meet the following objectives:
- Low latency
- Guaranteed, in-order packet delivery
- Support for messaging and read/write semantics
- Usable in systems with fault-tolerance/high-availability requirements
- Flow control mechanisms to manage short-term (less than 10 microseconds), medium-term (tens of microseconds) and long-term (hundreds of microseconds to milliseconds) congestion
- Efficient protocol implementation in hardware
- Low system power
- Scales from two to thousands of nodes
The RapidIO Specification Revision 1.1, released in 2001, defined a wide, parallel bus. This specification did not achieve extensive commercial adoption.
The RapidIO Specification Revision 1.2, released in 2002, defined a serial interconnect based on the XAUI physical layer. This specification achieved wide commercial adoption in telecom, military computing, medical imaging, and industrial control and data-path applications.
The RapidIO Specification Revision 2.0, released in 2008, added more port widths (2×, 8×, and 16×) and increased the maximum lane speed to 6.25 GBaud. Revision 2.1 repeated and expanded the commercial success of the 1.2 specification.
The RapidIO Specification Revision 3.0, released in 2013, has the following changes and improvements compared to the 2.x specifications:
- Based on industry-standard Ethernet 10GBASE-KR electrical specifications for short (20 cm + connector) and long (1 m + 2 connector) reach applications
- Directly leverages the Ethernet 10GBASE-KR DME training scheme for long-reach signal quality optimization
- Defines a 64b/67b encoding scheme (similar to the Interlaken standard) to support both copper and optical interconnects and to improve efficiency
- Dynamic asymmetric links to save power (for example, 4× in one direction, 1× in the other)
- Addition of IEEE 1588 PTP-like time synchronization capabilities
- Support for 32-bit device IDs, increasing maximum system size and enabling innovative hardware virtualization support
- Revised routing table programming model simplifies network management software
- Packet exchange protocol optimizations
The RapidIO roadmap aligns with Ethernet PHY development. RapidIO specifications for 25 GBaud and higher links are in development.
RapidIO was selected by the Next Generation Spacecraft Interconnect Standard (NGSIS) working group as the foundation for standard communication interconnects in spacecraft. Independent trade studies by NGSIS member companies demonstrated the superiority of RapidIO over other existing commercial protocols, such as InfiniBand, Fibre Channel, and 10G Ethernet. As part of the effort, the NGSIS requirements committee developed extensive requirements criteria with 47 different elements for the NGSIS interconnect. The group decided that RapidIO offered the best overall interconnect for the needs of next-generation spacecraft. The two organizations began a dialog on the best way to work together and determined that the RapidIO Trade Association would serve as the organizational home for completing the NGSIS standards work. This effort is referred to as Part S.
- Link Partner: One end of a RapidIO link.
- Endpoint: A device that can originate and/or terminate RapidIO packets.
- Processing Element: A device that has at least one RapidIO port.
- Switch: A device that can route RapidIO packets.
The RapidIO protocol is defined in a 3-layered specification:
- Physical: Electrical specifications, PCS/PMA, link-level protocol for reliable packet exchange
- Transport: Routing, multicast, and programming model
- Logical: Logical I/O, messaging, global shared memory (CC-NUMA), flow control, data streaming
System specifications include:
- System Initialization
- Error Management/Hot Swap
The RapidIO electrical specifications are based on industry-standard Ethernet and Optical Interconnect Forum standards:
- XAUI for lane speeds of 1.25, 2.5, and 3.125 GBaud
- OIF CEI 6+ Gbps for lane speeds of 5.0 and 6.25 GBaud
- 10GBASE-KR, per IEEE 802.3ap (long reach) and 802.3ba (short reach), for lane speeds of 10.3125 GBaud
The RapidIO PCS/PMA layer supports two forms of encoding/framing:
- 8b/10b for lane speeds up to 6.25 GBaud
- 64b/67b, similar to that used by Interlaken, for lane speeds over 6.25 GBaud
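The efficiency difference between the two framings is easy to quantify: 8b/10b carries 8 data bits in 10 line bits, while 64b/67b carries 64 in 67. A quick calculation (lane speeds taken from the revisions described above) illustrates the effective per-lane data rates:

```python
# Encoding-efficiency comparison for the two RapidIO PCS framings.
eff_8b10b = 8 / 10     # 8 data bits per 10 line bits -> 80% efficient
eff_64b67b = 64 / 67   # 64 data bits per 67 line bits -> ~95.5% efficient

def effective_gbps(baud_gbaud: float, efficiency: float) -> float:
    """Effective data rate per lane, ignoring packet/protocol overhead."""
    return baud_gbaud * efficiency

print(f"6.25 GBaud with 8b/10b:     {effective_gbps(6.25, eff_8b10b):.2f} Gbit/s")
print(f"10.3125 GBaud with 64b/67b: {effective_gbps(10.3125, eff_64b67b):.2f} Gbit/s")
```

This is why the move to 64b/67b at higher lane rates matters: the framing overhead drops from 20% to roughly 4.5%.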
Every RapidIO processing element transmits and receives three kinds of information: packets, control symbols, and an idle sequence.
Every packet has two values that control the physical-layer exchange of that packet. The first is an acknowledge ID (ackID): a 5-, 6-, or 12-bit value used to track packets for the duration of the exchange. Packets are transmitted with serially increasing ackID values. The ackID is protected by the link protocol rather than by the packet CRC. When a packet is successfully received, it is acknowledged using its ackID. A transmitter must retain a packet until the link partner has successfully acknowledged it.
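The retain-until-acknowledged behavior can be sketched as a small tracking structure. This is an illustrative model, not the register-level interface; a 5-bit ackID space is assumed (the 6- and 12-bit variants behave identically modulo 2^n):

```python
from collections import deque

class AckIdTracker:
    """Sketch of physical-layer packet tracking with serially
    increasing ackIDs, assuming a 5-bit ackID space."""
    def __init__(self, bits: int = 5):
        self.mod = 1 << bits
        self.next_ackid = 0     # ackID assigned to the next packet sent
        self.unacked = deque()  # packets retained until acknowledged

    def send(self, packet) -> int:
        ackid = self.next_ackid
        self.unacked.append((ackid, packet))      # retain a copy
        self.next_ackid = (self.next_ackid + 1) % self.mod
        return ackid

    def acknowledge(self, ackid: int) -> None:
        # Packets are acknowledged in order; release the retained copy.
        assert self.unacked and self.unacked[0][0] == ackid
        self.unacked.popleft()
```

The modulo wrap is why the ackID space bounds the number of packets that may be outstanding on a link at once.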
The second value is the packet's physical priority. The physical priority is composed of the Virtual Channel (VC) identifier bit, the Priority bits, and the Critical Request Flow (CRF) bit. The VC bit determines if the Priority and CRF bits identify a Virtual Channel from 1 through 8, or are used as the priority within Virtual Channel 0. Virtual Channels are assigned guaranteed minimum bandwidths. Within Virtual Channel 0, packets of higher priority can pass packets of lower priority. Response packets must have a physical priority higher than requests in order to avoid deadlock.
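The VC/Priority/CRF interpretation above can be sketched as a small decoder. The bit packing shown here is a simplifying assumption for illustration; the physical-layer specification defines the exact field layout:

```python
def decode_physical_priority(vc_bit: int, prio: int, crf: int):
    """Sketch of the physical-priority interpretation: prio is the
    2-bit Priority field, crf the Critical Request Flow bit. The
    packing (prio << 1 | crf) is an illustrative assumption."""
    if vc_bit:
        # VC bit set: Priority and CRF select a virtual channel 1..8.
        return ("virtual_channel", ((prio << 1) | crf) + 1)
    # VC bit clear: Priority and CRF form the priority within VC0.
    return ("vc0_priority", (prio << 1) | crf)
```

Note how the same three bits serve double duty: eight virtual channels when the VC bit is set, or eight priority levels within Virtual Channel 0 when it is clear.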
The physical-layer contribution to a RapidIO packet is a 2-byte header at the start of the packet, containing the ackID and physical priority, and a final 2-byte CRC that checks the packet's integrity. Packets larger than 80 bytes also carry an intermediate CRC after the first 80 bytes. With one exception, a packet's CRC value(s) act as an end-to-end integrity check.
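As a sketch of the integrity check, the following uses a generic CRC-16 routine. The CCITT polynomial and seed used here are illustrative assumptions, not necessarily RapidIO's exact parameters; consult the physical-layer specification for those:

```python
def crc16(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
    """Generic bit-serial CRC-16 (CCITT polynomial shown as an
    assumption; RapidIO defines its own exact polynomial and seed)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

payload = bytes(range(100))
# Packets larger than 80 bytes carry an intermediate CRC covering the
# first 80 bytes, plus a final CRC at the end of the packet.
intermediate = crc16(payload[:80])
final = crc16(payload)

# Any single-bit error is guaranteed to change a CRC-16 value,
# so a corrupted packet fails the check.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc16(corrupted) != final
```

Because any CRC polynomial with at least two terms detects all single-bit errors, the check above always fires for a one-bit corruption.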
RapidIO control symbols can be sent at any time, including within a packet. This gives RapidIO the lowest possible in-band control path latency, enabling the protocol to achieve high throughput with smaller buffers than other protocols.
Control symbols are used to delimit packets (Start of Packet, End of Packet, Stomp), to acknowledge packets (Packet Acknowledge, Packet Not Acknowledged), reset (Reset Device, Reset Port) and to distribute events within the RapidIO system (Multicast Event Control Symbol). Control symbols are also used for flow control (Retry, Buffer Status, Virtual Output Queue Backpressure) and for error recovery.
The error recovery procedure is very fast. When a receiver detects a transmission error in the received data stream, it causes its associated transmitter to send a Packet Not Accepted control symbol. When the link partner receives the Packet Not Accepted control symbol, it stops transmitting new packets and sends a Link Request/Port Status control symbol. The receiver answers with a Link Response control symbol, which indicates the ackID that should be used for the next packet transmitted. Packet transmission then resumes.
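The recovery step on the transmit side can be sketched as follows. The Link Response carries the ackID the partner expects next; everything before it was delivered, and everything from it onward is retransmitted in order (a simplified model, ignoring ackID wrap-around):

```python
from collections import deque

def recover(unacked, expected_ackid):
    """Sketch of link-level recovery: `unacked` is the ordered list of
    (ackID, packet) pairs the transmitter has retained. Returns the
    packets to retransmit, in order."""
    pending = deque(unacked)
    # Drop packets the partner already accepted...
    while pending and pending[0][0] != expected_ackid:
        pending.popleft()
    # ...and retransmit from the expected ackID onward.
    return list(pending)

# Packets 5..8 were outstanding; the Link Response says "expect ackID 7".
to_resend = recover([(5, "p5"), (6, "p6"), (7, "p7"), (8, "p8")], 7)
# p5 and p6 were delivered; p7 and p8 are sent again.
```

Because all the state needed for recovery is already held in the transmitter's retention buffer, no higher-layer retransmission protocol is required.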
The IDLE sequence is used during link initialization for signal quality optimization. It is also transmitted when the link does not have any control symbols or packets to send.
Every RapidIO endpoint is uniquely identified by a Device Identifier (deviceID). Each RapidIO packet contains two device IDs. The first is the destination ID (destID), which indicates where the packet should be routed. The second is the source ID (srcID), which indicates where the packet originated. When an endpoint receives a RapidIO request packet that requires a response, the response packet is composed by swapping the srcID and destID of the request.
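The srcID/destID swap is simple enough to show directly. The `Packet` type here is a hypothetical model for illustration, not a real packet format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    dest_id: int        # where the fabric routes this packet
    src_id: int         # originator, used to route the response back
    payload: bytes = b""

def make_response(request: Packet, payload: bytes = b"") -> Packet:
    # A response is routed by swapping the request's srcID and destID.
    return Packet(dest_id=request.src_id,
                  src_id=request.dest_id,
                  payload=payload)
```

The swap means an endpoint never needs routing knowledge to answer a request: the return path is implied by the request itself.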
RapidIO switches use the destID of received packets to determine the output port or ports that should forward the packet. Typically, the destID is used to index into an array of control values. The indexing operation is fast and low cost to implement. RapidIO switches support a standard programming model for the routing table, which simplifies system control.
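The indexed-lookup forwarding described above can be sketched as follows, assuming an 8-bit deviceID space for brevity (the model, including the `None` sentinel for unprogrammed routes, is illustrative, not the standard register interface):

```python
class RapidIOSwitch:
    """Sketch of destID-indexed forwarding: the routing table is a
    flat array, so a lookup is a single index operation."""
    def __init__(self, num_ports: int, table_size: int = 256):
        self.num_ports = num_ports
        self.table = [None] * table_size   # None = no route programmed

    def program_route(self, dest_id: int, port: int) -> None:
        assert 0 <= port < self.num_ports
        self.table[dest_id] = port

    def forward(self, dest_id: int) -> int:
        port = self.table[dest_id]         # one array index: fast, cheap
        if port is None:
            raise LookupError(f"no route programmed for destID {dest_id}")
        return port
```

The flat-array lookup is what makes RapidIO switching cheap in silicon: no longest-prefix match or content-addressable memory is needed.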
The RapidIO transport layer supports any network topology, from simple trees and meshes to n-dimensional hypercubes, multi-dimensional toroids, and more esoteric architectures such as entangled networks.
The RapidIO transport layer enables hardware virtualization (for example, a RapidIO endpoint can support multiple device IDs). Portions of the destination ID of each packet can be used to identify specific pieces of virtual hardware within the endpoint.
The RapidIO logical layer is composed of several specifications, each providing packet formats and protocols for different transaction semantics.
The logical I/O layer defines packet formats for read, write, write-with-response, and various atomic transactions. Examples of atomic transactions are set, clear, increment, decrement, swap, test-and-swap, and compare-and-swap.
The Messaging specification defines Doorbells and Messages. Doorbells communicate a 16-bit event code. Messages transfer up to 4K of data, segmented into up to 16 packets each with a maximum payload of 256 bytes. Response packets must be sent for each Doorbell and Message request. The response packet status value indicates done, error, or retry. A status of retry requests the originator of the request to send the packet again. The logical level retry response allows multiple senders to access a small number of shared reception resources, leading to high throughput with low power.
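The segmentation limits above (at most 16 segments of at most 256 bytes, for a 4096-byte maximum message) can be sketched directly:

```python
MAX_SEGMENTS = 16
MAX_PAYLOAD = 256   # bytes per message segment

def segment_message(data: bytes) -> list[bytes]:
    """Split a message into RapidIO message segments: up to 16
    packets, each carrying at most 256 bytes of payload."""
    if len(data) > MAX_SEGMENTS * MAX_PAYLOAD:
        raise ValueError("message exceeds the 4096-byte maximum")
    return [data[i:i + MAX_PAYLOAD]
            for i in range(0, len(data), MAX_PAYLOAD)]

segments = segment_message(bytes(1000))
# A 1000-byte message yields 4 segments: 256 + 256 + 256 + 232 bytes.
```

Messages larger than 4 KiB must instead use the Data Streaming packet formats, which segment up to 64K of data.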
The Flow Control specification defines packet formats and protocols for simple XON/XOFF flow control operations. Flow control packets can be originated by switches and endpoints. Reception of a XOFF flow control packet halts transmission of a flow or flows until an XON flow control packet is received or a timeout occurs. Flow Control packets can also be used as a generic mechanism for managing system resources.
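The XOFF-until-XON-or-timeout behavior can be sketched as a tiny state machine. The timeout value here is an illustrative assumption:

```python
import time

class Flow:
    """Sketch of XON/XOFF flow control: an XOFF halts transmission on
    a flow until an XON arrives or a timeout expires (the timeout
    guards against a lost XON packet)."""
    def __init__(self, timeout_s: float = 0.001):
        self.timeout_s = timeout_s
        self.xoff_at = None            # None = flow may transmit

    def receive_xoff(self) -> None:
        self.xoff_at = time.monotonic()

    def receive_xon(self) -> None:
        self.xoff_at = None

    def may_transmit(self) -> bool:
        if self.xoff_at is None:
            return True
        # Resume on timeout even if the XON was lost in transit.
        return time.monotonic() - self.xoff_at >= self.timeout_s
```

The timeout is the fault-tolerance piece: without it, a single lost XON packet would stall the flow indefinitely.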
The Globally Shared Memory specification defines packet formats and protocols for operating a cache coherent shared memory system over a RapidIO network.
The Data Streaming specification supports messaging with different packet formats and semantics than the Messaging specification. Data Streaming packet formats support the transfer of up to 64K of data, segmented over multiple packets. Each transfer is associated with a Class of Service and Stream Identifier, enabling thousands of unique flows between endpoints.
The Data Streaming specification also defines Extended Header flow control packet formats and semantics to manage performance within a client-server system. Each client uses extended header flow control packets to inform the server of the amount of work that could be sent to the server. The server responds with extended header flow control packets that use XON/XOFF, rate-based, or credit-based protocols to control how quickly and how much work the client sends to the server.
Systems with a known topology can be initialized in a system specific manner without affecting interoperability. The RapidIO system initialization specification supports system initialization when system topology is unknown or dynamic. System initialization algorithms support the presence of redundant hosts, so system initialization need not have a single point of failure.
Each system host recursively enumerates the RapidIO fabric, seizing ownership of devices, allocating device IDs to endpoints and updating switch routing tables. When a conflict for ownership occurs, the system host with the larger deviceID wins. The "losing" host releases ownership of its devices and retreats, waiting for the "winning" host. The winning host completes enumeration, including seizing ownership of the losing host. Once enumeration is complete, the winning host releases ownership of the losing host. The losing host then discovers the system by reading the switch routing tables and registers on each endpoint to learn the system configuration. If the winning host does not complete enumeration in a known time period, the losing host determines that the winning host has failed and completes enumeration.
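The seize-and-retreat rule can be sketched with a toy fabric model. Devices here are plain dicts with hypothetical `owner` and `links` fields; the real procedure operates on standard RapidIO registers:

```python
def enumerate_fabric(host_id: int, root: dict):
    """Sketch of recursive enumeration under the conflict rule above:
    a host seizes unowned devices; on meeting a device owned by a host
    with a larger deviceID, it retreats. Returns the devices seized,
    or None if this host lost the conflict."""
    owned = []
    stack = [root]
    while stack:
        dev = stack.pop()
        if dev.get("owner") == host_id:
            continue                      # already seized by us (cycle)
        if dev.get("owner") is not None and dev["owner"] > host_id:
            return None                   # larger deviceID wins: retreat
        dev["owner"] = host_id            # seize ownership
        owned.append(dev)
        stack.extend(dev.get("links", []))
    return owned
```

Because both hosts apply the same deterministic rule, exactly one of them completes enumeration, with no central arbiter required.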
System enumeration is supported in Linux by the RapidIO subsystem.
RapidIO supports high availability, fault tolerant system design, including hot swap. The error conditions that require detection, and standard registers to communicate status and error information, are defined. A configurable isolation mechanism is also defined so that when it is not possible to exchange packets on a link, packets can be discarded to avoid congestion and enable diagnosis and recovery activities. In-band (port-write packet) and out-of-band (interrupt) notification mechanisms are defined.
The RapidIO specification does not define form factors or connectors, leaving these choices to specific application-focused communities. RapidIO is supported by several standard form factors.
Processor-agnostic RapidIO support is found in the Linux kernel.
The RapidIO interconnect is used extensively in the following applications:
- Wireless base stations
- Aerospace and military single-board computers, as well as radar, acoustic, and image-processing systems
- Medical imaging
- Industrial control and data path applications
RapidIO is expanding into supercomputing, server, and storage applications.
One alternative technology is PCI Express, which targets the host-to-peripheral market rather than embedded systems. Unlike RapidIO, PCIe is not optimized for peer-to-peer multiprocessor networks. PCIe is well suited to host-to-peripheral communication, but it does not scale as well in large peer-to-peer multiprocessor systems, because the basic PCIe assumption of a "root complex" creates fault-tolerance and system-management issues.
Another alternative interconnect technology is Ethernet. Ethernet is a robust approach to linking computers over large geographic areas, where network topology may change unexpectedly, the protocols used are in flux, and link latencies are large. To meet these challenges, systems based on Ethernet require significant amounts of processing power, software and memory throughout the network to implement protocols for flow control, data transfer, and packet routing. RapidIO is optimized for energy efficient, low latency, processor-to-processor communication in fault tolerant embedded systems that span geographic areas of less than one kilometre.