Reduced instruction set computer
Reduced instruction set computing, or RISC (pronounced 'risk'), is a CPU design strategy based on the insight that a simplified instruction set (as opposed to a complex one) provides higher performance when combined with a microprocessor architecture capable of executing those instructions using fewer microprocessor cycles per instruction.[1] A computer based on this strategy is a reduced instruction set computer, also called RISC. The opposing architecture is called complex instruction set computing (CISC).
Various suggestions have been made regarding a precise definition of RISC, but the general concept is that of a system that uses a small, highly optimized set of instructions, rather than the more versatile set of instructions often found in other types of architectures. Another common trait is that RISC systems use the load/store architecture,[2] where memory is normally accessed only through specific instructions, rather than as part of other instructions such as add.
Although a number of systems from the 1960s and 70s have been identified as forerunners of RISC, the modern version of the design dates to the 1980s. In particular, two projects at Stanford University and the University of California, Berkeley are most associated with the popularization of this concept. Stanford's design would go on to be commercialized as the successful MIPS architecture, while Berkeley's RISC gave its name to the entire concept, commercialized as the SPARC. Another success from this era was IBM's effort that eventually led to the Power Architecture. As these projects matured, a wide variety of similar designs flourished in the late 1980s and especially the early 1990s, representing a major force in the Unix workstation market as well as in embedded processors for laser printers, routers and similar products.
Well-known RISC families include DEC Alpha, AMD Am29000, ARC, ARM, Atmel AVR, Blackfin, Intel i860 and i960, MIPS, Motorola 88000, PA-RISC, Power (including PowerPC), RISC-V, SuperH, and SPARC. In the 21st century, the use of ARM architecture processors in smartphones and tablet computers such as the iPad and Android devices provided a wide user base for RISC-based systems. RISC processors are also used in supercomputers such as the K computer, the fastest on the TOP500 list in 2011, second on the 2012 list, and fourth on the 2013 list,[3][4] and Sequoia, the fastest in 2012 and third on the 2013 list.
History and development
A number of systems, going back to the 1970s (and even 1960s), have been credited as the first RISC architecture, partly based on their use of the load/store approach.[5] The term RISC was coined by David Patterson of the Berkeley RISC project, although somewhat similar concepts had appeared before.[6]
The CDC 6600 designed by Seymour Cray in 1964 used a load/store architecture with only two addressing modes (register+register, and register+immediate constant) and 74 opcodes, with the basic clock cycle/instruction issue rate being 10 times faster than the memory access time.[7] Partly due to the optimized load/store architecture of the CDC 6600, Jack Dongarra states that it can be considered a forerunner of modern RISC systems, although a number of other technical barriers needed to be overcome for the development of a modern RISC system.[8]
Michael J. Flynn views the first RISC system as the IBM 801 design which began in 1975 by John Cocke, and completed in 1980.[2] The 801 was eventually produced in a single-chip form as the ROMP in 1981, which stood for 'Research OPD [Office Products Division] Micro Processor'.[9] As the name implies, this CPU was designed for "mini" tasks, and was also used in the IBM RT-PC in 1986, which turned out to be a commercial failure.[10] But the 801 inspired several research projects, including new ones at IBM that would eventually lead to the IBM POWER instruction set architecture.[11][12]
The most public RISC designs, however, were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics. The Berkeley RISC project started in 1980 under the direction of David Patterson and Carlo H. Sequin.[6][13][14]
Berkeley RISC was based on gaining performance through the use of pipelining and an aggressive use of a technique known as register windowing.[13][14] In a traditional CPU, one has a small number of registers, and a program can use any register at any time. In a CPU with register windows, there are a huge number of registers, e.g. 128, but programs can only use a small number of them, e.g. eight, at any one time. A program that limits itself to eight registers per procedure can make very fast procedure calls: The call simply moves the window "down" by eight, to the set of eight registers used by that procedure, and the return moves the window back.[15] The Berkeley RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors (compared with averages of about 100,000 in newer CISC designs of the era), RISC-I had only 32 instructions, and yet completely outperformed any other single-chip design. They followed this up with the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I.[14]
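The effect of register windowing can be illustrated with a small simulation. The following C sketch is a deliberately simplified model (the actual Berkeley design overlaps adjacent windows so that a caller's output registers become the callee's input registers, and spills to memory when the windows run out; neither is modeled here):

```c
#include <stdio.h>

#define FILE_SIZE   128  /* total physical registers, as in the example above */
#define WINDOW_SIZE 8    /* registers visible to a procedure at any one time  */

static int reg_file[FILE_SIZE]; /* the large physical register file */
static int window_base = 0;     /* index of the current window's first register */

/* Visible register r maps to physical register window_base + r. */
static int *reg(int r) { return &reg_file[window_base + r]; }

/* A call slides the window "down" by eight; a return slides it back.
 * No register contents are copied to memory in either direction.
 * (Overflow past the 16 available windows is not checked here.) */
static void call(void) { window_base += WINDOW_SIZE; }
static void ret(void)  { window_base -= WINDOW_SIZE; }

int main(void) {
    *reg(0) = 42;            /* caller writes its register 0           */
    call();                  /* enter a procedure: window slides by 8  */
    *reg(0) = 7;             /* callee's register 0 is a different physical register */
    ret();                   /* return: window slides back             */
    printf("%d\n", *reg(0)); /* prints 42: the caller's value survived */
    return 0;
}
```

The point of the sketch is that call and return each reduce to a single addition, with no register traffic to memory at all.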
The MIPS architecture grew out of a graduate course taught by John L. Hennessy at Stanford University in 1981, resulted in a functioning system in 1983, and could run simple programs by 1984.[16] The MIPS approach emphasized an aggressive clock cycle and the use of the pipeline, making sure it could be run as "full" as possible.[16] The MIPS system was followed by the MIPS-X, and in 1984 Hennessy and his colleagues formed MIPS Computer Systems.[16][17] The commercial venture resulted in the R2000 microprocessor in 1985, which was followed by the R3000 in 1988.[17]
As Joseph Condon put it: "It's not clear what RISC is. RISC is a term like artificial intelligence. It's a jargon word that tends to get used for everything."[18]
In the early 1980s, significant uncertainty surrounded the RISC concept, and it was unclear whether it could have a commercial future, but by the mid-1980s the concepts had matured enough to be seen as commercially viable.[10][16] In 1986 Hewlett-Packard started using an early implementation of its PA-RISC in some of its computers.[10] In the meantime, the Berkeley RISC effort had become so well known that it eventually became the name for the entire concept, and in 1987 Sun Microsystems began shipping systems with the SPARC processor, directly based on the Berkeley RISC-II system.[10][19]
The US government Committee on Innovations in Computing and Communications credits the acceptance of the viability of the RISC concept to the success of the SPARC system.[10] The success of SPARC renewed interest within IBM, which released new RISC systems by 1990 and by 1995 RISC processors were the foundation of a $15 billion server industry.[10]
Since 2010 a new open-source ISA, RISC-V, has been under development at the University of California, Berkeley, for research purposes and as a free alternative to proprietary ISAs. As of 2014, version 2 of the user-space ISA is frozen.[20] The ISA is designed to be extensible from a barebones core sufficient for a small embedded processor to supercomputer and cloud computing use, with standard and chip-designer-defined extensions and coprocessors. It has been tested in silicon with the Rocket SoC, which is also available as an open-source processor generator written in the Chisel language.
Characteristics and design philosophy
Instruction set philosophy
A common misunderstanding of the phrase "reduced instruction set computer" is the mistaken idea that instructions are simply eliminated, resulting in a smaller set of instructions.[21] In fact, over the years, RISC instruction sets have grown in size, and today many of them have a larger set of instructions than many CISC CPUs.[22][23] Some RISC processors such as the PowerPC have instruction sets as large as that of the CISC IBM System/370, for example; conversely, the DEC PDP-8—clearly a CISC CPU because many of its instructions involve multiple memory accesses—has only 8 basic instructions and a few extended instructions.
The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.[24] In particular, RISC processors typically have separate instructions for I/O and data processing.[citation needed]
The term load/store architecture is sometimes preferred.
Instruction format
Most RISC machines use a fixed instruction length (e.g. 32 bits) and layout, with more predictable encodings, which simplifies fetch and interdependency logic considerably. Several, such as ARM, Power ISA, MIPS, RISC-V, and the Adapteva Epiphany, have an optional compressed instruction encoding to work around the problem of reduced code density. The SH5 also follows this pattern, albeit having evolved in the opposite direction, adding longer media instructions to an original 16-bit encoding.
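A fixed 32-bit encoding makes field extraction a matter of constant shifts and masks, with no need to determine one instruction's length before locating the next. The C sketch below is modeled loosely on a RISC-V R-type instruction; the exact field widths and positions vary by architecture and are illustrative only:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode a fixed 32-bit instruction word with constant shifts and masks.
 * The field layout is modeled loosely on a RISC-V R-type instruction. */
int main(void) {
    uint32_t insn = 0x00B50533;            /* RISC-V encoding of add a0, a0, a1 */

    uint32_t opcode = insn & 0x7F;         /* bits  0-6  */
    uint32_t rd     = (insn >> 7)  & 0x1F; /* bits  7-11 */
    uint32_t rs1    = (insn >> 15) & 0x1F; /* bits 15-19 */
    uint32_t rs2    = (insn >> 20) & 0x1F; /* bits 20-24 */

    /* Because every instruction is 32 bits, the next fetch address is
     * simply pc + 4; no length decoding is required. */
    printf("opcode=0x%02x rd=%u rs1=%u rs2=%u\n",
           (unsigned)opcode, (unsigned)rd, (unsigned)rs1, (unsigned)rs2);
    return 0;
}
```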
Hardware utilization
For any given level of general performance, a RISC chip will typically have far fewer transistors dedicated to the core logic, which originally allowed designers to increase the size of the register set and increase internal parallelism.
Other features that are typically found in RISC architectures are:
- Uniform instruction format, using a single word with the opcode in the same bit positions in every instruction, demanding less decoding;
- Identical general purpose registers, allowing any register to be used in any context, simplifying compiler design (although normally there are separate floating point registers);
- Simple addressing modes, with complex addressing performed via sequences of arithmetic operations, load-store operations, or both (a sketch of this decomposition follows this list);
- Few data types in hardware: some CISCs have byte-string instructions or support complex numbers, features so far unlikely to be found on a RISC;
- Processor throughput of one instruction per cycle on average.
Exceptions abound, of course, within both CISC and RISC.
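As an illustration of the simple-addressing-modes point above, a complex CISC operand such as base + scaled index + displacement decomposes on a load/store machine into a short sequence of register-to-register steps followed by a single load. The following C sketch is only a model, with C variables standing in for CPU registers and each statement standing in for one plausible simple instruction:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int table[16];
    for (int i = 0; i < 16; i++) table[i] = i * 100;

    /* "Registers" for the address computation. */
    intptr_t base  = (intptr_t)table;  /* base register         */
    intptr_t index = 3;                /* index register        */
    intptr_t disp  = 2 * sizeof(int);  /* constant displacement */

    /* A CISC can fold base + index*4 + disp into one memory operand;
     * a load/store machine builds the same effective address in steps: */
    intptr_t t1 = index << 2;          /* shift: scale the index by 4   */
    intptr_t t2 = base + t1;           /* add:   base plus scaled index */
    intptr_t t3 = t2 + disp;           /* add:   plus the displacement  */
    int value   = *(int *)t3;          /* load:  the only memory access */

    printf("%d\n", value);             /* prints table[5], i.e. 500 */
    return 0;
}
```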
RISC designs are also more likely to feature a Harvard memory model, where the instruction stream and the data stream are conceptually separated; this means that modifying the memory where code is held might not have any effect on the instructions executed by the processor (because the CPU has a separate instruction and data cache), at least until a special synchronization instruction is issued. On the upside, this allows both caches to be accessed simultaneously, which can often improve performance.
Many early RISC designs also shared the characteristic of having a branch delay slot. A branch delay slot is an instruction space immediately following a jump or branch. The instruction in this space is executed, whether or not the branch is taken (in other words the effect of the branch is delayed). This instruction keeps the ALU of the CPU busy for the extra time normally needed to perform a branch. Nowadays the branch delay slot is considered an unfortunate side effect of a particular strategy for implementing some RISC designs, and modern RISC designs generally do away with it (such as PowerPC and more recent versions of SPARC and MIPS).[citation needed]
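The behavior of a delay slot can be modeled with a toy interpreter. The C sketch below corresponds to no particular ISA and is purely illustrative: after a taken branch, the very next instruction in program order still executes before control actually transfers:

```c
#include <stdio.h>

/* A toy instruction: a message to print, and an optional branch target
 * (target >= 0 means "taken branch to that index"). */
typedef struct { const char *msg; int target; } Insn;

int main(void) {
    Insn prog[] = {
        {"0: start",             -1},
        {"1: branch to 4",        4},
        {"2: delay slot runs!",  -1}, /* executes although 1 branched */
        {"3: skipped",           -1},
        {"4: after the branch",  -1},
    };
    int n = sizeof prog / sizeof prog[0];
    int pc = 0, pending = -1; /* pending = branch target awaiting its slot */

    while (pc < n) {
        Insn *i = &prog[pc];
        printf("%s\n", i->msg);
        if (pending >= 0) {          /* slot executed: now take the branch */
            pc = pending; pending = -1;
        } else if (i->target >= 0) { /* taken branch: defer by one slot */
            pending = i->target; pc++;
        } else {
            pc++;
        }
    }
    return 0;
}
```

Running it prints instructions 0, 1, 2 and 4: instruction 3 is skipped, but the delay-slot instruction at 2 runs even though the branch at 1 was taken.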
Some aspects attributed to the first RISC-labeled designs around 1975 include the observations that the memory-restricted compilers of the time were often unable to take advantage of features intended to facilitate manual assembly coding, and that complex addressing modes take many cycles to perform due to the required additional memory accesses. It was argued that such functions would be better performed by sequences of simpler instructions if this could yield implementations small enough to leave room for many registers, reducing the number of slow memory accesses. In these simple designs, most instructions are of uniform length and similar structure, arithmetic operations are restricted to CPU registers and only separate load and store instructions access memory. These properties enable a better balancing of pipeline stages than before, making RISC pipelines significantly more efficient and allowing higher clock frequencies.
In the early days of the computer industry, programming was done in assembly language or machine code, which encouraged powerful and easy-to-use instructions. CPU designers therefore tried to make instructions that would do as much work as feasible. With the advent of higher level languages, computer architects also started to create dedicated instructions to directly implement certain central mechanisms of such languages. Another general goal was to provide every possible addressing mode for every instruction, known as orthogonality, to ease compiler implementation. Arithmetic operations could therefore often have results as well as operands directly in memory (in addition to register or immediate).
The attitude at the time was that hardware design was more mature than compiler design so this was in itself also a reason to implement parts of the functionality in hardware or microcode rather than in a memory constrained compiler (or its generated code) alone. After the advent of RISC, this philosophy became retroactively known as complex instruction set computing, or CISC.
CPUs also had relatively few registers, for several reasons:
- More registers also implies more time-consuming saving and restoring of register contents on the machine stack.
- A large number of registers requires a large number of instruction bits as register specifiers, meaning less dense code (see below).
- CPU registers are more expensive than external memory locations; large register sets were cumbersome given the limited circuit board space or chip integration of the time.
An important force encouraging complexity was very limited main memories (on the order of kilobytes). It was therefore advantageous for the code density—the density of information held in computer programs—to be high, leading to features such as highly encoded, variable length instructions, doing data loading as well as calculation (as mentioned above). These issues were of higher priority than the ease of decoding such instructions.
An equally important reason was that main memories were quite slow (a common type was ferrite core memory); by using dense information packing, one could reduce the frequency with which the CPU had to access this slow resource. Modern computers face similar limiting factors: main memories are slow compared to the CPU and the fast cache memories employed to overcome this are limited in size. This may partly explain why highly encoded instruction sets have proven to be as useful as RISC designs in modern computers.
RISC was developed as an alternative to what is now known as CISC. Over the years, other strategies have been implemented as alternatives to RISC and CISC. Some examples are VLIW, MISC, OISC, massive parallel processing, systolic array, reconfigurable computing, and dataflow architecture.
In the mid-1970s, researchers (particularly John Cocke) at IBM (and similar projects elsewhere) demonstrated that the majority of combinations of these orthogonal addressing modes and instructions were not used by most programs generated by compilers available at the time. It proved difficult in many cases to write a compiler with more than limited ability to take advantage of the features provided by conventional CPUs.
It was also discovered that, on microcoded implementations of certain architectures, complex operations tended to be slower than a sequence of simpler operations doing the same thing. This was in part an effect of the fact that many designs were rushed, with little time to optimize or tune every instruction; only those used most often were optimized, and a sequence of those instructions could be faster than a less-tuned instruction performing an equivalent operation as that sequence. One infamous example was the VAX's INDEX instruction.[13]
As mentioned elsewhere, core memory had long been slower than many CPU designs. The advent of semiconductor memory reduced this difference, but it was still apparent that more registers (and later caches) would allow higher CPU operating frequencies. Additional registers would require sizeable chip or board areas which, at the time (1975), could be made available if the complexity of the CPU logic was reduced.
Yet another impetus of both RISC and other designs came from practical measurements on real-world programs. Andrew Tanenbaum summed up many of these, demonstrating that processors often had oversized immediates. For instance, he showed that 98% of all the constants in a program would fit in 13 bits, yet many CPU designs dedicated 16 or 32 bits to store them. This suggests that, to reduce the number of memory accesses, a fixed-length machine could store constants in unused bits of the instruction word itself, so that they would be immediately ready when the CPU needs them (much like immediate addressing in a conventional design). This required small opcodes in order to leave room for a reasonably sized constant in a 32-bit instruction word.
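SPARC, a direct descendant of Berkeley RISC, adopted exactly such a layout: its arithmetic instructions carry a 13-bit sign-extended immediate (simm13) in the low bits of the 32-bit word. A minimal C sketch of extracting such a field (the bit positions follow SPARC, but the routine is an illustration rather than a full decoder):

```c
#include <stdint.h>
#include <stdio.h>

/* Extract a 13-bit sign-extended immediate from the low bits of a 32-bit
 * instruction word, as SPARC's simm13 field does. The remaining 19 bits
 * are left for the opcode, register specifiers and the immediate flag. */
static int32_t simm13(uint32_t insn) {
    int32_t imm = insn & 0x1FFF;     /* keep the low 13 bits    */
    if (imm & 0x1000) imm -= 0x2000; /* sign-extend from bit 12 */
    return imm;                      /* range: -4096 .. 4095    */
}

int main(void) {
    printf("%d\n", simm13(0x00001FFF)); /* prints -1   */
    printf("%d\n", simm13(0x00000FFF)); /* prints 4095 */
    return 0;
}
```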
Since many real-world programs spend most of their time executing simple operations, some researchers decided to focus on making those operations as fast as possible. The clock rate of a CPU is limited by the time it takes to execute the slowest sub-operation of any instruction; decreasing that cycle-time often accelerates the execution of other instructions.[25] The focus on "reduced instructions" led to the resulting machine being called a "reduced instruction set computer" (RISC). The goal was to make instructions so simple that they could easily be pipelined, in order to achieve a single clock throughput at high frequencies.
Later, it was noted that one of the most significant characteristics of RISC processors was that external memory was only accessible by a load or store instruction. All other instructions were limited to internal registers. This simplified many aspects of processor design: allowing instructions to be fixed-length, simplifying pipelines, and isolating the logic for dealing with the delay in completing a memory access (cache miss, etc.) to only two instructions. This led to RISC designs being referred to as load/store architectures.[26]
One more issue is that some complex instructions are difficult to restart, e.g. following a page fault. In some cases, restarting from the beginning will work (although wasteful), but in many cases this would give incorrect results. Therefore, the machine needs to have some hidden state to remember which parts went through and what remains to be done. With a load/store machine, the program counter is sufficient to describe the state of the machine.
The main distinguishing feature of RISC is that the instruction set is optimized for a highly regular instruction pipeline flow.[21] All the other features associated with RISC—branch delay slots, separate instruction and data caches, load/store architecture, large register set, etc.—may seem to be a random assortment of unrelated features, but each of them is helpful in maintaining a regular pipeline flow that completes an instruction every clock cycle.
Comparison to other architectures
Some CPUs have been specifically designed to have a very small set of instructions – but these designs are very different from classic RISC designs, so they have been given other names such as minimal instruction set computer (MISC), or transport triggered architecture (TTA), etc.
Despite many successes, RISC has made few inroads into the desktop PC and commodity server markets, where Intel's x86 platform remains the dominant processor architecture. There are three main reasons for this:
- A very large base of proprietary PC applications are written for x86 or compiled into x86 machine code, whereas no RISC platform has a similar installed base; hence PC users were locked into the x86.
- Although RISC was indeed able to scale up in performance quite quickly and cheaply, Intel took advantage of its large market by spending vast amounts of money on processor development. Intel could spend many times as much as any RISC manufacturer on improving low-level design and manufacturing. The same could not be said about smaller firms like Cyrix and NexGen, but they realized that they could apply (tightly) pipelined design practices also to the x86 architecture, just as in the 486 and Pentium. The 6x86 and MII series did exactly this, but were more advanced; they implemented superscalar speculative execution via register renaming, directly at the x86-semantic level. Others, like the Nx586 and AMD K5, did the same, but indirectly, via dynamic microcode buffering and semi-independent superscalar scheduling and instruction dispatch at the micro-operation level (older or simpler 'CISC' designs typically execute rigid micro-operation sequences directly). The first available chip deploying such dynamic buffering and scheduling techniques was the NexGen Nx586, released in 1994; the AMD K5 was severely delayed and released in 1995.
- Later, more powerful processors, such as Intel P6, AMD K6, AMD K7, and Pentium 4, employed similar dynamic buffering and scheduling principles and implemented loosely coupled superscalar (and speculative) execution of micro-operation sequences generated from several parallel x86 decoding stages. Today, these ideas have been further refined (some x86-pairs are instead merged into a more complex micro-operation, for example) and are still used by modern x86 processors such as Intel Core 2 and AMD K8.
Outside of the desktop arena, however, the ARM architecture (RISC and born at about the same time as SPARC) has to a degree broken the Intel stranglehold with its widespread use in smartphones, tablets and many forms of embedded device. It is also the case that since the Pentium Pro (P6), Intel has been using an internal RISC processor core for its processors.[27]
While early RISC designs differed significantly from contemporary CISC designs, by 2000 the highest performing CPUs in the RISC line were almost indistinguishable from the highest performing CPUs in the CISC line.[28][29][30]
Use of RISC architectures
RISC architectures are now used across a wide range of platforms, from cellular telephones and tablet computers to some of the world's fastest supercomputers such as the K computer, the fastest on the TOP500 list in 2011.[3][4]
Low end and mobile systems
By the beginning of the 21st century, the majority of low end and mobile systems relied on RISC architectures.[31] Examples include:
- The ARM architecture dominates the market for low-power and low-cost embedded systems (typically 200–1800 MHz in 2014). It is used in a number of systems such as most Android-based systems, the Apple iPhone and iPad, Microsoft Windows Phone (formerly Windows Mobile), RIM devices, the Nintendo Game Boy Advance and Nintendo DS, etc.
- The MIPS line (at one point used in many SGI computers), now found in the PlayStation, PlayStation 2, Nintendo 64 and PlayStation Portable game consoles, and residential gateways like the Linksys WRT54G series.
- Hitachi's SuperH, originally in wide use in the Sega Super 32X, Saturn and Dreamcast, now developed and sold by Renesas as the SH4.
- Atmel AVR, used in a variety of products ranging from Xbox handheld controllers to BMW cars.
- RISC-V, the open-source fifth Berkeley RISC ISA, with a 32-bit address space, a small core integer instruction set, an experimental "Compressed" ISA for code density, and designed for standard and special-purpose extensions.
High end RISC and supercomputing
- MIPS, by Silicon Graphics (ceased making MIPS-based systems in 2006).
- SPARC, by Oracle (previously Sun Microsystems), and Fujitsu.
- IBM's Power Architecture, used in many of IBM's supercomputers, midrange servers and workstations.
- Hewlett-Packard's PA-RISC, also known as HP-PA (discontinued at the end of 2008).
- Alpha, used in single-board computers, workstations, servers and supercomputers from Digital Equipment Corporation, Compaq and HP (discontinued as of 2007).
- RISC-V, the open-source fifth Berkeley RISC ISA, with 64- or 128-bit address spaces, the integer core extended with floating point, atomics and vector processing, and designed to be extended with instructions for networking, IO, data processing etc. A 64-bit superscalar design, "Rocket", is available for download.
See also
- Addressing mode
- Classic RISC pipeline
- Complex instruction set computer
- Computer architecture
- Instruction set
- Microprocessor
- Minimal instruction set computer
References
- ^ Northern Illinois University, Department of Computer Science, “RISC - Reduced instruction set computer”
- ^ a b Flynn, Michael J. (1995). Computer architecture: pipelined and parallel processor design. pp. 54–56. ISBN 0867202041.
- ^ a b "Japanese 'K' Computer Is Ranked Most Powerful". The New York Times. 20 June 2011. Retrieved 20 June 2011.
- ^ a b "Supercomputer "K computer" Takes First Place in World". Fujitsu. Retrieved 20 June 2011.
- ^ Fisher, Joseph A.; Faraboschi, Paolo; Young, Cliff (2005). Embedded Computing: A VLIW Approach to Architecture, Compilers and Tools. p. 55. ISBN 1558607668.
- ^ a b Milestones in computer science and information technology by Edwin D. Reilly 2003 ISBN 1-57356-521-0 page 50
- ^ Grishman, Ralph. Assembly Language Programming for the Control Data 6000 Series. Algorithmics Press. 1974. page 12
- ^ Numerical Linear Algebra on High-Performance Computers by Jack J. Dongarra, et al 1987 ISBN 0-89871-428-1 page 6
- ^ Processor architecture: from dataflow to superscalar and beyond by Jurij Šilc, Borut Robič, Theo Ungerer 1999 ISBN 3-540-64798-8 page 33
- ^ a b c d e f Funding a Revolution: Government Support for Computing Research by Committee on Innovations in Computing and Communications 1999 ISBN 0-309-06278-0 page 239
- ^ Processor design: system-on-chip computing for ASICs and FPGAs by Jari Nurmi 2007 ISBN 1-4020-5529-3 pages 40-43
- ^ Readings in computer architecture by Mark Donald Hill, Norman Paul Jouppi, Gurindar Sohi 1999 ISBN 1-55860-539-8 pages 252-254
- ^ a b c Patterson, D. A.; Ditzel, D. R. (1980). "The case for the reduced instruction set computer". ACM SIGARCH Computer Architecture News. 8 (6): 25–33. doi:10.1145/641914.641917. CiteSeerX: 10.1.1.68.9623.
- ^ a b c RISC I: A Reduced Instruction Set VLSI Computer by David A. Patterson and Carlo H. Sequin, in the Proceedings of the 8th annual symposium on Computer Architecture, 1981.
- ^ Design and Implementation of RISC I by Carlo Sequin and David Patterson, in the Proceedings of the Advanced Course on VLSI Architecture, University of Bristol, July 1982 [1]
- ^ a b c d The MIPS-X RISC microprocessor by Paul Chow 1989 ISBN 0-7923-9045-8 pages xix-xx
- ^ a b Processor design: system-on-chip computing for ASICs and FPGAs by Jari Nurmi 2007 ISBN 1-4020-5529-3 pages 52-53
- ^ "Joseph H. Condon". Princeton University History of Science.
- ^ Computer science handbook by Allen B. Tucker 2004 ISBN 1-58488-360-X page 100-6
- ^ Waterman, Andrew; Lee, Yunsup; Patterson, David A.; Asanović, Krste. "The RISC-V Instruction Set Manual, Volume I: Base User-Level ISA version 2 (Technical Report EECS-2014-54)". University of California, Berkeley. Retrieved 26 Dec 2014.
- ^ a b Margarita Esponda and Raúl Rojas. "The RISC Concept - A Survey of Implementations". Section 2: "The confusion around the RISC concept". 1991.
- ^ "RISC vs. CISC: the Post-RISC Era" by Jon "Hannibal" Stokes (Arstechnica)
- ^ "RISC versus CISC" by Lloyd Borrett Australian Personal Computer, June 1991
- ^ "Guide to RISC Processors for Programmers and Engineers": Chapter 3: "RISC Principles" by Sivarama P. Dandamudi, 2005, ISBN 978-0-387-21017-9. "the main goal was not to reduce the number of instructions, but the complexity"
- ^ "Microprocessors From the Programmer's Perspective" by Andrew Schulman 1990
- ^ Kevin Dowd. High Performance Computing. O'Reilly & Associates, Inc. 1993.
- ^ "Intel x86 Processors – CISC or RISC? Or both??" by Sundar Srinivasan
- ^ "Schaum's Outline of Computer Architecture" by Nicholas P. Carter 2002 p. 96 ISBN 0-07-136207-X
- ^ "CISC, RISC, and DSP Microprocessors" by Douglas L. Jones 2000
- ^ "A History of Apple's Operating Systems" by Amit Singh. "the line between RISC and CISC has been growing fuzzier over the years."
- ^ Guide to RISC processors: for programmers and engineers by Sivarama P. Dandamudi - 2005 ISBN 0-387-21017-2 pages 121-123