Intel iAPX 432

From Wikipedia, the free encyclopedia

General information
  • Launched: 1981
  • Common manufacturer: Intel
Performance
  • Max. CPU clock rate: 5 MHz to 8 MHz

The iAPX 432 (intel Advanced Processor architecture) was Intel's first 32-bit microprocessor design, introduced in 1981 as a set of three integrated circuits. It was intended to be Intel's major design for the 1980s, implementing many advanced multitasking and memory management features. The design was therefore referred to as a Micromainframe.

Originally designed for clock frequencies of up to 10 MHz, the devices actually sold were specified for maximum clock speeds of 4 MHz, 5 MHz, 7 MHz and 8 MHz,[1] with a peak performance of 2 million instructions per second at 8 MHz.[2]

The iAPX 432 was "designed to be programmed entirely in high-level languages",[3] with Ada being the primary language, and it supported object-oriented programming and garbage collection directly in hardware and microcode. Direct support for various data structures was also intended to allow modern operating systems for the iAPX 432 to be implemented using far less program code than for ordinary processors. These properties and features resulted in a hardware and microcode design that was much more complex than most processors of the era, especially microprocessors.

Using the semiconductor technology of its day, Intel's engineers were unable to translate the design into a very efficient first implementation. Along with the lack of optimization in a premature Ada compiler, this contributed to computer systems that were slow but expensive, performing typical benchmarks at roughly one quarter the speed of the new 80286 chip at the same clock frequency (in early 1982).[citation needed]

This initial performance gap relative to the low-profile, low-priced 8086 line was probably the main reason why Intel's plan to replace the latter (later known as x86) with the iAPX 432 failed. Although engineers saw ways to improve a next-generation design, the iAPX 432's capability architecture had by then come to be regarded more as an implementation overhead than as the simplifying support it was intended to be.[4]

The iAPX 432 project was a commercial failure for Intel.[5]

History

Development

Intel's 432 project started in 1975, a year after the 8-bit Intel 8080 was completed and a year before their 16-bit 8086 project began. The 432 project was initially named the 8800, as their next step beyond the existing Intel 8008 and 8080 microprocessors. This became a very big step: the 8-bit microprocessors' instruction sets were too primitive to support compiled programs and large software systems. Intel now aimed to build a sophisticated complete system in a few LSI chips that was functionally equal to or better than the best 32-bit minicomputers and mainframes, which required entire cabinets of older chips. This system would support multiprocessors, modular expansion, fault tolerance, advanced operating systems, advanced programming languages, very large applications, ultra reliability, and ultra security. Its architecture would address the needs of Intel's customers for a decade.[6]

The iAPX 432 development team was managed by Bill Lattin, with Justin Rattner as the lead engineer. (Rattner would later become CTO of Intel.) Initially the team worked in Santa Clara, but in March 1977 Lattin and his team of 17 engineers moved to Intel's new site in Portland.[7]

It soon became clear that it would take several years and many engineers to design all this. It would similarly take several years of further progress along Moore's law before improved chip manufacturing could fit all of this into a few dense chips. Meanwhile, Intel urgently needed a simpler interim product to meet the immediate competition from Motorola, Zilog, and National Semiconductor. So Intel began a rushed project to design the 8086 as a low-risk incremental evolution of the 8080, using a separate design team. The mass-market 8086 shipped in 1978.

The 8086 was designed to be upward-compatible with existing 8080 software (at the assembly source code level). In contrast, the 432 had no software compatibility or migration requirements. The architects had total freedom to do a novel design from scratch, using whatever techniques they guessed would be best for large-scale systems and software. They applied fashionable computer science concepts from universities, particularly capability machines, object-oriented programming, high-level CISC machines, Ada, and densely encoded instructions. This ambitious mix of novel features made the chip larger and more complex; the complexity limited the clock speed and lengthened the design schedule.

The core of the design — the main processor — was termed the General Data Processor (GDP) and built as two chips: one (the 43201) to fetch and decode instructions, the other (the 43202) to execute them. Most systems would also include a third chip: the 43203 Interface Processor (IP) which operated as a channel controller for I/O.

These were among the largest chip designs of the era, measured by transistor count. The two-chip GDP had a combined count of approximately 97,000 transistors, while the single-chip IP had approximately 49,000. By comparison, the Motorola 68000 (introduced in 1979) had approximately 40,000 transistors.

In 1983, Intel released two additional integrated circuits for the iAPX 432 Interconnect Architecture: the 43204 Bus Interface Unit (BIU) and 43205 Memory Control Unit (MCU). These chips allowed for nearly glueless multiprocessor systems with up to 63 nodes.

The project's failures

The innovative features of the iAPX 432 were individually detrimental to good performance.[citation needed] Combined, they made it run many times[weasel words] slower than contemporary conventional microprocessor designs such as the Motorola 68010 and Intel 80286. One problem was that the two-chip implementation of the GDP limited it to the speed of the motherboard's electrical wiring.[citation needed] A larger issue was that the capability architecture needed large associative caches to run efficiently,[citation needed] but the chips had no room left for them. The instruction set also used bit-aligned variable-length instructions (as opposed to the byte- or word-aligned semi-fixed formats used in the majority of computer designs), which made instruction decoding much more complex than in other designs.[which?] In addition, the BIU was designed to support fault-tolerant systems, and in doing so up to 40% of the bus time was held up in wait states.[citation needed]
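To illustrate the decoding cost mentioned above, the following C sketch extracts a field that begins at an arbitrary bit offset in an instruction stream. The LSB-first bit order and the helper itself are assumptions made for exposition, not the 432's actual instruction format. A byte-aligned architecture can usually fetch an opcode with a single aligned load, whereas a bit-aligned stream requires locating the containing bytes, shifting, and masking for every field:

    #include <stdint.h>
    #include <stddef.h>

    /* Extract an n-bit field (n <= 32) starting at an arbitrary bit
     * offset. Bits are taken LSB-first within each byte, an ordering
     * assumed for this sketch. */
    static uint32_t extract_bits(const uint8_t *code, size_t bit_off, unsigned n)
    {
        uint32_t field = 0;
        for (unsigned i = 0; i < n; i++) {
            size_t bit = bit_off + i;
            unsigned b = (code[bit / 8] >> (bit % 8)) & 1u;  /* isolate one bit */
            field |= (uint32_t)b << i;
        }
        return field;
    }

A real decoder would use barrel shifters rather than a loop, but the alignment work remains on the critical path of every instruction fetch.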

Another major problem was its immature, untuned Ada compiler.[citation needed] It used high-cost object-oriented instructions in every case, instead of the faster scalar instructions where it would have made sense to do so.[citation needed] For instance, the iAPX 432 included a very expensive inter-module procedure call instruction, which the compiler used for all calls, despite the existence of much faster branch-and-link instructions. Another very slow call was enter_environment, which set up the memory protection. The compiler ran this for every single variable in the system, even though the vast majority[weasel words] were running inside an existing environment and did not have to be checked.[citation needed] To make matters worse, data passed to and from procedures was always passed by value rather than by reference,[citation needed] in many cases[which?] requiring huge memory copies.

Impact and similar designs

An outcome of the failure of the 432 was that microprocessor designers[who?] concluded that object support in the chip leads to a complex design that will invariably run slowly, and the 432 was often cited[citation needed] as a counter-example by proponents of RISC designs. However, it is held by some[who?] that the OO support was not the primary problem with the 432, and that the implementation shortcomings (especially in the compiler) mentioned above would have made any CPU design slow.[citation needed] Since the iAPX 432 there has been only one other attempt at a similar design,[citation needed] the Rekursiv processor, although the INMOS Transputer's process support was similar — and very fast.[citation needed]

Intel had spent considerable time, money and mindshare on the 432,[weasel words] had a skilled[citation needed] team[who?] devoted to it, and was unwilling to abandon it entirely after its failure in the marketplace. A new architect—Glenford Myers—was brought in to produce an entirely new architecture and implementation for the core processor, which would be built in a joint Intel/Siemens project (later BiiN), resulting in the i960 series of processors. The i960 RISC subset became popular[citation needed] for a time[when?] in the embedded processor market, but the high-end 960MC and the tagged-memory 960MX were marketed only for military applications.[citation needed]

Architecture

Object-oriented memory and capabilities

The iAPX 432 has hardware and microcode support for object-oriented programming and capability-based addressing.[8] The system uses segmented memory, with up to 2²⁴ segments of up to 64 kB each, providing a total virtual address space of 2⁴⁰ bytes. The physical address space is 2²⁴ bytes (16 MB).

Programs are not able to reference data or instructions by address; instead they must specify a segment and an offset within the segment. Segments are referenced by Access Descriptors (ADs), which provide an index into the system object table and a set of rights (capabilities) governing accesses to that segment. Segments may be "access segments", which can only contain Access Descriptors, or "data segments" which cannot contain ADs. The hardware and microcode rigidly enforce the distinction between data and access segments, and will not allow software to treat data as access descriptors, or vice versa.
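A conceptual C model of this addressing scheme may help; the field widths, names and rights bits below are simplified assumptions for exposition, not the 432's exact encodings. Every reference is mediated by an Access Descriptor, and the hardware checks rights, segment kind and bounds on each access:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    enum seg_kind { DATA_SEGMENT, ACCESS_SEGMENT };

    /* One entry in the system object table (simplified). */
    struct segment {
        enum seg_kind kind;    /* ADs may live only in access segments */
        uint32_t      length;  /* segment length in bytes */
        uint8_t      *base;    /* storage managed by the system */
    };

    /* An Access Descriptor names a segment indirectly and carries
     * the rights held by this particular reference. */
    struct access_descriptor {
        uint32_t index;        /* index into the system object table */
        bool     read_rights;
        bool     write_rights;
    };

    extern struct segment object_table[];  /* the system object table */

    /* A read must name a segment plus offset; there is no way to forge
     * a raw address. Returns NULL on what would be a protection fault. */
    static uint8_t *checked_read(const struct access_descriptor *ad,
                                 uint32_t offset)
    {
        struct segment *seg = &object_table[ad->index];
        if (!ad->read_rights)          return NULL;  /* rights fault    */
        if (seg->kind != DATA_SEGMENT) return NULL;  /* ADs aren't data */
        if (offset >= seg->length)     return NULL;  /* bounds fault    */
        return seg->base + offset;
    }

Because rights travel with the descriptor rather than the segment, two programs can hold references to the same segment with different permissions.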

System-defined objects consist of either a single access segment, or an access segment and a data segment. System-defined segments contain data or access descriptors for system-defined data at designated offsets, though the operating system or user software may extend these with additional data. Each system object has a type field which is checked by microcode, such that, for example, a Port Object cannot be used where a Carrier Object is needed. User programs can define new object types, which get the full benefit of the hardware type checking, through the use of Type Control Objects (TCOs).
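Conceptually, the microcode's type check behaves like the following C sketch; the type codes, header layout and failure reporting here are invented for illustration:

    #include <stdint.h>

    /* Illustrative type codes; the real machine used system-defined
     * types plus user types registered through a TCO. */
    enum object_type { PORT_OBJECT, CARRIER_OBJECT, PROCESS_OBJECT,
                       USER_TYPE_BASE };

    struct object_header {
        uint32_t type;  /* system type, or USER_TYPE_BASE + TCO index */
    };

    /* In hardware this check runs in microcode and raises a protection
     * fault on mismatch; here we simply report failure. */
    static int require_type(const struct object_header *obj, uint32_t expected)
    {
        return (obj->type == expected) ? 0 : -1;
    }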

In Release 1 of the iAPX 432 architecture, a system-defined object typically consisted of an access segment, and optionally (depending on the object type) a data segment specified by an access descriptor at a fixed offset within the access segment.

By Release 3 of the architecture, in order to improve performance, access segments and data segments were combined into single segments of up to 128 kB, split into an access part and a data part of 0–64 kB each. This reduced the number of object table lookups dramatically, and doubled the maximum virtual address space.[9]

Garbage collection

Software running on the 432 does not need to explicitly deallocate objects that are no longer needed, and in fact no method is provided to do so. Instead, the microcode implements part of the marking portion[citation needed] of Edsger Dijkstra's on-the-fly parallel garbage collection algorithm (a mark-and-sweep style collector).[10] The entries in the system object table contain the bits used to mark each object as being white, black, or grey as needed by the collector.
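The following C sketch is a sequential rendering of tri-colour marking in the spirit of that algorithm. The object layout and fixed-size work-list are assumptions made for this sketch; on the 432 the colour bits lived in the system object table, and the work was split between microcode and the operating system:

    #include <assert.h>
    #include <stddef.h>

    /* White: not yet visited; grey: reachable, children not yet
     * scanned; black: reachable and fully scanned. */
    enum colour { WHITE, GREY, BLACK };

    struct object {
        enum colour    colour;
        size_t         n_refs;
        struct object *refs[8];   /* outgoing references, simplified */
    };

    #define WORKLIST_MAX 256      /* fixed bound is a simplification */

    static void mark(struct object **roots, size_t n_roots)
    {
        struct object *worklist[WORKLIST_MAX];
        size_t top = 0;

        for (size_t i = 0; i < n_roots; i++) {       /* grey the roots */
            assert(top < WORKLIST_MAX);
            roots[i]->colour = GREY;
            worklist[top++] = roots[i];
        }
        while (top > 0) {
            struct object *obj = worklist[--top];
            for (size_t i = 0; i < obj->n_refs; i++) {
                struct object *child = obj->refs[i];
                if (child != NULL && child->colour == WHITE) {
                    assert(top < WORKLIST_MAX);
                    child->colour = GREY;            /* queue for scan */
                    worklist[top++] = child;
                }
            }
            obj->colour = BLACK;  /* all children queued or marked */
        }
        /* Objects still WHITE after marking are unreachable. */
    }

After marking, the sweep reclaims every object left white; Dijkstra's actual algorithm interleaves this marking with the running program, which is what made it suitable for partial implementation in microcode.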

The iMAX-432 operating system includes the software portion of the garbage collector.

References

  1. ^ Intel iAPX-432 Micromainframe
  2. ^ [1][dead link]
  3. ^ Intel Corporation (1981). Introduction to the iAPX 432 Architecture (PDF). p. iii.
  4. ^ Colwell, Robert; Gehringer, Edward (1988). "Performance Effects of Architectural Complexity in the Intel 432" (PDF). ACM Transactions on Computer Systems. 6 (3). Association for Computing Machinery: 296–339. Retrieved 12 June 2013.
  5. ^ Dvorak, John C. "Whatever Happened to the Intel iAPX432?". Retrieved 19 July 2012.
  6. ^ David King, Liang Zhou, Jon Bryson, David Dickson. Intel iAPX 432 (computer science project paper). Online at http://www.brouhaha.com/~eric/retrocomputing/intel/iapx432/cs460
  7. ^ Heike Mayer (2012). Entrepreneurship and Innovation in Second Tier Regions. Edward Elgar Publishing. pp. 100–101. ISBN 978-0-85793-869-5.
  8. ^ Henry M. Levy, chapter 9 of Capability-Based Computer Systems, Digital Press, 1984. Online at http://www.cs.washington.edu/homes/levy/capabook/Chapter9.pdf
  9. ^ Glenford J. Myers, Advances in Computer Architecture, 2nd edition, Wiley, 1982, ISBN 0-471-07878-6. Section VI covers the iAPX 432.
  10. ^ Edsger W. Dijkstra, Leslie Lamport, A. J. Martin, C. S. Scholten, E. F. M. Steffens, "On-the-fly garbage collection: an exercise in cooperation".
