Talk:Computer architecture/Archive 1

Techextreme.com

I pulled the link to techextreme.com. All it has are advertisements and an offer to buy the domain. -- Bill

Bill, it's usually best to put new comments at the bottom of each "talk" page rather than at the top. Also, you can easily sign your name in a handy Wikilinked format and add an automatic timestamp to your signature simply by typing four tildes (~~~~) at the end of your posting.
Atlant 11:52, 29 July 2005 (UTC)

Any reference books

It would be great if you could suggest some books on this topic in the references/see-also section. - Bose 203.200.30.3 11:53, 10 September 2005 (UTC)

C. Gordon Bell (and Bill Strecker?) wrote a famous one; I'll look on my bookshelf tomorrow.
Atlant 23:49, 10 September 2005 (UTC)

different types of speed: latency, throughput, etc.

The paragraph starts by saying there are different types of speed but then mentions only one, latency. At a minimum there should be a mention of the classic dichotomy between latency and throughput. Ideogram 14:52, 31 May 2006 (UTC)

Virtual Memory & Reconfigurable Computing

Although these two topics are somewhat related to computer architecture, they do not embody it. Meaning, there are dozens of other topics in computer architecture that are just as important (if not more so) that are not mentioned. I believe the article should be kept more general, perhaps adding those items to a separate "also see" list at the bottom.

- Quanticles 09:13, 28 January 2007 (UTC)


I also agree that these two items should not receive the attention that they do on this page. The fact that many processors today (embedded) do not make use of virtual memory, and that reconfigurable computing is as limited as it is, shows that these two topics do not embody computer architecture and should only be listed as related issues.
- Some anonymous guy.

Ahmdal's Law

I cannot find any mention of "Ahmdal's Law" in Wikipedia. Does it belong in this article? Should it have its own article? Where is the primary source one must seek to find the origin of this "Ahmdal's Law" that I only hear about second-hand?--User:William Morgan 04:38, 27 August 2006 (UTC)

There is an article actually, at Amdahl's law. However, when I searched for Amdahl I was taken straight to a page about some company, so I have added an "other uses" link to that article. AlyM 09:14, 27 August 2006 (UTC)

IMHO, it's such a common-sense principle that it does not really need its own article, or even a name. —Preceding unsigned comment added by 165.123.166.155 (talk) 03:19, 3 April 2008 (UTC)
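
For reference, the principle under discussion: if a fraction <math>p</math> of a program's execution time is sped up by a factor <math>s</math>, Amdahl's law gives the overall speedup as

<math>S = \frac{1}{(1 - p) + \frac{p}{s}}</math>

So, for example, speeding up half of a program (<math>p = 0.5</math>) by a factor of 2 (<math>s = 2</math>) yields only <math>1/(0.5 + 0.25) \approx 1.33</math> overall.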

Abstraction Layer question:

Regarding the color image with the text "A typical vision of a computer architecture as a series of abstraction layers:...", I don't understand how "assembler" can be a layer within a computer's architecture. I might agree if the term 'machine language' (or similar) were used here, but to me an 'assembler' is a running program which turns human-readable machine-instruction input (ASCII characters, e.g.) into machine-readable code. Obviously, an assembler is used by programmers when writing (at least part of) an OS "kernel," but you also need software (such as an assembler) to create "firmware code" that can be understood by the "hardware," yet we see no "assembler" layer between "hardware" and "firmware" in the illustration. I'm merely an assembly hacker and technician, but still, either 'assembly' (perhaps, as a required operation) or 'Instruction Set reader(?)' (as a reference to how the machine's CPU can execute the "kernel" code) would make more sense to me as an "abstraction layer" than "assembler," but I'm certainly willing to learn. Daniel B. Sedory 00:18, 22 March 2007 (UTC)

I've already read the article here, but I still see no connection between its discussion of computer architecture and the term "assembler"; the only time it appears on the whole page is as a link under the diagram, which jumps to "Assembly Language", where you'll find a definition similar to what I stated above:

"An assembly language program is translated into the target computer's machine code by a utility program called an assembler. The assembler performs a (more or less) one-to-one (isomorphic) translation from mnemonic statements into machine instructions and data."

So, is this diagram in error, or can someone please explain to me why the term "assembler" should appear in it? Daniel B. Sedory 21:33, 12 April 2007 (UTC)

Hi Daniel, I saw the comment on my talk page; unfortunately I didn't see this discussion earlier. Yes, I added that image based on Tanenbaum's '79 book and the classes I took at university. Tanenbaum defines an abstraction level as composed of a set of resources/services and a language to access them; so it is formally more correct to say that "assembly" is the language and "assembler" the theoretically-conceived abstraction level. (This article, like all the ones on related topics, is still at a very early stage of development and gives no explanation.)--BMF81 11:11, 13 April 2007 (UTC)
I guess Herr Tanenbaum is mistaken. If not, why not add ALL programming languages and scripts before the application level, to "make the world complete"? The terms "assembly" and "assembler" mean the same thing - an A-level language (a human-understandable language close to that of the machine). It was meant "to assemble code"; that's why calling it "assembler" as a tool or "assembly" as a process means exactly the same thing. Furthermore, almost everyone who has programmed in or written books about this language has referred to it as "assembler". The dispute over these two terms is of an academic nature, and arose much later than the language was invented and used; that's why it is purely abstract. To make it short: hardware DOES NOT understand assembly code. Only after compiling it (even if very little needs to be done, it is still a complete transformation) does the hardware get the right "food". Now, replacing "assembler" with "machine code" brings it closer to reality, though it should still be noted that there can be (and usually are) two or more pieces of hardware, interconnected on one central board, that use entirely different code! Compare CPU and GPU programming! Based on that, "assembler" should be replaced with "binary code". 213.196.248.84 (talk) 05:59, 24 April 2008 (UTC)
Yes, it would definitely have been helpful to me if the article contained a statement to that effect: that "assembler", with regard to abstraction layers, has this special meaning of a "theoretically-conceived abstraction level" and is not one of the many existing software assemblers! Does this mean some of the other terms in the diagram are being used in a far less than common meaning as well? As I said above, if there's an "assembler level" between "firmware" and the "kernel," why isn't there something similar between "hardware" and "firmware"? Or is that conveniently left as something outside the realm of computer architecture; something only for 'hardware vendors' to deal with? Daniel B. Sedory 11:32, 20 April 2007 (UTC)
Yes, exactly. "Hardware" there just means "outside the scope of interest of computer scientists". But as engineers and physicists know, it can be further layered.--BMF81 04:31, 22 April 2007 (UTC)
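
As a minimal illustration of the mnemonic-versus-bytes point argued above (assuming 32-bit x86 encodings; the program itself is hypothetical and only prints the bytes), the assembler's "(more or less) one-to-one" translation quoted earlier maps each mnemonic to fixed opcode bytes, and those bytes are all the hardware ever sees:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    /* Hand-assembled 32-bit x86: each mnemonic maps to fixed opcode bytes. */
    unsigned char code[] = {
        0xB8, 0x01, 0x00, 0x00, 0x00, /* mov eax, 1   */
        0x31, 0xC9,                   /* xor ecx, ecx */
        0x90,                         /* nop          */
        0xC3,                         /* ret          */
    };
    /* Print what the CPU would actually fetch: bytes, not mnemonics. */
    for (size_t i = 0; i < sizeof code; i++)
        printf("%02X ", code[i]);
    putchar('\n');
    return 0;
}
</syntaxhighlight>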

assembler is not an abstraction layer

It just does its work and goes away; it's not a layer. The image is wrong. 196.218.92.201 17:03, 24 September 2007 (UTC)

  • If you read my questions and comments above, you'd see I was of the same opinion when I first read this article. However, I was told this is how a number of Computer Science professors have taught the subject. Do you have any proof to the contrary? Especially, can you provide some quotes from any textbook on the subject? (Would be good if you'd sign your name too.) Daniel B. Sedory 02:32, 26 September 2007 (UTC)

These last two sections can't be serious. Let these "computer science professors" come forward and try to publish their "theoretically-conceived abstraction level" ideas. Still ROTF over the term itself. What claptrap. I hope no good money was paid to attend classes where this nonsense was spouted. Vyyjpkmi 03:48, 26 September 2007 (UTC)

  • Assembly is NOT a part of PC architecture; this is ridiculous!! It's an A-level programming language, just like any other language, but close to the hardware. It sticks with the hardware - that means coding for x86 and 68K is a bit different - but it still NEEDS a compiler and linker to become machine code! OK, add a "Machine Code" layer instead of "Assembly", but even that is wrong, because all apps - including firmware, the kernel and applications - run (are injected) as machine code, with the minor exception of scripts, managed code, etc. - all those things that get interpreted at runtime.

Dear "professors", please do NOT confuse newbies. PC is real and functioning, theories may NOT. So please DO some practice for each theory,.. or at least disasm some stuff, before writing and lobbing such bulls#!t(excuse me!) 213.196.248.84 (talk) 05:38, 24 April 2008 (UTC)

This diagram is too misleading to leave even as a placeholder for a better one. Brock (talk) 14:45, 17 October 2008 (UTC)

Merge Hardware architecture into this article

The Hardware architecture article in fact discusses "computer architecture", often assuming "hardware architecture + software architecture = system architecture". The lead section goes to great effort to explain that a computer is not the only thing that runs software. It gives examples of an airplane, the Apollo spacecraft, and a modern car as pieces of hardware that are architected to run software, too. I think a car architect rarely calls himself a hardware architect, and in fact rarely designs the embedded systems (=computers) that actually run the car's software. He needs a computer architect for that.

If there is in fact a term "hardware architecture" in common use, I doubt it means "architecture of machines that can run software". After the merge, a new stub could be created with a proper, sourced definition.

"Hardware" is defined by Wiktionary as:

  1. Metal implements.
  2. Firearm.
  3. Electronic equipment.
  4. The part of a computer that is fixed.
  5. Fixtures, equipment, tools and devices used for general purpose construction.

--Kubanczyk 22:14, 28 October 2007 (UTC)

Hardware architecture is indeed a term in common use within the field of computer science. I feel that redirecting to an article titled Computer Architecture would be confusing to many people. --Rickpock (talk) 18:52, 14 February 2008 (UTC)

dalvir singh —Preceding unsigned comment added by 202.164.44.59 (talk) 09:46, 22 October 2009 (UTC)

Computer architecture: CA is the design of computers, including their instruction set, hardware components, and system organization. CA deals with the design of computers and with computer systems.

CA = instruction set arch. + machine organisation —Preceding unsigned comment added by 124.253.229.168 (talk) 02:54, 23 September 2010 (UTC)

Open Architecture

I think this article needs to include at least a definition of (and probably a link to) "open architecture", but I'm not quite sure exactly where to put it. Does anyone feel competent? Darkman101 (talk) 19:25, 5 October 2011 (UTC)

Performance

The article reads: Computer performance can also be measured with the amount of cache a processor has. If the speed, MHz or GHz, were to be a car then the cache is like a traffic light. No matter how fast the car goes, it still will not be stopped by a green traffic light. The higher the speed, and the greater the cache, the faster a processor runs.

Am I the only one who: 1) has no idea what the traffic light example is talking about, and 2) is pretty sure that very big caches are not a good idea (then they would not be faster than main memory), and that a computer's performance cannot be measured (well) by its cache size? —Preceding unsigned comment added by 165.123.166.155 (talk) 03:28, 3 April 2008 (UTC)


The car example is suicidal at best. Please don't follow it in real life, and do be careful when you come across a stale green light. Getting back to the discussion, the best analogy for a cache would be that of rear spoilers, which provide some enhancement to speed but have to be kept clean after use, otherwise they might degrade performance. Just like spoilers, caches are useful in certain specialised operations, like mathematical functions etc. 0police (talk) 17:09, 4 January 2012 (UTC)
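
A less hazardous way to reason about caches than any car analogy is the standard average-memory-access-time formula (as in Hennessy and Patterson):

<math>\text{AMAT} = \text{hit time} + \text{miss rate} \times \text{miss penalty}</math>

A bigger cache usually lowers the miss rate but can raise the hit time, so cache size alone is not a good performance measure - consistent with the objections above.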

CPU design in 2006

"The performance race in microprocessors, in which they typically compete by increasing the clock frequency and adding more gates, is over," said Alf-Egil Bogen http://pldesignline.com/news/showArticle.jhtml?articleId=177105335

"Designers quietly tap async practices" http://pldesignline.com/news/174402071

"ARM-based clockless processor targets real-time chip designs" Marty Gold, 2006-02-09 http://www.eeproductcenter.com/micro/brief/showArticle.jhtml?articleID=179102696

"antenna-in-package (AiP) system" http://eet.com/news/latest/showArticle.jhtml?articleID=179103090

"Data Forwarding Logic, Zero-Cycle Branches, High Code Density Increase Throughput Per Cycle" http://www.atmel.com/dyn/corporate/view_detail.asp?FileName=AVR32bit_long_2_14.html

Does this article (or its sub-articles) adequately explain the terminology in the above reports?


Please sign your comments (with "0police (talk) 17:24, 4 January 2012 (UTC)", for example) so that others can know how old your suggestions are. Also, why should this article include these reports? Please give a summary suggestion. 0police (talk) 17:24, 4 January 2012 (UTC)

Energy v. Power

The top of the article says that simulators calculate "energy in watts". Watt is a unit of power, not energy. I assume power is the correct term here. Rdv (talk) 01:02, 18 March 2012 (UTC)
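
For the record: power is the rate of energy transfer, <math>P = \tfrac{dE}{dt}</math>, so <math>E = \int P\,dt</math>. A simulator reporting watts is reporting power; energy would be reported in joules (watt-seconds).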

Computer architecture versus organization

I am currently taking a computer organization class and we went over the difference between computer organization and architecture for almost an hour. Why does a search for 'computer organization' redirect to this page? RyanEberhart 18:48, 15 March 2006 (UTC)

At first blush, to my ear, "computer organization" and "computer architecture" sound like a difference without a distinction. But I'd appreciate it if you'd tell us what your/your instructor's point of view was on the difference.
Atlant 20:54, 15 March 2006 (UTC)
In general, that which is visible to an assembly programmer is organization, while that which is invisible is architecture. For example, the number/size of registers would be organization, but stuff like pipelining, branch prediction, on-the-fly swapping of instruction execution, etc. is architecture. We have two separate courses, one on each topic.
-RyanEberhart 02:58, 16 March 2006 (UTC)
Urgh... Speaking as a practitioner of computer architecture for 20+ years (and the possible inventor of the terms MacroArchitecture and UISA): that which is visible to an assembly language programmer is the ISA (Instruction Set Architecture), or more specifically the Assembly Language Instruction Set Architecture.

Stuff like pipelining, etc. is microarchitecture. "Architecture" refers to an abstraction that is supposed to be maintained across generations.

-Andy Glew, —Preceding unsigned comment added by 134.134.136.34 (talk) 00:47, 5 April 2008 (UTC)


Okay, after a lot of thought and frequent reference to my Third Edition Hennessy and Patterson (Computer Architecture, A Quantitative Approach), it seems to me that we might be able to find consensus. H&P are themselves cross-bay academic rivals, and they don't always agree on everything. That, plus their amazing attention to scientific detail, is part of what makes their book probably the most authoritative in the field. From p. 10 of 3rd ed. hardback:

In this book, the word architecture is intended to cover all three aspects of computer design---instruction set architecture, organization, and hardware.

The book clearly uses the disambiguated term "instruction set architecture" (ISA) to define what the Wikipedia "computer organization" article, for example, had been more loosely referring to simply as "computer architecture." ISA is everything connected to the instruction set, including (as above) the number/size of registers. Organization would include pipelining, branch prediction, etc. Architecture, on its own, can include all these things.

With that in mind, I've been actively working to bring the writeups within the articles "computer architecture" and "computer organization" into a more cosmic alignment.

(PS Ryan, it appears to me that Rochester actually uses H&P 3rd ed. as its textbook for ECE201/401 Advanced Computer Architecture (http://www.ece.rochester.edu/courses/ECE201/)...is it possible your instructor was a little bit confused...?)

- Su-steve 23:08, 19 February 2007 (UTC)

Basing myself upon: (a) Hennessy and Patterson's 4th edition of Computer Architecture - A Quantitative Approach (pages 8, 12)

(b) John P. Hayes's 2nd edition of "Computer Architecture and Organization" (p.47)

(c) Moderation of the meaning given by Hayes regarding the distinction between architecture and implementation with the meaning given by Hennessy and Patterson, which integrates implementation with architecture, the latter two detaching themselves from past interpretations given by thought similar to that expressed by Hayes in (b)

(d) Expansion of the interpretation of Computer Architecture given at http://en.wikipedia.org/wiki/Microarchitecture

...I would like to propose that Computer Architecture may be graphically described as shown in the figure. The relationship diagram may be expanded by addition of further sub-classifications.

http://commons.wikimedia.org/wiki/File:Relationship_between_Computer_Architecture_and_Computer_Organisation.jpg

Edepa (talk) 21:14, 30 December 2012 (UTC)

Origin of term

The article should mention the origin of the term "computer architecture", with Blaauw and Brooks in the 1960s. The first general usage of the term was in reference to the IBM System/360 series, which was the first line of computers designed from the outset to offer a wide range of models all conforming to a common architectural specification (IBM System/360 Principles of Operation). --Brouhaha 21:44, 23 Jan 2005 (UTC)

Comment on the term:

Although architecture (of buildings) and computers have a rather complicated shared history (in theory, to say the least), maybe it is worth mentioning that the etymology of the word "architecture" goes back to "architect", from the Greek arkhitekton, "master builder". [1]

Aalomaim (talk) 08:47, 12 April 2013 (UTC)

I also feel that the configurable computing section should be yanked. It is a small part of computer architecture and should be linked rather than given this much dedicated space in the main computer architecture article.

Architecture versus Architecture (Introduction text)

I edited a sentence in the intro where it mentions the "architecture of buildings." The old sentence implied that building architecture is not logical; it also gave the impression that only computer architecture is able to define "systems to serve particular purposes" (too general - building architecture can be said to do that as well; please let me know if I must elaborate). I tried my best to find a way to differentiate the two kinds of architecture (if that was the point), and I found the easiest way is to differentiate them by "discipline", but this might be redundant. Overall, I think there is no need to mention the "architecture of buildings" in the intro; a link to architecture is sufficient (except perhaps in the "Origin of the term" section or its etymology; see my comment above under "Origin of term").


Aalomaim (talk) 08:47, 12 April 2013 (UTC)

ESCAPE

The link to the ESCAPE cpu simulation doesn't seem to lead anywhere. I did a moderate google search for another link to the software, but I can't seem to find anything. Anyone know where it might be? —Preceding unsigned comment added by 24.236.97.56 (talk) 08:06, 7 December 2008 (UTC)

I see that every mention of the ESCAPE simulation has been removed from this article. [1] I put a link to the current location of the ESCAPE cpu simulation source code into the computer architecture simulator article. To comply with the WP:PRESERVE policy, should this article also link to that source code? --DavidCary (talk) 17:56, 27 January 2014 (UTC)

Archiving talk page

I think it might be useful to archive conversations on this talk page. I've seen other talk pages that use MiszaBot and I might try to configure it for this page unless someone objects. I realize that the talk page is not very active, but I think that it would be easier to work with if old conversations were archived. Gabriel Southern (talk) 20:05, 9 March 2014 (UTC)

Can someone double check the grammar and syntax of the article?

I think I caught all the errors, but it would help if someone went over it again just to make sure there are no major errors and that, for the most part, the article follows this. BestKogNA (talk) 14:16, 11 May 2017 (UTC)

subarchitecture

Should there be, somewhere in Wikipedia, a discussion of subarchitecture? Right now, I am thinking of it in the context of Sun4, Sun4c, Sun4m, and Sun4u, but many architectures over the years have had subarchitectures worth noting. In the Sun4 case, as far as I know, it is mostly differences in MMU design, and so important for the OS, much less for users, but it still should go somewhere. Gah4 (talk) 19:48, 10 June 2019 (UTC)

The Sun architectures were system architectures. Sun-1 was 68000-based, with a Sun MMU; Sun-1U/Sun-2 were 68010-based, with a Sun MMU; Sun-3 was 68020-based, with a Sun MMU; Sun-3x was 68030-based, with the on-chip MMU; Sun-4 was based on various SPARC processors, with a Sun MMU and a VMEbus (as earlier Suns had); Sun-4c was based on an LSI Logic SPARC processor, with a Sun-style MMU (as I remember) and an SBus; Sun-4e had the same CPU and MMU, but a VMEbus; Sun-4m was based either on SuperSPARC or hyperSPARC processors, with the on-chip, in-memory-page-table-based Sun Reference MMU, and using the MBus for multi-processor systems; Sun-4d was similar, but used the XDBus for multi-processor systems; etc.
So the differences affected the ISA in some cases (68010 -> 68020, SPARC v7 -> v8 -> v9), affected the MMU in some cases (Sun MMU -> on-chip MMUs of various sorts), and affected only system buses in other cases (VME -> SBus, MBus -> XDBus).
Different Sun subarchitectures fall into different subcategories in Computer architecture#Subcategories:
  • some involve the ISA even if you don't include the MMU (68010 -> 68020, SPARC v7 -> v8 -> v9);
  • some involve the ISA if you include the MMU;
  • some involve only system design (Sun-4m -> Sun-4d, which both used SuperSPARC);
and they may involve microarchitecture if one subarchitecture uses one set of CPUs/MMU designs and another uses a non-overlapping set, but some subarchitectures used multiple CPUs with different microarchitectures.
So I'm not sure where this would fit. Guy Harris (talk) 21:22, 10 June 2019 (UTC)
Seems like two choices are an article of its own, or a section in this article. Gah4 (talk) 23:21, 10 June 2019 (UTC)
Computer architecture seems largely to be talking about CPU architecture. It gives two meanings of "computer architecture". For the first meaning, it speaks of "describing the capabilities and programming model of a computer", and, for the second meaning, it speaks of "instruction set architecture design, microarchitecture design, logic design, and implementation". Both of those sound CPU-centric; they don't mention, for example, I/O buses, which are at least part of the Sun-4 subarchitectures (VMEbus vs. SBus vs. various flavors of PCI for peripherals, MBus vs. XDBus vs. whatever for multiprocessor systems).
If we were to include I/O buses as part of the System/360 architecture, the architecture would be specified by at least two documents - the Principles of Operation and IBM System/360 I/O Interface Channel to Control Unit Original Equipment Manufacturers' Information plus whatever additional internal specifications they have. In that case, perhaps Bus and Tag vs. ESCON vs. FICON could be thought of as distinguishing subarchitectures of S/370, just as the I/O bus is one item distinguishing Sun-4 from Sun-4c from Sun-4e.
In addition, most if not all commodity microprocessors have, at the CPU architecture level, very little in the way of initialization specified - typically, when the CPU is reset, it clears a bunch of CPU state, including MMU state, and jumps to a given location in physical memory. The rest is up to whatever's at that location, which would typically be some firmware in non-volatile memory. There may be system architectural specifications, either explicit or implicit, that govern the behavior of the firmware.
For x86 processors, one such specification is "compatibility with the original PC BIOS plus whatever other stuff has been added on in various specifications such as Advanced Power Management, the MultiProcessor Specification, Plug and Play, and the Advanced Configuration and Power Interface". Another is the EFI specification, possibly with other industry specifications.
For SPARC processors, Sun had their original boot firmware; I don't remember whether any specification was ever published for it. They replaced that with Open Firmware; I'm not sure whether Oracle are still using that on SPARC servers or if they've adopted EFI.
For Alpha processors, the way the standard firmware worked was originally documented only in manuals available only inside DEC (DEC Semiconductor offered their own documented firmware to customers of Alpha chips). Eventually it was documented; some functions performed by hardware on some other processors, such as page-table walks, were implemented as traps to the PALcode part of the firmware on Alpha.
For MIPS processors, there was at one point the Advanced RISC Computing specification.
So there are CPU architecture specifications and system architecture specifications, with some system details relegated to the latter. Most user-mode programming only depends on the CPU architecture; OS development, and peripheral development, depends on the latter as well.
So would subarchitecture specifications be system architecture specifications based on higher-level CPU architecture, and perhaps partial system architecture, specifications, where the subarchitecture specification standardizes some aspects of the system not covered by the higher-level specifications? And would a given subarchitecture include all machines designed to conform to that subarchitecture's specification? If so, are there examples other than the ones for SPARC-based systems (and Sun's 68k-based systems)? Guy Harris (talk) 01:15, 12 June 2019 (UTC)
I hadn't thought about the I/O bus, but in the Sun case that comes up when the initializing OS tries to find out which I/O devices are attached. There are sysgen options, including which device drivers to include in the kernel. Having actually done Sun sysgens some years ago, I am not so sure that is related to what Sun calls subarchitecture. For example, within the same subarchitecture, there are systems based on the VME bus and ones that are SBus. Even more, there was an SBus-to-VME adapter, to connect VME devices to SBus hosts - we used systems that did exactly that. As I remember, the differences come in the /usr/kvm directory, which has symbolic links from the appropriate /usr/bin or /usr/sbin directory. In the case of diskless hosts, you NFS-mount /usr and the appropriate /usr/kvm from the server. One server can serve more than one subarchitecture, or even more than one architecture. (Years ago, I had a 4/110 running off a 3/260 NFS server.) I am not against including I/O systems in subarchitecture, but I believe it mostly doesn't apply to Suns. For IBM, there is XA/370 and then ESA/370, which, in addition to 31-bit addressing, have a completely different I/O system. There are some systems that can IMPL different microcode to run S/370, XA/370 or ESA/390. RISC-V has many optional parts, though I don't know that anyone describes them as subarchitectures. I don't know ARM well at all, but it might be that it has some. Gah4 (talk) 03:48, 12 June 2019 (UTC)
Yes, I already mentioned the I/O buses; as I indicated in my initial reply, one of the differences between Sun-4 and Sun-4c was the I/O bus (VME for Sun-4, SBus for Sun-4c) and the one difference between Sun-4c and Sun-4e was the I/O bus (VME for Sun-4e - we used it at Auspex as a host processor, so it could work with our other VME boards). So I'm absolutely certain that the I/O bus is one of the components that distinguishes SPARC-based subarchitectures. Others include, as I noted, the MMU (which was not specified as part of SPARC v7 or SPARC v8, and only semi-specified in SPARC v9, although Oracle SPARC Architecture 2015 does specify it), the firmware (Sun-4c introduced OpenBOOT/Open Firmware), and bit-width (Sun-4u introduced 64-bit processors).
So that particular notion of "subarchitectures" might be Sun-specific, meaning it can be handled on the Sun-3 and Sun-4 pages. Guy Harris (talk) 04:28, 12 June 2019 (UTC)
The reason for the question was that I wanted to link to a description of subarchitecture from the Sun pages, assuming that it wasn't just Sun. For Sun, you needed the appropriate install tapes, though the differences were small in some cases. Knowing about subarchitecture saved disk space on servers for diskless hosts, as you didn't have to duplicate the parts that were not different. For OS X, each version will say which machines it works with, and which it doesn't. Those differences are most likely subarchitecture, but not specifically mentioned. Disks are big enough now that we don't notice the wasted space of having to support more than one. I suspect that the differences show up when you try to boot off a disk that was meant for a different system. But maybe also the IA32 MMU hasn't changed over the years. Install systems figure out if they are installing 32-bit or 64-bit, which probably qualifies as a whole architecture, and don't need to know about subarchitecture. Gah4 (talk) 18:54, 12 June 2019 (UTC)

For Sun-3 and Sun-4, the subarchitectures required different kernels and may have required different versions of some platform-dependent system commands and libraries. The bulk of userland code didn't care.

For Macs, the only things that might qualify as "subarchitectures" would be based on the CPU type, e.g. 32-bit PowerPC, 64-bit PowerPC, 32-bit x86, 64-bit x86, and maybe the rumored 64-bit ARM in the future, but those have different instruction set architectures, so I don't see them as "subarchitectures". A given OS release includes all the code necessary to support the Macs it supports, with instruction-set differences handled by fat binaries with executable code for multiple ISAs. Apple eventually drops support for older machines, so they don't bother to include in a release drivers for peripherals that only appear in no-longer-supported machines, and they may compile with the compiler set to generate code using instruction set features that are in currently-supported machines but not in no-longer-supported machines, but it's not as if the dropped machines have a different subarchitecture from the supported machines; I didn't work in the low-level platform group, but I don't think there was any notion of "subarchitectures" at all similar to Sun's, just a notion of particular platform families and platforms within them, e.g. the machine on which I'm typing this is a MacBookPro11,5, with the family being "MacBookPro".

The only major changes to the IA-32 MMU were the addition of the Physical Address Extension (PAE) feature and of the NX bit. Apple never supported 32-bit machines that lacked PAE; if they ever supported machines without the NX bit, that would have been handled at runtime (in the pmap layer), so there weren't any separate kernels for no-NX and NX machines. Guy Harris (talk) 19:56, 12 June 2019 (UTC)

I don't have a running Sun, but I do have a running NFS server with export files for diskless Suns. Looking in /export/exec/sun3/kvm, the main general-user commands are ps, pstat, and w. These need to look into some kernel-specific data structures - enough, it seems, that they differ for different subarchitectures. Also config, eeprom, and format, but those are not normally for general use. Using subarchitecture allows only the parts that have that dependence to be different. /usr/bin/ps is then a symbolic link to /usr/kvm/ps. Looking at an OS X system, there is /bin/ps. On the other hand, looking in /usr/bin, there are some files specifying x86-64, some i386, and some dual-architecture. It seems that they don't install completely different file sets for 32-bit and 64-bit installs. Gah4 (talk) 21:23, 12 June 2019 (UTC)
ps, pstat, and w might look at HAT layer ("MMU driver", similar to the Mach pmap layer I mentioned) data structures, which would differ between Sun-3 (Sun MMU) and Sun-3x (on-chip PMMU), and between Sun-4 (as I remember, 8KB-page Sun MMU), Sun-4c/Sun-4e (as I remember, 4KB-page Sun MMU), and Sun-4m/Sun-4d (SPARC Reference MMU); it's been a while.
On macOS, however, ps uses sysctls, which 1) should hide whatever per-platform dependencies exist, and 2) there aren't, as far as I know, any such dependencies in any case. Whether a binary is fat or not, and how fat it is, might depend on the build process for that particular program; there's no reason to ship fat binaries in recent versions of macOS, as they only run on 64-bit x86 Macs, but maybe nobody got around to changing the build rules for those particular programs. That probably changes in Catalina, as fat libraries aren't going to be shipped, because support for 32-bit applications is being dropped. Guy Harris (talk) 21:55, 12 June 2019 (UTC)
So, one of the reasons to use Sun-style subarchitecture is to save disk space, and also to keep the kernel small - both more important in the Sun days than now. Otherwise, I believe that changes to the user-mode instruction set that aren't a whole new architecture would qualify as subarchitectures. Back in S/360 days, there were the commercial instruction set (decimal) and the scientific instruction set (floating point), which were each optional on some models. IBM ESA/390 and System/z have, over the years, added instructions. Users (and compiler writers) then have to decide when to support the new ones. The IBM term for this seems to be ARCHLVL, I suspect for Architectural Level. These are changes to the user-mode instruction set. Gah4 (talk) 21:50, 12 June 2019 (UTC)
So there's "subarchitectures" in the sense of system architecture differences that don't affect normal user-mode code (Sun-3 vs. Sun-3x, Sun-4 vs. Sun-4c vs. Sun-4e vs. Sun-4m vs. Sun-4d), and there's "subarchitecture" in the sense of instruction set architecture differences in the form of backwards-compatible additions (SPARC v7 -> SPARC v8, IA-32 from the 80386 to the 32-bit Pentium 4's, x86-64 from the first Opterons/Pentium 4s to the current AMD and Intel processors, S/370 picking up various instructions, z/Architecture ARCHLVL updates, ARM vX.Y, various MIPS/PA-RISC/Alpha extensions, etc.).
S/360 was an example of a third case, where some instructions are add-on options; the PDP-11 had that as well. That continued into the microprocessor era until floating-point units got incorporated into the CPU chip.
I'd consider 32-bit -> 64-bit as a fourth case; it's an "addition" but it's a lot bigger than, say, various streaming SIMD extensions.
So I don't see any straightforward single unifying concept of "subarchitectures" here. Guy Harris (talk) 22:07, 12 June 2019 (UTC)

SPIE

In the discussion above, I mentioned SPIE, which in IBM terms is Specify Program Interrupt Exit. SPIE allows programs to take control when specific interrupts occur, such as the one for an undefined opcode. This allows, in user space, for emulation of features not implemented in hardware, such as instructions added to newer systems. I thought this, or similar ideas, should be discussed somewhere in Wikipedia, but I couldn't find anything even close. It also allows for addressing interrupts, fixed and floating point overflow and divide by zero, and similar exceptions. It is done such that the program can fix the problem, and continue. (Except that imprecise interrupts complicate everything.) Yes this is too specific for this article, but I didn't think of where else to ask. Gah4 (talk) 13:52, 13 June 2019 (UTC)

That's an OS/360^Wz/OS-specific feature. Unix has signals; the Windows NT equivalent is Structured Exception Handling. The general notion of the OS doing callbacks for errors of that sort is discussed in Exception handling#Exception handling facilities provided by the operating system, but that section is a bit of a stub - it seems to give details only about Unix signals, and not many such details. Guy Harris (talk) 20:00, 13 June 2019 (UTC)
I was trying to figure out where to make a redirect for an appropriately disambiguated SPIE. One thing that is done with SPIE is to emulate, in user space, instructions not implemented on a given model. The routine can find the instruction, extract the fields, get the register values, and change the saved registers. Another old favorite (S/360 days) is to fix up misaligned data accesses. Do Unix signals allow one to modify memory and registers, and then return to continue on? With Unix signals described only briefly, and without details, it didn't seem appropriate as a redirect target. Gah4 (talk) 21:16, 13 June 2019 (UTC)
POSIX says signal handlers shouldn't do that, but that's because arbitrary operations, in the general case, might, for example, modify data structures that were in the middle of being updated when the signal occurred (signals can result either from traps such as illegal instruction traps or external signals such as typing ^C). In particular cases, it might happen to be safe; that's how the V6 UNIX floating-point emulator worked - it caught the SIGILL signal, delivered on an illegal instruction trap, and proceeded to decode and interpret instructions until it saw a non-floating-point instruction (so you didn't get the trap handling overhead on every floating-point instruction). (I seem to remember having to tweak that for the PDP-11/34, and spending some time looking at PDP-11/34 microcode listings to figure out what needed to be tweaked.)
Tru64 UNIX on Alpha, as I remember, emulated unaligned accesses in the kernel, rather than handing them to userland to emulate; some other UN*Xes on processors that trapped on unaligned accesses may have done so. Guy Harris (talk) 22:06, 13 June 2019 (UTC)
For IBM, the Fortran library optionally (sysgen-selected) did the alignment fixup, unless your model had imprecise interrupts, such that it couldn't find the instruction. (And if it could, too much else might have happened since.) IBM also wrote the routine for extended-precision (quad) floating point called by SPIE routines. Gah4 (talk) 22:42, 13 June 2019 (UTC)
Also, Exception_handling_syntax#Assembly_language mentions IBM and the STXIT macro, which seems to come from DOS/VSE. STXIT mentions SPIE and STAE as OS/VS1 additions, but I am pretty sure they trace back to OS/360. The exact order that they were added to the respective systems, I don't know. Gah4 (talk) 21:31, 13 June 2019 (UTC)
STXIT comes from DOS/360; DOS/VSE had it because DOS/360 had it. See the DOS Release 26.2 edition of DOS Supervisor and I/O Macros.
SPIE/STAE date all the way back to OS/360, and OS/VS1 (and VS2 SVS, and VS2 MVS, and...) had it because OS/360 had it. The two OSes just happened to do things differently. See a 1966 edition of IBM System/360 Operating System Control Program Services.
I removed the mention of SPIE/STAE from STXIT because they referred to it as a "later development", replacing STXIT with SPIE/STAE, which is nonsense - as far as I know, OS/360 was originally intended to be the only OS for S/360, with DOS/360 developed as a stopgap when OS/360 was late and too big for smaller machines, so maybe SPIE/STAE existed before STXIT. OS/360 has no sign of STXIT in that earlier OS/360 documentation.
So if STXIT has its own Wikipedia page, I don't see why SPIE/STAE couldn't have their own page as well (preferably not talking about it as an OS/VS1 invention or something ahistorical such as that). It could be mentioned in Exception handling syntax#Assembly language - and probably should be, so nobody thinks STXIT was the one and only such mechanism in S/3x0 operating systems. Guy Harris (talk) 22:27, 13 June 2019 (UTC)
So SPIE and STAE should be one page, or two? Gah4 (talk) 22:42, 13 June 2019 (UTC)
I might be tempted to have one page. On UN*X, SPIE is roughly equivalent to catching the SIGILL, SIGSEGV, SIGBUS, and SIGFPE signals, while STAE is roughly equivalent to catching the SIGABRT and maybe SIGTERM signals, so they're both part of the general "catch abnormal event" mechanism. I could also see separate pages, however, as one is for hardware traps and one is for software abnormal termination indications. Guy Harris (talk) 23:06, 13 June 2019 (UTC)
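
To make the SPIE/signal comparison above concrete, here is a minimal sketch of the Unix-signal analogue: catch an undefined opcode, fix up the saved state, and resume. It assumes Linux/x86-64 with glibc (the ucontext register layout is platform-specific), and, as noted above, POSIX does not promise this is safe in general:

<syntaxhighlight lang="c">
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <ucontext.h>

/* "Emulate" an unimplemented instruction: here we merely skip the 2-byte
   ud2 opcode that raised the trap; a real emulator would decode the
   instruction, compute its result, and update the saved registers. */
static void on_sigill(int sig, siginfo_t *si, void *ucv)
{
    (void)sig; (void)si;
    ucontext_t *uc = ucv;
    uc->uc_mcontext.gregs[REG_RIP] += 2;   /* resume past the ud2 */
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGILL, &sa, NULL);

    __asm__ volatile("ud2");               /* undefined opcode -> SIGILL */
    puts("resumed after the emulated instruction");
    return 0;
}
</syntaxhighlight>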

options

Continuing the subarchitecture discussion, but with a different name. Some differences are not so big as to create an entirely new architecture, but are big enough to know about. For VAX, the tradition is for options that are not in the hardware to be emulated in OS software, transparently to the user. IBM did some of that, but not quite as much. There is software emulation for S/360 extended-precision (quad) floating point, but implemented through user-mode (SPIE) code. As noted, optional floating point for x86 was done in software, usually through user-mode INT instructions, which might be patched over (self-modifying code) on systems with FP hardware. In some cases, users are supposed to know which options are available on their system. IBM's ARCHLVL is one way to describe them to users. Also, as noted above, other architectures have optional features, described to users in different ways. Is this something that the article should include? Gah4 (talk) 23:12, 12 June 2019 (UTC)

If by "this article" you mean the instruction set architecture article, yes, that might make sense.
If by "this article" you mean computer architecture, I'd say "no", as this article isn't focused on instruction sets, nor should it be, given the existence of the instruction set architecture article, and those options are instruction set options. Guy Harris (talk) 01:33, 13 June 2019 (UTC)
Hmmm. Can you remind me why there are two articles? In the Sun case, the differences are mostly MMU, which I suppose isn't strictly instruction set architecture. Gah4 (talk) 13:40, 13 June 2019 (UTC)
Two articles as in instruction set architecture and computer architecture? Probably because there's more to computer architecture than instruction set architecture. Whether or not it's considered part of the ISA, at least some architecture specifications include the way memory mapping works (S/370 and later Principles of Operation and Intel's x86 manuals, for example). Others explicitly don't specify it at all, or don't specify it completely, e.g. the SPARC v7/v8/v9 specifications. The 68k manuals originally didn't, because the 68000, 68010, and 68020 had separate MMU chips. Motorola had the 68451 MMU, and later the 68851 paged MMU; the 68030 implemented a (subset of?) the 68851 functionality on chip.
So you have ISAs such as S/370 and its successors, and x86, where an OS can rely on the MMU working a particular way, and you have ISAs such as 68k and SPARC, where an OS can only do so if the system designer, or system architecture specification, specifies a particular MMU architecture. Thus there were no subarchitectures for the Sun386i, but were subarchitectures for Sun-3 (Sun MMU vs. 68030 MMU) and Sun-4 (8K page Sun MMU, 4K page Sun MMU, SPARC Reference MMU, various other MMUs for SPARC v9 processors).
There are other aspects of the architecture that might or might not be specified by whatever architecture manual(s) there are for the processor. The x86 manual may specify how the MMU works, but it doesn't specify how the ROM monitor works (I can think of at least 3 different ROM monitor styles used on x86 machines - classic BIOS, EFI/UEFI, and Open Firmware as used on some NetApp appliances), or what buses there are on the system (again, x86 systems have used whatever bus the original PC used, the ISA bus, the EISA bus, the Micro Channel bus, various flavors of PCI (parallel and serial), and possibly others. Most merchant microprocessors have, I suspect, had systems built with multiple different buses as well. S/3x0, as noted, has implemented the channel interface with different physical layers (bus and tag, ESCON, FICON), but I think a lot of the way programs perform I/O operations is independent of the physical layer, being specified in the Principles of Operation. On the other hand, recent z/Architecture models also support a different I/O bus; it's called "PCI" or something such as that. :-) (IBM didn't document the "bang on the PCI bus" instructions, but they're used in some Linux kernel source files.)
So the line between "instruction set architecture" and "system architecture" is somewhat blurry; S/3x0 has the broadest official specification I know of (which is easier if the company that specifies the architecture is also the only company making systems using that architecture, as IBM was except during the plug-compatible era). x86's architecture covers less, although Intel have been involved in other specifications that, when combined with the x86 manuals, form a de facto system architecture specification that most systems based on x86 chips implement (even my Mac could have Windows installed on it, booting on the raw hardware, and at least one x86 Mac at Apple when I was there had Solaris on it, as I remember).
But none of that applies to the options being discussed in this section; those are optional instructions, so clearly part of the ISA, and thus appropriate to instruction set architecture. Guy Harris (talk) 19:49, 13 June 2019 (UTC)
Historical note for your information: the Elliott 4100 series of computers (1965 onwards) also used unimplemented operation traps in various ways: the Elliott 4130 model had hardware floating point but the 4120 model did not, so use of floating point instructions on a 4120 caused an unimplemented operation trap to a (system) routine which emulated the required operation; ditto for hardware subscript checking for arrays. The Fortran compiler generated opcodes for complex arithmetic which were then handled in a similar way; it was reported that the 4150 model (development halted when the firm was taken over by ICT) would have had hardware for complex arithmetic. Murray Langton (talk) 09:09, 14 June 2019 (UTC)
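
As a small sketch of how user code copes with optional instruction-set features today (GCC/Clang on x86 assumed; "sse4.2" is just an example feature name), the ARCHLVL-style decision can be made at run time:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();                  /* populate CPU feature data */
    if (__builtin_cpu_supports("sse4.2"))
        puts("dispatch to the SSE4.2 code path");
    else
        puts("fall back to the baseline instruction set");
    return 0;
}
</syntaxhighlight>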

Architecture vs Microarchitecture

> The discipline of computer architecture has three main subcategories: 1) Instruction Set Architecture, 2) Microarchitecture, 3) System Design

According to Coursera lecture (https://www.coursera.org/lecture/comparch/architecture-and-microarchitecture-rgQ8X) (see 19:40) it is possible that architecture is the same and the microarchitecture is different (AMD Phenom X4 vs. Intel Atom).

That is, treating microarchitecture as a part of architecture looks misleading. At least to the average person like me.

When the lecturer refers to "big-A architecture", he's referring to the instruction set architecture. Microarchitecture (subcategory 2), like instruction set architecture (subcategory 1), is a part of the topic of computer architecture. Guy Harris (talk) 19:46, 31 December 2019 (UTC)
Computer architecture is the general category covering the design and building of computers. As far as I know, the idea of an architecture - one defined separately from its implementation - originated with IBM and S/360. Instruction set architecture often, but maybe not always, means the unprivileged instructions seen by ordinary users. IBM's System/370 includes in its Principles of Operation the definition of its virtual storage (paging system). Microprocessors that originated without an MMU, but which used an external MMU, complicated this. Sun used subarchitecture to indicate the MMU and I/O addressing system. While ordinary users aren't supposed to need to know details of the paging system, sometimes they do. For example, it is sometimes useful to have data on page boundaries, requiring you to know the page size. Microarchitecture includes the finer details of what is inside the processor, which again ordinary users shouldn't need to know, though it might affect instruction timing. But again, sometimes you need to know about it, even for ordinary user programming. Gah4 (talk) 02:23, 1 January 2020 (UTC)
Guy Harris: Yes, you are correct. My fault. So, "Same Architecture, Different Microarchitecture" slide is actually "Same ISA, Different Microarchitecture".
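
On the point above about ordinary user programs sometimes needing the page size: portable code asks the OS rather than hard-coding a per-(sub)architecture value. A minimal POSIX sketch:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);  /* page size chosen by the MMU/OS */
    void *buf = NULL;

    /* Allocate four pages, aligned on a page boundary. */
    if (posix_memalign(&buf, (size_t)page, 4 * (size_t)page) != 0) {
        perror("posix_memalign");
        return 1;
    }
    printf("page size %ld bytes, buffer at %p\n", page, buf);
    free(buf);
    return 0;
}
</syntaxhighlight>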

System design vs Hardware

From the article:

> The discipline of computer architecture has three main subcategories: 1. Instruction Set Architecture, or ISA <...>, 2. Microarchitecture, or computer organization <...>, 3. System Design

From Hennessy, John; Patterson, David. Computer Architecture: A Quantitative Approach (Third ed.):

> In this book the word architecture is intended to cover all three aspects of computer design—instruction set architecture, organization, and hardware.

From Hennessy, John; Patterson, David. Computer Architecture: A Quantitative Approach (Fifth ed.), nothing different:

> In this book, the word architecture covers all three aspects of computer design—instruction set architecture, organization or microarchitecture, and hardware.

So, in the book we have "hardware", whereas in the article we have "system design". My questions:

1. Is it correct to say that these terms ("hardware" and "system design") are used as synonyms?

2. In case the answer to the first question is "yes", then isn't it better to somehow mention "hardware" as a "system design" synonym?

I don't think that they are so close to synonyms. Some might say that hardware is anything that isn't software. Firmware (microcode) starts to complicate that. Otherwise, system design is the high-level putting together of parts, such as connections to I/O devices. I suppose that is related to hardware, but it isn't just hardware. Does hardware just include digital logic, or does it also include power supplies, backplanes, and cabinets? Gah4 (talk) 20:32, 1 January 2020 (UTC)
Can we say that the third aspect of computer architecture according to the article ("system design") is not the same as the third aspect of computer architecture according to the book ("hardware")? --92.100.51.47 (talk) 23:19, 1 January 2020 (UTC)
The sixth edition of the book doesn't say that. It says, in section 1.3 "Defining Computer Architecture":
A few decades ago, the term computer architecture generally referred to only instruction set design. Other aspects of computer design were called implementation, often insinuating that implementation is uninteresting or less challenging.
We believe this view is incorrect. The architect’s or designer’s job is much more than instruction set design, and the technical hurdles in the other aspects of the project are likely more challenging than those encountered in instruction set design. We’ll quickly review instruction set architecture before describing the larger challenges for the computer architect.
We use the term instruction set architecture (ISA) to refer to the actual programmer-visible instruction set in this book. The ISA serves as the boundary between the software and hardware. ...
...
The implementation of a computer has two components: organization and hardware. The term organization includes the high-level aspects of a computer’s design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit—where arithmetic, logic, branching, and data transfer are implemented). The term microarchitecture is also used instead of organization. For example, two processors with the same instruction set architectures but different organizations are the AMD Opteron and the Intel Core i7. Both processors implement the 80 x 86 instruction set, but they have very different pipeline and cache organizations.
...
Hardware refers to the specifics of a computer, including the detailed logic design and the packaging technology of the computer. Often a line of computers contains computers with identical instruction set architectures and very similar organizations, but they differ in the detailed hardware implementation. For example, the Intel Core i7 (see Chapter 3) and the Intel Xeon E7 (see Chapter 5) are nearly identical but offer different clock rates and different memory systems, making the Xeon E7 more effective for server computers.
In this book, the word architecture covers all three aspects of computer design—instruction set architecture, organization or microarchitecture, and hardware.
So:
  • The instruction set architecture is an abstract description of how the machine is programmed. It covers userland programming, and may cover some or all aspects of low-level operating system programming. The latter may cover some system aspects beyond the CPU, such as the I/O system or details of the boot process beyond "what happens when you reset the CPU", as it does in the System/3x0 and z/Architecture ISA.
  • The organization appears to be a description of the details of how the ISA is implemented, but is still somewhat abstract; as they say, there's another layer giving the logic design and packaging technology.
  • The hardware covers everything below that.
The main part of the book doesn't cover system design beyond the CPU and memory subsystem. The I/O subsystems are covered a bit in chapter 6, "Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism", and in the appendices; they don't seem to speak of what parts of that are "organization" and what parts are "hardware", but lines can probably be drawn similar to the lines separating "organization" and "hardware" in the CPU and memory subsystems. Guy Harris (talk) 00:57, 2 January 2020 (UTC)

And the third edition says much the same thing.

In this version, the three subcategories are "instruction set architecture", "computer organization", and "hardware", with Hennessy and Patterson given as a citation. In this edit, "Hardware" was removed; in this edit, "System design" was added.

The editor in question indicates, in another edit summary at that time, that they think "hardware" isn't part of architecture; H&P appear to think otherwise and, since we're giving them as a source, I vote we revert to "hardware", or "hardware design", and clarify it, if necessary, to match what H&P say. Guy Harris (talk) 02:55, 2 January 2020 (UTC)

Computer architecture types

Should this article mention things like the von Neumann and Harvard architectures, both of which have articles?

Yes. This article needs considerable work. --Robert Merkel 13:38, 17 Sep 2004 (UTC)

Configurable computing

I yanked a whole section on configurable computing. It's an interesting idea, but it's a fairly minor part of computer architecture and makes the article less readable for a non-expert. This whole article needs a rewrite. --Robert Merkel 13:38, 17 Sep 2004 (UTC)

I do not agree with the removal. The section can either be improved or titled in a way that makes clear it is not essential for understanding the current style - however, the outlook is important for those who are prepared to work themselves into it. By the way, the rest is not for nonexperts either! Togo 02:41, 20 Sep 2004 (UTC)
You're missing my point. I am stating that a whole paragraph on configurable computing, in the context of the article as it stands, gave a very misleading view of the importance of the topic to the broader field of computer architecture. As to the reading level of the article, it *should*, at least in its introduction, be accessible to the nonexpert. --Robert Merkel 04:05, 20 Sep 2004 (UTC)
I agree with the removal. At most this page should have a brief mention and link to Reconfigurable computing. --Brouhaha 21:47, 23 Jan 2005 (UTC)