WikiProject Computing / Hardware (Rated Start-class, Mid-importance)
Cray I faster or slower than ILLIAC IV?
The article on Cray Inc. states that "The Cray-1 was a major success when it was released, faster than all computers at the time except for the ILLIAC IV." But this article on ILLIAC IV states that "(ILLIAC IV) was finally ready for operation in 1976, after a decade of development that was now massively late, massively over budget, and outperformed by existing commercial machines like the Cray-1." These two articles seem to contradict each other. —The preceding unsigned comment was added by 126.96.36.199 (talk • contribs) 21:18, 25 April 2006 (UTC)
- Not only that, but the ILLIAC article contradicts itself. It says at the top of the page that the Cray-1 was faster, but says at the bottom of the page that the Illiac began operation in 1976, the same year the Cray-1 was released with roughly the same performance. --Blainster 08:25, 26 April 2006 (UTC)
I never had a chance to use the ILLIAC IV, but I know many people who did, and I had an office in the building which housed it (at the moment I am across the street). So the issue boils down to what many people call an "apples-to-apples" comparison. If you are an armchair supercomputer watcher, as most are, you only look at clock speed. By that measure any later (more recent) machine will generally be faster. I never directly used a Cray-1 or a Cray-1S either, but I did use the X-MP/2 (upgraded to an X-MP/4), which succeeded the 1S, which replaced the IV.
The ILLIAC had a slower individual processing element (PE) or control unit (CU) than the Cray-1. The serious computational-philosophy question is whether one can simply add up parallel work. Software engineers like Fred Brooks say no; managers of budgets today have to say yes. (Are 64 slower elements comparable to a single machine 64 times faster than an individual element? There is no simple answer.)
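The trade-off enm raises here (64 slower elements versus one element 64 times faster) is the classic Amdahl's-law question. A minimal sketch, with hypothetical workload numbers chosen only for illustration:

```python
# Amdahl's-law sketch: how much of a 64x element count actually
# shows up as speedup, depending on how parallel the workload is.
# The fractions below are hypothetical, not ILLIAC IV measurements.

def speedup(parallel_fraction, n_elements):
    """Speedup over one element, per Amdahl's law."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_elements)

# A single element 64x faster speeds up everything by 64x.
# 64 parallel elements only approach 64x when nearly all work is parallel:
print(speedup(1.00, 64))   # perfectly parallel: full 64x
print(speedup(0.95, 64))   # 5% serial work: roughly 15x
print(speedup(0.50, 64))   # half serial: barely 2x
```

This is why "no simple answer" is right: the two designs are equivalent only in the limit of a perfectly parallel problem.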
Additionally, raw speed ignores memory, both RAM and disk in this case. The ILLIAC had woefully little memory. It was only usable with the fixed-head disks holding the main part of the problem, the processing-element memories acting as cache, and the programmer acting as memory manager. Serious (real, not toy) supercomputing problems take up all available memory; only toy problems fit in processor memory.
To this day the ILLIAC was faster in I/O than any machine since; the closest (and still weak) comparison is to the Connection Machine's Data Vaults (non-fixed-head disks, much slower). Apparently the much-touted optical memory as tertiary store never worked. I have seen some of the optical strips (it was not a disk).
--enm 1 jun 2006
"destined to be the last" really needs to be edited out, as even the page for ILLIAC notes the ILLIAC 6.
--enm 1 jun 2006
- I'm not sure I see the point of your comment. I assure you, as the primary author of the article, I certainly don't compare machines based on clock speed. I thought the article was fairly even-handed on this. If the article reads this way, and it doesn't seem to, please edit away! Maury 04:24, 28 November 2006 (UTC)
- No, I am not the author of the article. Everyone but the knowledgeable has contributed to this article (seriously). It appears mostly a rehash of the literature and second-hand knowledge that people read or heard. Comparison should be done on the basis of real codes and balanced systems. People remember CPUs and their clock cycles; they rarely remember the other parts of systems, like storage. Too much in this article needs a rewrite, and for an old system which I never used, I don't have the time.... --enm 188.8.131.52 (talk) 16:07, 16 September 2008 (UTC)
This last comment is correct. When I recently discovered this page I found that it was almost entirely based on what can only be called rumor or a complete fiction, not even second-hand information. Not only was the history wrong, but so was the computer science. As a member of the Illiac IV group at Illinois, I worked on the machine in Paoli and was involved in the campus demonstrations (my office was fire-bombed; luckily the bomb didn't go off). I have now updated the article (and I conferred with other members of the project in doing it). More of course could be said, but what is there now is accurate. Jeanjour (talk) 15:04, 21 December 2012 (UTC) John Day
1 Tbit optical storage device?
I flagged the mention of a 1 Tbit optical storage device as "citation needed". That's roughly 3 orders of magnitude more storage than state-of-the-art magnetic disk drives in 1976, i.e., it's hard to believe that the claimed capacity is anywhere near accurate. Paul Koning (talk) 18:29, 14 October 2013 (UTC)
I have added a reference to the description of the "laser memory", the paper can be found at a New Zealand university website (unable to include the URL due to some blockage). Quoting the paper, it has the following description:
- 1) Laser memory: The B6500 supervises a 10^12-bit write-once read-only laser memory developed by the Precision Instrument Company. The beam from an argon laser records binary data by burning microscopic holes in a thin film of metal coated on a strip of polyester sheet, which is carried by a rotating drum. Each data strip can store some 2.9 billion bits. A "strip file" provides storage for 400 data strips containing more than a trillion bits. The time to locate data stored on any one of the 400 strips is 5 s. Within the same strip data can be located in 200 ms. The read and record rate is four million bits per second on each of two channels. A projected use of this memory will allow the user to "dump" large quantities of programs and data into this storage medium for leisurely review at a later time; hard copy output can optionally be made from files within the laser memory.
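The quoted figures are internally consistent with the 10^12-bit claim; a quick arithmetic check, using only the numbers from the quote above:

```python
# Sanity check of the laser memory figures quoted above.
bits_per_strip = 2.9e9   # "some 2.9 billion bits" per data strip
strips = 400             # "storage for 400 data strips"

total_bits = bits_per_strip * strips
print(total_bits)        # 1.16e12 -- just over the claimed trillion bits

# Time to stream one full strip at the stated transfer rate:
rate_bps = 4e6 * 2       # four million bits/s on each of two channels
print(bits_per_strip / rate_bps / 60)  # roughly 6 minutes per strip
```

So the "1 Tbit" figure matches the paper's own numbers, whatever one thinks of whether the device ever worked as described.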
What was the contribution of the Illiac IV?
This video The Illiac-IV lecture - Bay Area Computer History Perspectives Series provides a long list of the problems that the Illiac IV solved and how the results were still used beyond the life of the machine. — Preceding unsigned comment added by Nigwil (talk • contribs) 09:25, 20 October 2013 (UTC)
Location, style, polish, WP:NOR
I changed "outside San Francisco" in the lead to "Moffett Airfield in Mountain View, California". Lots of things (luckily) are "outside San Francisco", including the Taj Mahal.
I added the detail of ILLIAC IV having been housed in B. N-233 at Ames and included a reference for this. I made a couple of stylistic changes.
I'm far from the most doctrinaire editor, but this page's references are shockingly few. It seems to have been written from editors' personal recollections. What a surprise! This is a problem with very many articles on the history of computing and computing machinery, where corroborating documents are often thin on the ground but where some grizzled principals intent on preserving history haven't yet snuffed it. I have no doubt that many of the article's assertions are correct but surely more of them can be connected with some reasonable reference than currently appear.
Many other puzzling and/or clumsy bits appear, e.g.,
- “...the best they could muster was 250 MFLOPS, with peaks of 150.”
- “The machine was never delivered to Illinois, arriving in 1972.”
- “Rumor has it that...”
- “However keeping with the ploughing analogy consider what you would want behind your tractor would you want...”
and so on. Run-on sentences are one thing but "[highest] was 250 with peaks of 150"?
- “The machine was never delivered to Illinois, arriving in 1972.” I've adjusted the prose around that item, but kept the 1972 delivery date for now; note that our article intro gives a 1971 delivery date.
- Of course, for a one-of-a-kind research machine, the process of installing the many components, assembling the SIMD configuration, and performing incremental testing along the way undoubtedly took some time, so there is no single date (or even single year) we can quote; we may need to give a range, and/or list any milestones which we can find sources for.
- 20:13, 26 September 2014 (UTC)
The wording at the beginning of the "Background" section:
- ... adding as many instructions as possible to the machine's CPU, a concept known as "orthogonality" ...
is incorrect, cf. its link target, Orthogonality#Computer_science. Quite to the contrary, an "orthogonal" instruction set lacks redundancy, i.e. is as small as possible for the given purpose. Would some computer scientist (I am not) kindly improve the wording? Thanks, --HReuter (talk) 01:37, 13 December 2014 (UTC)