Talk:History of computing hardware

History of computing hardware is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
This article appeared on Wikipedia's Main Page as Today's featured article on June 23, 2004.
WikiProject Computing / Early (Rated B-class, Top-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as B-Class on the project's quality scale.
This article has been rated as Top-importance on the project's importance scale.
This article is supported by the Early computers task force (marked as High-importance).
Wikipedia Version 1.0 Editorial Team / v0.5
This article has been reviewed by the Version 1.0 Editorial Team.
This article has been selected for Version 0.5 and subsequent release versions of Wikipedia.
This article has been rated as B-Class on the quality scale.

Old discussions

Old discussions have been moved to archives - use the navigation box to switch between them. I used the much nicer {{archives}} and {{archivesnav}} templates as found on the Personal computer talk pages to spruce up navigation a little. Remember when creating new archive pages that they must have a space in the title - talk:History of computing hardware/Archive 3 would be the next page, for example. --Wtshymanski (talk) 01:35, 25 September 2009 (UTC)


Call me a massive geek, but surely the C64 and Amiga deserve some mention in here. The advancement in personal computers isn't just down to the number of transistors; those computers added some really creative features (particularly with sound hardware) which we now take for granted. Their rise and fall (there's a certain book with a congruent title) is a huge chapter in the history of computing, surely.

Copyedits needed for 2nd generation section

In the interests of keeping this article a featured article, might we move the latest contribution on 2nd generation computers to the talk page and work on the English prose before re-instating it to the article page? --Ancheta Wis (talk) 14:17, 2 January 2008 (UTC)

Hi, that's my work! What is wrong with it? (talk) 14:35, 2 January 2008 (UTC)
Hi, I replied on your talk page to redirect here.
  1. The expert you allude to in your first paragraph was Thomas J. Watson
  2. The von Neumann pre-print of 1947 went all over the world. That is how Israel built its first computer, for example. Russia did the same. What the 1954 date on Italy's first computer shows is that they either built on von Neumann's architecture or studied other documents. In any case, a citation would be good.
  3. Did the IBM 1401 use only transistors?
  4. Does tenths of thousands mean 10000 or 100?
In any case, I think you see what I mean. The English needs copyediting. I am not referring to your content, with the exception that we need citations. --Ancheta Wis (talk) 15:19, 2 January 2008 (UTC)
I have commented out the contribution, as the English needs copyediting. The sentences are disconnected, the timeline of development for second generation is nonsequential, there is no flow from one statement to the next. --Ancheta Wis (talk) 10:47, 17 February 2008 (UTC)

computers are an amazing creation of sensation. —Preceding unsigned comment added by Sckater (talkcontribs) 21:50, 8 March 2008 (UTC)


I was wondering whether the Antikythera mechanism is the first computer, because there are a lot of articles that make that claim. Tomasz Prochownik (talk) 21:05, 23 April 2008 (UTC)

Follow the link for the latest thinking on the matter. --Ancheta Wis (talk) 23:18, 23 April 2008 (UTC)

Citations needed

Fellow editors, User:Ragesoss has noted that we are building back up the citations for this FA. When this article was first formed, the rise in standards for Featured Articles had not yet occurred. Since I have been volunteered for this, there will be an American bias to the footnotes I am contributing; please feel free to contribute your own sources.

Please feel free to step up and add more citations in the form of the following markup: <ref>Your citation here</ref>. You can add this markup anywhere[1] in the article, and our wiki software will push it to the <references/> position in the article page, individually numbered and highlighted when you click on the ^. As an illustration, I placed this markup on the talk page so that new users can even practice on this talk page.
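As a concrete sketch (the reference name and citation text here are placeholders), a citation can also be named when first defined and then reused without retyping it:

Some statement.<ref name=Bell1971>Gordon Bell and Allen Newell (1971) Computer Structures</ref>
The same source, cited again.<ref name=Bell1971 />
<references/>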

In my opinion, the best source is Bell and Newell (1971),[2] which is already listed in the article. I do not have time to visit the local university library, so my own contributions are from sources which I have on my own bookshelves; this may be appropriate since the seminal period 1945-1950 will probably be viewed as the heyday of the first generation of electronic digital computers, which blossomed in the US.[3][4][5][6][7][8][9] I recognize that there will need to be more citations from the Association for Computing Machinery and the IEEE Transactions, but those will have to come from the editors who are in the Wikiproject on computing. In particular, the Radiation Laboratory of MIT published a series of books, The M.I.T. Radiation Laboratory Series,[10] which are the foundation for computing hardware, in tandem with the Manhattan Project; what is common to these projects is that they involved groups of cooperating contributors.[11] Before the howls of outrage begin, please note that the exact forms of computer hardware had not yet been selected in this period, but since the technologists were already in place for other purposes, it was a small step to the forms of hardware we see today.[12][13][14][15][16][17][18] The forms of hardware could easily have gone in other directions, and our current computers would have been different from what they are.[19][20]

New users (especially those with a CS or EE background), please feel free to contribute your citations. Wikipedia:Five Pillars summarizes the guidelines for editors, and your cheatsheet for markup can be found here. Users can append comments to the foot of this talk page, signed with the signature markup: --~~~~

Casual readers might note that the references which will be added to this article can be purchased quite cheaply on the Internet (typically for a few dollars), which in sum would amount to a nice education in this subject. --Ancheta Wis (talk) 09:31, 3 May 2008 (UTC)

We are up to 59 footnotes. You can examine the edit history to see how the citations were embedded in the article, as well as study this section, for examples on how to do it. --Ancheta Wis (talk) 10:01, 6 May 2008 (UTC)

User:SandyGeorgia has noted that the citations are expected to have a certain format. Everyone is welcome to improve the citations. --Ancheta Wis (talk) 01:42, 7 May 2008 (UTC)

It appears that the footnote macro is space-sensitive. For example <ref name=IBM_SMS/ > works, but <ref name=IBM_SMS/> causes error messages unless a space is added after the trailing slash. To see this, look at this diff --Ancheta Wis (talk) 09:42, 9 May 2008 (UTC)

Sample citation format from User:Wackymacs:[21]

  • This one was formatted incorrectly. There should be a "|" in between the url and the accessdate like this:[22]

References sample illustration

  1. ^ Your citation here
  2. ^ Gordon Bell and Allen Newell (1971) Computer Structures: readings and examples ISBN 0-07-004357-4
  3. ^ The First Draft of a Report on the EDVAC (1945, written by John von Neumann and distributed by Herman Goldstine), which was mimeographed and distributed worldwide, had a global effect, producing von Neumann-architecture computer systems world-wide. For example, the first computer in Israel was built this way.
  4. ^ Federal Telephone and Radio Corporation (1943, 1946, 1949), Reference Data for Radio Engineers
  5. ^ The Jargon File, version 4.4.7
  6. ^ Charles Belove, ed. (1986) Handbook of modern electronics and electrical engineering, ISBN 0-471-09754-3
  7. ^ Sybil P. Parker, ed. (1984) McGraw-Hill encyclopedia of electronics and computers ISBN 0-07-045487-6
  8. ^ Arthur B. Glaser and Gerald E. Subak-Sharpe (1977), Integrated Circuit Engineering ISBN 0-201-07427-3
  9. ^ Richard H. Eckhouse, Jr. and L. Robert Morris (1979), Minicomputer Systems: organization, programming, and applications (PDP-11) ISBN 0-13-583914-9
  10. ^ For example, John F. Blackburn (1947), Components Handbook, Volume 17, M.I.T. Radiation Laboratory Series, Lexington, MA: Boston Technical Publishers
  11. ^ "I must say that I did not design Windows NT -- I was merely one of the contributors to the design of the system. As you read this book, you will be introduced to some, but not all, of the other contributors. This has been a team effort and has involved several hundred person-years of effort." -- Dave Cutler, Director, Windows NT Development, in the foreword to Inside Windows NT, ISBN 1-55615-481-X, by Helen Custer, p. xix.
  12. ^ Ron White (1995), How Computers Work ISBN 1-56276-344-X
  13. ^ Scott Mueller (2002), Upgrading and repairing PCs ISBN 0-7897-2683-1 CHECK_THIS_ISBN
  14. ^ Harry Newton (1998), Newton's Telecom Dictionary ISBN 1-57820-023-7
  15. ^ George McDaniel, ed. (1993), IBM Dictionary of Computing ISBN 0-07-031489-6
  16. ^ Paul Horowitz & Winfield Hill (1989). The Art of Electronics ISBN 0-521-37095-7
  17. ^ David A. Patterson and John L. Hennessy (1998), Computer Organization and Design ISBN 1-55860-428-6
  18. ^ Alan V. Oppenheim and Ronald W. Shafer (1975), Digital Signal Processing ISBN 0-13-214635-5
  19. ^ W.J. Eckert (1940), Punched card methods in scientific computation, Lancaster, PA: Lancaster Press
  20. ^ Robert Noyce's Unitary circuit, US patent 2981877, "Semiconductor device-and-lead structure", issued 1961-04-25, assigned to Fairchild Semiconductor Corporation 
  21. ^ Jones, Douglas W. accessdate=2008-05-15 "Punched Cards: A brief illustrated technical history". The University of Iowa. 
  22. ^ Jones, Douglas W. "Punched Cards: A brief illustrated technical history". The University of Iowa. Retrieved 2008-05-15. 

Zuse and Von Neumann

According to Hennessy and Patterson, von Neumann knew about the details of Zuse's floating-point proposal. This suggests that the sentence 'Zuse was largely ignored' should be stricken. Any objections? --Ancheta Wis (talk) 10:30, 5 May 2008 (UTC)

Zuse did not implement the floating-point design he patented in 1939 before WWII ended. Von Neumann was aware of Zuse's patent and refused to include it in his Princeton machine, as documented in the seminal paper (Burks, Goldstine and von Neumann, 1946). -- Hennessy and Patterson p. 313, note: "A decimal floating point unit was available for the IBM 650, and [binary floating-point hardware was available for the] 704, 709, 7090, 7094, ...". "As a result, everybody had floating point, but every implementation was different."

To this day, floating point operations are less convenient, less reliable, and more difficult to implement (in both hardware and software). -Ancheta Wis (talk) 08:07, 10 May 2008 (UTC)
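A small C sketch of the inconvenience (my illustration): binary floating point cannot represent 0.1 exactly, so a naive equality test fails.

#include <stdio.h>

int main(void)
{
  double a = 0.1 + 0.2;        // Not exactly 0.3 in binary floating point.
  printf("%.17g\n", a);        // Prints 0.30000000000000004.
  printf("%d\n", a == 0.3);    // Prints 0: the "obvious" comparison fails.
  return 0;
}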

'First electronic computer'?

This assertion is made about the Colossus in this article. It is also made about the ACE in that article. THERE CAN BE ONLY ONE! Twang (talk) 18:59, 10 May 2008 (UTC)

On the other hand, the article also states "Defining a single point in the series as the "first computer" misses many subtleties." Thank you for BEING BOLD! You are welcome to contribute to the article and the talk page! --Ancheta Wis (talk) 20:34, 10 May 2008 (UTC)
Not to be too pedantic, but the article is an example of how a recurring need (in this case, the need to calculate) gets met multiple ways, at multiple times, by multiple people trying to solve a problem. For example, Pascal was trying to help his dad collect taxes; ENIAC was used to fight a war by calculating the trajectories of artillery shells; Zuse was trying to ease the burden of his engineering work; Colossus was trying to decode secret messages; IBM was trying to extend the use of its punch card machines for business purposes; Maurice Wilkes was excited about the possibilities of the First Draft of the Design for EDVAC. You get the idea: it's asking 'What does first mean?'. As we now know from spacetime, time depends on the observer - what does first mean in that case? It only has meaning in the context of a thread. Thus clearly, Maurice Wilkes came after ENIAC, but before the implementation of EDVAC. Colossus was secret, so it was part of a different thread, by definition. And in the article, there is evidence that von Neumann knew something of the ideas of Zuse, so the design and architecture of EDVAC come after Zuse. However, you cannot say that the implemented EDVAC is after Wilkes' machine implementation - they are parallel threads which branched after Wilkes was influenced by the First Draft. These ideas are part of Lee Smolin's book Three roads to quantum gravity ISBN 0-465-07835-4 pp. 53-65. (As you can see, classical logic needs to be reformulated. The world is not monotonic.) I don't have Smolin's book in front of me so I can't give you a page number right now. And I can't put what I just wrote in the article because I don't have a citation other than Smolin, which isn't explicitly about computing hardware (it's about physical processes in general). --Ancheta Wis (talk) 21:07, 10 May 2008 (UTC)
Just following up about ACE, the Automatic Computing Engine. It's the same idea. Turing owed nothing to EDVAC. So there are other editors who have the same kind of reasoning as Smolin's work, stated above. However, just Turing's knowledge that EDVAC was possible said a lot to him -- the ACE solution also had to obey the laws of physics, like EDVAC; thus the ACE problem solvers had a lot less work to do when solving their specific issues on the way to the goal.
These kinds of problems, about priority and independence, are being solved with clean rooms, where developers work in isolation from other implementers. This is all faintly antique for anyone in the open source movement; all that has to be done in open source is to include the provenance of the code base, to keep it Open.
That's where Wikipedia can make its mark on world culture: we can keep everyone honest about who owes what to whom, by citing our sources. This article clearly states that von Neumann owed much of his First Draft to Zuse, Eckert/Mauchly (who owe something to Atanasoff/Berry) and the rest of the inventors who came before him. And Wilkes (and the rest of the world) owe much to von Neumann, etc. Since Turing's ACE does not have priority over Wilkes' machines, the ACE article should probably heavily qualify the meaning of first in its text. That brings us to Emil Post, the American logician who is independent of Turing, but who waited too long to publish. (He had his ideas 15 years before Turing's 1936 publication...) --Ancheta Wis (talk) 21:39, 10 May 2008 (UTC)

Contributions welcomed.

Fellow editors, you are welcome to make your contribution to this article. See the sections above for examples on adding citations. Be Bold.

--Ancheta Wis (talk) 10:43, 11 May 2008 (UTC)

ENIAC 1,000 times faster than its contemporaries

The article currently states "(Electronic Numerical Integrator and Computer) .... it was 1,000 times faster than its contemporaries." As it is stated that ENIAC was Turing complete, if it had been programmed to break "Tunny" would it have been 1,000 times faster than Colossus? If not then this sentence needs changing. --PBS (talk) 10:08, 13 May 2008 (UTC)

If we are comparing electromechanical relays to vacuum tubes then the statement is correct. But Tunny came after ENIAC, so it is a descendant, and not a contemporary, which would have been Z1 (the only unclassified project).
You might change the article page, for example, replacing contemporaries with Z1 in the statement. Citations are welcomed. This page needs more contributors! --Ancheta Wis (talk) 03:35, 15 May 2008 (UTC)
The sentence has been changed. --Ancheta Wis (talk) 08:41, 19 May 2008 (UTC)

The number of pictures

Ancheta Wis, you're doing amazing work here - but don't you think the article should have fewer pictures? — Wackymacs (talk ~ edits) 06:23, 15 May 2008 (UTC)

Thank you for your kind words. I propose to comment out Herman Hollerith, the Jacquard loom, the Manchester Baby, and others.
Editors, you are welcome to contribute to this article and talk page. Be Bold. Citations wanted.
--Ancheta Wis (talk) 10:06, 15 May 2008 (UTC)
Good work. Still too many. Some images obscure section headings (in other words, push them out of order). Also, per WP:MOS, images should not be placed directly under a section heading on the left side. — Wackymacs (talk ~ edits) 10:10, 15 May 2008 (UTC)


It is no good adding lots of citations when half of them are not formatted properly with the citation templates provided. Please see Wikipedia:Citation templates. All web citations should use the Cite web template, and must have an access date. Also, a lot of the current citations look questionable, and some are useless - for example, the two citations in the lead explaining hardware and software. Why? Wikipedia has articles on both of these. — Wackymacs (talk ~ edits) 10:45, 15 May 2008 (UTC)

So the next step is to revisit the citations, using the sample you have provided, and reformat. As part of the history of this article: when we did this, the footnote software had not yet reached its current state. I hope it is stable enough to rely on for the future. I have no objection to going back and revisiting the footnotes, as I am a believer in the spiral development process. --Ancheta Wis (talk) 08:06, 16 May 2008 (UTC)
The "Example 2 article text" appears to be a codification of the usage of ordinary wiki markup practices over the years. I propose reformatting the existing citations into that style. I must say that it appears to place human editors into the position of data entry clerks for the care and feeding of the citation monster. After reading Wikipedia:Citation templates, my reaction is that this article/policy? will evolve.
My personal preference is for "Example 2 article text", and my guess is that any of the items in Wikipedia:Citation templates is acceptable to the FA reviewers. True statement? --Ancheta Wis (talk) 08:29, 16 May 2008 (UTC)
You can either use {{Citation}} for everything, or a mixture of {{cite news}}, {{cite web}}, {{cite book}}, and so on. Both methods are acceptable at FA. — Wackymacs (talk ~ edits) 08:54, 16 May 2008 (UTC)
My last re-format using the cite template ate the name of a co-author. I have to go now, and will return to this issue later. --Ancheta Wis (talk) 16:53, 17 May 2008 (UTC)
This diff shows 27119 net bytes (a 33% increase) have been added to the article since 29 April 2008. I have attempted to address the concerns of Wackymacs (1c) and SandyGeorgia (1a) in the meantime. --Ancheta Wis (talk) 10:50, 19 May 2008 (UTC)
All book footnotes should have specific page numbers. Ancheta Wis, can you start adding page numbers (assuming you have the books which are referenced in footnotes)? — Wackymacs (talk ~ edits) 16:50, 5 June 2008 (UTC)
My books are largely in my basement, with the exception of the 40-lb. box I dragged upstairs for the article. But some of the books I have not looked at since I left the semiconductor industry some decades ago, which does not mean I do not remember where I learned the fact, and which book title I have already cited (I am thinking of Mead and Conway, to be specific). To avoid time pressure, because I cannot predict in what box I will unearth a given book (as is probably apparent, I own thousands of books, not to mention 3 editions of Britannica), I will simply comment out those book refs which lack page numbers. I will also try to conserve bytes in the references for the sake of the page limit. --Ancheta Wis (talk) 00:12, 6 June 2008 (UTC)

Replaced the {{cite}} with {{Citation}}. Retained {{Ref patent}} on the recommendation of the Citations people. The notes now use {{harvnb}} Harvard-style references. --Ancheta Wis (talk) 06:46, 19 June 2008 (UTC)

Looks good. Are you going to be adding page numbers to the books which are missing them? — Wackymacs (talk ~ edits) 07:37, 19 June 2008 (UTC)
Thank you. No book which is in the Notes is missing page numbers, as far as I know. But when I unearth such information I will augment the article. Some books in the References section are there for cultural reasons, such as Bell and Newell, which is the single most important source, in my opinion. --Ancheta Wis (talk) 02:11, 20 June 2008 (UTC)

For the record I am aware that Lord Bowden's first name is not Lord. But I am forced into this by the strictures of the Citation system while using Harvard references. The Ref patent template also does not appear to play well with the References section. That is the reason that I have the 3 patent citations in a hybrid, one style for the Notes, and the Last, First names preceding the Ref patent template in the References section. --Ancheta Wis (talk) 12:12, 19 June 2008 (UTC)

SandyGeorgia, the harvnb templates still need last|year, but I notice that the 'last=' was missing from the Intel and IEEE. I restored the Citation|last=IEEE and then noticed that the Citation|last=Intel was changed as well. How is the Harvard-style referencing method going to work, in this case? --Ancheta Wis (talk) 01:38, 2 July 2008 (UTC)

First light

We need a name akin to the concept of first light of an observatory telescope; I propose the denotation first good run, and wish to apply it to Baby's first good run, June 21, 1948, 60 years ago. --Ancheta Wis (talk) 23:00, 21 June 2008 (UTC)

I am wary of defining such "firsts" in computing, bearing in mind the statement in this article that "Defining a single point in the series as the "first computer" misses many subtleties". TedColes (talk) 16:42, 22 June 2008 (UTC)
Thank you for your considered response. What I refer to is 'the comparison of an expectation to an observation', to use William Shockley's phrase. For example, there were 'screams of joy' when the first p-system for UCSD Pascal compiled itself (the expectation). In my mind, that qualifies as a first good run. Another might be the attainment of 1 peta-flop operation for IBM Roadrunner, just last month. For Baby, the resulting convergence of dots on the Williams tube to the expected location was the first good run. And since the phrase is ostensive, meaning relative to the situation, akin to 'baby's first word', I can see that what the proud parent might view as a triumph might be viewed as something more akin to Michael Faraday's response 'and what is the use of a new-born baby'. Might it be better to use a more prosaic phrase like 'first run'? --Ancheta Wis (talk) 19:20, 22 June 2008 (UTC)
Herbert Simon once said 'There is no substitute for a working program'. Maybe the phrase might be 'first working program' for Baby. --Ancheta Wis (talk) 19:39, 22 June 2008 (UTC)

Harvard Mark I – IBM ASCC Turing Complete?

It seems like the table titled

"Defining characteristics of some early digital computers of the 1940s (See History of computing hardware)"

has a mistake. In the row about the Harvard Mark I – IBM ASCC, in the column "Turing Complete", the link (1998) is clearly copied and pasted from the row about the Z3. I don't know if the Harvard Mark I was Turing complete, but the reference is wrong for sure. I am not familiar with the markup that references this table (obviously transcluded across multiple pages) and could not remove the information. Can someone else do it?

Stilgar (talk) 07:40, 25 June 2008 (UTC)

The same reference to Rojas applies to both electromechanical computers, which ran from tape of finite length, and whose programs are of finite length. Rojas shows that it is possible to simulate branching with pre-computed unconditional jumps. That would apply to both Z3 and Mark I. --Ancheta Wis (talk) 08:36, 25 June 2008 (UTC)
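As a minimal C sketch (my illustration, not Rojas' actual construction) of the idea: both outcomes of a conditional are computed, and a 0/1 flag selects which one takes effect, so the code runs straight through with no branch.

#include <stdio.h>

int main(void)
{
  int x = -5;
  int flag = (x <= 0);                  // 1 if x <= 0, otherwise 0.
  // Branch-free equivalent of: if (x <= 0) y = 10; else y = 20;
  int y = flag * 10 + (1 - flag) * 20;
  printf("%d\n", y);                    // Prints 10.
  return 0;
}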

I don't agree with extending the Rojas conclusion to another machine. Isn't it more complicated? It sounds like a piece of original research that hasn't been published. Zebbie (talk) 23:30, 22 August 2008 (UTC)

Rojas wrong about Turing Complete?

As a separate issue, I think Rojas' conclusion was wrong. Turing's most important contribution to computer science was to postulate the halting problem. Simply put, you can't tell how long a program will take to finish. Therefore Turing defined his Turing machine with the conditional branch. Rojas' conclusion, again paraphrased, was: you can write a program without conditionals, but you have to make the tape as long as the program's run time.

1. Rojas is redefining a Turing machine to have no conditionals.  I'd argue that is no longer a Turing machine.
2. Rojas' new machine has to know in advance how long the program will run.  Turing would argue you cannot know this.

Zebbie (talk) 23:30, 22 August 2008 (UTC)

The Rojas conclusion applies to jobs which include a while wrapper (code with a loop). The branches were needed to halt the program (the job) in any case; otherwise the program could only terminate when it encountered a HALT. A conditional branch to a location containing HALT would do this also. Such a program would stay in a potentially infinite loop until the operator manually terminated the job.
Jump tables are a technique to accomplish branches; see the sketch below.
The length of time needed to complete a program can be known only to the programmer. I have had associates who had to re-submit jobs because a nervous operator terminated one which ran over 24 hours. But the program was correct, and terminated by itself the next time, after the operator let it run to completion. --Ancheta Wis (talk) 17:52, 23 August 2008 (UTC)
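A minimal sketch of the jump-table technique just mentioned (the names are illustrative): an index selects a precomputed target, so a single unconditional indirect jump replaces a chain of conditional branches.

#include <stdio.h>

void op_add(void)  { printf("add\n");  }
void op_halt(void) { printf("halt\n"); }

int main(void)
{
  void (*table[])(void) = { op_add, op_halt };  // Precomputed branch targets.
  int opcode = 1;
  table[opcode]();                              // One indirect jump; prints "halt".
  return 0;
}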
On a related note, the 'carry' operation used in the most elementary calculators from centuries ago is a type of 'branch'. I learned this from Hennessy and Patterson's books on Computer Organization. --Ancheta Wis (talk) 13:44, 24 August 2008 (UTC)

Broken links

As it stands, this still doesn't meet the 2008 FA criteria standards. I just ran the link checker tool on this article, and found some broken links (many are used as references):

The broken links will need to be replaced with other reliable sources, preferably books. — Wackymacs (talk ~ edits) 07:53, 6 July 2008 (UTC)

Problems with References

At the moment, it seems page numbers are being given in the 'References' section instead of in the footnotes where they should be. — Wackymacs (talk ~ edits) 08:18, 6 July 2008 (UTC)

Why the special section?

Why is there a special section for 'American developments' and not one for 'British developments', or any other country? Are Americans special?

--Bias Detector-- 21st July 2008 —Preceding unsigned comment added by (talk) 16:45, 21 July 2008 (UTC)

See the article: "There were three parallel streams of computer development in the World War II era; the first stream largely ignored, and the second stream deliberately kept secret." 1)=Zuse 2)=secret UK 3)=ENIAC etc. --Ancheta Wis (talk) 18:14, 21 July 2008 (UTC)

Shannon's thesis

Claude Shannon founded digital design. Open any electrical engineering book and you will see what Shannon did. This is a link to his thesis. --Ancheta Wis (talk) 10:07, 27 January 2009 (UTC)

"In his 1937 MIT master's thesis, A Symbolic Analysis of Relay and Switching Circuits, Claude Elwood Shannon 'proved' that Boolean algebra and binary arithmetic could be used to simplify the arrangement of the electromechanical relays then used in telephone routing switches, then turned the concept upside down and also proved that it should be possible to use arrangements of relays to solve Boolean algebra problems."

This isn't the same as "implementing" a circuit. However ground-breaking his thesis, it provided a proof, not an implementation. Follow the wikilinks. All we have is words to communicate here; we do need to be able to understand what they mean to make progress on this issue. --TraceyR (talk) 10:42, 27 January 2009 (UTC)

Thank you for taking this to the talk page, which I propose be the venue for improving the article: "In 1937, Shannon produced his master's thesis[61] at MIT that implemented Boolean algebra using electronic relays and switches for the first time in history." In this sentence, implemented refers to George Boole's work, which Shannon reduced to practice. Proof was established in the nineteenth century, before Shannon, by Boole. In other words, Shannon implemented Boole, with Boolean logic gates. In turn, successive generations of engineers re-implemented these logic gates in successive, improved technologies, which computing hardware has taken to successively higher levels of abstraction.

As a metaphor, take Jimbo Wales' statement of principle for Wikipedia. All successive editors implement Wales' vision. In the same way, Shannon implemented Boole.

If you have improvements to the article, I propose we work through them on the talk page. --Ancheta Wis (talk) 11:18, 27 January 2009 (UTC)

I think I see the disconnect: some things can be viewed as purely academic and theoretical; Boole's system of logic might be viewed in this light. But when Shannon expressed Boole's concepts in hardware (which had been done in an ad-hoc way earlier) he showed AT&T that there was another way to build the PSTN, which at one time was completely composed of humans doing the switching of telephone conversations. Today of course, this is all automated. So Shannon's accomplishment was essentially to provide an alternative vocabulary for the existing practice and mindset of the telephone company which in 1937 was analog circuitry. --Ancheta Wis (talk) 11:34, 27 January 2009 (UTC)
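A toy C sketch of the correspondence under discussion (my illustration, not Shannon's notation): relay contacts in series behave as AND, contacts in parallel as OR, so any Boolean expression maps onto a switching circuit.

#include <stdio.h>

// Contacts in series: current flows only if both are closed (Boolean AND).
int series(int a, int b)   { return a && b; }

// Contacts in parallel: current flows if either is closed (Boolean OR).
int parallel(int a, int b) { return a || b; }

int main(void)
{
  // The circuit for (a AND b) OR c, evaluated for a=1, b=0, c=1:
  printf("%d\n", parallel(series(1, 0), 1));   // Prints 1.
  return 0;
}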

Here is a proposed sentence and reference:

--Ancheta Wis (talk) 12:54, 27 January 2009 (UTC)

That looks fine. Go with it. --TraceyR (talk) 10:26, 28 January 2009 (UTC)

Emil Post's machines

I need to put in a plug for Emil Post's work. His formulation of the Turing machine is simpler and Post was actually earlier than Turing, but he failed to publish early enough. That is actually the reason I left in the 'and others'. But, c'est la vie. Maybe the Post-Turing machine will gain currency in future Category:Computational models. --Ancheta Wis (talk) 18:36, 28 January 2009 (UTC)

Perhaps Post's "worker" can be regarded as a "machine", or perhaps not. Either way, the evidence seems to point to Turing's 'On Computable Numbers' paper as having had considerable influence on subsequent developments. If you think it only fair to revise my edit, is 'others' (plural) the right word? Maybe just refer to Post. I think there is a serious omission from the article in that it does not make any reference to Turing's Automatic Computing Engine design, which had important differences from von Neumann's 'First Draft' design. Incidentally, it is easy to underestimate the very close transatlantic co-operation during the second world war; Hodges says that Turing cited von Neumann's paper in his own 1945/46 ACE paper. TedColes (talk) 23:06, 28 January 2009 (UTC)
I have No Problem with your edits as I respect your work. Perhaps we can also use the first-hand memoirs from First-Hand:History of the IEEE Global History Network to entice more editors to contribute here. --Ancheta Wis (talk) 01:47, 29 January 2009 (UTC)
I was not aware of Networking the History of Technology—I am not a member. But it looks like a potentially excellent and authoritative source. TedColes (talk) 06:56, 29 January 2009 (UTC)

Shannon and Stibitz

Since Stibitz is mentioned in the same paragraph as Shannon, there is a suggestion that Stibitz's work was based on Shannon's thesis. If this is the case, perhaps this should be stated explicitly (and mentioned in the Stibitz article too). If not, maybe a new paragraph is needed. --TraceyR (talk) 14:02, 29 January 2009 (UTC)

Or, since they both worked for Bell Labs, connect with more text.
A new paragraph would be less work. --Ancheta Wis (talk) 15:03, 29 January 2009 (UTC)
If Stibitz knew of Shannon's thesis and used it in his work, the article ought to reflect this. Is there citable evidence to enable this link to be made? That both worked for Bell is certainly circumstantial evidence, but is it enough to make the link? --TraceyR (talk) 15:21, 29 January 2009 (UTC)

Hatnote mess

Many of the section hatnotes are non sequiturs. Others "belong" in other sections. I don't have the time to sift through them all myself though. –OrangeDog (talk • edits) 18:37, 29 January 2009 (UTC)

Voltages ... were ... digitized

The lead summary states: "Eventually the voltages or currents were standardized, and then digitized". Could someone explain how voltages or currents were digitized. In what way(s) was this breakthrough made? I thought that my PC used 'analogue' power. Many thanks. --TraceyR (talk) 07:42, 30 April 2009 (UTC)

You can look up the voltages in the successors to the TTL databook. The logic series progressed from 5400, to 7400, to 4000, etc. The 1970s low-power 7400 series: "1.65 to 3.3V". We need an article about this: from 28V for relays, the levels dropped successively lower as power consumption became greener. Maybe Wtshymanski can step in? --Ancheta Wis (talk) 21:25, 30 April 2009 (UTC)
When looking up DTL (1961) I see the levels were -3V and ground. So you can see the voltages were digitized from the beginning. --Ancheta Wis (talk) 00:41, 1 May 2009 (UTC)
Here is a handy table for the different logic families. --Ancheta Wis (talk) 00:50, 1 May 2009 (UTC)
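A toy C sketch of what "digitized" means here (the threshold numbers are illustrative, roughly TTL-style): a continuous voltage is read as 0 or 1 by comparing it against agreed levels, with a forbidden region in between.

#include <stdio.h>

// Map a continuous input voltage to a logic level (illustrative thresholds).
int logic_level(double volts)
{
  const double V_IL = 0.8;   // At or below this, the input reads as logic 0.
  const double V_IH = 2.0;   // At or above this, the input reads as logic 1.
  if (volts <= V_IL) return 0;
  if (volts >= V_IH) return 1;
  return -1;                 // Forbidden region: undefined logic level.
}

int main(void)
{
  printf("%d %d %d\n", logic_level(0.3), logic_level(3.1), logic_level(1.4));
  // Prints: 0 1 -1
  return 0;
}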
If anyone can tell me what that paragraph is supposed to be saying, I'll buy him/her a donut. The whole lead is garbage and must be rewritten. For every von Neumann chip out there, there are probably a half-dozen Harvard-style chips - let's not lie excessively in the first paragraph. --Wtshymanski (talk) 18:49, 24 September 2009 (UTC)
Ever notice how a perfectly clear Wikipedia article, by gentle stages, eventually becomes something that looks like the transcript of the speech of a cat lady having a bad day? One's confidence in the ever-upward spiral of Wikiprogress is shaken. List all the synonyms, show how it's spelled in different varieties of English, and, perhaps, include a diatribe on how it was *really* invented by Tesla/a Hungarian/a Canadian/an ET - put all that in the first sentence with enough links and footnotes, and you're well on the way to mania. --Wtshymanski (talk) 20:17, 24 September 2009 (UTC)

Delay line memory

I appreciate ArnoldReinhold's edits; they show that the flat memory model is a definite advance on the delay line memory model that early programmers had to deal with; however the current style of programming did not arise from nothing. If the deleted edits were unclear, then we might have to give an example of the contortions that programmers had to go through when solving a problem in the early days. Hardware-independent programming did not exist in the early days. Even today, operating system-independent programming is not a given: the API is typically OS dependent. In the absence of contributions to the article in this vein, consider how one would have to program if the items in memory were to decay before they were reused -- one would be forced to refresh critical data before the delay time had elapsed. --Ancheta Wis (talk) 19:01, 24 May 2009 (UTC)

You seem to be implying that refreshing memory was the programmer's responsibility, which it wasn't. A better example might be the programming contortions required to access the early magnetic drums. --Malleus Fatuorum 19:07, 24 May 2009 (UTC)
That was the point of the deleted text (on accessing the magnetic drums). --Ancheta Wis (talk) 21:37, 24 May 2009 (UTC) rvv --Ancheta Wis (talk) 05:25, 1 September 2009 (UTC)
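A toy C sketch (all numbers illustrative) of the contortions both comments describe: on a drum or delay line, a word is only available when it circulates past the read head, so a badly placed word costs nearly a full revolution per access, and optimum coding placed the next instruction just ahead of the head.

#include <stdio.h>

#define PERIOD 64   // Words per drum revolution (or recirculation period).

// Cycles spent waiting for 'address' to come around, starting at time 'now'.
int wait_cycles(int address, int now)
{
  int arrival = address % PERIOD;
  return (arrival - (now % PERIOD) + PERIOD) % PERIOD;
}

int main(void)
{
  printf("%d\n", wait_cycles(10, 8));   // Prints 2: a well-placed word.
  printf("%d\n", wait_cycles(8, 10));   // Prints 62: almost a full revolution.
  return 0;
}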


I reached this article looking for a reference to the MOSAIC computer (Ministry of Supply Automatic Integrator and Calculator) and wondered if the following Introduction might be short enough and apposite:

Computing hardware subsumes (1) machines that needed separate manual action to perform each arithmetic operation, (2) punched card machines, and (3) stored program computers. The history of (3) relates largely to (a) the organization of the units to perform input and output, to store data, and to combine it, into a complete machine (computer architecture), (b) the electronic components and mechanical devices that comprise these units, and (c) higher levels of organization into 21st century supercomputers. Increases in speed and memory capacity, and decreases in cost and size in relation to compute power, are major features of the history.

Five lines instead of 36. The present Introduction could become the first section, headed say Overview, and the pre-stored-program coverage extended to mention the abacus, the National Accounting Machines that "cross footed" under control of a "form bar" that facilitated table construction using difference methods, and machines of the mid 20th century typified by the Brunsviga and Marchant. The overlap of punched card and stored program computers, by dint of control panels and then card-programmed computers, could be mentioned. Michael P. Barnett (talk) 01:47, 24 December 2010 (UTC)

Used your 5-line suggestion for the Introduction. Please feel free to incorporate the remainder of your contribution into the article. Thank you for your suggestions. --Ancheta Wis (talk) 11:54, 26 December 2010 (UTC)

Argument at IEEE 754-1985

There is currently a slow edit war at IEEE 754-1985. I put down the Z3 as the first working computer, as in this article, and it was reverted. I pointed out this article as a better venue to argue matters of history, but they can't be bothered to do that, so I'm doing it instead. Discussion at Talk:IEEE 754-1985#Z3 first working computer. 17:50, 8 February 2011 (UTC)

Punched cards derived from Jean-Baptiste Falcon (1728)

Please put in a note that the idea of punched-card-driven looms originated with the French mechanic Jean-Baptiste Falcon in 1728, although Falcon never succeeded in building one himself. —Preceding unsigned comment added by (talk) 15:12, 13 February 2011 (UTC)

I can't see why: they were a development of the perforated paper rolls already being used for the purpose, and he didn't make it work. Who used the perforated paper rolls first, and when, would be more relevant. Also relevant at this level of detail, possibly, would be the barrels with pins which were used before that for controlling automatons; as far as I know, Hero of Alexandria used them first. Dmcq (talk) 19:07, 13 February 2011 (UTC)

Transition from analog to digital

I propose to rename the analog section in order to preserve the content that was removed.

Alternatively, perhaps a new section with this name might be inserted to contain that content. --Ancheta Wis (talk) 11:15, 5 May 2011 (UTC)

The business about accuracy is practically irrelevant. Digital computers are more convenient. It is like the difference between solving geometry problems the Greek way and solving them using Cartesian coordinates. The Cartesian coordinates may be more long winded in some cases but they just work. Dmcq (talk) 11:09, 9 May 2011 (UTC)
Noise is relevant. A usable signal to noise ratio is the fundamental reason that digital circuits are more accurate than analog circuits. --Ancheta Wis (talk) 11:22, 9 May 2011 (UTC)
You mean precision, not accuracy. Doesn't matter how many bits you have in the number if it's the wrong number. --Wtshymanski (talk) 13:21, 9 May 2011 (UTC)
Yes, you are quite right about the distinction. Thank you. --Ancheta Wis (talk) 13:58, 9 May 2011 (UTC)
I believe the original idea behind ENIAC was that it should emulate a differential analyser, but that idea was abandoned early on as lacking in vision. Even Ada Lovelace and Babbage knew better. Dmcq (talk) 17:12, 9 May 2011 (UTC)
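A toy C sketch of the noise point above (all numbers illustrative): each stage perturbs the signal; the analog chain accumulates the drift, while the digital chain snaps back to a clean level at every stage, so errors never accumulate as long as the noise stays within the margin.

#include <stdio.h>

int main(void)
{
  double analog = 1.0, digital = 1.0;
  for (int stage = 0; stage < 10; stage++) {
    analog  = analog  * 0.98 + 0.04;          // Drift and noise accumulate.
    digital = digital * 0.98 + 0.04;          // Same perturbation...
    digital = (digital > 0.5) ? 1.0 : 0.0;    // ...but restored each stage.
  }
  printf("analog=%f digital=%f\n", analog, digital);
  // The analog value has drifted toward 1.18; the digital value is still 1.0.
  return 0;
}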

Was the Harvard Mark I "Turing Complete"? -- Revisited

We currently label the Mk I as NOT Turing complete - presumably because of a lack of jump instructions. There was some discussion of this on this talk page back in 2008.

It must be noted that:

  • Turing completeness says that a machine is considered to be Turing complete if it can emulate a Turing complete machine.
  • One instruction set computer points out that a machine that implements nothing more than the 'SUBLEQ' instruction is Turing complete.
  • Harvard Mark I says that the Mk I could run from a looped paper tape - so even without a formal jump instruction, it could run the same set of instructions over and over indefinitely.
  • The following program demonstrates that it is possible to emulate a SUBLEQ machine with code inside a single 'while(1)' loop - which the Harvard Mark I could have implemented via paper tape looping:
// Initialization:
typedef unsigned char byte ;
int lut [ 256 ] = {
     1, 1, 1, 1, 1, 1, 1, ....  // 129 ones (indices 0..128, i.e. mem[b] <= 0).
     0, 0, 0, 0, 0, 0, 0, ....  // 127 zeroes (indices 129..255, i.e. mem[b] > 0).
   } ;
byte mem [...whatever...] = { ...whatever... } ;  // The initial state of memory in the SUBLEQ machine.
int PC = 0 ;  // The SUBLEQ machine's program counter.
// Runtime:
while ( 1 )  // (Implemented via a paper tape loop.)
{
  // Read instruction operands from the program counter location.
  int a = mem[PC++] ;
  int b = mem[PC++] ;
  int c = mem[PC++] ;
  // Perform subtraction (wraps modulo 256, i.e. two's-complement arithmetic):
  mem[b] -= mem[a] ;
  // Use the lookup table to extract the sign of mem[b] so that:
  //   c is multiplied by 1 and added to the program counter if mem[b] <= 0;
  //   c is multiplied by 0 and added to the program counter if mem[b] > 0.
  // (The cast reads the byte as signed, shifting its range into 0..255.)
  PC += lut[ (signed char)mem[b] + 128 ] * c ;
}

Ergo, the Harvard Mark I was indeed Turing Complete. This is rather important IMHO. SteveBaker (talk) 15:26, 3 May 2012 (UTC)

Why is 'Turing completeness' important? It is not synonymous with 'general purpose' - and that certainly would not be claimed for the Harvard Mark I. --TedColes (talk) 15:43, 3 May 2012 (UTC)
It's important because the Church-Turing thesis says that all computers that are Turing complete are equivalent (given enough time and memory). If the Mk I is Turing complete, then (with enough time and memory) it could emulate any modern computer - so we'd have to say that it should be considered to be "general purpose". Turing completeness is what truly separates the modern concept of the word "computer" from some mere calculator or sequencer. SteveBaker (talk) 16:52, 3 May 2012 (UTC)
It seems that you are correct. However, we have a problem. Here in Wikipedia, we can't publish original research. What we need is a reliable source that claims this (or the contrary). ---- CharlesGillingham (talk) 20:02, 3 May 2012 (UTC)
I'm painfully aware that this discovery is my own WP:OR - and therefore problematic without reliable sources. However, the entire section History_of_computing_hardware#Early_computer_characteristics has not one single reference - so why should this article have an unreferenced falsehood rather than an unreferenced truth? We do state as a fact that the Mk I is not Turing complete - and that is stated without references. Per WP:V we can only do that if this statement is uncontroversial. Well, following my reasoning, it most certainly is controversial because both you and I agree that it's untrue. Hence until/unless we can find a WP:RS we have three alternatives:
  1. Leave the article as it is - with an unreferenced, controversial (and seemingly false) statement.
  2. Change the article to say that the Mk I is indeed Turing complete - leaving an unreferenced (but evidently true and hopefully uncontroversial) statement.
  3. Remove that table (or at least the "Turing complete" column or the "Mk I" row) on the grounds that it is "is likely to be challenged" and has no WP:RS to back it up (per WP:V).
I don't think (1) is acceptable - so we either need to change the (unreferenced) "No" to an equally unreferenced "Yes" - or nuke the table (per WP:V) on the grounds that it's both un-sourced and controversial. Ideally of course we should find a reliable source - but until we do, the article shouldn't contain an unreferenced statement that we now know to be false.
SteveBaker (talk) 12:46, 7 May 2012 (UTC)
Turing completeness is clearly controversial, so I would favour removing that column from the table. The nuclear option of deleting the whole table seems extreme, particularly as the transcluded form has been removed from a whole host of articles. As regards the lack of references, readers can look to the articles about the individual machines. --TedColes (talk) 17:04, 7 May 2012 (UTC)

I think the Turing completeness column is useful to our readers as a rough guide to how the technology evolved. The controversial entries should have a footnote that says later researchers have attempted to show the machines in question were Turing complete but those capabilities were not envisioned when the machines were developed and used. --agr (talk) 10:39, 9 May 2012 (UTC)

Only if all the entries can be verified from independent sources, and not original research, should this column be retained. --TedColes (talk) 11:38, 9 May 2012 (UTC)
This table is only useful as a "rough guide" if it actually contains true facts. Before I edited it, the article said that the Mk I is definitely not Turing complete - which was clearly false. That's not a "useful rough guide" - it's a misleading untruth!
The historical matter of whether the machine's developers were trying to make the machine Turing complete is moot, because the Church-Turing thesis wasn't widely accepted, nor its implications understood, until Kleene's paper was published in the early 1950s - six or more years after the Harvard Mk I entered service. Before Church-Turing, it really didn't seem to matter a damn whether a machine was Turing complete or not, because nobody knew that Turing completeness was the key to making a fully general-purpose computer. They couldn't have known how important that is - and therefore were unlikely to build specific features into their machines to ensure that they crossed that threshold. It's not as if researchers were pushing steadily towards Turing completeness - so the column of Yes's and No's doesn't really exhibit a trend in the design of early computers.
Neither I nor WP:V has any problem with putting unsourced material into the encyclopedia provided that it's not controversial. You don't need to find sources for "The sky is blue", "2+2=4" or "My laptop is Turing complete". But as soon as a statement becomes controversial, you either have to find references for it or remove it. Personally, I'm 100% convinced that the Harvard Mk I was Turing complete - and IMHO our article wasn't just controversial, it was downright wrong. But my argument alone should suffice to convince everyone that the statement that the Mk I is not Turing complete is at the very least controversial. So no matter what, the article can't say that.
The decision then comes down to either:
  • If everyone accepts my argument (above) - then a "Yes" next to the Harvard Mk I isn't controversial - and we can change the article to say that without a reference (although that would still be nice to have)
  • One or more people here disagree with (or don't understand) my argument - so the table is controversial whether it says "Yes" or "No". Since it's unreferenced material - it must be deleted in order to resolve the controversy.
SteveBaker (talk) 13:21, 9 May 2012 (UTC)
If we don't know, we should just put in a dash; we don't have to say yes or no. I know some people just can't stand uncertainty, so will argue forever about grey areas like that; personally I'm no fan of the Turing column, so I wouldn't miss it. The real point is that people couldn't be bothered with anything like that. Zuse, for instance, wanted to produce programmable scientific calculators that individual engineers or small groups could use; for that, price was a main constraint. Colossus was built to crack codes. Universality just wasn't one of the things the early pioneers were interested in. Compare them against the Manchester Baby, which was easily universal but totally impractical, built just to test out some ideas, especially the Williams tube memory. Universality doesn't require much, as can be seen from the Game of Life. I think the Baby can be celebrated as the first computer with a modern structure, having a stored program, rather than all the configuring of ENIAC, which was in effect an automated bunch of electronic tabulators. If anything, I'd put down the main innovation in them, or what they were for, rather than the Turing completeness column. Perhaps change the 'Programming' column to 'Description' and add under the Baby, for instance, "Testbed for Williams tube memory. First stored program computer." Dmcq (talk) 16:53, 9 May 2012 (UTC)

Flamm citations

To the anon: I patched in a phrase in the new footnote 1 which I hope matches your intent. Please feel free to alter my patch to your contribution. --Ancheta Wis   (talk | contribs) 03:25, 25 January 2013 (UTC)

In the same light, I propose to use 'accelerated' rather than 'underpinned' in your contribution, because the article makes it clear that there were funding sources other than military contracts, in both the US and Germany. I do not deny that IC-based computers in military systems (1958-1960s) were materially funded by US (and likely USSR) contracts. --Ancheta Wis   (talk | contribs) 04:06, 25 January 2013 (UTC)

Sorry, going to be a pain! With regard to the USSR, I feel the word underpinned to describe government involvement is already an understatement; I feel underpinned is also the appropriate term for development elsewhere. Also, the sources say that the investment from the private sector pales into insignificance when compared with the resources ploughed in by government. Just so it's not my word (all quotes below are from reviews of Flamm's studies): "As Flamm points out, one of the crucial features of the computer was the role played by government and universities in the early stages of research and development when a great deal of 'Knightian' uncertainty (as opposed to risk) deterred private companies from significant commitment. ... [In Japan,] the Ministry of International Trade and Industry was crucial". An "insignificant" commitment from the private sector, according to the sources cited, for the early stages of computer development and the computer market. According to Flamm, who, at least in my understanding, is what we must accurately represent, governments more than "accelerated" the development and commercial viability: it wouldn't have happened without them. "the U.S. government, especially its military arm, has played a crucial role at critical times in bringing the computer out of the experimental stage to its present strong position in the marketplace". Again: "the government's multifaceted involvement ... [included that of] research sponsor, principal customer, and regulator of computing technology". And again: "government support is crucial because of the financial disincentives for private investors to be involved in long-term Research and Development". So I'm cheering for a slightly more emphatic term than accelerate, at least for early development and the creation of a viable market! (talk) 00:06, 26 January 2013 (UTC)
I appreciate your response, and have reverted my wording. Thank you for your precis of the Flamm works.
Computing, IC engineering, Arpanet, Quantum cryptography, and so forth, would look very different with different/alternative funding histories. And these topics are germane to the article. -Ancheta Wis   (talk | contribs) 00:47, 26 January 2013 (UTC)


What category (or categories) is appropriate for machines that use integrated circuits, but don't put the entire processor on a single chip? In other words, what category covers what History of computing hardware (1960s–present) calls "Third generation" computers?

In other words, what category goes in the blank of the following?:

--DavidCary (talk) 14:55, 23 August 2013 (UTC)

Perhaps category: minicomputers ? --DavidCary (talk) 14:55, 23 August 2013 (UTC)

The category: minicomputers covers many of them, but it doesn't cover other multi-chip processors such as the Apollo Guidance Computer, the Cray-1, the first hardware prototype of the Motorola 6800, etc.

Should we start a new category, perhaps category: integrated circuit processors? --DavidCary (talk) 14:55, 23 August 2013 (UTC)

Archimedes' method

Archimedes' method of performing calculations was the use of a mechanical balance (think see-saw) of countable items versus the object being measured. This method was used for estimating the number of grains of sand in the universe, etc. (see The Sand Reckoner).

Thus Archimedes' method of calculation was very concrete, as befits his status as engineer, inventor, and physicist. For this reason I propose to add his method to history of computing rather than to this article. I am pretty sure there is already a main article about this. --Ancheta Wis   (talk | contribs) 02:19, 30 September 2013 (UTC)

That sounds reasonable to me. Bubba73 You talkin' to me? 02:28, 30 September 2013 (UTC)
Done. --Ancheta Wis   (talk | contribs) 02:40, 30 September 2013 (UTC)

Claim that Zuse is commonly known as the "inventor of computer" is wrong.

The lede previously claimed that Zuse was commonly known as *the* "inventor of the computer", and the only citations given are to discussions in blogs. Published histories of computing have variously proposed that the "inventor of the computer" is Babbage (who designed the first programmable computer), Aiken (for the Harvard Mark I, which was a highly influential electromechanical computer designed and built around the time of Zuse's Z3), Atanasoff (for the first electronic digital computer), Eckert and von Neumann (for the stored program concept), and several other milestones. Zuse's Z3 could certainly support the claim of his being the creator of the first working electromechanical programmable computer, but this does not imply that he is commonly known as the inventor of the computer. Wikipedia articles should not be used to push non-mainstream views.

For now I have moved this claim down to the section on Zuse's computer, but I think that either a separate section discussing the complex issue of who was *the* inventor of the computer should be added, or this claim should be removed (in any case, the claim needs reputable citations, not just blogs). (talk) 16:33, 22 December 2013 (UTC)


Hi, the article is rather chaotic and unorganized. It's very difficult for a casual reader to make sense of the important developments and stages. There is also a lot of important information that is missing. Noodleki (talk) 19:19, 7 January 2014 (UTC)

Noodleki, thank you for responding! Now, using the WP:BRD protocol, I propose reverting myself and adding inline tags to indicate what ought to be worked out.
To all editors, comments on my proposal? In other words, start with Noodleki's changes, and tag Noodleki's edits with concerns.
  1. For example, I think it is POV to call the earliest known computing devices primitive.
  2. The invention of zero isn't even marked in the article, and zero was momentous, in my opinion.
  3. The recognition that the carry operation was a form of branching...
  4. The upcoming quantum computers are only briefly mentioned, etc., etc.
  5. Software is only tangentially mentioned. ...
  6. Or ... some other proposal ...
  7. Such as agreeing on an outline of the changes? --Ancheta Wis   (talk | contribs) 20:35, 7 January 2014 (UTC)

Hi, I understood from the above that you would revert. I think your suggestions apply equally to the version as it stands, although I think software wouldn't necessarily come under this article's purview. Thanks. Noodleki (talk) 21:20, 8 January 2014 (UTC)

Noodleki, you are welcome to interpolate inline tags, or other comment on the talk page. Regarding your vision of the article, I would be interested in exactly what missing items you are noting. The development of the hardware follows the published history, for example. A retrospective view necessarily distorts what actually happened. If we were to follow Babbage's dream, for example, we would have seen steam-powered computation. But that is not the way computation actually developed. --Ancheta Wis   (talk | contribs) 01:02, 9 January 2014 (UTC)
I'm afraid I don't understand what you mean about inline tags. You said above that you propose to revert yourself, but you don't seem to be doing this. The changes in the article are layout improvements and better organization of material, and more information on key developments such as Babbage and Colossus. Noodleki (talk) 11:31, 9 January 2014 (UTC)
The WP:BRD protocol requires a discussion - the reverter should explain the reasons behind his/her revert, which is something you aren't doing. Your suggestions apply equally to the article as it stands, and I've already explained the basis for my changes. You also agreed earlier to revert it yourself, and I don't understand why you are not doing this. Noodleki (talk) 11:33, 12 January 2014 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── Noodleki, I am waiting for the other editors to respond. Your changes for Babbage fit nicely in the nineteenth century, and I suggest that you add them to that section. However, I do not agree with your characterization of 'chaotic', and suggest to you that there is a logical flow in the article already. It goes a bit far to place as much emphasis on Babbage as your version does, as his design required repeatable manufacturing tolerances beyond the capacities of the nineteenth century. It took another century. __Ancheta Wis   (talk | contribs) 12:00, 12 January 2014 (UTC)

I think Babbage is underemphasized. After all, he was the first to describe a proper computer. '1801: punched card technology 1880s: punched card data storage' is a very strange set of sections, and there is far too much emphasis on Hollerith, whose invention was a simple calculating device, similar to Pascal's invention. The article also lacks 'flow' - it's very disjointed and doesn't explain the important stages clearly. The layout could be greatly improved, the intro shortened, and the last section removed, as there is a dedicated article for it already. There is also little information on analog computers. All these deficiencies were addressed with my edit. Noodleki (talk) 15:06, 12 January 2014 (UTC)
Babbage's work is one stage in the history of computing hardware. There is more to computing than Babbage. You are welcome to flesh out Babbage's role, but he is not center stage today. The current article states clearly that the pace of development is unabated. --Ancheta Wis   (talk | contribs) 04:42, 14 January 2014 (UTC)
Here is an example of an inline tag.[discuss] --Ancheta Wis   (talk | contribs) 04:55, 14 January 2014 (UTC)
As an example of the pace of computing hardware development, there are multiple streams of development for qubits which are in progress right now. There is no clear winner for implementation, philosophical explanation, or technological exploitation yet. But large amounts of money are being risked right now, as in the Babbage case. IBM is taking yet another path, to make things even more interesting. __Ancheta Wis   (talk | contribs) 12:57, 14 January 2014 (UTC)
I'm not suggesting Babbage is 'center-stage'. I don't know why you bring up qubits - that could go in the post-1960 article. Anyway, you still haven't provided an explanation for your revert, and you haven't reversed it, despite saying you would. So I will provisionally put those changes back in, and you can point out any problems you might have with inline tags. Noodleki (talk) 12:04, 16 January 2014 (UTC)