WikiProject Computing / Software / Hardware (Rated Start-class, Mid-importance)
WikiProject Microsoft Windows / Computing (Rated Start-class, Low-importance)
This page was nominated for deletion on 8 February 2009. The result of the discussion was no consensus.
- On x86, not if the OS enables protected memory. The "16 bits" is then merely the "displacement" from the base address of the current memory segment. Jeh (talk) 07:43, 19 June 2017 (UTC)
Too MSDOS/Windows/Intel focussed
page should include limits of other technologies/architectures, or be renamed 'Wintel RAM limits' or something similar. — Preceding unsigned comment added by 22.214.171.124 (talk) 20:23, 1 August 2012 (UTC)
- Trying to fix this. As usual, we have a list of specs and no mention of *why* the specs exist. Many technology articles in Wikipedia read as if we'd raked through an Area 51 crash.
- A 64-bit address space is enough to give everyone on earth over 2 gigabytes of RAM - the only problem is, even with 1-nanosecond RAM it would take about 585 years to do the first pass through RAM on a power-on self test. In 1981 my desktop PC ran happily with 150 ns RAM and 385 kbytes - today you can get multiple gigabytes, but the speed is hardly 10 times greater. --Wtshymanski (talk) 15:07, 24 August 2012 (UTC)
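A back-of-the-envelope check of the figures in the comment above (assumed values: a rough 2012 world population of 7 billion, and one byte touched per 1-nanosecond access):

```python
# Sanity-check the 64-bit address-space arithmetic quoted above.
# Assumptions (not from the original comment's text): world population
# of ~7 billion (circa 2012), and one byte per 1 ns memory access.
ADDRESS_SPACE = 2 ** 64            # bytes addressable with 64 bits
POPULATION = 7_000_000_000         # rough 2012 world population

per_person = ADDRESS_SPACE / POPULATION
print(f"{per_person / 2**30:.1f} GiB per person")   # ~2.5 GiB

NS_PER_ACCESS = 1e-9               # 1 nanosecond per access
seconds = ADDRESS_SPACE * NS_PER_ACCESS
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.0f} years for one full pass")       # ~585 years
```

This confirms the order of magnitude of the claim: a single linear pass over a full 64-bit address space at 1 ns per byte takes roughly 585 years.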
"Pins" counts are mostly wrong, also misleading
Modern CPUs do not have separate "memory address pins". The memory addresses are multiplexed onto the same pins as the data from memory transactions.
Even in the days when there were separate "memory address pins", you couldn't simply count the pins and say the maximum addressable memory is two raised to that number of bytes. First, the article's early assertion that all of the address pins must be valid at the same time is false. Address pins can be shared, for example between a set of high bits and a set of low bits, with an accompanying strobe pin to indicate which half is currently valid. The memory subsystem can latch these into the two halves of a parallel register that holds the entire address. Thus the number of effective memory address bits can be double the number of "memory address pins".
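The latching scheme described above can be sketched as a toy model (not any specific chip; the class name, pin width, and strobe convention are illustrative assumptions):

```python
# Toy model of sharing 16 address pins between the high and low halves
# of a 32-bit address. A strobe signal tells the memory subsystem which
# half is currently on the pins; each half is latched into its side of
# a parallel register holding the full address.
class AddressLatch:
    """Latches two 16-bit halves from 16 shared pins into a 32-bit address."""

    def __init__(self):
        self.low = 0
        self.high = 0

    def present(self, pins_value, high_strobe):
        # pins_value is whatever appears on the 16 shared address pins
        if high_strobe:
            self.high = pins_value & 0xFFFF
        else:
            self.low = pins_value & 0xFFFF

    @property
    def address(self):
        return (self.high << 16) | self.low


latch = AddressLatch()
latch.present(0x1234, high_strobe=True)    # first cycle: high half
latch.present(0x5678, high_strobe=False)   # second cycle: low half
print(hex(latch.address))                  # 0x12345678
```

Sixteen physical pins thus carry 32 effective address bits, at the cost of an extra bus cycle per access, which is exactly why a pin count alone does not bound addressable memory.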
For another example: on the Pentium Pro, the first processor to support PAE, and on the later 32-bit Xeons, PAE does increase the physical address width to 36 bits - but there are only 33 address pins on the CPU, numbered A3 through A35. Bits A0, A1, and A2 never leave the CPU, because RAM is always read and written in aligned 8-byte chunks. These CPUs do allow physical memory to be addressed at the byte level, but anything smaller than an eight-byte chunk is resolved within the CPU and its cache. Jeh (talk) 07:48, 19 June 2017 (UTC)
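The arithmetic behind that PAE example, written out (assuming, as stated above, pins A3 through A35 and 8-byte bus transfers):

```python
# PAE pin-count arithmetic: 33 address pins (A3..A35) selecting aligned
# 8-byte chunks still yield a full 36-bit physical address space,
# because the low three address bits never leave the chip.
pins = 35 - 3 + 1                   # A3 through A35 inclusive -> 33 pins
chunk = 8                           # bytes per bus transfer (2**3)
addressable = (2 ** pins) * chunk   # bytes of physical address space
print(pins, addressable == 2 ** 36)    # 33 True
```

So counting "address pins" understates the physical address width by exactly the log2 of the bus transfer size, another way the pin-counting argument in the article goes wrong.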
more on pin count
In fact... the opening sentence of the "CPU addressing limits" section:
For performance reasons, all the parallel address lines of an address bus must be valid at the same time, otherwise access to memory would be delayed and performance would be seriously reduced.
is both unsourced and highly dubious. We now have serial interfaces to memory, in which addresses and data flow over the same wires. There are also such things as multiplexed memory address/data pins. And even before such interfaces existed, not every CPU was designed with the goal of avoiding this supposed performance problem; in some cases memory access was fast enough even without presenting all the bits of an address in parallel.
And this assertion is the basis for this entire section, rendering the entire section questionable.
CPU address limits may be further constrained by the chipset. A large number of chipsets for the Pentium Pro and, later, the 32-bit Xeons do not support the entire possible address width of their processors. Then there are motherboard limitations, such as the sheer number of RAM module sockets available and the maximum size of a RAM module. These issues are hinted at in the lede but not mentioned in the article body - which means they shouldn't be in the lede. The section on CPU address limits leaves the reader with the impression that whatever the "address pin" count on the CPU is, that's the RAM limit and that's it. Wrong.
This entire article reads more like an essay written by someone in an AP high school class who maybe completed a project in digital logic. I do not believe it is salvageable in its present form.
All references to limits of specific processors and operating systems should be left to their respective articles. There is no need to duplicate that information here. There is perhaps room for an article describing the principles by which RAM addresses are limited, but it will need to be far better-referenced than this one. I suspect the material would be just a couple of paragraphs that would better be placed in something like the Computer memory article, though that one is also sadly lacking references. Jeh (talk) 16:03, 19 June 2017 (UTC)
And then there's the operating systems section
Which starts out:
The first major operating system for microcomputers ...
Why is this the starting point? What about OSs for minicomputers, superminis, mainframes? Nobody cares, other than historically, about the RAM limits of CP/M. If we do care about history - and as an encyclopedia, we should - then we should cover older OSs, and non-personal computer OSs. If we don't care about history then all of the "no longer current" stuff can go.
As with the chip-by-chip stuff there is little reason here to enumerate operating systems and describe the RAM limits of each - this info should go in each OS's article. Various OSs are different enough from each other that a simple list "this one supports this much, that one supports that much" is grossly inadequate. Again, there is perhaps room for an article that describes the evolution of memory support in OSs, why various decisions were made, etc. But a simple list of product characteristics is not what WP articles should be. Especially not when that information is already in the products' articles. Jeh (talk) 16:13, 19 June 2017 (UTC)