Random-access memory
[Image: DDR SDRAM module]

| | |
|---|---|
| Connects to | PCB or motherboard |
| Types | SDRAM, DDR, RDRAM, DDR2, DDR3 |
| Common manufacturers | Micron Technology, Samsung, Kingston Technology, Corsair Memory, Mushkin |
Random access memory (usually known by its acronym, RAM) is a type of data storage used in computers. It takes the form of integrated circuits that allow the stored data to be accessed in any order — that is, at random and without the physical movement of the storage medium or a physical reading head.
The word "random" refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.[1] This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item.
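The contrast between constant-time random access and position-dependent sequential access can be sketched with simple data structures. This is an illustrative analogy only (the class and step counts below are invented for the example, not taken from the article):

```python
# Illustrative sketch: random access vs sequential access.

class TapeStorage:
    """Sequential-access storage: reading item i requires stepping
    past every item before it, like winding a tape."""
    def __init__(self, items):
        self.items = list(items)

    def read(self, index):
        steps = 0
        for i, value in enumerate(self.items):
            steps += 1          # one "physical movement" per item passed
            if i == index:
                return value, steps
        raise IndexError(index)

data = list(range(1000))
tape = TapeStorage(data)

# Random access: retrieval cost is the same regardless of position.
print(data[5], data[999])

# Sequential access: cost grows with distance from the start.
_, near_steps = tape.read(5)
_, far_steps = tape.read(999)
print(near_steps, far_steps)    # 6 vs 1000 steps
```

The nearby item costs 6 steps and the distant one 1000, while the list lookups take the same time either way, mirroring the tape-versus-RAM distinction above.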
Terminology
Originally, RAM referred to a type of solid-state memory, and the majority of this article deals with that, but physical devices which can emulate true RAM (or, at least, are used in a similar way) can have "RAM" in their names: for example, DVD-RAM.
RAM is usually writable as well as readable, so "RAM" is often used interchangeably with "read-write memory". The alternative to this is "ROM", or Read Only Memory. Most types of RAM lose their data when the computer powers down. "Flash memory" is a ROM/RAM hybrid that can be written to, but which does not require power to maintain its contents. RAM is not strictly the opposite of ROM, however. The word random indicates a contrast with serial access or sequential access memory.
"Random access" is also the name of an indexing method: hence, disk storage is often called "random access" because the reading head can move relatively quickly from one piece of data to another, and does not have to read all the data in between. However the final "M" is crucial: "RAM" (provided there is no additional term as in "DVD-RAM") always refers to a solid-state device.
Many CPU-based designs actually have a memory hierarchy consisting of registers, on-die SRAM caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the "random access" term in RAM. Even within a hierarchy level such as DRAM, the specific row/column/bank/rank/channel/interleave organization of the components makes the access time variable, although not to the extent that the access time of rotating storage media or tape varies.
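The effect of such a hierarchy on access time can be sketched as a toy two-level model. The class, the cache size, and the latency figures below are hypothetical, chosen only to show how the same "read" operation can cost very different amounts depending on where the data currently resides:

```python
# Hypothetical two-level memory hierarchy: a small cache in front of
# a larger "DRAM". Latency numbers are arbitrary cost units, not
# measurements of any real hardware.

CACHE_LATENCY = 1
DRAM_LATENCY = 100

class Hierarchy:
    def __init__(self, cache_size):
        self.cache_size = cache_size
        self.cache = {}            # address -> value
        self.dram = {}

    def write(self, addr, value):
        self.dram[addr] = value

    def read(self, addr):
        if addr in self.cache:
            return self.cache[addr], CACHE_LATENCY    # cache hit
        value = self.dram[addr]                       # miss: fetch from DRAM
        if len(self.cache) >= self.cache_size:
            self.cache.pop(next(iter(self.cache)))    # evict oldest entry
        self.cache[addr] = value
        return value, DRAM_LATENCY

mem = Hierarchy(cache_size=2)
mem.write(0x10, "a")
print(mem.read(0x10))   # first read misses: ('a', 100)
print(mem.read(0x10))   # second read hits:  ('a', 1)
```

Identical requests to the same address cost 100 units, then 1, which is exactly the variability that strains the literal meaning of "random access".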
Overview
The key benefit of RAM over types of storage which require physical movement is that retrieval times are short and consistent. Short because no physical movement is necessary, and consistent because the time taken to retrieve a piece of data does not depend on its current distance from a physical head; it requires practically the same amount of time to access any piece of data stored in a RAM chip. Most other technologies have inherent delays for reading a particular bit or byte. The disadvantages of RAM over physically moving media are its higher cost and, for most types, the loss of data when power is turned off.
Because of this speed and consistency, RAM is used as 'main memory' or primary storage: the working area used for loading, displaying and manipulating applications and data. In most personal computers, the RAM is not an integral part of the motherboard or CPU—it comes in the easily upgraded form of modules called memory sticks or RAM sticks about the size of a few sticks of chewing gum. These can quickly be removed and replaced should they become damaged or too small for current purposes. A smaller amount of random-access memory is also integrated with the CPU, but this is usually referred to as "cache" memory, rather than RAM.
Modern RAM generally stores a bit of data as either a charge in a capacitor, as in dynamic RAM, or the state of a flip-flop, as in static RAM. Some types of RAM can detect or correct random faults called memory errors in the stored data, using RAM parity and error correction codes.
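Single-bit parity, the simplest of the error-detection schemes mentioned above, can be shown in a few lines. This is a minimal sketch of the idea, not the logic of any particular memory controller; the function names are invented for the example:

```python
# Sketch of single-bit even parity: enough to *detect* one flipped
# bit per byte, though not to correct it (that requires an error
# correction code such as a Hamming code).

def parity_bit(byte: int) -> int:
    """Return 0 or 1 so that the byte plus its parity bit
    together contain an even number of 1 bits."""
    return bin(byte).count("1") % 2

def check(byte: int, stored_parity: int) -> bool:
    """True if the byte still matches the parity bit stored with it."""
    return parity_bit(byte) == stored_parity

value = 0b10110010               # four 1 bits -> parity 0
p = parity_bit(value)
corrupted = value ^ 0b00001000   # a single bit flips in storage
print(check(value, p))           # True: data intact
print(check(corrupted, p))       # False: memory error detected
```

Note that parity only detects an odd number of flipped bits; correcting errors, as ECC memory does, requires storing several check bits per word rather than one.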
Many types of RAM are volatile, which means that unlike some other forms of computer storage such as disk storage and tape storage, they lose all data when the computer is powered down. For this reason, nearly all PCs use disks as "secondary storage". Small PDAs and music players (up to 8 GiB as of January 2007) may dispense with disks entirely, relying instead on flash memory to maintain data between sessions of use.
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster virtual hard drive, called a RAM disk. Unless the memory used is non-volatile, a RAM disk loses the stored data when the computer is shut down. However, volatile memory can retain its data when the computer is shut down if it has a separate power source, usually a battery.
If a computer becomes low on RAM during intensive application cycles, the computer can resort to so-called virtual memory. In this case, the computer temporarily uses hard drive space as additional memory. Constantly relying on this type of backup memory is called thrashing, which is generally undesirable because virtual memory lacks the speed advantages of RAM. To reduce the dependency on virtual memory, more RAM can be installed.
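Why a working set slightly larger than available RAM causes thrashing can be demonstrated by counting page faults under a least-recently-used (LRU) replacement policy. The simulation below is a hedged sketch (real operating systems use more elaborate policies), with invented page numbers and frame counts:

```python
# Counting page faults under LRU replacement. When the access pattern
# cycles through more pages than there are RAM frames, *every* access
# faults and must go to disk: thrashing.
from collections import OrderedDict

def count_faults(accesses, num_frames):
    frames = OrderedDict()                  # page -> None, in LRU order
    faults = 0
    for page in accesses:
        if page in frames:
            frames.move_to_end(page)        # mark as recently used
        else:
            faults += 1                     # page fault: load from disk
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = None
    return faults

# Cycling through 4 pages with only 3 frames: every access faults.
pattern = [1, 2, 3, 4] * 5
print(count_faults(pattern, num_frames=3))  # 20 faults: thrashing
print(count_faults(pattern, num_frames=4))  # 4 faults: working set fits
```

Adding one more frame, the analogue of installing more RAM, drops the fault count from 20 to 4, which is why extra RAM is the usual cure for thrashing.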
Recent developments
Currently, several types of non-volatile RAM are under development, which will preserve data while powered down. The technologies used include carbon nanotubes and the magnetic tunnel effect.
In summer 2003, a 128 KiB magnetic RAM chip manufactured with 0.18 µm technology was introduced. The core technology of MRAM is based on the magnetic tunnel effect. In June 2004, Infineon Technologies unveiled a 16 MiB prototype again based on 0.18 µm technology.
Nantero built a functioning prototype of a 10 GiB carbon nanotube memory array in 2004.
In 2006, solid-state memory came of age, especially when implemented as "solid-state disks", with capacities exceeding 150 gigabytes and speeds far exceeding those of traditional disks. This development has started to blur the distinction between traditional random-access memory and disks, dramatically reducing the difference in performance.
Memory wall
The "memory wall" is the growing disparity between CPU and memory speeds. From 1986 to 2000, CPU speed improved at an annual rate of 55%, while memory speed improved at only 10% per year. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.[2]
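The compounding effect of those two growth rates can be worked out directly from the figures quoted above:

```python
# Worked arithmetic for the memory-wall figures: CPU speed improving
# 55% per year versus memory at 10% per year, from 1986 to 2000.

cpu_growth, mem_growth = 1.55, 1.10
years = 2000 - 1986                     # 14 years

gap_per_year = cpu_growth / mem_growth  # ~1.41x per year
total_gap = gap_per_year ** years

print(f"gap grows ~{gap_per_year:.2f}x per year")
print(f"after {years} years: ~{total_gap:.0f}x wider")   # ~122x
```

A relative gap that widens by roughly 41% every year compounds to two orders of magnitude over the period, which is why latency, rather than raw CPU speed, was expected to dominate performance.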
Currently, CPU speed improvements have slowed significantly, partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in their Platform 2015 documentation:
“First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat (more on power consumption below). Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.”
The RC delays in signal transmission were also noted in Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures, which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014. The data on Intel processors clearly show a slowdown in performance improvements in recent processors. However, Intel's newer Core 2 Duo processors (codenamed Conroe) show a significant improvement over previous Pentium 4 processors; thanks to a more efficient architecture, performance increased while the clock rate actually decreased.
DRAM packaging
For economic reasons, the large (main) memories found in personal computers, workstations, and non-handheld game consoles (such as the PlayStation and Xbox) normally consist of dynamic RAM (DRAM). Other parts of the computer, such as cache memories and data buffers in hard disks, normally use static RAM (SRAM).
See also
- CAS latency (CL)
- DIMM
- DVD-RAM
- Dual-channel architecture
- Error-correcting code (ECC)
- Registered/Buffered memory
- Compact Flash
- PC card
- Static RAM (SRAM)
- STT RAM (Spin Torque Transfer RAM)
- Non-Volatile RAM (NVRAM)
- Dynamic RAM (DRAM)
- Fast Page Mode DRAM
- EDO RAM or Extended Data Out DRAM
- XDR DRAM
- SDRAM or Synchronous DRAM
- DDR SDRAM or Double Data Rate Synchronous DRAM; now being replaced by DDR2 SDRAM
- RDRAM or Rambus DRAM
Notes and references
- ^ Strictly speaking, modern types of DRAM are therefore not truly (or technically) random access, as data are read in bursts; the name DRAM has stuck, however.
- ^ The term was coined in Hitting the Memory Wall: Implications of the Obvious (PDF).
External links
- How RAM Works – Article by Jeff Tyson and Dave Coustan