In computer operating systems, paging is a memory-management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. Under paging, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging over memory segmentation is that it allows the physical address space of a process to be noncontiguous. Before paging came into use, systems had to fit whole programs, or whole segments of them, into storage contiguously, which caused various storage and fragmentation problems.
Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use secondary storage[a] for data that does not fit into physical random-access memory (RAM).
Page faults
The main functions of paging are performed when a program tries to access pages that are not currently mapped to physical memory (RAM). This situation is known as a page fault. The operating system must then take control and handle the page fault, in a manner invisible to the program. Therefore, the operating system must:
- Determine the location of the data in secondary storage.
- Obtain an empty page frame in RAM to use as a container for the data.
- Load the requested data into the available page frame.
- Update the page table to refer to the new page frame.
- Return control to the program, transparently retrying the instruction that caused the page fault.
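The steps above can be sketched with a toy page table and a hypothetical in-memory backing store (all names here are illustrative, not a real OS API):

```python
# Sketch of page-fault handling: locate the data, obtain a frame (evicting
# one if necessary), load the data, update the page table, and retry.

backing_store = {0: b"code", 1: b"data", 2: b"heap"}  # page number -> contents

page_table = {}        # virtual page number -> physical frame number
frames = [None, None]  # which page each physical frame holds; None = empty
ram = {}               # frame number -> contents

def handle_page_fault(vpn):
    """Resolve a fault on virtual page `vpn`, evicting a frame if needed."""
    # 1. Locate the data in secondary storage.
    data = backing_store[vpn]
    # 2. Obtain an empty page frame. If none are free, evict frame 0
    #    (a real OS would run a page replacement algorithm here).
    if None in frames:
        frame = frames.index(None)
    else:
        frame = 0
        evicted = frames[frame]
        backing_store[evicted] = ram[frame]  # write back (assume dirty)
        del page_table[evicted]
    # 3. Load the requested data into the frame.
    frames[frame] = vpn
    ram[frame] = data
    # 4. Update the page table to refer to the new frame.
    page_table[vpn] = frame
    # 5. The faulting instruction would now be retried transparently.

def access(vpn):
    if vpn not in page_table:  # page fault
        handle_page_fault(vpn)
    return ram[page_table[vpn]]

print(access(0))  # fault: loads page 0 into frame 0
print(access(1))  # fault: loads page 1 into frame 1
print(access(2))  # fault: evicts page 0, loads page 2
```

With only two frames, the third access forces an eviction; a later access to page 0 would fault again and re-read it from the backing store, exactly as described above.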
If there is not enough available RAM when obtaining an empty page frame, a page replacement algorithm is used to choose an existing page frame for eviction. If the evicted page frame has been dynamically allocated during execution of a program, or if it is part of a program's data segment and has been modified since it was read into RAM (in other words, if it has become "dirty"), it must be written out to a location in secondary storage before being freed. Otherwise, the contents of the page's frame in RAM are the same as the contents of the page in its secondary storage, so it does not need to be written out to secondary storage. If, at a later stage, a reference is made to that memory page, another page fault will occur and another empty page frame must be obtained so that the contents of the page in secondary storage can be again read into RAM.
Page replacement algorithms
Efficient paging systems must determine which page frame to empty by choosing one that is least likely to be needed within a short time. There are various page replacement algorithms that try to do this. Most operating systems use some approximation of least recently used (LRU) replacement (true LRU is too costly to implement exactly on current hardware) or a working-set-based algorithm.
Demand paging
When pure demand paging is used, page loading only occurs at the time of the data request, and not before. In particular, when demand paging is used, a program usually begins execution with none of its pages pre-loaded in RAM. Pages are copied from the executable file into RAM the first time the executing code references them, usually in response to page faults. As a consequence, pages of the executable file containing code not executed during a particular run will never be loaded into memory.
Anticipatory paging
To further increase responsiveness, paging systems may employ various strategies to predict which pages will be needed soon, and attempt to load them into main memory preemptively, before a program references them. This technique, sometimes also called "swap prefetch", preloads a process's non-resident pages that are likely to be referenced in the near future (taking advantage of locality of reference). Such strategies attempt to reduce the number of page faults a process experiences. Some of those strategies are "if a program references one virtual address which causes a page fault, perhaps the next few pages' worth of virtual address space will soon be used" and "if one big program just finished execution, leaving lots of free RAM, perhaps the user will return to using some of the programs that were recently paged out".
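Since true LRU is rarely implemented directly, one common hardware-friendly approximation is the "clock" (second-chance) algorithm. The sketch below is illustrative, not any particular OS's implementation: each frame carries a referenced bit, and the clock hand clears bits as it sweeps, evicting the first unreferenced frame it finds.

```python
# Clock (second-chance) page replacement: an approximation of LRU.

class ClockReplacer:
    def __init__(self, nframes):
        self.frames = [None] * nframes       # page held by each frame
        self.referenced = [False] * nframes  # hardware "referenced" bits
        self.hand = 0
        self.faults = 0

    def access(self, page):
        if page in self.frames:              # hit: just set the referenced bit
            self.referenced[self.frames.index(page)] = True
            return
        self.faults += 1                     # miss: sweep for a victim frame
        while self.referenced[self.hand]:
            self.referenced[self.hand] = False  # give a "second chance"
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page
        self.referenced[self.hand] = True
        self.hand = (self.hand + 1) % len(self.frames)

r = ClockReplacer(3)
for p in [1, 2, 3, 1, 4, 1, 2]:
    r.access(p)
print(r.faults)  # 6
```

Recently referenced frames survive the sweep, so the evicted frame is one that has not been used since the hand last passed it, which approximates "least recently used" without per-access timestamps.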
Free page queue
The free page queue is a list of page frames that are available for assignment after a page fault. Some operating systems[b] support page reclamation; if a page fault occurs for a page that had been stolen and the page frame was never reassigned, then the operating system avoids the necessity of reading the page back in by assigning the unmodified page frame.
Some operating systems periodically look for pages that have not been recently referenced and add them to the free page queue, after paging them out if they have been modified.
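Page reclamation as described above can be sketched as a free queue whose entries remember which page they last held (the data structures here are illustrative, not a real kernel's):

```python
# Free page queue with reclamation: a "stolen" frame goes on the free list
# but keeps its contents, so a later fault on the same page can reclaim it
# without re-reading the page from secondary storage.

from collections import OrderedDict

free_queue = OrderedDict()  # frame number -> page it last held
resident = {}               # page -> frame currently mapping it

def steal(page):
    """Move a page's frame to the free queue without erasing its contents."""
    frame = resident.pop(page)
    free_queue[frame] = page

def fault(page):
    """Return (frame, reclaimed); reclaimed=True means no disk read needed."""
    for frame, old_page in free_queue.items():
        if old_page == page:                # frame still holds this page
            del free_queue[frame]
            resident[page] = frame
            return frame, True
    frame, _ = free_queue.popitem(last=False)  # reuse the oldest free frame
    resident[page] = frame
    return frame, False                        # must read the page back in

resident = {"A": 0, "B": 1}
steal("A")
steal("B")
r1 = fault("A")   # reclaimed without disk I/O
r2 = fault("C")   # needs a disk read
print(r1, r2)
```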
Pre-cleaning
Unix operating systems periodically use sync to pre-clean all dirty pages, that is, to save all modified pages to hard disk. Windows operating systems do the same thing via "modified page writer" threads.
Pre-cleaning makes starting a new program or opening a new data file much faster. The hard drive can immediately seek to that file and consecutively read the whole file into pre-cleaned page frames. Without pre-cleaning, the hard drive is forced to seek back and forth between writing a dirty page frame to disk, and then reading the next page of the file into that frame.
Thrashing
Most programs reach a steady state in their demand for memory locality, both in terms of instructions fetched and of data accessed. This steady state usually requires much less memory than the program's total footprint. It is sometimes referred to as the working set: the set of memory pages most frequently accessed.
Virtual memory systems work most efficiently when the ratio of the working set to the total number of pages that can be stored in RAM is low enough that the time spent resolving page faults is not a dominant factor in the workload's performance. A program that works with huge data structures will sometimes require a working set that is too large to be efficiently managed by the paging system, resulting in constant page faults that drastically slow down the system. This condition is referred to as thrashing: pages are swapped out and then accessed, causing frequent faults.
An interesting characteristic of thrashing is that as the working set grows, there is very little increase in the number of faults until the critical point (when faults go up dramatically and the majority of the system's processing power is spent on handling them).
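The knee in the fault curve can be demonstrated with a small LRU simulation (a toy model, not production code): as soon as the frame count drops below the working set of a cyclic access pattern, the fault rate jumps from the cold-start minimum to one fault per access.

```python
# Illustration of the thrashing knee: fault count under LRU stays at the
# cold-start minimum while the frame count covers the working set, then
# jumps to one fault per access once it no longer does.

from collections import OrderedDict

def lru_faults(trace, nframes):
    frames, faults = OrderedDict(), 0
    for page in trace:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = None
    return faults

trace = [0, 1, 2, 3] * 10         # working set of 4 pages, 40 accesses
print(lru_faults(trace, 4))       # 4: only the initial cold faults
print(lru_faults(trace, 3))       # 40: one frame short, every access faults
```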
An extreme example of this sort of situation occurred on the IBM System/360 Model 67 and on IBM System/370 through z/Architecture mainframes. An execute instruction that crosses a page boundary can point to a move instruction that also crosses a page boundary, and the move instruction can move data from a source that crosses a page boundary to a target that also crosses a page boundary. The total number of pages used by this single instruction is thus eight, and all eight must be present in memory at the same time. If the operating system allocates fewer than eight pages of actual memory, then when it attempts to swap out some part of the instruction or data to bring in the remainder, the instruction will page fault again, and it will thrash on every attempt to restart it.
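The count of eight pages follows directly from the four boundary-crossing spans involved, each touching two pages:

```python
# Counting the pages touched by the worst case described above: the execute
# instruction, the move instruction it targets, the move's source operand,
# and its destination operand each straddle a page boundary.

spans = {
    "execute instruction": 2,  # crosses a page boundary -> 2 pages
    "move instruction": 2,
    "move source operand": 2,
    "move target operand": 2,
}
total_pages = sum(spans.values())
print(total_pages)  # 8: all must be resident for the instruction to complete
```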
To decrease excessive paging, and thus possibly resolve the thrashing problem, a user can do any of the following:
- Increase the amount of RAM in the computer (generally the best long-term solution).
- Decrease the number of programs being concurrently run on the computer.
Sharing
In multi-programming or multi-user environments it is common for many users to execute the same program. If each user were given an individual copy, much of primary storage would be wasted. The solution is to share those pages that can be shared.
Sharing must be carefully controlled to prevent one process from modifying data that another process is accessing. In most systems the shared programs are divided into separate pages, i.e., code and data are kept separate. Sharing is achieved by having the page-table entries of different processes point to the same page frame, so that the frame is shared among those processes.
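The mechanism is simply two page tables mapping the same virtual page to one physical frame; the frame numbers below are arbitrary illustrative values:

```python
# Sketch of page sharing: two processes map their (read-only) code page to
# the same physical frame, while each keeps a private frame for its data.

FRAME_CODE, FRAME_DATA_A, FRAME_DATA_B = 7, 8, 9

page_table_a = {"code": FRAME_CODE, "data": FRAME_DATA_A}
page_table_b = {"code": FRAME_CODE, "data": FRAME_DATA_B}

# The shared code page occupies one frame, not one frame per process.
shared = page_table_a["code"] == page_table_b["code"]
frames_used = len(set(page_table_a.values()) | set(page_table_b.values()))
print(shared, frames_used)  # True 3 (instead of 4 without sharing)
```

Marking the shared code pages read-only in each page-table entry is what prevents one process from modifying what the other is executing.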
Terminology
Historically, paging sometimes referred to a memory allocation scheme that used fixed-length pages, as opposed to variable-length segments, without any implication that virtual memory techniques were employed or that those pages were transferred to disk. Such usage is rare today.
Some modern systems use the term swapping along with paging. Historically, swapping referred to moving a whole program at a time to or from secondary storage, in a scheme known as roll-in/roll-out. In the 1960s, after the concept of virtual memory was introduced in two variants (using either segments or pages), the term swapping was applied to moving either segments or pages, respectively, between secondary storage and memory. Today, with virtual memory mostly based on pages rather than segments, swapping has become a fairly close synonym of paging, although with one difference.
In systems that support memory-mapped files, when a page fault occurs, a page may then be transferred to or from any ordinary DASD file, not necessarily a dedicated space. Page in is transferring a page from secondary storage to RAM. Page out is transferring a page from RAM to secondary storage. Swap in and swap out refer only to transferring pages between RAM and dedicated swap space (a swap partition, swap file, or scratch disk space), not to any other place on secondary storage.
On Windows NT-based systems, dedicated swap space is known as a page file, and the terms paging and swapping are often used interchangeably.
Implementations
Ferranti Atlas
The first computer to support paging was the Atlas, jointly developed by Ferranti, the University of Manchester and Plessey. The machine had an associative (content-addressable) memory with one entry for each 512-word page. The Supervisor handled non-equivalence interruptions[c] and managed the transfer of pages between core and drum in order to provide a one-level store to programs.
Windows 3.x and Windows 9x
Paging has been a feature of Microsoft Windows since Windows 3.0 in 1990. Windows 3.x creates a hidden file named 386SPART.PAR or WIN386.SWP for use as a swap file. It is generally found in the root directory, but it may appear elsewhere (typically in the WINDOWS directory). Its size depends on how much swap space the system has (a setting selected by the user under Control Panel → Enhanced under "Virtual Memory"). If the user moves or deletes this file, a blue screen will appear the next time Windows is started, with the error message "The permanent swap file is corrupt". The user will be prompted to choose whether or not to delete the file (whether or not it exists).
Windows 95, Windows 98 and Windows Me use a similar file, and the settings for it are located under Control Panel → System → Performance tab → Virtual Memory. Windows automatically sets the size of the page file to start at 1.5× the size of physical memory, and to expand up to 3× physical memory if necessary. If a user runs memory-intensive applications on a system with low physical memory, it is preferable to manually set these sizes to a value higher than the default.
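The default bounds described above reduce to simple arithmetic; the function name below is illustrative, not a Windows API:

```python
# Default Windows 9x pagefile bounds: initial size 1.5x physical RAM,
# expandable up to 3x physical RAM.

def default_pagefile_bounds(ram_mb):
    """Return (initial_mb, maximum_mb) for a system with `ram_mb` of RAM."""
    return 1.5 * ram_mb, 3 * ram_mb

initial, maximum = default_pagefile_bounds(512)  # e.g. a 512 MB system
print(initial, maximum)  # 768.0 1536
```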
Windows NT
The file used for paging in the Windows NT family is pagefile.sys. The default location of the page file is in the root directory of the partition where Windows is installed. Windows can be configured to use free space on any available drives for pagefiles. It is required, however, for the boot partition (i.e. the drive containing the Windows directory) to have a pagefile on it if the system is configured to write either kernel or full memory dumps after a Blue Screen of Death. Windows uses the paging file as temporary storage for the memory dump. When the system is rebooted, Windows copies the memory dump from the pagefile to a separate file and frees the space that was used in the pagefile.
In the default configuration of Windows, the pagefile is allowed to expand beyond its initial allocation when necessary. If this happens gradually, it can become heavily fragmented, which can potentially cause performance problems. The common advice given to avoid this is to set a single "locked" pagefile size so that Windows will not expand it. However, the pagefile only expands when it has been filled, which, in its default configuration, is 150% of the total amount of physical memory. Thus the total demand for pagefile-backed virtual memory must exceed 250% of the computer's physical memory before the pagefile will expand.
The fragmentation of the pagefile that occurs when it expands is temporary. As soon as the expanded regions are no longer in use (at the next reboot, if not sooner) the additional disk space allocations are freed and the pagefile is back to its original state.
Locking a pagefile size can be problematic if a Windows application requests more memory than the total size of physical memory and the pagefile, leading to failed requests to allocate memory that may cause applications and system processes to fail. Also, the pagefile is rarely read or written in sequential order, so the performance advantage of having a completely sequential page file is minimal. However, a large pagefile generally allows use of memory-heavy applications, with no penalty besides using more disk space. While a fragmented pagefile may not be an issue by itself, fragmentation of a variable-size pagefile will over time create a number of fragmented blocks on the drive, causing other files to become fragmented. For this reason, a fixed-size contiguous pagefile is better, provided that the size allocated is large enough to accommodate the needs of all applications.
The required disk space may be easily allocated on systems with more recent specifications: e.g., a system with 3 GB of memory and a 6 GB fixed-size pagefile on a 750 GB disk drive, or a system with 6 GB of memory, a 16 GB fixed-size pagefile and 2 TB of disk space. In both examples the system uses about 0.8% of the disk space, with the pagefile pre-extended to its maximum.
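The ~0.8% figure for both configurations checks out arithmetically:

```python
# Fixed pagefile size as a share of total disk space, for the two example
# systems above (6 GB pagefile on a 750 GB disk; 16 GB on a 2 TB disk).

pct_1 = round(6 / 750 * 100, 1)    # 6 GB pagefile, 750 GB disk
pct_2 = round(16 / 2048 * 100, 1)  # 16 GB pagefile, 2 TB (2048 GB) disk
print(pct_1, pct_2)  # 0.8 0.8
```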
Defragmenting the page file is also occasionally recommended to improve performance when a Windows system is chronically using much more memory than its total physical memory. This view ignores the fact that, aside from the temporary results of expansion, the pagefile does not become fragmented over time. In general, performance concerns related to pagefile access are much more effectively dealt with by adding more physical memory.
Unix and Unix-like systems
Unix systems, and other Unix-like operating systems, use the term "swap" to describe both the act of moving memory pages between RAM and disk, and the region of a disk the pages are stored on. In some of those systems, it is common to dedicate an entire partition of a hard disk to swapping. These partitions are called swap partitions. Many systems have an entire hard drive dedicated to swapping, separate from the data drive(s), containing only a swap partition. A hard drive dedicated to swapping is called a "swap drive" or a "scratch drive" or a "scratch disk". Some of those systems only support swapping to a swap partition; others also support swapping to files.
Linux
From the end-user perspective, swap files in versions 2.6.x and later of the Linux kernel are virtually as fast as swap partitions; the limitation is that swap files should be contiguously allocated on their underlying file systems. To increase the performance of swap files, the kernel keeps a map of where they are placed on underlying devices and accesses them directly, thus bypassing the cache and avoiding filesystem overhead. Red Hat nonetheless recommends using swap partitions. When residing on HDDs, which are rotational magnetic media devices, one benefit of swap partitions is that they can be placed on contiguous HDD areas that provide higher data throughput or faster seek time. However, the administrative flexibility of swap files can outweigh certain advantages of swap partitions. For example, a swap file can be placed on any mounted file system, can be set to any desired size, and can be added or changed as needed. Swap partitions are not as flexible; for example, a swap partition cannot be enlarged without using partitioning or volume management tools, which introduce various complexities and potential downtimes.
The Linux kernel supports a virtually unlimited number of swap backends (devices or files), supporting at the same time assignment of backend priorities. When the kernel needs to swap pages out of physical memory, it uses the highest-priority backend with available free space. If multiple swap backends are assigned the same priority, they are used in a round-robin fashion (which is somewhat similar to RAID 0 storage layouts), providing improved performance as long as the underlying devices can be efficiently accessed in parallel.
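The selection policy just described can be sketched as follows; the `Backend` class and `choose_rotation` generator are illustrative models, not the kernel's actual data structures:

```python
# Sketch of Linux-style swap backend selection: always use the
# highest-priority backend with free space, rotating round-robin among
# backends that share the same priority.

class Backend:
    def __init__(self, name, priority, free_pages):
        self.name, self.priority, self.free_pages = name, priority, free_pages

def choose_rotation(backends):
    """Yield backend names to swap to, honoring priority then round-robin."""
    while True:
        candidates = [b for b in backends if b.free_pages > 0]
        if not candidates:
            return  # all backends full
        top = max(b.priority for b in candidates)
        for b in [b for b in candidates if b.priority == top]:
            if b.free_pages > 0:
                b.free_pages -= 1
                yield b.name

fast = Backend("ssd-swap", 10, 2)   # highest priority, used first
a = Backend("hdd-a", 5, 2)          # equal priorities: round-robin
b = Backend("hdd-b", 5, 2)
order = list(choose_rotation([fast, a, b]))
print(order)
```

The high-priority backend is drained first; the two equal-priority backends then alternate, which is the RAID 0-like striping effect the text mentions.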
Solaris
Solaris allows swapping to raw disk slices as well as to files. The traditional method is to use slice 1 (i.e., the second slice) of the OS disk to house swap. Swap setup is managed by the system boot process if there are entries in the vfstab file, but it can also be managed manually with the swap command. While it is possible to remove all swap from a lightly loaded system at runtime, Sun does not recommend it. Recent additions to the ZFS file system allow the creation of ZFS volumes that can be used as swap devices. Swapping to normal files on ZFS file systems is not supported.
AmigaOS 4
AmigaOS 4.0 introduced a new system for allocating RAM and defragmenting physical memory. It still uses a flat shared address space that cannot be defragmented. The system is based on the slab allocation method, with paged memory that allows swapping. Paging was implemented in AmigaOS 4.1, but it may lock up the system if all physical memory is used up. Swap memory can be activated and deactivated at any moment, allowing the user to choose to use only physical RAM.
Performance
The backing store for a virtual memory operating system is typically many orders of magnitude slower than RAM. Additionally, using mechanical storage devices introduces a delay of several milliseconds for a hard disk. Therefore, it is desirable to reduce or eliminate swapping where practical. Some operating systems offer settings to influence the kernel's decisions.
- Linux offers the /proc/sys/vm/swappiness parameter, which changes the balance between swapping out runtime memory and dropping pages from the system page cache.
- Windows 2000, XP, and Vista offer the DisablePagingExecutive registry setting, which controls whether kernel-mode code and data can be eligible for paging out.
- Mainframe computers frequently used head-per-track disk drives or drums for page and swap storage to eliminate seek time, and several technologies to have multiple concurrent requests to the same device in order to reduce rotational latency.
- Flash memory has a finite number of erase-write cycles (see Limitations of flash memory), and the smallest amount of data that can be erased at once might be very large (128 KiB for an Intel X25-M SSD), seldom coinciding with the page size. Therefore, flash memory may wear out quickly if used as swap space under tight memory conditions. On the other hand, flash memory is practically delayless compared to hard disks, and non-volatile, unlike RAM chips. Schemes like ReadyBoost and Intel Turbo Memory are made to exploit these characteristics.
Swap space size
In some older virtual memory operating systems, space in swap backing store is reserved when programs allocate memory for runtime data. Operating system vendors typically issue guidelines about how much swap space should be allocated.
Reliability
Swapping can decrease system reliability. If swapped data becomes corrupted on the disk (or at any other location, or during transfer), memory will also have incorrect contents after the data is later read back in.
Addressing limits on 32-bit hardware
Paging is one way of allowing the size of the addresses used by a process, which is the process's "virtual address space" or "logical address space", to be different from the amount of main memory actually installed on a particular computer, which is the physical address space.
Main memory smaller than virtual memory
In most systems, the size of a process's virtual address space is much larger than the available main memory. The amount of physical main memory available is limited by the number of address bits on the address bus that connects the CPU to main memory. There might be fewer physical address bits than virtual address bits; for example, the i386SX CPU internally uses 32-bit virtual addresses but has only 24 pins connected to the address bus, limiting addressing to at most 16 MB of physical main memory. Even on systems that have the same or more physical address bits as virtual address bits, often the actual amount of physical main memory installed is much less than the size that can potentially be addressed, for financial reasons or because the hardware address map reserves large regions for I/O or other hardware features, so main memory cannot be placed in those regions.
Main memory the same size as virtual memory
It is not uncommon to find 32-bit computers with 4 GB of RAM, the maximum amount of RAM addressable unless the page table entry format supports physical addresses larger than 32 bits. For example, on 32-bit x86 processors, the Physical Address Extension (PAE) feature is required to access more than 4 GB of RAM. For some machines, e.g., the IBM S/370 in XA mode, the upper bit was not part of the address and only 2 GB could be addressed.
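The limits quoted above are just powers of two; the helper function below is illustrative:

```python
# Address-space arithmetic behind the limits above: n address bits can
# reach 2**n bytes of memory.

def addressable_gib(bits):
    """Memory reachable with `bits` address bits, in GiB."""
    return 2 ** bits / 2 ** 30

print(addressable_gib(32))  # 4.0  -- plain 32-bit addressing
print(addressable_gib(36))  # 64.0 -- 36-bit physical addresses under PAE
print(addressable_gib(31))  # 2.0  -- S/370-XA's 31-bit addressing
```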
Paging and swap space can be used beyond this 4 GB limit, because they are addressed in terms of disk locations rather than memory addresses.
While 32-bit programs on machines with linear address spaces remain limited to the 4 GB they are capable of addressing, because each program exists in its own virtual address space, a group of programs can together grow beyond this limit.
On machines with segment registers, e.g., the access registers on an IBM System/370 in ESA mode, the address space size is limited only by OS constraints, e.g., the need to fit the mapping tables into the available storage.
Main memory larger than virtual address space
A few computers have a main memory larger than the virtual address space of a process, such as the Magic-1, some PDP-11 machines, and some systems using 32-bit x86 processors with Physical Address Extension. This nullifies a significant advantage of virtual memory, since a single process cannot use more main memory than the amount of its virtual address space. Such systems often use paging techniques to obtain secondary benefits:
- The "extra memory" can be used in the page cache to cache frequently used files and metadata, such as directory information, from secondary storage.
- If the processor and operating system support multiple virtual address spaces, the "extra memory" can be used to run more processes. Paging allows the cumulative total of virtual address spaces to exceed physical main memory.
- A process can mmap its data structures to main memory-backed files, such as files on the Linux tmpfs file system.
The size of the cumulative total of virtual address spaces is still limited by the amount of secondary storage available.
References
- Arpaci-Dusseau, Remzi H.; Arpaci-Dusseau, Andrea C. (2014). Operating Systems: Three Easy Pieces [Chapter: Paging] (PDF). Arpaci-Dusseau Books.
- Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Virtual memory systems". Encyclopedia of computer science and technology 14. CRC Press. p. 32. ISBN 0-8247-2214-0.
- Deitel, Harvey M. (1983). An Introduction to Operating Systems. Addison-Wesley. pp. 181, 187. ISBN 0-201-14473-5.
- Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Operating systems". Encyclopedia of computer science and technology 11. CRC Press. p. 433. ISBN 0-8247-2261-2.
- Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Operating systems". Encyclopedia of computer science and technology 11. CRC Press. p. 442. ISBN 0-8247-2261-2.
- Cragon, Harvey G. (1996). Memory Systems and Pipelined Processors. Jones and Bartlett Publishers. p. 109. ISBN 0-86720-474-5.
- Sumner, F. H.; Haley, G.; Chenh, E. C. Y. (1962). "The Central Control Unit of the 'Atlas' Computer". Information Processing 1962. IFIP Congress Proceedings. Proceedings of IFIP Congress 62. Spartan.
- "The Atlas". University of Manchester: Department of Computer Science.
- "Atlas Architecture". Atlas Computer. Chilton: Atlas Computer Laboratory.
- Kilburn, T.; Payne, R. B.; Howarth, D. J. (December 1961). "The Atlas Supervisor". Computers - Key to Total Systems Control. Conferences Proceedings. Volume 20, Proceedings of the Eastern Joint Computer Conference Washington, D.C. Macmillan. pp. 279–294.
- Kilburn, T.; Edwards, D. B. G.; Lanigan, M. J.; Sumner, F. H. (April 1962). "One-Level Storage System". IRE Transactions Electronic Computers (Institute of Radio Engineers).
- Tsigkogiannis, Ilias (December 11, 2006). "Crash Dump Analysis". driver writing != bus driving. Microsoft. Retrieved 2008-07-22.
- "Windows Sysinternals PageDefrag". Sysinternals. Microsoft. November 1, 2006. Retrieved 2010-12-20.
- "How to determine the appropriate page file size for 64-bit versions of Windows". Support (15.1 ed.). Microsoft. February 7, 2011. Retrieved 2007-12-26.
- ""Jesper Juhl": Re: How to send a break? - dump from frozen 64bit linux". LKML. 2006-05-29. Retrieved 2010-10-28.
- "Andrew Morton: Re: Swap partition vs swap file". LKML. Retrieved 2010-10-28.
- Chapter 7. Swap Space - Red Hat Customer Portal "Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files."
- "swapon(2) – Linux man page". linux.die.net. Retrieved 2014-09-08.
- John Siracusa (October 15, 2001). "Mac OS X 10.1". Ars Technica. Retrieved 2008-07-23.
- AmigaOS Core Developer (2011-01-08). "Re: Swap issue also on Update 4 ?". Hyperion Entertainment. Retrieved 2011-01-08.
- E.g., Rotational Position Sensing on a Block Multiplexor channel
- "Aligning filesystems to an SSD’s erase block size | Thoughts by Ted". Thunk.org. 2009-02-20. Retrieved 2010-10-28.
- Bill Buzbee. "Magic-1 Minix Demand Paging Design". Retrieved December 9, 2013.
- IBM System/370 Extended Architecture Principles of Operation. Second Edition. IBM. January 1987. SA22-7085-1.
External links
- Windows Server - Moving Pagefile to another partition or disk by David Nudelman
- How Virtual Memory Works from HowStuffWorks.com (in fact explains only swapping concept, and not virtual memory concept)
- Linux swap space management (outdated, as the author admits)
- Guide On Optimizing Virtual Memory Speed (outdated, and contradicts section 1.4 of this wiki page, and (at least) references 8, 9, and 11.)
- Virtual Memory Page Replacement Algorithms
- Windows XP. How to manually change the size of the virtual memory paging file
- Windows XP. Factors that may deplete the supply of paged pool memory
- SwapFs driver that can be used to save the paging file of Windows on a swap partition of Linux.