In computing, a page cache, often called a disk cache, is a transparent cache of disk-backed pages kept in main memory (RAM) by the operating system for quicker access. A page cache is implemented in kernels that use paging memory management and is mostly transparent to applications.
Usually all physical memory that is not directly allocated to applications is used by the operating system for the page cache. Since the memory would otherwise be idle and is trivially reclaimed when applications request it, there is generally no associated performance penalty and the operating system might even report such memory as "free".
Hard disk read/write speeds are low and random accesses require expensive disk seeks compared to main memory; this is why RAM upgrades usually yield significant improvements in a computer's speed and responsiveness: more disk pages can be cached in memory. Separate disk caching is provided on the hardware side by dedicated RAM or NVRAM chips located either in the disk controller (inside a hard disk drive, where it is properly called a disk buffer) or in a disk array controller. Such memory should not be confused with the page cache.
Cache pages that are modified after being brought into physical memory are called dirty pages. Since non-dirty pages in the page cache have identical copies in secondary storage (e.g. a hard disk or flash storage), discarding and re-using their space is much quicker than paging out application memory, and is often preferred over first flushing dirty pages to secondary storage and then re-using their space. Executable binaries, such as applications and libraries, are also typically accessed through the page cache and mapped into individual process address spaces using virtual memory (this is done through the mmap system call on Unix-like operating systems). This means not only that binary files are shared between separate processes, but also that unused parts of binaries will eventually be evicted from main memory, conserving memory.
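The same mechanism is visible from user space on Unix-like systems. A minimal sketch using Python's mmap module (the temporary file and its contents are purely illustrative): reads through the mapping are served by the kernel from cached pages, and other processes mapping the same file would share those pages.

```python
import mmap
import os
import tempfile

# Create a small temporary file to stand in for any disk-backed file.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello page cache")
os.close(fd)

with open(path, "rb") as f:
    # Map the file read-only; the kernel backs this mapping with
    # page-cache pages rather than a private copy of the data.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first = mm[:5]  # served from the cached pages
    mm.close()

os.remove(path)
print(first)  # b'hello'
```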
Since cached pages can be easily evicted and re-used, some operating systems, notably Windows NT, even report the page cache usage as "free" memory, while the memory is actually allocated to disk pages. This has led to some confusion about the utilization of page cache in Windows.
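This distinction shows up in memory accounting. The sketch below parses text in the format of Linux's /proc/meminfo (the numbers are made up) to show that cached pages are reported separately from truly free memory, and why tools often add the two together:

```python
# Sample lines in the format of Linux's /proc/meminfo; values are illustrative.
SAMPLE = """\
MemTotal:       16384000 kB
MemFree:         1024000 kB
Cached:          8192000 kB
"""

def parse_meminfo(text):
    """Return a dict mapping field name to size in kB."""
    info = {}
    for line in text.splitlines():
        name, rest = line.split(":", 1)
        info[name] = int(rest.split()[0])
    return info

info = parse_meminfo(SAMPLE)
# Cached pages are easily reclaimed, so tools often count them as
# effectively free memory alongside MemFree.
reclaimable = info["MemFree"] + info["Cached"]
print(reclaimable)  # 9216000
```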
When an application requests data, the page cache is first searched for the corresponding page. If the requested page is not in physical memory, a page fault occurs. To service the fault, a page in physical memory may be evicted and its space used to load the new page from disk. Page replacement algorithms decide which page to evict. Fewer page faults usually lead to better performance.
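As a toy illustration of page replacement, here is a sketch of a least-recently-used (LRU) policy, one common replacement strategy; real kernels use more elaborate variants. It counts the page faults incurred by a reference string for a fixed cache capacity:

```python
from collections import OrderedDict

def count_faults_lru(references, capacity):
    """Simulate an LRU page cache and return the number of page faults."""
    cache = OrderedDict()  # page number -> None, ordered by recency
    faults = 0
    for page in references:
        if page in cache:
            cache.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                    # miss: page fault
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = None
    return faults

print(count_faults_lru([1, 2, 3, 1, 4, 2, 5], capacity=3))  # 6
```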
Page cache and disk writes
The page cache also aids in writing to a disk. Pages that have been modified in memory for writing to disk are marked "dirty" and have to be flushed to disk before they can be freed. When a file write occurs, the page backing the particular block is looked up. If it is already in the cache, the write is done to that page in physical memory. If it is not, and the write exactly covers whole pages (falls on page-size boundaries), the page is not even read from disk, but is allocated and immediately marked dirty. Otherwise, the page(s) are fetched from disk, replacing existing pages in physical memory, and the requested modifications are made. A file that has been created or written in the page cache, but whose pages have not yet been flushed, might appear as a zero-byte file at a later read.
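The write path above can be sketched with an in-memory model; here the "disk" is a plain dict and the page size is shrunk to 4 bytes purely for illustration, so all names are hypothetical:

```python
PAGE_SIZE = 4  # unrealistically small, for illustration only

disk = {0: b"abcd", 1: b"efgh"}  # page number -> contents "on disk"
cache = {}                       # page number -> [bytes, dirty flag]

def write(page_no, offset, data):
    """Write data into one page through the cache, marking it dirty."""
    if page_no in cache:
        buf, _ = cache[page_no]       # already cached: modify in place
    elif offset == 0 and len(data) == PAGE_SIZE:
        buf = b"\x00" * PAGE_SIZE     # full-page write: no read from disk
    else:
        buf = disk[page_no]           # partial write: read-modify-write
    new = buf[:offset] + data + buf[offset + len(data):]
    cache[page_no] = [new, True]      # dirty: must be flushed before freeing

def flush():
    """Write all dirty pages back to disk (the role of a writeback thread)."""
    for page_no, entry in cache.items():
        if entry[1]:
            disk[page_no] = entry[0]
            entry[1] = False

write(0, 1, b"XY")    # partial write: page 0 is read from "disk" first
write(1, 0, b"WXYZ")  # full-page write: page 1 is allocated without a read
flush()
print(disk[0], disk[1])  # b'aXYd' b'WXYZ'
```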
However, not all cached pages can be written to: program code is often mapped read-only or copy-on-write; in the latter case, modifications to code are visible only to the process itself and are not written to disk.
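Python's mmap module exposes this distinction directly: ACCESS_COPY requests a private, copy-on-write mapping. The sketch below (file name and contents are illustrative) shows that writes through such a mapping modify only the process's private copy of the page and never reach the underlying file:

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"original")
os.close(fd)

with open(path, "r+b") as f:
    # ACCESS_COPY = private, copy-on-write mapping: writes touch this
    # process's copy of the page only and are never flushed to the file.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
    mm[:8] = b"modified"
    private_view = bytes(mm[:8])
    mm.close()

with open(path, "rb") as f:
    on_disk = f.read()

os.remove(path)
print(private_view, on_disk)  # b'modified' b'original'
```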