Memory-mapped file

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Poco a poco (talk | contribs) at 20:22, 14 October 2009 (iw es). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

A memory-mapped file is a segment of virtual memory which has been assigned a direct byte-for-byte correlation with some portion of a file or file-like resource. This resource is typically a file that is physically present on-disk, but can also be a device, shared memory object, or other resource that the operating system can reference through a file descriptor. Once present, this correlation between the file and the memory space permits applications to treat the mapped portion as if it were primary memory.
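This byte-for-byte correspondence can be sketched with Python's standard `mmap` module (the file name and contents below are invented for illustration): once mapped, the file's bytes can be sliced and searched exactly like an in-memory buffer.

```python
import mmap
import os
import tempfile

# Create a small scratch file to map (illustrative data only).
fd, path = tempfile.mkstemp()
os.write(fd, b"Hello, memory-mapped world!")
os.close(fd)

# Map the whole file; the mapping behaves like a mutable bytes buffer.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        greeting = mm[0:5]         # slice like bytes -> b"Hello"
        where = mm.find(b"world")  # search without an explicit read()

os.remove(path)
print(greeting, where)
```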

Benefits

The primary benefit of memory-mapping a file is increased I/O performance, especially when used on large files.[citation needed] Accessing memory-mapped files is faster than using direct read and write operations for two reasons. Firstly, a system call is orders of magnitude slower than a simple change to a program's local memory. Secondly, in most operating systems the memory region mapped actually is the kernel's page cache (file cache), meaning that no copies need to be created in user space.

Certain application-level memory-mapped file operations also perform better than their physical-file counterparts. Applications can access and update data in the file directly and in place, rather than seeking from the start of the file or rewriting the entire edited contents to a temporary location. Since the memory-mapped file is handled internally in pages, linear file access (as seen, for example, in flat-file data storage or configuration files) requires disk access only when a new page boundary is crossed, and can write larger sections of the file to disk in a single operation.
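In-place update can be sketched with Python's `mmap` (the file contents and field name are invented for illustration): a fixed-width byte range is overwritten at its offset, with no temporary file and no rewrite of the surrounding contents.

```python
import mmap
import os
import tempfile

# Scratch file containing a fixed-width field we will edit in place.
fd, path = tempfile.mkstemp()
os.write(fd, b"status=PENDING\n")
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        off = mm.find(b"PENDING")
        # Overwrite the bytes at their offset; a plain mmap slice
        # assignment must keep the length unchanged.
        mm[off:off + 7] = b"DONE!!!"
        mm.flush()  # push the dirty page(s) back to the file

with open(path, "rb") as f:
    updated = f.read()
os.remove(path)
print(updated)
```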

Another benefit of memory-mapped files is "lazy loading", which allows a program to use small amounts of RAM even for a very large file. Trying to load the entire contents of a file that is significantly larger than the amount of memory available can cause severe thrashing as the operating system reads from disk into memory and simultaneously pages memory back out to disk. Memory mapping not only bypasses the page file completely, but also allows the system to load only the smaller page-sized sections as data is being edited, similarly to the demand-paging scheme used for programs.
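A minimal sketch of partial mapping in Python (the 1 MiB file here is a throwaway example): only one window of the file is mapped, so the rest of it never has to occupy address space or RAM. Note that the window's offset must be a multiple of `mmap.ALLOCATIONGRANULARITY`.

```python
import mmap
import os
import tempfile

# A file noticeably larger than the window we intend to map.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * (1 << 20))  # 1 MiB of zero bytes
os.close(fd)

# Map only one window of the file rather than all of it; the offset
# must be a multiple of mmap.ALLOCATIONGRANULARITY.
gran = mmap.ALLOCATIONGRANULARITY
offset = 4 * gran
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), gran, offset=offset) as mm:
        window = bytes(mm[:16])  # touch only this small mapped view

os.remove(path)
print(len(window))
```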

The memory-mapping process is handled by the virtual memory manager, which is the same subsystem responsible for dealing with the page file. Memory-mapped files are loaded into memory one entire page at a time. The page size is selected by the operating system for maximum performance. Since page file management is one of the most critical elements of a virtual memory system, loading page-sized sections of a file into physical memory is typically a very highly optimized system function.[1]

Drawbacks

The major reason to choose memory-mapped file I/O is performance. One should nevertheless keep in mind the tradeoff that is being made. The standard I/O approach is costly due to system-call overhead and memory copying. The memory-mapped approach has its own cost in minor page faults, which occur when a block of data has been loaded into the page cache but is not yet mapped into the process's virtual memory space. Depending on the circumstances, memory-mapped file I/O can actually be substantially slower than standard file I/O.[2]

Another drawback of memory-mapped files relates to a given architecture's address space: a file larger than the addressable space can have only portions mapped at a time, complicating reading it. For example, a 32-bit architecture such as Intel's IA-32 can only directly address files of 4 GiB or less. This drawback is avoided for devices addressing memory when an IOMMU is present.

Common uses

Perhaps the most common use for a memory-mapped file is the process loader in most modern operating systems (including Microsoft Windows and Unix-like systems). When a process is started, the operating system uses a memory-mapped file to bring the executable file, along with any loadable modules, into memory for execution. Most memory-mapping systems use a technique called demand paging, where the file is loaded into physical memory in subsets (one page each), and only when that page is actually referenced.[3] In the specific case of executable files, this permits the OS to selectively load only those portions of a process image that actually need to execute.

Another common use for memory-mapped files is to share memory between multiple processes. In modern protected mode operating systems, processes are generally not permitted to access memory space that is allocated for use by another process. (A program's attempt to do so causes invalid page faults or segmentation violations.) There are a number of techniques available to safely share memory, and memory-mapped file I/O is one of the most popular. Two or more applications can simultaneously map a single physical file into memory and access this memory. For example, the Microsoft Windows operating system provides a mechanism for applications to memory-map a shared segment of the system's page file itself and share data via this section.
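The sharing mechanism can be sketched in a single Python process (the temp file stands in for a real shared file): two independent shared mappings of the same file observe each other's writes, and the mechanics are the same when the mappings belong to separate processes.

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * mmap.PAGESIZE)
os.close(fd)

# Two independent shared mappings of the same file. In separate
# processes this works the same way, since both mappings resolve
# to the same underlying pages.
fa = open(path, "r+b")
fb = open(path, "r+b")
ma = mmap.mmap(fa.fileno(), 0)  # a shared mapping is the default
mb = mmap.mmap(fb.fileno(), 0)

ma[0:5] = b"hello"     # write through the first mapping
seen = bytes(mb[0:5])  # visible through the second mapping

for obj in (ma, mb, fa, fb):
    obj.close()
os.remove(path)
print(seen)
```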

Platform support

Most modern operating systems or runtime environments support some form of memory-mapped file access. The function mmap(),[4] which creates a mapping of a file given a file descriptor, starting location in the file, and a length, is part of the POSIX specification, so the wide variety of POSIX-compliant systems, such as UNIX, Linux, Mac OS X[5] or OpenVMS, support a common mechanism for memory-mapping files. The Microsoft Windows operating systems also support a group of API functions for this purpose, such as CreateFileMapping().[6]

The Boost C++ Libraries provide a portable implementation of memory-mapped files for Microsoft Windows and POSIX-compliant platforms.[7]

The Java programming language provides classes and methods to access memory mapped files, such as FileChannel.

Ruby has a gem (library) called Mmap, which implements memory-mapped file objects.

Since version 1.6, Python has included a mmap module in its standard library.[8] Details of the module vary according to whether the host platform is Windows or Unix-like.

The Microsoft .NET runtime environment does not natively include managed access to memory-mapped files, but there are third-party libraries which do so.[9] However, first-class support for memory-mapped files is planned for .NET 4.0.[10]

References

  1. ^ "What Do Memory-Mapped Files Have to Offer?", http://msdn2.microsoft.com/en-us/library/ms810613.aspx
  2. ^ Matthew Dillon, "read vs. mmap (or io vs. page faults)", http://lists.freebsd.org/pipermail/freebsd-questions/2004-June/050371.html
  3. ^ "Demand Paging", http://www.linux-tutorial.info/modules.php?name=Tutorial&pageid=89
  4. ^ Memory Mapped Files
  5. ^ Apple - Mac OS X Leopard - Technology - UNIX
  6. ^ CreateFileMapping Function (Windows)
  7. ^ http://www.boost.org/doc/libs/1_37_0/libs/iostreams/doc/classes/mapped_file.html
  8. ^ "New Modules in 1.6". Archived from the original on 30 December 2006. Retrieved 23 December 2008.
  9. ^ DotNet
  10. ^ http://blogs.msdn.com/bclteam/archive/2009/05/22/what-s-new-in-the-bcl-in-net-4-beta-1-justin-van-patten.aspx