dm-cache

From Wikipedia, the free encyclopedia
dm-cache
Developer(s): Joe Thornber, Heinz Mauelshagen and Mike Snitzer
Written in: C
Operating system: Linux
Type: Linux kernel feature
License: GNU General Public License
Website: kernel.org

dm-cache is a device mapper target in the Linux kernel that allows the creation of hybrid volumes; it was written by Joe Thornber, Heinz Mauelshagen and Mike Snitzer. It allows one or more fast storage devices, such as flash-based solid-state drives (SSDs), to act as a cache for one or more slower hard disk drives (HDDs).

The design of dm-cache requires three physical storage devices (containing actual data, cache data and metadata) for the creation of one hybrid volume. Operating modes and cache policies, with the latter in the form of separate modules, determine the way caching is actually performed.

Overview

In dm-cache, solid-state drives (SSDs) are used as an additional level of indirection when accessing hard disk drives (HDDs): fast flash-based SSDs act as caches for slower HDDs with rotational magnetic media, generally improving access speed. That way, the speed of costly SSDs is combined with the inexpensive storage capacity of slower HDDs.[1] dm-cache can also be used to improve performance and reduce the load of storage area networks.[2][3]

Configurable operating modes and cache policies determine how dm-cache works internally. Operating modes select the way data is kept in sync between an HDD and an SSD. Cache policies, in the form of separate modules, provide the algorithms for selecting which blocks are promoted (moved from HDD to SSD), demoted (moved from SSD to HDD), cleaned, and so on.[4]

When configured to use the multiqueue cache policy (the default), dm-cache uses SSDs to store data associated with random reads and writes, capitalizing on the near-zero seek times of SSDs and avoiding the I/O operations that are typical HDD performance bottlenecks. Data associated with sequential reads and writes is not cached on SSDs, which avoids cache invalidation during such operations; performance-wise, sequential I/O operations are well suited to HDDs due to their mechanical nature. Not caching sequential I/O also helps extend the lifetime of the SSDs used as caches.[5]

History

Another dm-cache project with similar goals was announced by Eric Van Hensbergen and Ming Zhao in 2006, as the result of internship work at IBM.[6]

Later, Joe Thornber, Heinz Mauelshagen and Mike Snitzer developed their own take on the concept, resulting in the inclusion of dm-cache into the Linux kernel mainline; it was merged in kernel version 3.9, released on 28 April 2013.[4][7]

Design

A mapped virtual device (a hybrid volume) is created by specifying three physical storage devices:[5]

  • origin device – provides slow primary storage (usually an HDD)
  • cache device – provides a fast cache (usually an SSD)
  • metadata device – records the placement of blocks and their dirty flags, as well as other internal data required by a policy (such as per-block hit counts); such a device cannot be shared between hybrid volumes, and it is recommended that it be mirrored.

The block size, equal to the size of a caching extent, is configurable only during the creation of a hybrid volume. Recommended sizes are 256–1024 KB, and the size must be a multiple of 64 sectors. Having caching extents bigger than HDD sectors is a compromise between the size of the metadata and the possibility of wasting cache space. Caching extents that are too small increase the size of the metadata, both on the metadata device and in kernel memory; caching extents that are too large increase the amount of wasted cache space, because whole extents are cached even when only some of their parts see high hit rates.[4][8]
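As a sketch of how the three devices and the block size come together, a hybrid volume can be assembled with dmsetup using the cache target's table format documented in the kernel; the device paths, volume name, and sizes below are hypothetical examples, and the commands require root and real block devices.

```shell
# Sketch: creating a dm-cache hybrid volume (hypothetical devices and sizes).
#
# Table format (Documentation/device-mapper/cache.txt):
#   cache <metadata dev> <cache dev> <origin dev> <block size>
#         <#feature args> [<feature arg>]* <policy> <#policy args> [<policy arg>]*
#
# 41943040 sectors = 20 GiB origin device; block size of 512 sectors
# = 256 KB caching extents (a multiple of 64 sectors, as required).
dmsetup create my_cache --table \
  '0 41943040 cache /dev/mapper/meta /dev/mapper/ssd /dev/mapper/slow 512 0 default 0'
```

Here "0 default 0" requests zero feature arguments (leaving the default write-back operating mode) and the default cache policy with zero policy arguments.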

The operating modes supported by dm-cache are write-back (the default), write-through, and pass-through. In the write-back operating mode, writes to cached blocks go only to the cache device, and such blocks are marked as dirty in the metadata. In the write-through operating mode, write requests are not reported as completed until the data reaches both the origin and the cache device, and no clean blocks become marked as dirty. In the pass-through operating mode, all reads are performed directly from the origin device, bypassing the cache, and all writes go directly to the origin device; any cache write hits also cause invalidation of the affected cache blocks. Pass-through mode allows a hybrid volume to be activated when the state of the cache device is not known to be coherent with the origin device.[4][9]
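The non-default operating modes are selected with feature arguments in the cache target's table; a hedged sketch, again with hypothetical device paths and sizes:

```shell
# Sketch: selecting an operating mode via feature arguments (hypothetical devices).

# Write-through: one feature argument, "writethrough".
dmsetup create my_cache --table \
  '0 41943040 cache /dev/mapper/meta /dev/mapper/ssd /dev/mapper/slow 512 1 writethrough default 0'

# Pass-through: activate the volume while the cache's coherency with the
# origin device is unknown.
dmsetup create my_cache --table \
  '0 41943040 cache /dev/mapper/meta /dev/mapper/ssd /dev/mapper/slow 512 1 passthrough default 0'
```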

The rate of data migration in both directions (data promotions and demotions) can be throttled down to a configured speed; that way, normal I/O to the origin and cache devices can be preserved. Decommissioning a hybrid volume, as well as shrinking a cache device, is performed using the cleaner policy (see the section below), which effectively flushes all dirty blocks from the cache device to the origin device.[4][5]

Cache policies

As of February 2014, two cache policies are distributed with the Linux kernel mainline:[4][5]

multiqueue
This policy has two sets of 16 queues: one set for entries waiting for the cache, and the other for entries already in the cache. Cache entries in the queues are aged based on their associated logical time. Selection of entries going into the cache is based on variable thresholds, and queue selection is based on the hit count of an entry. This policy aims to take different cache-miss costs into account and to adjust to varying load patterns automatically. Sequential I/O operations are tracked internally so they can be routed around the cache; large contiguous I/O operations are left to the origin device, as such data access patterns are suitable for HDDs.
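The multiqueue policy's sequential-I/O detection can be tuned through policy arguments in the table; a sketch under the same hypothetical device names, with threshold values chosen purely for illustration:

```shell
# Sketch: tuning the multiqueue ("mq") policy's sequential-I/O detection.
# Two key/value pairs = four policy arguments; values are illustrative only.
#   sequential_threshold - contiguous sectors after which an I/O stream is
#                          treated as sequential and routed around the cache
#   random_threshold     - threshold of contiguous I/Os below which a stream
#                          is still considered random
dmsetup create my_cache --table \
  '0 41943040 cache /dev/mapper/meta /dev/mapper/ssd /dev/mapper/slow 512 0 mq 4 sequential_threshold 1024 random_threshold 8'
```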
cleaner
This policy writes back all dirty blocks in a cache. Once that is done, a hybrid volume can be decommissioned, or a cache device can be shrunk.
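A decommissioning sequence can be sketched by reloading the volume's table with the cleaner policy and waiting for the flush to finish; the volume name, devices, and sizes are hypothetical, and the commands require root.

```shell
# Sketch: decommissioning a hybrid volume via the cleaner policy
# (hypothetical volume name, devices, and sizes).
dmsetup suspend my_cache
dmsetup reload my_cache --table \
  '0 41943040 cache /dev/mapper/meta /dev/mapper/ssd /dev/mapper/slow 512 0 cleaner 0'
dmsetup resume my_cache
# Poll "dmsetup status my_cache" until no dirty blocks remain, then:
dmsetup suspend my_cache
dmsetup remove my_cache
```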

References

  1. ^ Petros Koutoupis (2013-11-25). "Advanced Hard Drive Caching Techniques". linuxjournal.com. Retrieved 2013-12-02. 
  2. ^ "dm-cache: Dynamic Block-level Storage Caching". Florida International University. Retrieved 2013-10-09. 
  3. ^ Dulcardo Arteaga; Douglas Otstott; Ming Zhao. "Dynamic Block-level Cache Management for Cloud Computing Systems" (PDF). Florida International University. Retrieved 2013-12-02. 
  4. ^ a b c d e f Joe Thornber; Heinz Mauelshagen; Mike Snitzer (2014-02-01). "Documentation/device-mapper/cache.txt". Linux kernel documentation. kernel.org. Retrieved 2014-02-15. 
  5. ^ a b c d Joe Thornber; Heinz Mauelshagen; Mike Snitzer (2014-02-01). "Documentation/device-mapper/cache-policies.txt". Linux kernel documentation. kernel.org. Retrieved 2014-02-06. 
  6. ^ Eric Van Hensbergen; Ming Zhao (2006-11-28). "Dynamic Policy Disk Caching for Storage Networking" (PDF). IBM Research Report. IBM. Retrieved 2013-12-02. 
  7. ^ "1.3. SSD cache devices". Linux kernel 3.9. kernelnewbies.org. 2013-04-28. Retrieved 2013-10-07. 
  8. ^ Jake Edge (2013-05-01). "LSFMM: Caching – dm-cache and bcache". LWN.net. Retrieved 2013-10-07. 
  9. ^ "kernel/git/torvalds/linux.git: dm cache: add passthrough mode". Linux kernel source tree. kernel.org. 2013-11-11. Retrieved 2014-02-06. 