Logical Volume Manager (Linux)

From Wikipedia, the free encyclopedia
"Logical Volume Manager" redirects here. It is not to be confused with logical volume management.
Linux Logical Volume Manager
Original author(s) Heinz Mauelshagen[1]
Stable release 2.02.109[2] / 5 August 2014
Written in C
Operating system Linux
License GNU GPL
Website sources.redhat.com/lvm2/

LVM is a logical volume manager for the Linux kernel that manages disk drives and similar mass-storage devices. Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from the volume manager in HP-UX.[citation needed]

The installers for the CrunchBang, CentOS, Debian, Fedora, Gentoo, Mandriva, MontaVista Linux, openSUSE, Pardus, Red Hat Enterprise Linux, Slackware, SLED, SLES, Linux Mint, Kali Linux, and Ubuntu distributions are LVM-aware and can install a bootable system with a root filesystem on a logical volume.

Common uses

LVM is commonly used for the following purposes:

  • Managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping.
  • On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition might need to be in the future, LVM allows file systems to be easily resized later as needed.
  • Performing consistent backups by taking snapshots of the logical volumes.
  • Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing.
  • Serving as the storage backend for the Ganeti solution stack, which relies on the Linux Logical Volume Manager.
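The resizing use case above can be sketched with the standard tools; the volume group and logical volume names (vg0/home) and the ext4 filesystem are assumptions for illustration, and the commands require root:

```shell
# Grow the hypothetical logical volume "home" in volume group "vg0" by 10 GiB.
lvextend --size +10G /dev/vg0/home

# Then grow the filesystem to fill the enlarged volume;
# for ext4 this can be done online:
resize2fs /dev/vg0/home

# lvextend's --resizefs option combines both steps:
# lvextend --resizefs --size +10G /dev/vg0/home
```

Shrinking works similarly but in the opposite order (filesystem first, then volume), and for most filesystems must be done offline.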

LVM can be considered a thin software layer on top of the hard disks and partitions, which provides an abstraction of continuity and ease of use for managing hard drive replacement, re-partitioning and backup.

Features

The LVM can:

  • Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
  • Resize logical volumes (LV) online by concatenating extents onto them or truncating extents from them.
  • Create read-only snapshots of logical volumes (LVM1).
  • Create read-write snapshots of logical volumes (LVM2).
  • Create RAID logical volumes (available in newer LVM implementations): RAID 1, RAID 5, RAID 6, etc.[3]
  • Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
  • Configure a RAID 1 backend device (a PV) as write-mostly, resulting in reads being avoided to such devices unless necessary.[4]
  • Allocate thin-provisioned logical volumes from a pool.[5]
  • Move online logical volumes between PVs.
  • Split or merge volume groups in situ (as long as no logical volumes span the split). This can be useful when migrating whole logical volumes to or from offline storage.
  • Create hybrid volumes by using the dm-cache target, which allows one or more fast storage devices, such as flash-based solid-state drives (SSDs), to act as a cache for one or more slower hard disk drives (HDDs).[6]
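As an illustration of the snapshot feature used for consistent backups, a session might look like the following (the volume names, mount points and sizes are hypothetical):

```shell
# Create a snapshot of the hypothetical volume vg0/data,
# reserving 1 GiB of copy-on-write space for changes.
lvcreate --snapshot --size 1G --name data-snap /dev/vg0/data

# Mount the frozen snapshot and back it up while vg0/data stays in use.
mount -o ro /dev/vg0/data-snap /mnt/data-snap
tar -czf /backup/data.tar.gz -C /mnt/data-snap .

# Remove the snapshot when done, before its COW space fills up.
umount /mnt/data-snap
lvremove /dev/vg0/data-snap
```

The snapshot only stores blocks that change after its creation, so its reserved size can be much smaller than the origin volume, but it becomes invalid if that space is exhausted.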

The LVM will also work in a shared-storage cluster (where disks holding the PVs are shared between multiple host computers), but requires an additional daemon to propagate state changes between cluster nodes.

Implementation

Inner workings of LVM version 1 ("PE" stands for physical extent).
Relationship between the various elements of the LVM.

LVM keeps a metadata header at the start of every physical volume, each of which is uniquely identified by a UUID. Each PV's header is a complete copy of the entire volume group's layout, including the UUIDs of all other PVs, the UUIDs of all logical volumes and an allocation map of PEs to LEs. This simplifies data recovery in the event of PV loss.
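This per-PV metadata can be inspected with the standard user-space tools; for example (the volume group name vg0 is an assumption):

```shell
# Show each PV with its UUID and the volume group it belongs to.
pvs -o pv_name,pv_uuid,vg_name

# Dump the volume group's full metadata to a text file; this backup
# mirrors the layout description kept in the header of every PV.
vgcfgbackup --file /tmp/vg0-metadata.txt vg0
```

Because every PV carries the whole layout, `vgcfgrestore` can rebuild a volume group's metadata from such a backup if a header is damaged.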

In the 2.6 series of the Linux kernel, the LVM is implemented in terms of the device mapper, a simple block-level scheme for creating virtual block devices and mapping their contents onto other block devices. This minimizes the amount of relatively hard-to-debug kernel code needed to implement the LVM, and also allows its I/O redirection services to be shared with other volume managers (such as EVMS). Any LVM-specific code is pushed out into its user-space tools, which merely manipulate these mappings and reconstruct their state from on-disk metadata upon each invocation.
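The device-mapper tables that result can be listed directly with dmsetup; a simple linear LV typically shows up as a single "linear" line (the sample output below is illustrative, not from a real system):

```shell
# List the mapping table of every device-mapper device.
# Each line gives: start-sector length target target-arguments...
dmsetup table

# Illustrative output for a simple linear LV:
#   vg0-home: 0 20971520 linear 8:2 2048
# i.e. sectors 0..20971519 of vg0/home map linearly onto block
# device 8:2 (a disk partition), starting at its sector 2048.
```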

To bring a volume group online, the "vgchange" tool:

  1. Searches for PVs in all available block devices.
  2. Parses the metadata header in each PV found.
  3. Computes the layouts of all visible volume groups.
  4. Loops over each logical volume in the volume group to be brought online and:
    1. Checks if the logical volume to be brought online has all its PVs visible.
    2. Creates a new, empty device mapping.
    3. Maps it (with the "linear" target) onto the data areas of the PVs the logical volume belongs to.
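From the administrator's side, the scan-and-activate sequence above is driven by ordinary commands (vg0 is a hypothetical volume group name; both commands require root):

```shell
# Scan all available block devices for PVs and report the
# volume groups whose metadata was found.
vgscan

# Activate every logical volume in volume group vg0, creating
# the device-mapper nodes under /dev/vg0/.
vgchange --activate y vg0
```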

To move an online logical volume between PVs in the same volume group, the "pvmove" tool:

  1. Creates a new, empty device mapping for the destination.
  2. Applies the "mirror" target to the original and destination maps. The kernel will start the mirror in "degraded" mode and begin copying data from the original to the destination to bring it into sync.
  3. Replaces the original mapping with the destination when the mirror comes into sync, then destroys the original.
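Typical invocations look like the following (the device names are illustrative; the commands require root):

```shell
# Move all extents off the PV /dev/sdb1 onto free space elsewhere
# in the same volume group, e.g. before removing the disk.
pvmove /dev/sdb1

# Move only the extents belonging to the LV "home" from
# /dev/sdb1 to the specific PV /dev/sdc1.
pvmove --name home /dev/sdb1 /dev/sdc1
```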

These device mapper operations take place transparently, without applications or file systems being aware that their underlying storage is moving.

Caveats

Until Linux kernel 2.6.31,[7] write barriers were not supported over the device mapper (they became fully supported in 2.6.33). This meant that the guarantee against filesystem corruption offered by journaled file systems such as ext3 and XFS was negated under some circumstances.[8]

References

  1. ^ "LVM README". 2003-11-17. Retrieved 2014-06-25. 
  2. ^ "lvm2.git - Upstream Logical Volume Manager repository". git.fedorahosted.org. Retrieved 2014-08-05. 
  3. ^ "4.4.15. RAID Logical Volumes". Access.redhat.com. Retrieved 2014-06-20. 
  4. ^ "Controlling I/O Operations on a RAID1 Logical Volume". redhat.com. Retrieved 16 June 2014. 
  5. ^ "2.3.5. Thinly-Provisioned Logical Volumes (Thin Volumes)". Access.redhat.com. Retrieved 2014-06-20. 
  6. ^ "Using LVM’s new cache feature". Retrieved 2014-07-11. 
  7. ^ "Bug 9554 - write barriers over device mapper are not supported". 2009-07-01. Retrieved 2010-01-24. 
  8. ^ "Barriers and journaling filesystems". LWN. 2008-05-22. Retrieved 2008-05-28. 

Further reading

  1. Lewis, AJ (2006-11-27). "LVM HOWTO". Linux Documentation Project. Retrieved 2008-03-04.
  2. US patent 5129088, Auslander, et al., "Data Processing Method to Create Virtual Disks from Non-Contiguous Groups of Logically Contiguous Addressable Blocks of Direct Access Storage Device", issued 1992-07-07 (fundamental patent).
  3. "RedHat Linux: What is Logical Volume Manager or LVM?". techmagazinez.com. 6 August 2013. Retrieved 4 September 2013. 
  4. "LVM2 Resource Page". sourceware.org. 8 June 2012. Retrieved 4 September 2013. 
  5. "How-To: Install Ubuntu on LVM partitions". Debuntu.org. 28 July 2007. Retrieved 4 September 2013. 
  6. "Logical Volume Manager". markus-gattol.name. 13 July 2013.