Logical Volume Manager (Linux)

From Wikipedia, the free encyclopedia
"Logical Volume Manager" redirects here. It is not to be confused with logical volume management.
Linux Logical Volume Manager
  • Original author(s): Heinz Mauelshagen[1]
  • Stable release: 2.02.114[2] (28 November 2014)
  • Written in: C
  • Operating system: Linux
  • License: GNU GPL
  • Website: sources.redhat.com/lvm2/

LVM is a Device Mapper target that provides logical volume management for Linux systems. Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from HP-UX's volume manager.[3] The installers for most modern distributions are LVM-aware, allowing the root filesystem to reside on a logical volume.[4][5][6]

Common uses

LVM is commonly used for the following purposes:

  • Managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping.
  • On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition might need to be in the future, LVM allows file systems to be easily resized later as needed.
  • Performing consistent backups by taking snapshots of the logical volumes.
  • Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing.

LVM can be considered as a thin software layer on top of the hard disks and partitions, which creates an abstraction of continuity and ease-of-use for managing hard drive replacement, re-partitioning, and backup.
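As a hedged sketch of the workflow behind the uses above (the device names, the volume group name "vg0", and the sizes are illustrative; these commands require root privileges):

```shell
# Initialize two partitions as physical volumes
pvcreate /dev/sdb1 /dev/sdc1

# Pool them into a single volume group named "vg0"
vgcreate vg0 /dev/sdb1 /dev/sdc1

# Carve out a 100 GiB logical volume for data
lvcreate --name data --size 100G vg0

# Later, grow the volume online and resize its filesystem in one step
lvextend --resizefs --size +50G /dev/vg0/data
```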

The Ganeti solution stack relies on the Linux Logical Volume Manager.

Features

Basic functionality

  • Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
  • Resize logical volumes online by concatenating extents onto them or truncating extents from them.
  • Move online logical volumes between PVs.
  • Create read-only snapshots of logical volumes (LVM1).
  • Create read-write snapshots of logical volumes (LVM2).
  • Split or merge volume groups in situ (as long as no logical volumes span the split). This can be useful when migrating whole logical volumes to or from offline storage.
  • LVM objects can be tagged for administrative convenience.[7]
  • Volume groups and logical volumes can be made active as the underlying devices become available through use of the lvmetad daemon.[8]
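The snapshot capability listed above is commonly used for consistent backups; a minimal sketch, assuming an existing logical volume "vg0/data" (all names and sizes are illustrative, and root privileges are required):

```shell
# Create a 10 GiB copy-on-write snapshot of the origin volume
lvcreate --snapshot --name data_snap --size 10G /dev/vg0/data

# Mount it read-only and take the backup while the origin stays in use
mount -o ro /dev/vg0/data_snap /mnt/snap
tar -czf /backup/data.tar.gz -C /mnt/snap .

# Drop the snapshot afterwards so it stops tracking changes
umount /mnt/snap
lvremove -y /dev/vg0/data_snap
```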

Advanced functionality

RAID

  • Create RAID logical volumes (available in newer LVM implementations): RAID 1, RAID 5, RAID 6, etc.[11]
  • Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
  • Configure a RAID 1 backend device (a PV) as write-mostly, resulting in reads being avoided to such devices unless necessary.[12]
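A hedged sketch of the RAID features above (the volume group and LV names are placeholders, and root privileges are required):

```shell
# Mirrored logical volume (RAID 1) with one mirror copy
lvcreate --type raid1 --mirrors 1 --name mirrored --size 20G vg0

# Logical volume striped across three PVs, similar to RAID 0
lvcreate --type striped --stripes 3 --name striped0 --size 30G vg0

# Mark one leg of the RAID 1 volume as write-mostly so reads
# avoid that device unless necessary
lvchange --writemostly /dev/sdb1 vg0/mirrored
```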

High availability

LVM also works in a shared-storage cluster (in which the disks holding the PVs are shared between multiple host computers), but may require an additional daemon to broker access to the metadata via some form of locking.

  • CLVM
A distributed lock manager is used to broker concurrent access to LVM metadata. Whenever a cluster node needs to modify the metadata, it must secure permission from its local clvmd, which is in constant contact with the other clvmd daemons in the cluster and can communicate a desire to obtain a lock on a particular set of objects.
  • HA-LVM
Cluster-awareness is left to the application providing the high-availability function. For LVM's part, HA-LVM can use CLVM as a locking mechanism, or it can continue to use the default file locking and reduce "collisions" by restricting access to only those LVM objects that carry the appropriate tags. Since this avoids contention rather than mediating it, the solution is simpler, but it does not allow concurrent access. As such, it is usually considered useful only in active-passive configurations.

Note that the above resolves only contention over LVM's access to the storage. The filesystem layered on top of the logical volume must either itself support clustering (such as GFS2 or VxFS) or be mounted by only a single cluster node at any time (as in an active-passive configuration).
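In the tag-based HA-LVM approach described above, activation is typically restricted through the activation/volume_list setting in lvm.conf; a sketch under assumed names (the tag "node1" and the VG names are illustrative):

```shell
# /etc/lvm/lvm.conf fragment: activate only local VGs and VGs
# carrying this node's tag
#   activation {
#       volume_list = [ "vg_local", "@node1" ]
#   }

# Tag the shared volume group on the node that should own it,
# then activate it there
vgchange --addtag node1 vg_shared
vgchange -ay vg_shared
```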

Volume group allocation policy

Since a volume group is the pool of extents drawn from during allocation, default allocation policies are set on the volume group and may be inherited by individual LVs. If an allocation policy is not configured when an allocation event occurs, LVM attempts the strictest policy (contiguous) first and then progresses toward the most liberal policy defined for the LVM object until the allocation finally succeeds. In RAID configurations, almost all policies are applied to each leg in isolation. For example, cling will not use a physical volume that is already used by one of the other legs in the RAID setup: because the RAID logical volume places each leg on a different volume, the other PVs are unavailable to any given leg.

Available allocation policies are:

  • contiguous forces all LEs in a given logical volume to be adjacent and ordered. This eliminates fragmentation, but severely reduces the ability to expand a logical volume later.
  • cling forces new LEs to be allocated only on physical volumes already used by the LV. This can reduce the vulnerability of a particular LV should a device go down, as well as help mitigate fragmentation.
  • normal means near-indiscriminate selection of physical extents. The only restriction is that it attempts to keep parallel legs (such as those of a RAID setup) from sharing a physical device.
  • anywhere imposes no restrictions whatsoever. It is highly risky in a RAID setup, as it ignores isolation requirements and thus undercuts most of the benefit of RAID. For linear volumes, it can result in increased fragmentation.
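The policy can be set on the volume group or overridden per operation; a hedged sketch (names are illustrative, root privileges required):

```shell
# Set the default allocation policy for a volume group
vgchange --alloc cling vg0

# Override the policy for a single logical volume at creation time
lvcreate --alloc contiguous --name dbspace --size 10G vg0

# Inspect attributes (the allocation policy appears in the attr field)
vgs vg0
lvs vg0
```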

Implementation

Typically, the first megabyte of each physical volume contains a mostly ASCII-encoded structure referred to as an "LVM header" or "LVM head". Originally, the LVM head was written in both the first and last megabyte of each physical volume for redundancy (in case of a failed sector); this was later changed to the first megabyte only. Each PV's header is a complete copy of the entire volume group's layout, including the UUIDs of all other PVs, the UUIDs of all logical volumes, and the allocation map of PEs to LEs. This simplifies data recovery if a physical volume is lost.
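Because the metadata area is mostly ASCII, it can be inspected directly; a sketch (the device name is a placeholder, and root privileges are required):

```shell
# Dump the first megabyte of a PV and extract the readable metadata text
dd if=/dev/sdb1 bs=1M count=1 2>/dev/null | strings | less

# Or use LVM's own tools to check the metadata and keep a backup copy
pvck /dev/sdb1
vgcfgbackup --file /tmp/vg0-metadata.txt vg0
```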

[Diagram: basic example of an LVM head]
[Diagram: inner workings of LVM version 1; "PE" stands for physical extent]
[Diagram: relationship between the various elements of LVM]


In the 2.6 series of the Linux kernel, LVM is implemented in terms of the device mapper, a simple block-level scheme for creating virtual block devices and mapping their contents onto other block devices. This minimizes the amount of relatively hard-to-debug kernel code needed to implement LVM, and allows its I/O redirection services to be shared with other volume managers (such as EVMS). All LVM-specific code is pushed out into its user-space tools, which merely manipulate these mappings and reconstruct their state from the on-disk metadata upon each invocation.
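The linear mapping described above can be modeled in a few lines: each logical extent resolves to a (device, physical extent) pair by walking an ordered segment list. This is a simplified illustration of the concept, not LVM's actual code; the segment layout is invented for the example.

```python
# Toy model of a linear device-mapper table: each segment maps a run of
# logical extents (LEs) onto physical extents (PEs) of one underlying PV.
# Segment tuples: (first_le, length_in_extents, pv_name, first_pe)
SEGMENTS = [
    (0, 100, "/dev/sdb1", 50),   # LEs 0-99    -> sdb1 PEs 50-149
    (100, 50, "/dev/sdc1", 0),   # LEs 100-149 -> sdc1 PEs 0-49
]

def resolve(le: int):
    """Map a logical extent index to its (physical volume, physical extent)."""
    for first_le, length, pv, first_pe in SEGMENTS:
        if first_le <= le < first_le + length:
            return pv, first_pe + (le - first_le)
    raise ValueError("extent not mapped")

print(resolve(0))    # ('/dev/sdb1', 50)
print(resolve(120))  # ('/dev/sdc1', 20)
```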

To bring a volume group online, the "vgchange" tool:

  1. Searches for PVs in all available block devices.
  2. Parses the metadata header in each PV found.
  3. Computes the layouts of all visible volume groups.
  4. Loops over each logical volume in the volume group to be brought online and:
    1. Checks if the logical volume to be brought online has all its PVs visible.
    2. Creates a new, empty device mapping.
    3. Maps it (with the "linear" target) onto the data areas of the PVs the logical volume belongs to.

To move an online logical volume between PVs within the same volume group, the "pvmove" tool:

  1. Creates a new, empty device mapping for the destination.
  2. Applies the "mirror" target to the original and destination maps. The kernel will start the mirror in "degraded" mode and begin copying data from the original to the destination to bring it into sync.
  3. Replaces the original mapping with the destination when the mirror comes into sync, then destroys the original.

These device mapper operations take place transparently, without applications or file systems being aware that their underlying storage is moving.
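In practice, the steps above are driven by a single command; a hedged sketch (device and volume names are placeholders, root privileges required):

```shell
# Move all extents off a PV that is to be retired
pvmove /dev/sdb1

# Or move only one LV's extents to a specific destination PV
pvmove --name vg0/data /dev/sdb1 /dev/sdc1

# Once the PV is empty, it can be removed from the volume group
vgreduce vg0 /dev/sdb1
```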

Caveats

  • No online or offline defragmentation program currently exists for LVM. This is mitigated somewhat by the fact that fragmentation only occurs when a volume is expanded, and by applying the aforementioned allocation policies, but fragmentation still occurs. To reduce the existing level of fragmentation, the non-contiguous segments must be identified and the extents rearranged manually with the pvmove command.[15]
  • At the time of writing, the Ubuntu installer does not support the creation of LVM objects; the LVM configuration must exist before the installation begins.[16]
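To identify the non-contiguous segments mentioned in the fragmentation caveat above, the per-segment layout can be listed and extents moved by hand; a sketch (names and extent ranges are illustrative, root privileges required):

```shell
# List each LV's segments and the devices they occupy; multiple
# segments on scattered devices or offsets indicate fragmentation
lvs --segments -o +devices vg0

# Manually consolidate by moving a range of physical extents
# from one PV to another
pvmove --alloc contiguous /dev/sdb1:1000-1999 /dev/sdc1
```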

References

  1. ^ "LVM README". 2003-11-17. Retrieved 2014-06-25. 
  2. ^ "lvm2.git - Upstream Logical Volume Manager repository". git.fedorahosted.org. Retrieved 2014-12-15. 
  3. ^ "LVM README". 2003-11-17. Retrieved 2014-06-25. 
  4. ^ "7.1.2 LVM Configuration with YaST". 12 July 2011. Retrieved 2015-05-22. 
  5. ^ "HowTo: Set up Ubuntu Desktop with LVM Partitions". 1 June 2014. Retrieved 2015-05-22. 
  6. ^ "9.15.4 Create LVM Logical Volume". 8 October 2014. Retrieved 2015-05-22. 
  7. ^ "Tagging LVM2 Storage Objects". Micro Focus International. Retrieved 21 May 2015. 
  8. ^ "The Metadata Daemon". Red Hat Inc. Retrieved 22 May 2015. 
  9. ^ "Using LVM’s new cache feature". Retrieved 2014-07-11. 
  10. ^ "2.3.5. Thinly-Provisioned Logical Volumes (Thin Volumes)". Access.redhat.com. Retrieved 2014-06-20. 
  11. ^ "4.4.15. RAID Logical Volumes". Access.redhat.com. Retrieved 2014-06-20. 
  12. ^ "Controlling I/O Operations on a RAID1 Logical Volume". redhat.com. Retrieved 16 June 2014. 
  13. ^ "Bug 9554 - write barriers over device mapper are not supported". 2009-07-01. Retrieved 2010-01-24. 
  14. ^ "Barriers and journaling filesystems". LWN. 2008-05-22. Retrieved 2008-05-28. 
  15. ^ "will pvmove'ing (an LV at a time) defragment?". 2010-04-29. Retrieved 2015-05-22. 
  16. ^ "HowTo: Set up Ubuntu Desktop with LVM Partitions". 1 June 2014. Retrieved 2015-05-22. 

Further reading

  1. Lewis, AJ (2006-11-27). "LVM HOWTO". Linux Documentation Project. Retrieved 2008-03-04. 
  2. US patent 5129088, Auslander, et al., "Data Processing Method to Create Virtual Disks from Non-Contiguous Groups of Logically Contiguous Addressable Blocks of Direct Access Storage Device", issued 1992-7-7  (fundamental patent).
  3. "RedHat Linux: What is Logical Volume Manager or LVM?". techmagazinez.com. 6 August 2013. Retrieved 4 September 2013. 
  4. "LVM2 Resource Page". sourceware.org. 8 June 2012. Retrieved 4 September 2013. 
  5. "How-To: Install Ubuntu on LVM partitions". Debuntu.org. 28 July 2007. Retrieved 4 September 2013. 
  6. "Logical Volume Manager". markus-gattol.name. 13 July 2013.