Logical Volume Manager (Linux)

From Wikipedia, the free encyclopedia
Linux Logical Volume Manager
Original author(s) Heinz Mauelshagen[1]
Stable release 2.02.114[2] / 28 November 2014 (2014-11-28)
Written in C
Operating system Linux
License GNU GPL
Website sources.redhat.com/lvm2/

LVM is a device mapper target that provides logical volume management for Linux systems. Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from the volume manager in HP-UX.[3] The installers of most modern distributions are LVM-aware enough to allow the root filesystem to reside on a logical volume.[4][5][6]

Common uses[edit]

LVM is commonly used for the following purposes:

  • Managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping.
  • On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition might need to be in the future, LVM allows file systems to be easily resized later as needed.
  • Performing consistent backups by taking snapshots of the logical volumes.
  • Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing.

LVM can be considered a thin software layer on top of the hard disks and partitions, one that creates an abstraction of continuity and ease of use for managing hard drive replacement, repartitioning, and backup.
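The "resize later" use case above amounts to a single command once LVM is in place. The following sketch assumes a hypothetical volume group vg0 with free extents, an ext4 filesystem on the logical volume "home", and root privileges:

```shell
# Grow the logical volume by 10 GiB and resize the filesystem
# on it in the same step (--resizefs invokes fsadm internally):
lvextend --resizefs --size +10G /dev/vg0/home
```

The resize happens online; the filesystem stays mounted throughout.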

The Ganeti solution stack relies on the Linux Logical Volume Manager.

Features[edit]

Basic functionality[edit]

  • Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
  • Resize logical volumes online by concatenating extents onto them or truncating extents from them.
  • Move online logical volumes between PVs.
  • Create read-only snapshots of logical volumes (LVM1).
  • Create read-write snapshots of logical volumes (LVM2).
  • Split or merge volume groups in situ (as long as no logical volumes span the split). This can be useful when migrating whole logical volumes to or from offline storage.
  • LVM objects can be tagged for administrative convenience.[7]
  • Volume groups and logical volumes can be made active as the underlying devices become available through use of the lvmetad daemon.[8]
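The basic objects above are created with a handful of commands. A minimal sketch, assuming two spare disks /dev/sdb and /dev/sdc and run as root (all names are hypothetical):

```shell
# Initialize the disks as physical volumes (destroys existing data):
pvcreate /dev/sdb /dev/sdc
# Pool them into a volume group named vg0:
vgcreate vg0 /dev/sdb /dev/sdc
# Carve a 20 GiB logical volume out of the group:
lvcreate --name data --size 20G vg0
# Take a writable (LVM2-style) snapshot with 1 GiB of
# copy-on-write space for changed blocks:
lvcreate --snapshot --name data-snap --size 1G /dev/vg0/data
```

The snapshot can then be mounted read-only and backed up while /dev/vg0/data remains in use.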

Advanced functionality[edit]

  • Allocate thin-provisioned logical volumes from a pool.[10]
  • On newer versions of device-mapper, LVM is integrated with the rest of device-mapper enough to ignore the individual paths that back a dm-multipath device, if devices/multipath_component_detection=1 is set in lvm.conf. This prevents LVM from activating volumes on an individual path instead of the multipath device, and also suppresses the messages about duplicate physical volumes that were previously an annoyance.[11]
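Thin provisioning takes two steps: create a pool, then allocate thin volumes whose virtual size may exceed the pool. A sketch assuming a volume group vg0 with at least 100 GiB free (names are hypothetical; requires root):

```shell
# Create a 100 GiB thin pool inside vg0:
lvcreate --type thin-pool --name pool0 --size 100G vg0
# Allocate a 500 GiB thin volume from the pool; physical blocks
# are consumed from the pool only as data is actually written:
lvcreate --thin --virtualsize 500G --name bigvol vg0/pool0
```

Over-committed pools must be monitored, since a full pool causes writes to the thin volumes to fail.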

RAID[edit]

  • Create RAID logical volumes: RAID 1, RAID 5, RAID 6, etc.[12]
  • Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
  • Configure a RAID 1 backend device (a PV) as write-mostly, so that reads are directed away from such devices unless necessary.[13]
  • Limit the recovery rate using lvchange --raidmaxrecoveryrate and/or lvchange --raidminrecoveryrate, so that I/O performance is maintained within acceptable limits while a RAID logical volume is being rebuilt.
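The RAID features above map onto a few commands. An illustrative root session against a hypothetical volume group vg0 (device names are assumptions):

```shell
# Create a two-way RAID 1 logical volume:
lvcreate --type raid1 --mirrors 1 --name mirrored --size 10G vg0
# Mark one leg's PV as write-mostly, steering reads to the other leg:
lvchange --writemostly /dev/sdb vg0/mirrored
# Cap the rebuild rate (in KiB per second per device) so a resync
# does not starve normal I/O:
lvchange --raidmaxrecoveryrate 51200 vg0/mirrored
```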

High availability[edit]

LVM also works in a shared-storage cluster (in which disks holding the PVs are shared between multiple host computers), but it may require an additional daemon to broker access to the metadata via some form of locking.

  • CLVM
A distributed lock manager is used to broker concurrent access to LVM metadata. Whenever a cluster node needs to modify the LVM metadata, it must secure permission from its local clvmd, which is in constant contact with the other clvmd daemons in the cluster and can communicate a desire to lock a particular set of objects.
  • HA-LVM
Cluster-awareness is left to the application providing the high-availability function. For LVM's part, HA-LVM can use CLVM as a locking mechanism, or it can continue to use the default file locking and reduce "collisions" by restricting access to only those LVM objects that carry the appropriate tags. Since this solution avoids contention rather than mitigating it, it is simpler but allows no concurrent access; as such, it is usually considered useful only in active-passive configurations.
  • lvmlockd
A currently unstable component, designed to replace clvmd by making the locking of LVM objects transparent to the rest of LVM, without relying on a distributed lock manager.[14]

The above solutions only resolve the issue of LVM's access to the storage. The file system chosen to sit on top of the logical volumes must either support clustering itself (such as GFS2 or VxFS), or it must be mounted by only a single cluster node at any time (as in an active-passive configuration).
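The tag-based HA-LVM variant is configured through lvm.conf's activation filter. A hedged sketch, in which the hostname "alpha" and the volume group names are hypothetical:

```shell
# Excerpt from /etc/lvm/lvm.conf on node "alpha": activate only
# the local VG and any VG tagged with this host's tag:
#   activation {
#       volume_list = [ "vg_local", "@alpha" ]
#   }
# Tag the shared volume group so that only node "alpha" will
# activate it; failover software moves the tag between nodes:
vgchange --addtag alpha vg_shared
```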

Volume group allocation policy[edit]

LVM volume groups carry a default allocation policy for new volumes created from them. The policy can later be changed per logical volume using the lvconvert -A command, or on the volume group itself via vgchange --alloc. To minimize fragmentation, LVM attempts the strictest policy (contiguous) first, and then progresses toward the most liberal policy defined for the LVM object until allocation finally succeeds.

In RAID configurations, almost all policies are applied to each leg in isolation. For example, even if a logical volume has a policy of cling, expanding the file system will not make LVM use a physical volume that is already used by one of the other legs in the RAID setup. A RAID logical volume places each leg on a different physical volume, making the other PVs unavailable to any given leg; if such a PV were the only option available, expansion of the logical volume would fail. In this sense, the logic behind cling only applies to expanding each of the individual legs of the array.

Available allocation policies are:

  • contiguous forces all LEs in a given logical volume to be adjacent and ordered. This eliminates fragmentation, but severely reduces the ability to expand a logical volume later.
  • cling forces new LEs to be allocated only on physical volumes already used by the LV. This can help to mitigate fragmentation, and it reduces the vulnerability of particular LVs should a device go down, by reducing the likelihood that other LVs also have extents on that physical volume.
  • normal means near-indiscriminate selection of PEs. The only restriction is that it attempts to keep parallel legs (such as those of a RAID setup) from sharing a physical device.
  • anywhere imposes no restrictions whatsoever. It is highly dangerous in a RAID setup, as it ignores isolation requirements, undercutting most of the benefits of RAID. For linear volumes, it can result in increased fragmentation.
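The policies above can be set at either level with the commands the article names. An illustrative root session (the group vg0 and volume names are assumptions):

```shell
# Change the volume group's default allocation policy:
vgchange --alloc cling vg0
# Override the policy for one new logical volume only:
lvcreate --alloc contiguous --name dbvol --size 5G vg0
```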

Implementation[edit]

Typically, the first megabyte of each physical volume contains a mostly ASCII-encoded structure referred to as an "LVM header" or "LVM head". Originally, the LVM head was written in the first and the last megabyte of each PV for redundancy (in case of a partial hardware failure); however, this was later changed to the first megabyte only. Each PV's header is a complete copy of the entire volume group's layout, including the UUIDs of all other PVs, the UUIDs of all logical volumes, and an allocation map of PEs to LEs. This simplifies data recovery if a physical volume is lost.
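Because the metadata area is mostly ASCII, it can be inspected directly. A harmless read-only sketch, assuming a PV on the hypothetical device /dev/sdb (requires root):

```shell
# Dump the first megabyte of the PV and page through the
# human-readable portions of the LVM head:
dd if=/dev/sdb bs=1M count=1 | strings | less
```

The output includes the volume group name, PV and LV UUIDs, and the extent allocation map in a text format similar to the backups kept under /etc/lvm/backup.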

[Image: basic example of an LVM head]
[Diagram: inner workings of version 1 of LVM; "PE" stands for physical extent]
[Diagram: relationship between various elements of LVM]

In the 2.6 series of the Linux kernel, LVM is implemented in terms of the device mapper, a simple block-level scheme for creating virtual block devices and mapping their contents onto other block devices. This minimizes the amount of relatively hard-to-debug kernel code needed to implement LVM, and it also allows its I/O redirection services to be shared with other volume managers (such as EVMS). Any LVM-specific code is pushed out into user-space tools, which merely manipulate these mappings and reconstruct their state from on-disk metadata upon each invocation.

To bring a volume group online, the "vgchange" tool:

  1. Searches for PVs in all available block devices.
  2. Parses the metadata header in each PV found.
  3. Computes the layouts of all visible volume groups.
  4. Loops over each logical volume in the volume group to be brought online and:
    1. Checks if the logical volume to be brought online has all its PVs visible.
    2. Creates a new, empty device mapping.
    3. Maps it (with the "linear" target) onto the data areas of the PVs the logical volume belongs to.

To move an online logical volume between PVs in the same volume group, the "pvmove" tool:

  1. Creates a new, empty device mapping for the destination.
  2. Applies the "mirror" target to the original and destination maps. The kernel will start the mirror in "degraded" mode and begin copying data from the original to the destination to bring it into sync.
  3. Replaces the original mapping with the destination when the mirror comes into sync, then destroys the original.

These device mapper operations take place transparently, without applications or file systems being aware that their underlying storage is moving.
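In practice, the whole mirror-and-switch sequence is driven by a single command. A sketch assuming the hypothetical devices /dev/sdb and /dev/sdc in the same volume group (requires root):

```shell
# Evacuate all extents from /dev/sdb (e.g. before retiring the
# disk); data stays online while the temporary mirror syncs:
pvmove /dev/sdb
# Or move only the extents belonging to one logical volume,
# to a specific destination PV:
pvmove --name data /dev/sdb /dev/sdc
```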

Caveats[edit]

  • No online or offline defragmentation program currently exists for LVM. This is somewhat mitigated by the fact that fragmentation only happens when a volume is expanded, and by applying the above-mentioned allocation policies. Fragmentation still occurs, however, and to reduce it one must identify the non-contiguous segments and use the pvmove command to manually rearrange the extents.[17]
  • At the time of writing, the Ubuntu installer does not support the creation of LVM objects; the LVM configuration must exist before the installation begins.[18]
  • In most current LVM implementations, only one copy of the LVM head is saved to each physical volume, which can make the volumes more susceptible to failed disk sectors. This behavior can be overridden using vgconvert --pvmetadatacopies. If LVM cannot read a proper header at the start of the volume, it will check the end of the volume for a backup header; failing that, it will not activate the physical volume, rendering any logical volumes with extents there inaccessible. If the metadata becomes corrupted, most distributions keep a running backup in /etc/lvm/backup, which makes it possible to rewrite the LVM head using vgcfgrestore.
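Restoring the metadata from such a backup is straightforward. A sketch for a hypothetical volume group vg0, using the distribution-default backup path (requires root, and the affected LVs should be deactivated first):

```shell
# List the archived metadata versions known for vg0:
vgcfgrestore --list vg0
# Rewrite the LVM head from the most recent backup file:
vgcfgrestore --file /etc/lvm/backup/vg0 vg0
```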

References[edit]

  1. ^ "LVM README". 2003-11-17. Retrieved 2014-06-25. 
  2. ^ "lvm2.git - Upstream Logical Volume Manager repository". git.fedorahosted.org. Retrieved 2014-12-15. 
  3. ^ "LVM README". 2003-11-17. Retrieved 2014-06-25. 
  4. ^ "7.1.2 LVM Configuration with YaST". 12 July 2011. Retrieved 2015-05-22. 
  5. ^ "HowTo: Set up Ubuntu Desktop with LVM Partitions". 1 June 2014. Retrieved 2015-05-22. 
  6. ^ "9.15.4 Create LVM Logical Volume". 8 October 2014. Retrieved 2015-05-22. 
  7. ^ "Tagging LVM2 Storage Objects". Micro Focus International. Retrieved 21 May 2015. 
  8. ^ "The Metadata Daemon". Red Hat Inc. Retrieved 22 May 2015. 
  9. ^ "Using LVM’s new cache feature". Retrieved 2014-07-11. 
  10. ^ "2.3.5. Thinly-Provisioned Logical Volumes (Thin Volumes)". Access.redhat.com. Retrieved 2014-06-20. 
  11. ^ "4.101.3. RHBA-2012:0161 — lvm2 bug fix and enhancement update". Retrieved 2014-06-08. 
  12. ^ "4.4.15. RAID Logical Volumes". Access.redhat.com. Retrieved 2014-06-20. 
  13. ^ "Controlling I/O Operations on a RAID1 Logical Volume". redhat.com. Retrieved 16 June 2014. 
  14. ^ "Re: LVM snapshot with Clustered VG [SOLVED]". 15 Mar 2013. Retrieved 2015-06-08. 
  15. ^ "Bug 9554 - write barriers over device mapper are not supported". 2009-07-01. Retrieved 2010-01-24. 
  16. ^ "Barriers and journaling filesystems". LWN. 2008-05-22. Retrieved 2008-05-28. 
  17. ^ "will pvmove'ing (an LV at a time) defragment?". 2010-04-29. Retrieved 2015-05-22. 
  18. ^ "HowTo: Set up Ubuntu Desktop with LVM Partitions". 1 June 2014. Retrieved 2015-05-22. 

Further reading[edit]

  1. Lewis, AJ (2006-11-27). "LVM HOWTO". Linux Documentation Project. Retrieved 2008-03-04. 
  2. US patent 5129088, Auslander, et al., "Data Processing Method to Create Virtual Disks from Non-Contiguous Groups of Logically Contiguous Addressable Blocks of Direct Access Storage Device", issued 1992-7-7  (fundamental patent).
  3. "RedHat Linux: What is Logical Volume Manager or LVM?". techmagazinez.com. 6 August 2013. Retrieved 4 September 2013. 
  4. "LVM2 Resource Page". sourceware.org. 8 June 2012. Retrieved 4 September 2013. 
  5. "How-To: Install Ubuntu on LVM partitions". Debuntu.org. 28 July 2007. Retrieved 4 September 2013. 
  6. "Logical Volume Manager". markus-gattol.name. 13 July 2013.