Logical volume management

From Wikipedia, the free encyclopedia

In computer storage, logical volume management or LVM provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. In particular, a volume manager can concatenate, stripe together or otherwise combine partitions (or block devices in general) into larger virtual ones that administrators can re-size or move, potentially without interrupting system use.

Volume management represents just one of many forms of storage virtualization; its implementation takes place in a layer in the device-driver stack of an OS (as opposed to within storage devices or in a network).


Linux Logical Volume Manager (LVM) v1

Most volume-manager implementations share the same basic design. They start with physical volumes (PVs), which can be hard disks, hard disk partitions, or logical unit numbers (LUNs) of an external storage device. Volume management treats each PV as composed of a sequence of chunks called physical extents (PEs). Some volume managers (such as those in HP-UX and Linux) use PEs of a uniform size; others (such as Veritas) have variably sized PEs that can be split and merged at will.
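The fixed-size extent scheme can be illustrated with a minimal Python sketch. All names here (`PhysicalVolume`, `PE_SIZE`) are invented for the example; this is not any real volume manager's code, though the 4 MiB default matches the Linux LVM mentioned later in the article.

```python
# Illustrative sketch of the fixed-size physical-extent scheme described above.
# Names are invented for this example.

PE_SIZE = 4 * 1024 * 1024  # 4 MiB, the Linux LVM's default extent size

class PhysicalVolume:
    def __init__(self, name, size_bytes):
        self.name = name
        # A PV is treated as a sequence of whole extents; any partial
        # trailing extent is simply unusable.
        self.extent_count = size_bytes // PE_SIZE

pv = PhysicalVolume("/dev/sda1", 100 * 1024 * 1024)  # a 100 MiB partition
print(pv.extent_count)  # 25 extents of 4 MiB each
```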

Normally, PEs simply map one-to-one to logical extents (LEs). With mirroring, multiple PEs map to each LE. These PEs are drawn from a physical volume group (PVG), a set of same-sized PVs which act similarly to hard disks in a RAID1 array. PVGs are usually laid out so that they reside on different disks and/or data buses for maximum redundancy.

The system pools LEs into a volume group (VG). The pooled LEs can then be concatenated together into virtual disk partitions called logical volumes or LVs. Systems can use LVs as raw block devices just like disk partitions: creating mountable file systems on them, or using them as swap storage.
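The pooling and concatenation described above amounts to a simple mapping table. The following hedged sketch (invented names, Python, not real LVM code) shows how a byte offset in an LV resolves to a physical extent:

```python
# Illustrative sketch: a volume group pools extents from several PVs, and a
# logical volume is an ordered list of (pv_name, extent_index) pairs.

PE_SIZE = 4 * 1024 * 1024  # 4 MiB extents

class VolumeGroup:
    def __init__(self):
        self.free = []   # pool of (pv_name, extent_index) tuples
        self.lvs = {}    # lv_name -> ordered extent list

    def add_pv(self, pv_name, extent_count):
        self.free += [(pv_name, i) for i in range(extent_count)]

    def create_lv(self, lv_name, extent_count):
        self.lvs[lv_name] = [self.free.pop(0) for _ in range(extent_count)]

    def map_offset(self, lv_name, byte_offset):
        """Translate a byte offset in the LV to (pv, extent, offset_in_extent)."""
        pv, pe = self.lvs[lv_name][byte_offset // PE_SIZE]
        return pv, pe, byte_offset % PE_SIZE

vg = VolumeGroup()
vg.add_pv("pv0", 10)
vg.create_lv("lv_root", 4)
# The byte at the 5 MiB mark of lv_root lands in its second extent:
print(vg.map_offset("lv_root", 5 * 1024 * 1024))  # ('pv0', 1, 1048576)
```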

Striped LVs allocate each successive LE from a different PV; depending on the size of the LE, this can improve performance on large sequential reads by bringing to bear the combined read-throughput of multiple PVs.
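The round-robin allocation behind striping can be sketched as follows (an illustrative model, not actual volume-manager code):

```python
# Illustrative sketch: striped allocation draws each successive logical
# extent from a different PV in round-robin order.

def striped_layout(pvs, total_extents):
    """Return the (pv, local_extent) pair backing each logical extent in turn."""
    next_free = {pv: 0 for pv in pvs}
    layout = []
    for le in range(total_extents):
        pv = pvs[le % len(pvs)]          # round-robin across the PVs
        layout.append((pv, next_free[pv]))
        next_free[pv] += 1
    return layout

print(striped_layout(["pv0", "pv1"], 4))
# [('pv0', 0), ('pv1', 0), ('pv0', 1), ('pv1', 1)]
```

A large sequential read across such a layout alternates between the two PVs, which is why their read throughput can be combined.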

Administrators can grow LVs (by concatenating more LEs) or shrink them (by returning LEs to the pool). The concatenated LEs do not have to be contiguous, so LVs can grow without moving already-allocated LEs. Some volume managers allow the re-sizing of LVs in either direction while online. Changing the size of the LV does not necessarily change the size of a filesystem on it; it merely changes the size of its containing space. A file system that can be resized online is recommended because it lets the system adjust its storage on the fly without interrupting applications.
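Because growth only appends mappings, no data moves. A minimal sketch of this (invented names, illustrative only):

```python
# Illustrative sketch: growing an LV appends free extents, which need not be
# contiguous with (or even on the same PV as) its existing extents.

free = [("pv0", 7), ("pv1", 2), ("pv0", 3)]   # free pool, in arbitrary order
lv = [("pv0", 0), ("pv0", 1)]                  # existing LV

def extend_lv(lv, free, count):
    lv += [free.pop(0) for _ in range(count)]  # no data movement required

def reduce_lv(lv, free, count):
    free += [lv.pop() for _ in range(count)]   # return trailing extents

extend_lv(lv, free, 2)
print(lv)  # [('pv0', 0), ('pv0', 1), ('pv0', 7), ('pv1', 2)]
```

Note that `reduce_lv` would only be safe after the filesystem on the LV has first been shrunk out of the trailing extents, matching the ordering caveat in the paragraph above.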

PVs and LVs cannot be shared between or span different VGs (although some volume managers may allow moving them at will between VGs on the same host). This allows administrators to conveniently bring VGs online, take them offline, or move them between host systems as a single administrative unit.

VGs can grow their storage pool by absorbing new PVs or shrink by retracting from PVs. This may involve moving already-allocated LEs out of the PV. Most volume managers can perform this movement online; if the underlying hardware is hot-pluggable this allows engineers to upgrade or replace storage without system downtime.
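The extent migration this requires (what `pvmove` performs online in the Linux LVM) can be sketched conceptually; names and structure here are invented for the example:

```python
# Illustrative sketch: to shrink a VG by removing a PV, every allocated
# extent on that PV must first be remapped to a free extent elsewhere.

def evacuate_pv(lvs, free, victim):
    """Remap every extent on `victim` to a free extent on another PV."""
    for extents in lvs.values():
        for i, (pv, pe) in enumerate(extents):
            if pv == victim:
                # pick a free extent on any other PV; in a real volume
                # manager the extent's data would be copied here
                j = next(k for k, (fpv, _) in enumerate(free) if fpv != victim)
                extents[i] = free.pop(j)

lvs = {"lv0": [("pv0", 0), ("pv1", 0)]}
free = [("pv1", 1), ("pv1", 2)]
evacuate_pv(lvs, free, "pv0")
print(lvs)  # {'lv0': [('pv1', 1), ('pv1', 0)]}  -- pv0 is now unused
```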


Hybrid volumes

A hybrid volume is any volume that intentionally and opaquely makes use of two separate physical volume types. For instance, a workload dominated by random seeks may benefit from keeping frequently used or recently written data on an SSD, while using higher-capacity rotational media for long-term storage of rarely needed data. On Linux this is implemented with either bcache or dm-cache, while on Mac OS X it is implemented by Fusion Drive. ZFS implements this functionality by allowing administrators to configure multi-level read/write caching.

This should not be confused with a hybrid drive, which is a physical combination of solid-state and rotational media in a single device. Hybrid volumes as discussed here are a purely logical construct.
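The caching policy such a volume applies can be approximated with a small model. This is a deliberately simplified LRU promotion-on-read sketch with invented names; real implementations like dm-cache and bcache use more elaborate hotness tracking and also cache writes:

```python
# Illustrative sketch: a hybrid volume serves reads of "hot" extents from a
# small fast tier, falling back to the slow tier and promoting on a miss.
from collections import OrderedDict

class HybridVolume:
    def __init__(self, cache_extents):
        self.cache = OrderedDict()          # extent -> True, in LRU order
        self.capacity = cache_extents

    def read(self, extent):
        if extent in self.cache:            # hit: served from the SSD tier
            self.cache.move_to_end(extent)
            return "ssd"
        self.cache[extent] = True           # miss: promote after reading
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used
        return "hdd"

hv = HybridVolume(cache_extents=2)
print([hv.read(e) for e in (1, 2, 1, 3, 1)])
# ['hdd', 'hdd', 'ssd', 'hdd', 'ssd']
```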


Some volume managers also implement snapshots by applying copy-on-write to each LE. In this scheme, the volume manager copies the LE to a copy-on-write table just before it is written to. This preserves an old version of the LV, the snapshot, which systems can later reconstruct by overlaying the copy-on-write table atop the current LV. Note that unless the volume manager supports both thin provisioning and discard, once an LE in the origin volume is written to, its old contents are permanently stored in the snapshot volume. If the snapshot volume was made smaller than its origin (a common practice) and it runs out of space, the snapshot becomes inoperable and must be removed.
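The copy-on-write mechanism can be sketched in a few lines (an illustrative model with invented names, not any real volume manager's code):

```python
# Illustrative sketch: a copy-on-write snapshot preserves the old contents
# of an extent in a COW table just before the origin is overwritten.

class Snapshot:
    def __init__(self, origin):
        self.origin = origin   # live extent contents, extent -> data
        self.cow = {}          # old data, saved at first overwrite only

    def write(self, extent, data):
        if extent not in self.cow:                # first write since snapshot
            self.cow[extent] = self.origin[extent]
        self.origin[extent] = data

    def snapshot_view(self):
        # Reconstruct the snapshot by overlaying the COW table on the origin.
        return {**self.origin, **self.cow}

origin = {0: "a", 1: "b"}
snap = Snapshot(origin)
snap.write(0, "A")
print(snap.origin)           # {0: 'A', 1: 'b'}  -- the current LV
print(snap.snapshot_view())  # {0: 'a', 1: 'b'}  -- the point-in-time view
```

The `cow` dictionary only grows on the *first* write to each extent, which is why snapshot space consumption is proportional to how much of the origin has changed, not to the origin's size.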

Snapshots can be useful for backing up self-consistent versions of volatile data, such as table files from a busy database, or for rolling back large changes (such as an operating system upgrade) in a single operation. The effect is similar to rendering a disk quiescent, and is comparable to VSS in Windows.

Many volume managers also allow the creation of writable snapshots. Read-write snapshots are sometimes called branching snapshots, because they implicitly allow diverging versions of an LV. Some Linux-based live CD systems also use snapshots to simulate read-write access on a read-only compact disc.


| Vendor | Introduced in | Volume manager | Allocate anywhere[1] | Snapshots | RAID 0 | RAID 1 | RAID 5 | RAID 10 | Thin provisioning | Notes |
|---|---|---|---|---|---|---|---|---|---|---|
| IBM | AIX 3.0 (1989) | Logical Volume Manager | Yes | Yes[2] | Yes | Yes | No | Yes[3] | | Refers to PEs as PPs (physical partitions) and to LEs as LPs (logical partitions). Does not have a copy-on-write snapshot mechanism; creates snapshots by freezing one volume of a mirror pair. |
| Hewlett-Packard | HP-UX 9.0 | HP Logical Volume Manager | Yes | Yes | Yes | Yes | No | Yes | | |
| FreeBSD | | Vinum Volume Manager | Yes | No | Yes | Yes | Yes | | | FreeBSD from version 7.0 also supports the ZFS volume manager (with some limitations). |
| NetBSD | | Logical Volume Manager | Yes | No | Yes | Yes | No | No | | NetBSD from version 6.0 supports the ZFS volume manager and its own re-implementation of the Linux LVM, based on a BSD-licensed device-mapper driver with a port of the Linux lvm tools as the userspace part. RAID 5 is not supported in LVM because NetBSD's RAIDFrame subsystem covers it. |
| Linux | 2.2 | Logical Volume Manager version 1 | Yes | Yes | Yes | Yes | No | No | | |
| Linux | 2.4 | Enterprise Volume Management System | Yes | Yes | Yes | Yes | Yes | No | | |
| Linux | 2.6 and above | Logical Volume Manager version 2 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | |
| Linux | 2.6 and above | Btrfs | Yes | Yes | Yes | Yes | No | No | n/a | Filesystem with integrated volume management. |
| Silicon Graphics | IRIX or Linux | XVM Volume Manager | Yes | Yes | Yes | Yes | Yes | | | |
| Sun Microsystems | SunOS | Solaris Volume Manager (was Solstice DiskSuite) | No | No | Yes | Yes | Yes | Yes | | Refers to PVs as volumes (which can be combined with RAID 0, RAID 1 or RAID 5 primitives into larger volumes), to LVs as soft partitions (contiguous extents placeable anywhere on volumes, but which cannot span multiple volumes), and to VGs as disk sets. |
| Sun Microsystems | Solaris 10 | ZFS | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Filesystem with integrated volume management. |
| Veritas[4] | Cross-OS | Veritas Volume Manager (VxVM) | Yes | Yes | Yes | Yes | Yes | Yes | | Refers to LVs as volumes and to VGs as disk groups; has variably sized PEs called subdisks and LEs called plexes. |
| Microsoft | Windows 2000 and later NT-based operating systems | Logical Disk Manager | Yes | Yes[5] | Yes | Yes | Yes | No | No | Does not have a concept of PEs or LEs; can only RAID 0, RAID 1, RAID 5 or concatenate disk partitions into larger volumes; file systems must span whole volumes. |
| Microsoft | Windows 8 | Storage Spaces[6] | Yes | Yes | No | Yes | Yes | No | Yes | Higher-level logic than RAID 1 and RAID 5: storage spaces span multiple disks of different sizes and are resilient to physical failure through either mirroring (at least 2 disks) or striped parity (at least 3 disks); disk management and data recovery are fully automatic. |
| Apple | Mac OS X Lion | Core Storage | Yes[7] | No | No | No | No | No | No | Currently used in Lion's implementation of FileVault for full-disk encryption, as well as Fusion Drive, which is merely a multi-PV LVG. Snapshots are handled by Time Machine and software RAID by AppleRAID, both separate from Core Storage. |


  • Logical volumes can suffer from external fragmentation when the underlying storage devices do not allocate their PEs contiguously. This can reduce I/O performance on slow-seeking media (such as magnetic disks and other rotational media). Volume managers which use fixed-size PEs, however, typically make PEs relatively large (a default of 4 MB on the Linux LVM, for example) in order to amortize the cost of these seeks. Additionally, some volume managers implement allocation policies designed to minimize how often a new segment will need to be created.
  • With implementations that are solely volume management (such as Core Storage and the Linux LVM), separating and abstracting volume management away from the filesystem sacrifices the ability to easily make storage decisions for particular files or directories. For example, to permanently move a certain directory (but not the entire filesystem) to faster storage, an administrator must reason about both the filesystem layout and the underlying volume management layer: on Linux, this means manually determining the offsets at which a file's contents exist within the filesystem and then using pvmove to migrate those extents (along with unrelated data) to the faster storage. There is also no reliable way to request that all new files within a directory be placed on the faster storage. No modern filesystem currently supports these operations, but having volume and file management under the same system makes them theoretically simpler to implement; single-volume filesystems such as ext4 and XFS would not see this benefit in any case.


  1. ^ Denotes whether the volume manager allows LVs to grow and span onto any PV in the VG.
  2. ^ JFS2 snapshots
  3. ^ AIX 5.1
  4. ^ Third-party product; available for Windows and many Unix-like OSes.
  5. ^ Windows Server 2003 and later
  6. ^ MSDN Blogs - Building Windows 8: Virtualizing storage for scale, resiliency, and efficiency
  7. ^ "man page diskutil section 8". Retrieved 10-6-2011.


External links