DRBD

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 152.8.99.118 (talk) at 12:53, 2 May 2013 (Disadvantages Over Shared Cluster Storage: Previously added this to balance the article, not needed now so I'm removing. See my last edit summary.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Original author(s): Philipp Reisner, Lars Ellenberg
Developer(s): LINBIT
Stable release: 8.4.0 / 18 July 2011
Written in: C
Operating system: GNU/Linux
Type: Distributed storage system
License: GNU General Public License v2
Website: www.drbd.org

[Figure: Overview of DRBD concept]

DRBD (Distributed Replicated Block Device) is a distributed storage system for the GNU/Linux platform. It consists of a kernel module, several userspace management applications and some shell scripts and is normally used on high availability (HA) clusters. DRBD bears similarities to RAID 1, except that it runs over a network.

DRBD refers both to the software (kernel module and associated userspace tools) and to logical block devices managed by the software. The terms DRBD device and DRBD block device are also often used for the latter.

It is free software released under the terms of the GNU General Public License version 2.

DRBD is part of the Lisog open source stack initiative.

Mode of operation

DRBD layers logical block devices (conventionally named /dev/drbdX, where X is the device minor number) over existing local block devices on participating cluster nodes. Writes to the primary node are transferred to the lower-level block device and simultaneously propagated to the secondary node. The secondary node then transfers data to its corresponding lower-level block device. All read I/O is performed locally.
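As an illustration, a two-node pair as described above is declared in DRBD's own configuration syntax. The following is a minimal sketch of a DRBD 8.x resource definition; the hostnames, backing disk, and addresses are hypothetical placeholders, not values from this article:

```
resource r0 {
  protocol C;              # fully synchronous replication
  device    /dev/drbd0;    # the logical DRBD block device
  disk      /dev/sda7;     # lower-level local block device (placeholder)
  meta-disk internal;
  on alice {               # hostnames are placeholders
    address 10.1.1.31:7789;
  }
  on bob {
    address 10.1.1.32:7789;
  }
}
```

Each node names its peer and the local backing device; writes to /dev/drbd0 on the primary then reach both lower-level disks, while reads are served from the local one.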

Should the primary node fail, a cluster management process promotes the secondary node to a primary state. This transition may require a subsequent verification of the integrity of the file system stacked on top of DRBD, by way of a filesystem check or a journal replay. When the failed ex-primary node returns, the system may (or may not) raise it to primary level again, after device data resynchronization. DRBD's synchronization algorithm is efficient in the sense that only those blocks that were changed during the outage must be resynchronized, rather than the device in its entirety.
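The resync behaviour described above can be sketched in a few lines of Python. This is an illustrative toy model, not DRBD's actual implementation: a set stands in for DRBD's on-disk dirty-block bitmap, and the point is only that after an outage the primary copies the marked blocks rather than the whole device.

```python
BLOCK_SIZE = 4096

class ReplicatedDevice:
    """Toy model of one node's view of a replicated block device."""
    def __init__(self, num_blocks):
        self.local = [b"\x00" * BLOCK_SIZE] * num_blocks
        self.dirty = set()          # stands in for DRBD's dirty-block bitmap
        self.peer_connected = True

    def write(self, block_no, data, peer):
        self.local[block_no] = data
        if self.peer_connected:
            peer.local[block_no] = data   # synchronous propagation to the peer
        else:
            self.dirty.add(block_no)      # peer is away: remember for resync

    def resync(self, peer):
        # Copy only the blocks changed during the outage, not the whole device.
        for block_no in sorted(self.dirty):
            peer.local[block_no] = self.local[block_no]
        self.dirty.clear()
        self.peer_connected = True

primary = ReplicatedDevice(8)
secondary = ReplicatedDevice(8)
primary.write(1, b"a" * BLOCK_SIZE, secondary)   # replicated immediately
primary.peer_connected = False                   # peer node fails
primary.write(2, b"b" * BLOCK_SIZE, secondary)   # recorded in the bitmap
primary.resync(secondary)                        # only block 2 is copied
```

The cost of the resync is proportional to the number of blocks written during the outage, which is what makes short outages cheap to recover from.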

DRBD is often deployed together with the Heartbeat cluster manager, although it does integrate with other cluster management frameworks. It integrates with virtualization solutions such as Xen, and may be used both below and on top of the Linux LVM stack.[1]

DRBD version 8, released in January 2007, introduced support for load-balancing configurations, allowing both nodes to access a particular DRBD in read/write mode with shared storage semantics.[2] Such a configuration requires the use of a distributed lock manager.
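The need for a distributed lock manager in such dual-primary setups can be shown with a toy model: two writers updating the same block must serialize through a cluster-wide lock, or their writes interleave. In this sketch a plain `threading.Lock` stands in for a real distributed lock manager; the "nodes" are just threads, which is an assumption made purely for illustration.

```python
import threading

block = bytearray(8)       # one shared "disk block"
dlm = threading.Lock()     # stand-in for a distributed lock manager

def node_write(value):
    with dlm:                     # acquire the cluster-wide lock first
        for i in range(len(block)):
            block[i] = value      # the whole-block update is now atomic

t1 = threading.Thread(target=node_write, args=(1,))
t2 = threading.Thread(target=node_write, args=(2,))
t1.start(); t2.start()
t1.join(); t2.join()

# With the lock, the block is consistent: all 1s or all 2s, never a mix.
assert set(block) in ({1}, {2})
```

Without the lock, the two loops could interleave and leave a mixture of both values in the block; shared-disk file systems such as GFS2 and OCFS2 rely on a distributed lock manager for exactly this reason.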

Advantages over shared cluster storage

Conventional computer cluster systems typically use some sort of shared storage for data being used by cluster resources. This approach has a number of disadvantages, which DRBD may help offset:

  • Shared storage resources must typically be addressed over a SAN or NAS, which adds overhead to read I/O. With DRBD that overhead is greatly reduced, as all read operations are carried out locally.
  • Shared storage hardware is usually expensive and consumes more space (2U and above) and power. DRBD allows an HA setup to be built with only two machines.

Disadvantages over shared cluster storage

  • Unless the underlying storage comes from a SAN, it is difficult to reassign capacity to or from the service when storage requirements change. Its storage capacity remains static until downtime is incurred to change it.

Applications

Operating within the Linux kernel's block layer, DRBD is essentially workload agnostic. A DRBD can be used as the basis of:

  • a conventional file system,
  • a shared disk file system such as GFS2 or OCFS2,[3][4]
  • another logical block device (as used in LVM, for example),
  • any application requiring direct access to a block device.

DRBD-based clusters are often employed for adding synchronous replication and high availability to file servers, relational databases (such as MySQL), and many other workloads.

Inclusion in Linux kernel

DRBD's authors originally submitted the software to the Linux kernel community in July 2007, for possible future inclusion in the "vanilla" (standard, unmodified) Linux kernel.[5] After a lengthy review and several discussions, Linus Torvalds agreed to have DRBD as part of the official Linux kernel. DRBD was merged on 8 December 2009, during the merge window for Linux kernel version 2.6.33.

References

  1. ^ LINBIT. "The DRBD User's Guide". Retrieved 2011-11-28.
  2. ^ Reisner, Philipp (2005-10-11). "DRBD v8 - Replicated Storage with Shared Disk Semantics" (PDF). Proceedings of the 12th International Linux System Technology Conference. Hamburg, Germany.
  3. ^ http://www.drbd.org/users-guide/ch-ocfs2.html
  4. ^ http://en.gentoo-wiki.com/wiki/Active-active_DRBD_with_OCFS2
  5. ^ Ellenberg, Lars (2007-07-21). "DRBD wants to go mainline". linux-kernel (mailing list). Retrieved 2007-08-03.