Distributed Replicated Block Device

From Wikipedia, the free encyclopedia
Original author(s): Philipp Reisner, Lars Ellenberg
Developer(s): LINBIT
Stable release: 8.4.3 (5 February 2013)
Development status: Production
Written in: C
Operating system: Linux
Type: Distributed storage system
License: GNU General Public License v2
Website: www.drbd.org

[Figure: Overview of the DRBD concept]

The Distributed Replicated Block Device (DRBD) is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several userspace management applications, and some shell scripts, and is normally used on high availability (HA) computer clusters.

DRBD also refers to the logical block devices provided by the scheme and to the software that implements it. DRBD device and DRBD block device are also often used for the former.

The DRBD software is free software released under the terms of the GNU General Public License version 2.

DRBD is part of the Lisog open source stack initiative.

Mode of operation

DRBD layers logical block devices (conventionally named /dev/drbdX, where X is the device minor number) over existing local block devices on participating cluster nodes. Writes to the primary node are transferred to the lower-level block device and simultaneously propagated to the secondary node. The secondary node then transfers data to its corresponding lower-level block device. All read I/O is performed locally.
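
A minimal sketch of a resource definition in DRBD's drbd.conf configuration format may make this layering concrete. The resource name, host names, addresses and backing device below are illustrative assumptions, not values taken from this article:

    resource r0 {
      device    /dev/drbd0;       # logical DRBD block device exposed to applications
      disk      /dev/sdb1;        # lower-level local block device on each node
      meta-disk internal;         # DRBD metadata kept on the backing device itself
      on alice {
        address 10.1.1.31:7789;   # replication link endpoint for node "alice"
      }
      on bob {
        address 10.1.1.32:7789;   # replication link endpoint for node "bob"
      }
    }

Writes issued against /dev/drbd0 on the primary node land on the local backing device and are sent over the configured address/port pair to the peer, which writes them to its own backing device.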

Should the primary node fail, a cluster management process promotes the secondary node to a primary state. This transition may require a subsequent verification of the integrity of the file system stacked on top of DRBD, by way of a filesystem check or a journal replay. When the failed ex-primary node returns, the system may (or may not) raise it to primary level again, after device data resynchronization. DRBD's synchronization algorithm is efficient in the sense that only those blocks that were changed during the outage must be resynchronized, rather than the device in its entirety.

DRBD is often deployed together with the Heartbeat cluster manager, although it does integrate with other cluster management frameworks. It integrates with virtualization solutions such as Xen, and may be used both below and on top of the Linux LVM stack.[1]

DRBD version 8, released in January 2007, introduced support for load-balancing configurations, allowing both nodes to access a particular DRBD in read/write mode with shared storage semantics.[2] Such a configuration requires the use of a distributed lock manager.
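
In such a configuration, permission for both nodes to hold the primary role is typically granted in the resource's net section. The fragment below sketches that option in DRBD 8.4 drbd.conf syntax; the resource name r0 is an assumption:

    resource r0 {
      net {
        allow-two-primaries yes;   # permit both nodes to open the device read/write
      }
    }

A shared-disk cluster file system such as OCFS2 or GFS2 is then used on top of the device, supplying the distributed lock manager that coordinates concurrent access.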

Shared cluster storage comparison

Conventional computer cluster systems typically use some sort of shared storage for data being used by cluster resources. This approach has a number of disadvantages, which DRBD may help offset:

  • Shared storage resources must typically be accessed over a storage area network or on a network attached storage server, which creates some overhead in read I/O. In DRBD that overhead is reduced as all read operations are carried out locally.[citation needed]
  • Shared storage is usually expensive and consumes more space (2U and more) and power. DRBD allows for an HA setup with only 2 machines.

A disadvantage is higher write latency: routing every write through the other node takes longer than writing directly to a shared storage device.

Comparison to RAID-1

DRBD bears a superficial similarity to RAID-1 in that it involves a copy of the data on two storage devices, such that if one fails, the data on the other can be used. However, it operates in a very different way from RAID, even network RAID.

In RAID, the redundancy exists in a layer transparent to the storage-using application. While there are two storage devices, there is only one instance of the application, and the application is not aware of the multiple copies. When the application reads, the RAID layer chooses which storage device to read from. When a storage device fails, the RAID layer reads from the other, without the application instance knowing of the failure.

In contrast, with DRBD there are two instances of the application, and each can read only from one of the two storage devices. Should one storage device fail, the application instance tied to that device can no longer read the data; that instance therefore shuts down, and the other application instance, tied to the surviving copy of the data, takes over.

Conversely, in RAID, if the single application instance fails, the information on the two storage devices is effectively unusable, but in DRBD, the other application instance can take over.

Applications

Operating within the Linux kernel's block layer, DRBD is essentially workload agnostic. A DRBD can be used as the basis of:

  • a conventional file system (this is the canonical example),
  • a shared-disk file system such as GFS2 or OCFS2,[3][4]
  • another logical block device (as used in LVM, for instance),
  • any application requiring direct access to a block device.

DRBD-based clusters are often employed for adding synchronous replication and high availability to file servers, relational databases (such as MySQL), and many other workloads.

Inclusion in Linux kernel

DRBD's authors originally submitted the software to the Linux kernel community in July 2007, for possible inclusion in the canonical kernel.org version of the Linux kernel.[5] After a lengthy review and several discussions, Linus Torvalds agreed to have DRBD as part of the official Linux kernel. DRBD was merged on 8 December 2009 during the "merge window" for Linux kernel version 2.6.33.

See also

Highly Available STorage

References

  1. ^ LINBIT. "The DRBD User's Guide". Retrieved 2011-11-28. 
  2. ^ Reisner, Philipp (2005-10-11). "DRBD v8 - Replicated Storage with Shared Disk Semantics". Proceedings of the 12th International Linux System Technology Conference. Hamburg, Germany. 
  3. ^ http://www.drbd.org/users-guide/ch-ocfs2.html
  4. ^ http://en.gentoo-wiki.com/wiki/Active-active_DRBD_with_OCFS2
  5. ^ Ellenberg, Lars (2007-07-21). "DRBD wants to go mainline". linux-kernel mailing list. http://lkml.org/lkml/2007/7/21/255. Retrieved 2007-08-03.

External links