
Talk:DRBD



Citation Needed for Speed Claim

There's a claim that read I/O over Fibre Channel operates at a penalty, which I think needs further qualification in order to improve the quality of the article. Many FC deployments operate at 850 MB/s, whereas SATA III tops out at around 600 MB/s (more info). I'm not saying this disproves the claim (if I thought that, I would have just removed it from the article), but it does bring us to the point where we need to either substantiate the claim or think about removing it. 152.8.99.118 (talk) 13:19, 2 May 2013 (UTC)[reply]
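
For what it's worth, a rough back-of-the-envelope sketch (assuming the usual 8GFC and SATA III line rates and 8b/10b encoding; other link speeds would give other figures) reproduces the numbers above:

    # Rough usable-throughput estimate for serial links that use 8b/10b encoding.
    # Line rates are assumptions: 8GFC runs at about 8.5 Gbaud, SATA III at 6.0 Gbaud.

    def usable_mb_per_s(line_rate_gbaud):
        """10 bits on the wire carry 8 bits of payload under 8b/10b encoding."""
        payload_gbit_per_s = line_rate_gbaud * 8 / 10
        return payload_gbit_per_s * 1000 / 8   # Gbit/s -> MB/s

    print("8GFC    :", usable_mb_per_s(8.5), "MB/s")   # ~850 MB/s
    print("SATA III:", usable_mb_per_s(6.0), "MB/s")   # ~600 MB/s

That only speaks to link bandwidth, of course, not to where the article's claimed read penalty is supposed to come from.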

Dubious advantages of DRBD vs shared storage

I don't know how to properly rewrite these two points, but there is some dubious content here:

Shared storage typically DOES NOT have a single point of failure

Shared storage sold for cluster (HA) use is typically fully redundant, with two controllers and each host connected to both controllers. I have also seen two JBOD boxes used with software mirroring. I have never seen any HA setup with a SPOF; this argument is dubious IMHO. Pweltz (talk) 18:33, 20 May 2011 (UTC)[reply]

I agree with this. With multipathing on the host side and RAID on the SP side (for the actual disks), where is the downtime supposed to come from? Failure of a particular storage device leaves the RAID running degraded, and the path from that device to the host's OS is redundant, so it too can run degraded. No single component in this setup should be a point of service failure. — Preceding unsigned comment added by 152.8.99.118 (talk) 12:16, 2 May 2013 (UTC)[reply]
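
To make the "where does the downtime come from" argument concrete, here is a minimal sketch (the per-component availability figures are invented purely for illustration, and independent failures are assumed) of why a dual-controller array with host multipathing has no single point of failure: every stage of the I/O path is a redundant pair, so a stage only fails if both members of the pair fail.

    # Illustrative only: made-up availability numbers, simple independence assumptions.
    # Each stage of the I/O path (HBA, fabric path, array controller, disk group) is
    # a redundant pair; a stage fails only if BOTH members fail, and the whole path
    # fails if ANY stage fails.

    def pair_availability(a):
        """Availability of two redundant components, assuming independent failures."""
        return 1 - (1 - a) ** 2

    stages = {
        "host HBA":         0.999,
        "fabric path":      0.999,
        "array controller": 0.999,
        "RAID disk group":  0.9999,
    }

    path_availability = 1.0
    for name, a in stages.items():
        path_availability *= pair_availability(a)
        print(f"{name:16s} single={a}  redundant pair={pair_availability(a):.8f}")

    print("end-to-end availability:", round(path_availability, 8))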

Overhead also dubious

Shared storage can also be SCSI/SAS direct-attached in a two-node cluster; in that case it is as fast as it can get. I also doubt shared storage over FC would be slower. To be fair, it should be mentioned that DRBD would be slower on writes because of TCP overhead (except perhaps vs iSCSI). Pweltz (talk) 18:33, 20 May 2011 (UTC)[reply]
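
On the write-overhead point, a minimal sketch (all latency figures are assumptions, not measurements) of why synchronously replicated writes pay for a network round trip that a direct-attached disk does not:

    # Illustrative latency model with invented numbers.
    # With synchronous replication (DRBD "protocol C"-style), a write is only
    # acknowledged once the peer also has the data; the local and remote writes
    # proceed in parallel, so the ack waits for the slower of the two.

    local_write_ms = 0.5   # assumed local disk/array write latency
    network_rtt_ms = 0.2   # assumed round trip between the two nodes
    peer_write_ms  = 0.5   # assumed write latency on the peer node

    direct_attached_write = local_write_ms
    replicated_write = max(local_write_ms, network_rtt_ms + peer_write_ms)

    print("direct-attached write:          ", direct_attached_write, "ms")
    print("synchronously replicated write: ", replicated_write, "ms")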

Price, space and power

IMO the real advantage of DRBD is that you can do small HA setups that are less expensive and more efficient in terms of power. (Added a few words on that) Pweltz (talk) 18:33, 20 May 2011 (UTC)[reply]

Terminology Issues

I think this article may need to change its terminology to something a little more standard to help people compare apples to apples. This helps in assessing advantages/disadvantages, as well as in understanding the base material, by making sure we're all using the same language to describe common elements. 90% of this article seems to have real value, but if it's not easily consumed by the target audience it may render the whole article moot.

The main issue I have is with the term "shared cluster storage". It would seem that even if DRBD does take care of all high-availability needs for storage, you're still going to need automatic service relocation (otherwise, if you're concerned with high availability, what happens when the OS on the active node kernel panics, or some other non-storage-related outage occurs?). Since HA clustering would probably have to go on anyway (with DRBD mount points becoming resources it migrates on failure), it's probably better to drop "cluster" from the name given to the target of the comparison. Also, what does it mean that it's "shared"? From what I'm reading, DRBD is a RAID-1 mirror between two nodes.
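
To illustrate why the cluster manager still has to be in the picture, here is a rough sketch (the resource name r0, device, mount point, and service are hypothetical; fencing and error handling are omitted) of the steps something like Pacemaker has to drive on the surviving node when the active node dies, DRBD or no DRBD:

    # Hypothetical failover sequence driven by an HA cluster manager. DRBD only
    # replicates the block device; promoting it, mounting it, and restarting the
    # service are separate cluster resources that have to be ordered and migrated.

    import subprocess

    def fail_over(resource="r0", device="/dev/drbd0",
                  mountpoint="/srv/data", service="postgresql"):
        # 1. Promote the local DRBD replica from Secondary to Primary.
        subprocess.run(["drbdadm", "primary", resource], check=True)
        # 2. Mount the now-writable replicated device.
        subprocess.run(["mount", device, mountpoint], check=True)
        # 3. Start the service that was running on the failed node.
        subprocess.run(["systemctl", "start", service], check=True)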

From what I can tell, the counterpoint to DRBD is supposed to be LUNs presented over a Fibre Channel or iSCSI SAN. If that is the case (we would need the original authors' input on this), then I think a better term would be "LUNs presented from a SAN", "SAN-presented LUNs", or something along those lines. — Preceding unsigned comment added by 152.8.99.118 (talk) 13:09, 2 May 2013 (UTC)[reply]