Nested RAID levels

From Wikipedia, the free encyclopedia

Nested RAID levels, also known as hybrid RAID, combine two or more of the standard RAID levels (where "RAID" stands for "redundant array of independent disks") to gain performance, additional redundancy or both, as a result of combining properties of different standard RAID layouts.[1][2]

Nested RAID levels are usually numbered using a series of numbers, where the most commonly used levels use two numbers. The first number in the numeric designation denotes the lowest RAID level in the "stack", while the rightmost one denotes the highest layered RAID level; for example, RAID 50 layers the data striping of RAID 0 on top of the distributed parity of RAID 5. Nested RAID levels include RAID 01, RAID 10, RAID 100, RAID 50 and RAID 60, which all combine data striping with other RAID techniques; as a result of the layering scheme, RAID 01 and RAID 10 represent significantly different nested RAID levels.[3]

RAID 01 (RAID 0+1)

A nested RAID 01 configuration
A hybrid RAID 01 configuration

RAID 01, also called RAID 0+1, is a RAID level using a mirror of stripes, achieving both replication and sharing of data between disks.[3] The usable capacity of a RAID 01 array is the same as in a RAID 1 array made of the same drives, in which one half of the drives is used to mirror the other half: (N/2) × S_min, where N is the total number of drives and S_min is the capacity of the smallest drive in the array.[4]
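As an illustrative sketch (not part of the article), the capacity formula above can be checked in Python; the function name and the GB units are assumptions for the example:

```python
def raid01_capacity(drive_sizes_gb):
    """Usable capacity of a RAID 01 array: (N/2) * S_min,
    where N is the drive count and S_min is the smallest drive's capacity."""
    n = len(drive_sizes_gb)
    assert n >= 4 and n % 2 == 0, "RAID 01 needs an even number of drives, at least 4"
    return (n // 2) * min(drive_sizes_gb)

# Four 500 GB drives: half the raw capacity is usable.
print(raid01_capacity([500, 500, 500, 500]))  # 1000

# A smaller drive limits every drive's contribution to S_min.
print(raid01_capacity([500, 500, 400, 500]))  # 800
```

Note how mixing in one 400 GB drive reduces the whole array's usable space, since striping and mirroring both treat every drive as having the capacity of the smallest.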

RAID 03 (RAID 0+3)

A typical RAID 03 configuration

RAID 03, also called RAID 0+3 and sometimes RAID 53, is similar to RAID 01 with the exception that byte-level striping with dedicated parity is used instead of simple mirroring.[5]

RAID 10 (RAID 1+0)

A typical RAID 10 configuration

RAID 10, also called RAID 1+0 and sometimes RAID 1&0, is similar to RAID 01 with the exception that the two standard RAID levels it combines are layered in the opposite order; thus, RAID 10 is a stripe of mirrors.[3]

RAID 10, as recognized by the storage industry association and as generally implemented by RAID controllers, is a RAID 0 array of mirrors, which may be two-way or three-way mirrors,[6] and requires a minimum of four drives. However, a nonstandard definition of "RAID 10" was created for the Linux MD driver;[7] Linux "RAID 10" can be implemented with as few as two disks. Implementations supporting two disks, such as Linux RAID 10, offer a choice of layouts.[7]

More than four disks are possible in RAID 10, and these larger arrays are common in professional applications. In high-end configurations, enterprise storage experts expected PCIe- and SAS-attached storage to dominate and eventually replace interfaces designed for spinning disks, and expected those interfaces to become further integrated with Ethernet and network storage. This suggests that rarely accessed data stripes could often be located across networks, and that very large arrays using protocols such as iSCSI would become more common.[8]

According to manufacturer specifications and official independent benchmarks,[9][10][11] in most cases RAID 10 provides better throughput and latency than all other RAID levels except RAID 0 (which wins in throughput). Thus, it is the preferable RAID level for I/O-intensive applications such as database, email, and web servers, as well as for any other use requiring high disk performance.[12]

RAID 50 (RAID 5+0)

A typical RAID 50 configuration. A1, B1, etc. each represent one data block; each column represents one disk; Ap, Bp, etc. each represent parity information for one distinct RAID 5 set and may hold different values across sets (that is, Ap for A1 and A2 can differ from Ap for A3 and A4).

RAID 50, also called RAID 5+0, combines the straight block-level striping of RAID 0 with the distributed parity of RAID 5.[3] As a RAID 0 array striped across RAID 5 elements, a minimal RAID 50 configuration requires six drives. On the right is an example in which three RAID 5 sets, each built from three 120 GB drives, are striped together; each set provides 240 GB of usable space, giving 720 GB of total storage space.
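The 720 GB figure follows directly from the layering: each RAID 5 set gives up one drive's worth of capacity to parity, and RAID 0 simply adds the sets together. A small Python sketch (function names are assumptions for the example):

```python
def raid5_usable(drives_per_set, drive_gb):
    # RAID 5 loses one drive's worth of capacity to parity per set.
    return (drives_per_set - 1) * drive_gb

def raid50_usable(num_sets, drives_per_set, drive_gb):
    # RAID 0 stripes the RAID 5 sets, so usable capacities simply add up.
    return num_sets * raid5_usable(drives_per_set, drive_gb)

# Three sets of three 120 GB drives, as in the example on the right.
print(raid50_usable(3, 3, 120))  # 720
```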

One drive from each of the RAID 5 sets can fail without loss of data; for example, a RAID 50 configuration comprising three RAID 5 sets can tolerate at most three drive failures, and only if no two failed drives belong to the same set. Because the reliability of the system depends on quick replacement of the bad drive so the array can rebuild, it is common to include hot spares that can immediately start rebuilding the array upon failure. However, this does not address the issue that rebuilding puts the array under maximum strain, reading every bit, at the time when it is most vulnerable.[13][14]
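The survival rule above, at most one failed drive per RAID 5 set, can be sketched as a small check. This is an illustrative example only; the (set, drive) addressing scheme is an assumption, not something a real controller exposes this way:

```python
from collections import Counter

def raid50_survives(failed_drives):
    """failed_drives: iterable of (set_index, drive_index) pairs.
    A RAID 50 array survives as long as no RAID 5 set loses
    more than one drive."""
    failures_per_set = Counter(set_idx for set_idx, _ in failed_drives)
    return all(count <= 1 for count in failures_per_set.values())

# One failure in each of three sets: data survives.
print(raid50_survives([(0, 1), (1, 0), (2, 2)]))  # True

# Two failures in the same set: that RAID 5 set, and the array, is lost.
print(raid50_survives([(0, 1), (0, 2)]))          # False
```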

RAID 50 improves upon the performance of RAID 5, particularly during writes, and provides better fault tolerance than a single RAID level does. This level is recommended for applications that require high fault tolerance, capacity, and random-access performance. As the number of drives in a RAID set and the capacities of the drives increase, fault-recovery time increases correspondingly, because rebuilding the RAID set takes longer.[13][14]

RAID 60 (RAID 6+0)

A typical RAID 60 configuration consisting of two sets of four drives each

RAID 60, also called RAID 6+0, combines the straight block-level striping of RAID 0 with the distributed double parity of RAID 6, resulting in a RAID 0 array striped across RAID 6 elements. It requires at least eight disks.[15]

RAID 100 (RAID 10+0)

A typical RAID 100 configuration

RAID 100, sometimes also called RAID 10+0, is a stripe of RAID 10s. This is logically equivalent to a wider RAID 10 array, but is generally implemented using software RAID 0 over hardware RAID 10. Being "striped two ways", RAID 100 is described as a "plaid RAID".[16]

Comparison

The following table provides an overview of some considerations for nested RAID levels. In each case:

  • Array space efficiency is given as an expression in terms of the number of drives, n; this expression designates a fractional value between zero and one, representing the fraction of the sum of the drives' capacities that is available for use. For example, if three drives are arranged in RAID 3, this gives an array space efficiency of 1 − 1/n = 1 − 1/3 = 2/3 ≈ 67%; thus, if each drive in this example has a capacity of 250 GB, then the array has a total capacity of 750 GB but the capacity that is usable for data storage is only 500 GB.
  • Array failure rate is given as an expression in terms of the number of drives, n, and the drive failure rate, r (which is assumed identical and independent for each drive) and can be seen to be a Bernoulli trial.[citation needed] For example, if each of three drives has a failure rate of 5% over the next three years, and these drives are arranged in RAID 3, then this gives an array failure rate over the next three years of:

1 − (1 − r)^n − nr(1 − r)^(n−1) = 1 − (1 − 5%)^3 − 3 × 5% × (1 − 5%)^(3−1)
                                = 1 − 0.95^3 − 0.15 × 0.95^2
                                = 1 − 0.857375 − 0.135375
                                = 0.00725
                                ≈ 0.7%
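The arithmetic above, the probability that two or more of n drives fail when each fails independently with probability r, can be verified with a short Python sketch (the function name is an assumption for the example):

```python
def raid3_failure_rate(n, r):
    """Probability that a RAID 3 set of n drives loses data, i.e. that
    two or more drives fail, treating each drive as an independent
    Bernoulli trial with per-drive failure probability r."""
    survive_all = (1 - r) ** n                 # no drive fails
    one_failure = n * r * (1 - r) ** (n - 1)   # exactly one drive fails
    return 1 - survive_all - one_failure

# Three drives, each with a 5% failure rate over the period.
print(round(raid3_failure_rate(3, 0.05), 5))  # 0.00725
```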
Level    | Description                                                                   | Minimum number of drives[a] | Space efficiency | Fault tolerance                        | Array failure rate[b] | Read performance | Write performance
---------|-------------------------------------------------------------------------------|-----------------------------|------------------|----------------------------------------|-----------------------|------------------|------------------
RAID 01  | Block-level striping, and mirroring without parity                            | 4                           |                  |                                        |                       |                  |
RAID 03  | Block-level striping, and byte-level striping with dedicated parity           | 6                           |                  |                                        |                       |                  |
RAID 10  | Mirroring without parity, and block-level striping                            | 4                           | stripes/n        | One or more drive failures per span[c] |                       | [d]              | (n/spans)×
RAID 50  | Block-level striping with distributed parity, and block-level striping        | 6                           |                  |                                        |                       |                  |
RAID 60  | Block-level striping with double distributed parity, and block-level striping | 8                           |                  |                                        |                       |                  |
RAID 100 | Mirroring without parity, and two levels of block-level striping              | 8                           |                  |                                        |                       |                  |

Notes

  1. ^ Assumes a non-degenerate minimum number of drives
  2. ^ Assumes independent, identical rate of failure amongst drives
  3. ^ RAID 10 can lose up to m-1 drives per span, where m is the number of drives per span. Thus, a RAID 10 setup can lose up to a total of stripes × (m-1) drives.
  4. ^ Theoretical maximum, as low as (n/spans)× in practice

References

  1. ^ Delmar, Michael Graves (2003). "Data Recovery and Fault Tolerance". The Complete Guide to Networking and Network+. Cengage Learning. p. 448. ISBN 1-4018-3339-X. 
  2. ^ Mishra, S. K.; Vemulapalli, S. K.; Mohapatra, P. (1995). "Dual-Crosshatch Disk Array: A Highly Reliable Hybrid-RAID Architecture". Proceedings of the 1995 International Conference on Parallel Processing: Volume 1. CRC Press. pp. I–146ff. ISBN 0-8493-2615-X. 
  3. ^ a b c d Layton, Jeffrey B. (2011-01-06). "Intro to Nested-RAID: RAID-01 and RAID-10". Linux-Mag.com. Linux Magazine. Retrieved 2015-02-01. 
  4. ^ Kozierok, Charles M. (2004). "RAID Levels 0+1 (01) and 1+0 (10)". PCGuide.com. Retrieved 2015-02-01. 
  5. ^ Kozierok, Charles M. (17 April 2001). "RAID Levels 0+3 (03 or 53) and 3+0 (30)". PCGuide.com. 
  6. ^ Dawkins, Bill; Jones, Arnold (2006-07-28). "Common RAID Disk Data Format Specification" (PDF). SNIA.org (1.2 ed.). Storage Networking Industry Association. Archived from the original on 2009-08-24. Retrieved 2015-01-31. 
  7. ^ a b Brown, Neil (27 August 2004). "RAID10 in Linux MD driver". 
  8. ^ Cole, Arthur (24 August 2010). "SSDs: From SAS/SATA to PCIe". ITBusinessEdge.com. 
  9. ^ "Intel Rapid Storage Technology: What is RAID 10?". Intel. 16 November 2009. 
  10. ^ "IBM and HP 6-Gbps SAS RAID Controller Performance" (PDF). Demartek. October 2009. 
  11. ^ "Summary Comparison of RAID Levels". PCGuide.com. 17 April 2001. 
  12. ^ Gupta, Meeta (2002). Storage Area Network Fundamentals. Cisco Press. p. 268. ISBN 1-58705-065-X. 
  13. ^ a b "Cisco UCS Servers RAID Guide, Chapter 1: RAID Overview" (PDF). Cisco.com. Cisco Systems. pp. 1–14, 1–15. Retrieved 2015-02-01. 
  14. ^ a b Lowe, Scott (2010-07-09). "RAID 50 offers a balance of performance, storage capacity, and data integrity". TechRepublic.com. Retrieved 2015-02-01. 
  15. ^ "Which RAID Level is Right for Me: RAID 60 (Striping and striping with dual parity)". Adaptec.com. Adaptec. Retrieved 2015-02-03. 
  16. ^ McKinstry, Jim. "Server Management: Questions and Answers". SAMag.com. Archived from the original on 19 January 2008.