Non-standard RAID levels

From Wikipedia, the free encyclopedia
This article is about non-standard RAID configurations. For RAID in general, see RAID. For basic RAID configurations, see Standard RAID levels.

Although all RAID implementations differ from the specification to some extent, some companies and open-source projects have developed non-standard RAID implementations that differ substantially from the standard. Additionally, there are non-RAID drive architectures, providing configurations of multiple hard drives not referred to by RAID acronyms.

Double parity[edit]

Diagram of a RAID-DP (double parity) setup

Now part of RAID 6, double parity (sometimes known as row diagonal parity[1]) features two sets of parity checks, like traditional RAID 6. Unlike RAID 6, however, the second set is not another set of points in the over-defined polynomial which characterizes the data; rather, double parity calculates the second parity over a different group of blocks. For example, in the diagrams both RAID 5 and RAID 6 consider all A-labeled blocks to produce one or more parity blocks, but it is also straightforward to calculate parity against multiple groups of blocks: one parity over all the A blocks, and another over a permuted group of blocks.[2]


RAID-DP implements RAID 4, except with an additional disk that is used for a second parity, so it has the same failure characteristics as RAID 6.[3] The performance penalty of RAID-DP is typically under 2% when compared to a similar RAID 4 configuration.[4]
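The idea of computing parity over two different groupings of the same data can be sketched in a few lines. The following is a toy illustration (not NetApp's actual row-diagonal algorithm): blocks are arranged in a small grid, and XOR parity is kept per row and per column, so a block lost from one parity group can still be rebuilt via the other.

```python
# Toy sketch of two independent parity groupings (rows and columns);
# real row-diagonal parity uses a different diagonal grouping.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

# Four data blocks laid out as a 2x2 grid.
grid = [[b"A1", b"A2"],
        [b"B1", b"B2"]]

row_parity = [xor_blocks(row) for row in grid]        # first parity grouping
col_parity = [xor_blocks(col) for col in zip(*grid)]  # second, different grouping

# Lose both blocks of row 0: the row parity alone cannot recover them,
# but the column parities can, since each covers a different block group.
recovered_a1 = xor_blocks([col_parity[0], grid[1][0]])
recovered_a2 = xor_blocks([col_parity[1], grid[1][1]])
assert recovered_a1 == b"A1" and recovered_a2 == b"A2"
```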

RAID 5E, RAID 5EE, and RAID 6E[edit]

RAID 5E, RAID 5EE, and RAID 6E (with the added E standing for Enhanced) generally refer to variants of RAID 5 or 6 with an integrated hot-spare drive, where the spare drive is an active part of the block rotation scheme. This spreads I/O across all drives, including the spare, thus reducing the load on each drive and increasing performance. It does, however, prevent sharing the spare drive among multiple arrays, which is occasionally desirable.[5]

Intel Matrix RAID[edit]

Diagram of an Intel Matrix RAID setup
Main article: Intel Matrix RAID

Intel Matrix RAID (a feature of Intel Rapid Storage Technology) is a feature (not a RAID level) present in the ICH6R and subsequent Southbridge chipsets from Intel, accessible and configurable via the RAID BIOS setup utility. Matrix RAID supports as few as two physical disks or as many as the controller supports. The distinguishing feature of Matrix RAID is that it allows any assortment of RAID 0, 1, 5, or 10 volumes in the array, to which a controllable (and identical) portion of each disk is allocated.[6][7][8]

As such, a Matrix RAID array can improve both performance and data integrity. A practical instance of this would use a small RAID 0 (stripe) volume for the operating system, program, and paging files; a second, larger RAID 1 (mirror) volume would store critical data. Linux MD RAID is also capable of this.[6][7][8]
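A Matrix-style split can be approximated with Linux md, which the text notes is also capable of this. The following is a hedged sketch, not a Matrix RAID configuration per se; the device and partition names are placeholders, and each disk is assumed to carry two partitions of matching sizes.

```shell
# Sketch: two RAID levels sharing the same two physical disks via md.
# Assumes /dev/sda and /dev/sdb each have two partitions (hypothetical names).

# Small, fast RAID 0 volume for the OS and paging files:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# Larger RAID 1 volume on the remaining space for critical data:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```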

Linux MD RAID 10[edit]

The software RAID subsystem provided by the Linux kernel, called "md", supports the creation of both classic (nested) RAID 1+0 arrays, and non-standard RAID arrays that use a single-level RAID layout with some additional features.[9][10]

The standard "near" layout, in which each chunk is repeated n times in a k-way stripe array, is equivalent to the standard RAID 10 arrangement, but it does not require that n evenly divides k. For example, an n2 layout on two, three, and four drives would look like:[11][12]

2 drives         3 drives          4 drives
--------         ----------        --------------
A1  A1           A1  A1  A2        A1  A1  A2  A2
A2  A2           A2  A3  A3        A3  A3  A4  A4
A3  A3           A4  A4  A5        A5  A5  A6  A6
A4  A4           A5  A6  A6        A7  A7  A8  A8
..  ..           ..  ..  ..        ..  ..  ..  ..

The four-drive example is identical to a standard RAID 1+0 array, while the three-drive example is a software implementation of RAID 1E. The two-drive example is equivalent to RAID 1.[12]
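The "near" placement rule can be generated mechanically: each chunk is written n times in sequence, and the sequence wraps row by row across the k drives, with no requirement that n divide k. A small sketch (the function name and row-count parameter are illustrative, not mdadm terminology):

```python
def near_layout(k, n, rows):
    """Generate `rows` rows of a k-drive "near" layout with n copies per chunk."""
    layout, chunk, copies = [], 1, 0
    for _ in range(rows):
        row = []
        for _ in range(k):
            row.append(f"A{chunk}")
            copies += 1
            if copies == n:          # all n copies written; move to the next chunk
                chunk, copies = chunk + 1, 0
        layout.append(row)
    return layout

# Matches the three-drive n2 diagram above:
assert near_layout(3, 2, 2) == [["A1", "A1", "A2"], ["A2", "A3", "A3"]]
```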

The driver also supports a "far" layout, in which all the drives are divided into f sections. All the chunks are repeated in each section but are switched in groups (for example, in pairs). For example, f2 layouts on two-, three-, and four-drive arrays would look like this:[11][12]

2 drives             3 drives             4 drives
--------             ------------         ------------------
A1  A2               A1   A2   A3         A1   A2   A3   A4
A3  A4               A4   A5   A6         A5   A6   A7   A8
A5  A6               A7   A8   A9         A9   A10  A11  A12
..  ..               ..   ..   ..         ..   ..   ..   ..
A2  A1               A3   A1   A2         A2   A1   A4   A3
A4  A3               A6   A4   A5         A6   A5   A8   A7
A6  A5               A9   A7   A8         A10  A9   A12  A11
..  ..               ..   ..   ..         ..   ..   ..   ..

"Far" layout is designed to offer striping performance on a mirrored array; sequential reads can be striped, as in RAID 0 configurations.[13] Random reads are somewhat faster, while sequential and random writes are about as fast as on other mirrored RAID configurations. "Far" layout performs well for systems in which reads are more frequent than writes, which is the common case. For comparison, regular RAID 1 as provided by Linux software RAID does not stripe reads, but it can perform reads in parallel.[14]
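The second section in the diagrams above can be derived from the first by rotating each row one position to the right within groups of devices. A sketch, with the group size as an explicit parameter (pairs in the four-drive diagram, all drives in the two- and three-drive diagrams; md chooses the grouping internally):

```python
def far_second_section(rows, group):
    """Rotate each row right by one within consecutive groups of `group` devices."""
    out = []
    for row in rows:
        new_row = []
        for start in range(0, len(row), group):
            g = row[start:start + group]
            new_row.extend([g[-1]] + g[:-1])   # rotate the group right by one
        out.append(new_row)
    return out

first = [["A1", "A2", "A3", "A4"], ["A5", "A6", "A7", "A8"]]
# Matches the four-drive f2 diagram above (pair groups):
assert far_second_section(first, 2) == [["A2", "A1", "A4", "A3"],
                                        ["A6", "A5", "A8", "A7"]]
```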

The "near" and "far" options can be used together; in that case chunks in each section are offset by n (near) devices. For example, an n2 f2 layout stores 2×2 = 4 copies of each sector, thus requiring at least four drives:[12]

4 drives              5 drives
--------------        -------------------
A1  A1  A2  A2        A1  A1  A2  A2  A3
A3  A3  A4  A4        A3  A4  A4  A5  A5
A5  A5  A6  A6        A6  A6  A7  A7  A8
A7  A7  A8  A8        A8  A9  A9  A10 A10
..  ..  ..  ..        ..  ..  ..  ..  ..
A2  A2  A1  A1        A2  A3  A1  A1  A2
A4  A4  A3  A3        A5  A5  A3  A4  A4
A6  A6  A5  A5        A7  A8  A6  A6  A7
A8  A8  A7  A7        A10 A10 A8  A9  A9
..  ..  ..  ..        ..  ..  ..  ..  ..

The md driver also supports an "offset" layout, in which each stripe is repeated o times and offset by f (far) devices. For example, o2 layouts on two-, three-, and four-drive arrays are laid out as:[11][12]

2 drives       3 drives           4 drives
--------       ----------         ---------------
A1  A2         A1  A2  A3         A1  A2  A3  A4
A2  A1         A3  A1  A2         A4  A1  A2  A3
A3  A4         A4  A5  A6         A5  A6  A7  A8
A4  A3         A6  A4  A5         A8  A5  A6  A7
A5  A6         A7  A8  A9         A9  A10 A11 A12
A6  A5         A9  A7  A8         A12 A9  A10 A11
..  ..         ..  ..  ..         ..  ..  ..  ..

It is also possible to combine "near" and "offset" layouts (but not "far" and "offset").[12]
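The "offset" diagrams above follow a simple rule: each stripe is written o times, with copy j rotated j devices to the right, before moving on to the next stripe. A sketch (function name and stripe-count parameter are illustrative):

```python
def offset_layout(k, o, stripes):
    """Generate an o-copy "offset" layout for k drives over `stripes` stripes."""
    rows = []
    for s in range(stripes):
        stripe = [f"A{s * k + d + 1}" for d in range(k)]
        for j in range(o):
            # Copy j of the stripe, rotated j devices to the right.
            rows.append(stripe[-j:] + stripe[:-j] if j else stripe)
    return rows

# Matches the three-drive o2 diagram above:
assert offset_layout(3, 2, 2) == [["A1", "A2", "A3"], ["A3", "A1", "A2"],
                                  ["A4", "A5", "A6"], ["A6", "A4", "A5"]]
```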

In the examples above, k is the number of drives, while n#, f#, and o# are parameters to mdadm's --layout option. Linux software RAID (the Linux kernel's md driver) also supports the creation of standard RAID 0, 1, 4, 5, and 6 configurations.[15][16]

RAID 1E[edit]

Diagram of a RAID 1E setup

Some RAID 1 implementations treat arrays with more than two disks differently, creating a non-standard RAID level known as RAID 1E. In this layout, data striping is combined with mirroring, by mirroring each written stripe to one of the remaining disks in the array. The usable capacity of a RAID 1E array is 50% of the total capacity of all the drives forming the array; if drives of different sizes are used, only portions equal to the size of the smallest member are utilized on each drive.[17][18]

One of the benefits of RAID 1E over usual RAID 1 mirrored pairs is that the performance of random read operations remains above the performance of a single drive even in a degraded array.[17]

RAID-Z[edit]

The ZFS filesystem provides RAID-Z, a data/parity distribution scheme similar to RAID 5, but using dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read-modify-write sequence. RAID-Z does not require any special hardware, such as NVRAM for reliability, or write buffering for performance.[19]

As all stripes are of different sizes, RAID-Z reconstruction has to traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this.[19]

In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering "self-healing data": when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor.[19]
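The self-healing read described above can be sketched as follows. This is a toy illustration, not ZFS's implementation: each data block has a stored checksum; on a mismatch, the block is rebuilt from the surviving blocks plus XOR parity and repaired in place before being returned.

```python
import hashlib

def checksum(block):
    """Stand-in for ZFS's per-block checksum (SHA-256 here)."""
    return hashlib.sha256(block).digest()

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

# A one-stripe "array": two data blocks plus XOR parity, with checksums.
data = [b"good data.", b"more data!"]
parity = xor_blocks(data)
sums = [checksum(b) for b in data]

data[1] = b"corrupted!"               # simulate silent corruption on disk 1

def healing_read(i):
    """Read block i, verifying it and repairing it from parity on mismatch."""
    if checksum(data[i]) != sums[i]:
        others = [b for j, b in enumerate(data) if j != i]
        data[i] = xor_blocks(others + [parity])   # rebuild the bad block
    return data[i]

assert healing_read(1) == b"more data!"           # repaired and returned
```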

There are three different RAID-Z modes: RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), and RAID-Z3 (allows three disks to fail). The need for RAID-Z3 arose recently because RAID configurations with future disks (say, 6–10 TB) may take a long time to repair, the worst case being weeks. During those weeks, the rest of the disks in the RAID are stressed more because of the additional intensive repair process and might subsequently fail, too. By using RAID-Z3, the risk involved with disk replacement is reduced.[20]

Mirroring, the other ZFS RAID option, is essentially the same as RAID 1, allowing any number of disks to be mirrored.[21]

Drive Extender[edit]

Windows Home Server Drive Extender is a specialized case of JBOD RAID 1 implemented at the file system level.[22]

Microsoft announced in 2011 that Drive Extender would no longer be included as part of Windows Home Server Version 2, Windows Home Server 2011 (codename VAIL).[23] As a result, third-party vendors have moved to fill the void left by Drive Extender; competitors include Division M, the developer of DriveBender, and StableBit's DrivePool.[24][25]

BeyondRAID[edit]

BeyondRAID is not a true RAID extension, but consolidates up to 12 SATA hard drives into one pool of storage.[26] It has the advantage of supporting multiple disk sizes at once, much like JBOD, while providing redundancy for all disks and allowing a hot-swap upgrade at any time. Internally it uses a mix of techniques similar to RAID 1 and 5. Depending on the fraction of data in relation to capacity, it can survive up to three drive failures, if the "array" can be restored onto the remaining good disks before another drive fails. The amount of usable storage can be approximated by summing the capacities of the disks and subtracting the capacity of the largest disk. For example, if 500 GB, 400 GB, 200 GB, and 100 GB drives are installed, the approximate usable capacity would be 500 + 400 + 200 + 100 − 500 = 700 GB. Internally the data would be distributed in two RAID 5-like arrays and two RAID 1-like sets:

 | 100 GB | 200 GB | 400 GB | 500 GB |

                            |   x    | unusable space (100 GB)
                   |   A1   |   A1   | RAID 1 set (2× 100 GB)
                   |   B1   |   B1   | RAID 1 set (2× 100 GB)
          |   C1   |   C2   |   Cp   | RAID 5 array (3× 100 GB)
 |   D1   |   D2   |   D3   |   Dp   | RAID 5 array (4× 100 GB)

BeyondRAID offers a RAID 6-like feature and can perform hash-based compression using 160-bit SHA-1 hashes to maximize storage efficiency.[27]
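The capacity approximation described above (sum of all drive capacities minus the largest) is a one-liner; the function name is illustrative:

```python
def beyondraid_usable_gb(drives):
    """Approximate usable capacity in GB: total capacity minus the largest drive."""
    return sum(drives) - max(drives)

# The worked example from the text: 500, 400, 200, and 100 GB drives.
assert beyondraid_usable_gb([500, 400, 200, 100]) == 700
```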

unRAID[edit]

unRAID is a Linux-based operating system optimized for media file storage.[28]

Disadvantages include slower write performance than a single disk and bottlenecking when multiple drives are written to concurrently. However, unRAID supports a cache drive, which dramatically speeds up write performance; cache drive data is temporarily unprotected until unRAID moves it to the array on a schedule set within the software.[citation needed]

Advantages include lower power consumption than standard RAID levels, the ability to use multiple hard drives of differing sizes to their full capacity, and, in the event of multiple concurrent hard drive failures (exceeding the redundancy), losing only the data stored on the failed drives. By comparison, standard RAID levels that offer striping lose all of the data on the array when more hard drives fail than the redundancy can handle.[29]

CRYPTO softraid[edit]

In OpenBSD, CRYPTO is an encrypting discipline for the softraid subsystem. It encrypts data on a single chunk to provide for data confidentiality. CRYPTO does not provide redundancy.[30]
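Creating a CRYPTO softraid volume looks roughly like the following sketch; the device names are placeholders, and softraid(4) and bioctl(8) should be consulted for the actual procedure on a given release.

```shell
# Sketch: attach an encrypting (CRYPTO) softraid volume on OpenBSD.
# Assumes sd0a is a partition of type "RAID" prepared with disklabel(8).
bioctl -c C -l /dev/sd0a softraid0   # -c C selects the CRYPTO discipline
```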

References[edit]

  1. ^ Peter Corbett, Bob English, Atul Goel, Tomislav Grcanac, Steven Kleiman, James Leong, and Sunitha Sankar (2004). "Row-Diagonal Parity for Double Disk Failure Correction" (PDF). USENIX Association. Archived (PDF) from the original on 2013-11-22. Retrieved 2013-11-22. 
  2. ^ Patrick Schmid (2007-08-07). "RAID 6: Stripe Set With Double Redundancy - RAID Scaling Charts, Part 2". Retrieved 2014-01-15. 
  3. ^ White, Jay; Lueth, Chris; Bell, Jonathan (March 2003). "RAID-DP: NetApp Implementation of Double-Parity RAID for Data Protection" (PDF). Network Appliance. Retrieved 2014-06-07. 
  4. ^ White, Jay; Alvarez, Carlos (October 2011). "Back to Basics: RAID-DP | NetApp Community". NetApp. Retrieved 2014-08-25. 
  5. ^ "Non-standard RAID levels". Retrieved 2013-12-15. 
  6. ^ a b "Intel's Matrix RAID Explored". The Tech Report. 2005-03-09. Retrieved 2014-04-02. 
  7. ^ a b "Setting Up RAID Using Intel Matrix Storage Technology". Hewlett Packard. Retrieved 2014-04-02. 
  8. ^ a b "Intel Matrix Storage Technology". Intel. 2011-11-05. Retrieved 2014-04-02. 
  9. ^ "Creating Software RAID 10 Devices". SUSE. Retrieved 11 May 2016. 
  10. ^ "Nested RAID Levels". Arch Linux. Retrieved 11 May 2016. 
  11. ^ a b c "Creating a Complex RAID 10". SUSE. Retrieved 11 May 2016. 
  12. ^ a b c d e f "Linux Software RAID 10 Layouts Performance: Near, Far, and Offset Benchmark Analysis". 2012-08-28. Retrieved 2014-03-08. 
  13. ^ Jon Nelson (2008-07-10). "RAID5,6 and 10 Benchmarks on". Retrieved 2014-01-01. 
  14. ^ "Performance, Tools & General Bone-Headed Questions". Retrieved 2014-01-01. 
  15. ^ "mdadm(8): manage MD devices aka Software RAID - Linux man page". Retrieved 2014-03-08. 
  16. ^ "md(4): Multiple Device driver aka Software RAID - Linux man page". Retrieved 2014-03-08. 
  17. ^ a b "Which RAID Level is Right for Me?: RAID 1E (Striped Mirroring)". Adaptec. Retrieved 2014-01-02. 
  18. ^ "LSI 6 Gb/s Serial Attached SCSI (SAS) Integrated RAID: A Product Brief" (PDF). LSI Corporation. 2009. Archived from the original (PDF) on 2011-12-20. Retrieved 2015-01-02. 
  19. ^ a b c Bonwick, Jeff (2005-11-17). "RAID-Z". Jeff Bonwick's Blog. Oracle Blogs. Retrieved 2015-02-01. 
  20. ^ "Why RAID 6 stops working in 2019". ZDNet. February 22, 2010. Retrieved October 26, 2014. 
  21. ^ "Actually it's a n-way mirror". 2013-09-04. Retrieved 2013-11-19. 
  22. ^ Separate from Windows' Logical Disk Manager
  23. ^ "MS drops drive pooling from Windows Home Server". 
  24. ^ "Drive Bender Public Release Arriving This Week". We Got Served. Retrieved 2014-01-15. 
  25. ^ "StableBit DrivePool 2 Year Review". Home Media Tech. 
  26. ^ Data Robotics, Inc. implements BeyondRaid in their Drobostorage device.
  27. ^ Detailed technical information about BeyondRaid, including how it handles adding and removing drives, is in U.S. patent application US 20070266037. 
  28. ^ "What is unRAID?". Lime Technology. 2013-10-17. Retrieved 2014-01-15. 
  29. ^ "LimeTech – Technology". Lime Technology. 2013-10-17. Retrieved 2014-02-09. 
  30. ^ "Manual Pages: softraid(4)". 2013-10-31. Retrieved 2014-01-15.