Although all RAID implementations differ from the specification to some extent, some companies and open-source projects have developed non-standard RAID implementations that differ substantially from the standard. Additionally, there are non-RAID drive architectures, providing configurations of multiple hard drives not referred to by RAID acronyms.
Row diagonal parity is a scheme where one dedicated disk of parity is in a horizontal "row" like in RAID 4, but the other dedicated parity is calculated from blocks permuted ("diagonal") like in RAID 5 and 6.[1] Alternative terms for "row" and "diagonal" include "dedicated" and "distributed".[2] Invented by NetApp, it is offered as RAID-DP in their ONTAP systems.[3] The technique can be considered RAID 6 in the broad SNIA definition[4] and has the same failure characteristics as RAID 6. The performance penalty of RAID-DP is typically under 2% when compared to a similar RAID 4 configuration.[5]
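The row-plus-diagonal idea can be illustrated with a short Python sketch. The fragment below is a simplified illustration only: it XORs single-byte "blocks" across rows (as in RAID 4) and across wrapped diagonals of a small block matrix. The block sizes, disk/stripe counts, and diagonal rule are assumptions made for brevity; NetApp's actual RAID-DP construction differs in detail.

from functools import reduce

def xor(blocks):
    # XOR a list of equally sized byte blocks together.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical 4 data disks and 4 stripes; data[disk][stripe] is one block.
data = [[bytes([16 * d + s]) for s in range(4)] for d in range(4)]
k = len(data)  # number of data disks

# "Row" parity, as in RAID 4: XOR of the blocks at the same stripe on every data disk.
row_parity = [xor([data[d][s] for d in range(k)]) for s in range(4)]

# "Diagonal" parity: XOR of blocks taken along a wrapped diagonal, i.e. the
# blocks whose (disk + stripe) mod k is the same.
diag_parity = [xor([data[d][s] for d in range(k) for s in range(4)
                    if (d + s) % k == diag]) for diag in range(k)]

print(row_parity, diag_parity)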
RAID 5E, RAID 5EE, and RAID 6E (with the added E standing for Enhanced) generally refer to variants of RAID 5 or 6 with an integrated hot-spare drive, where the spare drive is an active part of the block rotation scheme. This spreads I/O across all drives, including the spare, thus reducing the load on each drive and increasing performance. It does, however, prevent sharing the spare drive among multiple arrays, which is occasionally desirable.[6]
Intel Matrix RAID (a feature of Intel Rapid Storage Technology) is not a RAID level but a feature present in the ICH6R and subsequent Southbridge chipsets from Intel, accessible and configurable via the RAID BIOS setup utility. Matrix RAID supports as few as two physical disks or as many as the controller supports. The distinguishing feature of Matrix RAID is that it allows any assortment of RAID 0, 1, 5, or 10 volumes in the array, to which a controllable (and identical) portion of each disk is allocated.[7][8][9]
As such, a Matrix RAID array can improve both performance and data integrity. A practical instance of this would use a small RAID 0 (stripe) volume for the operating system, program, and paging files, and a second, larger RAID 1 (mirror) volume to store critical data. Linux MD RAID is also capable of this.[7][8][9]
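As a rough illustration of how such a split affects usable capacity, the following Python sketch allocates an identical portion of each of two disks to a striped volume and the remainder to a mirrored volume. The disk and portion sizes are hypothetical, and the sketch models only the capacity arithmetic, not Intel's or Linux MD's configuration interface.

# Two identical disks, sizes in GB; figures are hypothetical.
disk_size = 1000
raid0_portion = 200                 # identical slice of each disk for the striped volume
raid1_portion = disk_size - raid0_portion

# RAID 0 across two slices: capacities add, no redundancy.
raid0_usable = 2 * raid0_portion    # 400 GB for OS, programs, and paging files

# RAID 1 across the remaining slices: capacity of a single slice, mirrored.
raid1_usable = raid1_portion        # 800 GB of mirrored space for critical data

print(raid0_usable, raid1_usable)   # 400 800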
The software RAID subsystem provided by the Linux kernel, called md, supports the creation of both classic (nested) RAID 1+0 arrays and non-standard RAID arrays that use a single-level RAID layout with some additional features.[10][11]
The standard "near" layout, in which each chunk is repeatedn times in ak-way stripe array, is equivalent to the standard RAID 10 arrangement, but it does not require thatn evenly dividesk. For example, ann2 layout on two, three, and four drives would look like:[12][13]
2 drives    3 drives      4 drives
--------    ----------    --------------
A1  A1      A1  A1  A2    A1  A1  A2  A2
A2  A2      A2  A3  A3    A3  A3  A4  A4
A3  A3      A4  A4  A5    A5  A5  A6  A6
A4  A4      A5  A6  A6    A7  A7  A8  A8
..  ..      ..  ..  ..    ..  ..  ..  ..
The four-drive example is identical to a standard RAID 1+0 array, while the three-drive example is a software implementation of RAID 1E. The two-drive example is equivalent to RAID 1.[13]
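The "near" placement rule can be reproduced with a few lines of Python. This is an informal sketch of the chunk numbering only (the function name and parameters are invented for illustration; it is not the md driver's code):

def near_layout(drives, copies, rows):
    # Return rows of chunk labels following the "near" placement rule.
    layout, pos = [], 0
    for _ in range(rows):
        # Each chunk is written 'copies' times to consecutive device slots,
        # filling the devices row by row and wrapping as needed.
        row = ["A%d" % (p // copies + 1) for p in range(pos, pos + drives)]
        layout.append(row)
        pos += drives
    return layout

for k in (2, 3, 4):
    print(k, "drives:", near_layout(k, 2, 4))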
The driver also supports a "far" layout, in which all the drives are divided into f sections. All the chunks are repeated in each section but are switched in groups (for example, in pairs). For example, f2 layouts on two-, three-, and four-drive arrays would look like this:[12][13]
2 drives    3 drives      4 drives
--------    ----------    ------------------
A1  A2      A1  A2  A3    A1   A2   A3   A4
A3  A4      A4  A5  A6    A5   A6   A7   A8
A5  A6      A7  A8  A9    A9   A10  A11  A12
..  ..      ..  ..  ..    ..   ..   ..   ..
A2  A1      A3  A1  A2    A2   A1   A4   A3
A4  A3      A6  A4  A5    A6   A5   A8   A7
A6  A5      A9  A7  A8    A10  A9   A12  A11
..  ..      ..  ..  ..    ..   ..   ..   ..
"Far" layout is designed for offering striping performance on a mirrored array; sequential reads can be striped, as in RAID 0 configurations.[14] Random reads are somewhat faster, while sequential and random writes offer about equal speed to other mirrored RAID configurations. "Far" layout performs well for systems in which reads are more frequent than writes, which is a common case. For a comparison, regular RAID 1 as provided byLinux software RAID, does not stripe reads, but can perform reads in parallel.[15]
The md driver also supports an "offset" layout, in which each stripe is repeated o times and offset by f (far) devices. For example, o2 layouts on two-, three-, and four-drive arrays are laid out as:[12][13]
2 drives    3 drives      4 drives
--------    ----------    ---------------
A1  A2      A1  A2  A3    A1   A2   A3   A4
A2  A1      A3  A1  A2    A4   A1   A2   A3
A3  A4      A4  A5  A6    A5   A6   A7   A8
A4  A3      A6  A4  A5    A8   A5   A6   A7
A5  A6      A7  A8  A9    A9   A10  A11  A12
A6  A5      A9  A7  A8    A12  A9   A10  A11
..  ..      ..  ..  ..    ..   ..   ..   ..
It is also possible to combine "near" and "offset" layouts (but not "far" and "offset").[13]
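In the o2 diagram above, each stripe is immediately followed by a copy of itself rotated by one device. A minimal sketch reproducing that numbering (again an illustration of the pattern shown above, with invented names, not the driver's code):

def offset_layout(drives, copies, stripes):
    # "Offset" layout: each stripe is repeated, rotated one device further each time.
    layout = []
    for s in range(stripes):
        stripe = ["A%d" % (s * drives + d + 1) for d in range(drives)]
        for c in range(copies):
            # Rotate the stripe right by c positions for the c-th copy.
            layout.append(stripe[drives - c:] + stripe[:drives - c])
    return layout

for k in (2, 3, 4):
    print(k, "drives:", offset_layout(k, 2, 3))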
In the examples above, k is the number of drives, while n#, f#, and o# are given as parameters to mdadm's --layout option. Linux software RAID (the Linux kernel's md driver) also supports creation of standard RAID 0, 1, 4, 5, and 6 configurations.[16][17]
Some RAID 1 implementations treat arrays with more than two disks differently, creating a non-standard RAID level known as RAID 1E. In this layout, data striping is combined with mirroring, by mirroring each written stripe to one of the remaining disks in the array. Usable capacity of a RAID 1E array is 50% of the total capacity of all drives forming the array; if drives of different sizes are used, only the portions equal to the size of the smallest member are utilized on each drive.[18][19]
One of the benefits of RAID 1E over usual RAID 1 mirrored pairs is that the performance of random read operations remains above the performance of a single drive even in a degraded array.[18]
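The capacity rule can be stated directly: with mixed drive sizes only the smallest member's size counts on each drive, and mirroring halves the result. A brief sketch with hypothetical sizes:

def raid1e_usable(drive_sizes_gb):
    # Usable capacity of a RAID 1E array: half of (smallest drive × number of drives).
    return len(drive_sizes_gb) * min(drive_sizes_gb) / 2

print(raid1e_usable([500, 500, 500]))   # 750.0 GB on three equal drives
print(raid1e_usable([500, 400, 300]))   # 450.0 GB; only 300 GB of each drive is used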
The ZFS filesystem provides RAID-Z, a data/parity distribution scheme similar to RAID 5, but using dynamic stripe width: every block is its own RAID stripe, regardless of block size, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read–modify–write sequence. RAID-Z does not require any special hardware, such as NVRAM for reliability, or write buffering for performance.[20]
Given the dynamic nature of RAID-Z's stripe width, RAID-Z reconstruction must traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this.[20]
In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering "self-healing data": when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor.[20]
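The self-healing read described above can be sketched for the single-parity case: if the reassembled data fails its checksum, each data disk's contribution is in turn replaced by a reconstruction from parity until the checksum matches. The Python below is a schematic of that idea; the XOR parity, SHA-256 stand-in checksum, in-memory "disks", and function names are all assumptions for illustration, and ZFS's actual code differs.

from functools import reduce
import hashlib

def xor(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def checksum(data):
    return hashlib.sha256(data).digest()      # stand-in for the block checksum

def self_healing_read(disks, parity, expected):
    # Return the stripe's data, reconstructing one bad disk from parity if needed.
    data = b"".join(disks)
    if checksum(data) == expected:
        return data
    # Checksum mismatch: assume each disk in turn returned bad data and rebuild
    # its block from the parity and the remaining disks.
    for bad in range(len(disks)):
        rebuilt = xor([parity] + [d for i, d in enumerate(disks) if i != bad])
        candidate = b"".join(rebuilt if i == bad else d for i, d in enumerate(disks))
        if checksum(candidate) == expected:
            disks[bad] = rebuilt              # "self-healing": repair the damaged copy
            return candidate
    raise IOError("unrecoverable corruption")

# Hypothetical three-disk stripe plus XOR parity.
stripe = [b"aa", b"bb", b"cc"]
par = xor(stripe)
good = checksum(b"".join(stripe))
stripe[1] = b"XX"                             # silent corruption on the second disk
print(self_healing_read(stripe, par, good))   # b'aabbcc'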
There are five different RAID-Z modes: RAID-Z0 (similar to RAID 0, offers no redundancy), RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), RAID-Z3 (a RAID 7[a] configuration, allows three disks to fail), and mirror (similar to RAID 1, allows all but one of the disks to fail).[22]
Windows Home Server Drive Extender is a specialized case of JBOD RAID 1 implemented at the file system level.[23]
Microsoft announced in 2011 that Drive Extender would no longer be included as part of Windows Home Server Version 2, Windows Home Server 2011 (codename VAIL).[24] As a result, third-party vendors have moved to fill the void left by Drive Extender; competitors include Division M, the developers of Drive Bender, and StableBit's DrivePool.[25][26]
BeyondRAID is not a true RAID extension, but consolidates up to 12 SATA hard drives into one pool of storage.[27] It has the advantage of supporting multiple disk sizes at once, much like JBOD, while providing redundancy for all disks and allowing a hot-swap upgrade at any time. Internally it uses a mix of techniques similar to RAID 1 and 5. Depending on the fraction of data in relation to capacity, it can survive up to three drive failures,[citation needed] if the "array" can be restored onto the remaining good disks before another drive fails. The amount of usable storage can be approximated by summing the capacities of the disks and subtracting the capacity of the largest disk. For example, if 500 GB, 400 GB, 200 GB, and 100 GB drives were installed, the approximate usable capacity would be 500 + 400 + 200 + 100 − 500 = 700 GB. Internally the data would be distributed as two RAID 5-like arrays and two RAID 1-like sets:
Drives
| 100 GB | | 200 GB | | 400 GB | | 500 GB |
                                 ----------
                                 |   x    |  unusable space (100 GB)
                                 ----------
                      ---------- ----------
                      |   A1   | |   A1   |  RAID 1 set (2× 100 GB)
                      ---------- ----------
                      ---------- ----------
                      |   B1   | |   B1   |  RAID 1 set (2× 100 GB)
                      ---------- ----------
           ---------- ---------- ----------
           |   C1   | |   C2   | |   Cp   |  RAID 5 array (3× 100 GB)
           ---------- ---------- ----------
---------- ---------- ---------- ----------
|   D1   | |   D2   | |   D3   | |   Dp   |  RAID 5 array (4× 100 GB)
---------- ---------- ---------- ----------
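The capacity approximation used above is simply the sum of the drive sizes minus the largest drive. A short sketch of that arithmetic (the helper name is hypothetical, not part of Drobo's firmware):

def beyondraid_usable(sizes_gb):
    # Approximate usable capacity: total capacity minus the largest drive.
    return sum(sizes_gb) - max(sizes_gb)

print(beyondraid_usable([500, 400, 200, 100]))   # 700 GB, matching the example above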
BeyondRAID offers a RAID 6-like feature and can perform hash-based compression using 160-bit SHA-1 hashes to maximize storage efficiency.[28]
Unraid is a proprietary Linux-based operating system optimized for media file storage.[29]
Unraid does not provide details about its storage technology, but some[who?] say its parity array is a rewrite of the mdadm module.
Disadvantages include closed-source code, high price,[citation needed] slower write performance than a single disk,[citation needed] and bottlenecks when multiple drives are written to concurrently. However, Unraid supports a cache pool, which can dramatically speed up write performance. Cache pool data can be temporarily protected using Btrfs RAID 1 until Unraid moves it to the array based on a schedule set within the software.[citation needed]
Advantages include lower power consumption than standard RAID levels and the ability to use multiple hard drives of differing sizes to their full capacity. Additionally, in the event of multiple concurrent hard drive failures exceeding the redundancy, only the data stored on the failed hard drives is lost; with standard striped RAID levels, all data on the array is lost when more hard drives fail than the redundancy can handle.[30]
In OpenBSD, CRYPTO is an encrypting discipline for the softraid subsystem. It encrypts data on a single chunk to provide for data confidentiality. CRYPTO does not provide redundancy.[31] RAID 1C provides both redundancy and encryption.[31]
Some filesystems, such as Btrfs,[32] and ZFS/OpenZFS (with the per-dataset copies=1|2|3 property),[33] support creating multiple copies of the same data on a single drive or disk pool, protecting from individual bad sectors, but not from large numbers of bad sectors or complete drive failure. This allows some of the benefits of RAID on computers that can only accept a single drive, such as laptops.
Declustered RAID allows for arbitrarily sized disk arrays while reducing the overhead to clients when recovering from disk failures. It uniformly spreads or declusters user data, redundancy information, and spare space across all the disks of a declustered array. Under traditional RAID, an entire disk storage system of, say, 100 disks would be split into multiple arrays each of, say, 10 disks. By contrast, under declustered RAID, the entire storage system is used to make one array. Every data item is written twice, as in mirroring, but logically adjacent data and copies are spread arbitrarily. When a disk fails, erased data is rebuilt using all the operational disks in the array, the bandwidth of which is greater than that of the fewer disks of a conventional RAID group. Furthermore, if an additional disk fault occurs during a rebuild, the number of impacted tracks requiring repair is markedly less than the previous failure and less than the constant rebuild overhead of a conventional array. The decrease in declustered rebuild impact and client overhead can be a factor of three to four times less than a conventional RAID. File system performance becomes less dependent upon the speed of any single rebuilding storage array.[34]
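The placement idea can be illustrated in miniature with the Python sketch below: each chunk and its mirror copy are scattered over all disks of the pool, so a rebuild after a failure reads a little from nearly every surviving disk rather than everything from one mirror partner. The pool size, chunk count, random placement policy, and function names are illustrative assumptions, not any vendor's algorithm.

import random
from collections import Counter

def decluster(num_disks, num_chunks, seed=0):
    # Place each chunk and its copy on two distinct, randomly chosen disks.
    rng = random.Random(seed)
    return [tuple(rng.sample(range(num_disks), 2)) for _ in range(num_chunks)]

placement = decluster(num_disks=100, num_chunks=10000)
failed = 17
# Chunks that lost one copy: their surviving copies are spread over nearly all disks,
# so rebuild reads (and writes to spare space) involve the whole pool in parallel.
survivors = Counter(a if b == failed else b
                    for a, b in placement if failed in (a, b))
print(len(survivors), "disks participate in the rebuild of disk", failed)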
Dynamic disk pooling (DDP), also known as D-RAID, maintains performance even when up to two drives fail simultaneously.[35] DDP is a high-performance type of declustered RAID.[36]