RAID

RAID (/reɪd/; redundant array of inexpensive disks or redundant array of independent disks)[1][2] is
a data storage virtualization technology that combines multiple physical data storage components into one
or more logical units for the purposes of data redundancy, performance improvement, or both. This is in
contrast to the previous concept of highly reliable mainframe disk drives known as single large expensive
disk (SLED).[3][1]

Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the
required level of redundancy and performance. The different schemes, or data distribution layouts, are
named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme, or
RAID level, provides a different balance among the key goals: reliability, availability, performance, and
capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as
well as against failures of whole physical drives.

History
The term "RAID" was invented by David Patterson, Garth Gibson, and Randy Katz at the University of
California, Berkeley in 1987. In their June 1988 paper "A Case for Redundant Arrays of Inexpensive
Disks (RAID)", presented at the SIGMOD Conference, they argued that the top-performing mainframe
disk drives of the time could be beaten on performance by an array of the inexpensive drives that had
been developed for the growing personal computer market. Although failures would rise in proportion to
the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of
any large single drive.[4]

Although not yet using that terminology, the technologies of the five levels of RAID named in the June
1988 paper were used in various products prior to the paper's publication,[3] including the following:

Mirroring (RAID 1) was well established in the 1970s including, for example, Tandem
NonStop Systems.
In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named
RAID 4.[5]
Around 1983, DEC began shipping subsystem mirrored RA8X disk drives (now known as
RAID 1) as part of its HSC50 subsystem.[6]
In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named
RAID 5.[7]
Around 1988, the Thinking Machines' DataVault used error correction codes (now known as
RAID 2) in an array of disk drives.[8] A similar approach was used in the early 1960s on the
IBM 353.[9][10]
Industry manufacturers later redefined the RAID acronym to stand for "redundant array of independent
disks".[2][11][12][13]
Overview
Many RAID levels employ an error protection scheme called "parity", a widely used method in
information technology to provide fault tolerance in a given set of data. Most use simple XOR, but
RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois
field or Reed–Solomon error correction.[14]
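
As an illustration of how XOR parity allows reconstruction, the short Python sketch below (not taken from the cited sources; the block contents and the three-data-plus-one-parity layout are arbitrary assumptions) rebuilds a lost block from the surviving blocks and the parity block:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical stripe: three data blocks plus one XOR parity block (as in RAID 4/5).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)                       # parity = D0 XOR D1 XOR D2

# Simulate losing one drive's block and rebuilding it from the survivors plus parity.
lost = 1
survivors = [blk for i, blk in enumerate(data) if i != lost]
rebuilt = xor_blocks(survivors + [parity])

assert rebuilt == data[lost]                    # XOR parity recovers the missing block
```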

RAID can also provide data security with solid-state drives (SSDs) without the expense of an all-SSD
system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to
provide a significant speed advantage, an appropriate controller is needed that uses the fast SSD for all
read operations. Adaptec calls this "hybrid RAID".[15]

Standard levels
Originally, there were five standard levels of RAID, but many
variations have evolved, including several nested levels and many
non-standard levels (mostly proprietary). RAID levels and their
associated data formats are standardized by the Storage
Networking Industry Association (SNIA) in the Common RAID
Disk Drive Format (DDF) standard:[16][17]

[Image: Storage servers with 24 hard disk drives each and built-in hardware RAID controllers supporting various RAID levels]

RAID 0 consists of block-level striping, but no mirroring or parity. Assuming n fully-used
drives of equal capacity, the capacity of a RAID 0 volume matches that of a spanned
volume: the total of the n drives' capacities. However, because striping distributes the
contents of each file across all drives, the failure of any drive renders the entire RAID 0
volume inaccessible. Typically, all data is lost, and files cannot be recovered without a
backup copy.

By contrast, a spanned volume, which stores files sequentially, loses data stored on the
failed drive but preserves data stored on the remaining drives. However, recovering the
files after drive failure can be challenging and often depends on the specifics of the
filesystem. Regardless, files that span onto or off a failed drive will be permanently lost.
On the other hand, the benefit of RAID 0 is that the throughput of read and write
operations to any file is multiplied by the number of drives because, unlike spanned
volumes, reads and writes are performed concurrently.[11] The cost is increased
vulnerability to drive failures—since any drive in a RAID 0 setup failing causes the entire
volume to be lost, the average failure rate of the volume rises with the number of attached
drives. This makes RAID 0 a poor choice for scenarios requiring data reliability or fault
tolerance.

RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two
or more drives, thereby producing a "mirrored set" of drives. Thus, any read request can be
serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be
serviced by the drive that accesses the data first (depending on its seek time and rotational
latency), improving performance. Sustained read throughput, if the controller or software is
optimized for it, approaches the sum of throughputs of every drive in the set, just as for
RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest
drive. Write throughput is always slower because every drive must be updated, and the
slowest drive limits the write performance. The array continues to operate as long as at least
one drive is functioning.[11]
RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle
rotation is synchronized and data is striped such that each sequential bit is on a different
drive. Hamming-code parity is calculated across corresponding bits and stored on at least
one parity drive.[11] This level is of historical significance only; although it was used on some
early machines (for example, the Thinking Machines CM-2),[18] as of 2014 it is not used by
any commercially available system.[19]
RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is
synchronized and data is striped such that each sequential byte is on a different drive. Parity
is calculated across corresponding bytes and stored on a dedicated parity drive.[11] Although
implementations exist,[20] RAID 3 is not commonly used in practice.
RAID 4 consists of block-level striping with dedicated parity. This level was previously used
by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4
with two parity disks, called RAID-DP.[21] The main advantage of RAID 4 over RAID 2 and 3
is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole
group of data drives, while in RAID 4 one I/O read operation does not have to spread across
all data drives. As a result, more I/O operations can be executed in parallel, improving the
performance of small transfers.[1]
RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity
information is distributed among the drives, requiring all drives but one to be present to
operate. Upon failure of a single drive, subsequent reads can be calculated from the
distributed parity such that no data is lost. RAID 5 requires at least three disks.[11] Like all
single-parity concepts, large RAID 5 implementations are susceptible to system failures
because of trends regarding array rebuild time and the chance of drive failure during rebuild
(see "Increasing rebuild time and failure probability" section, below).[22] Rebuilding an array
requires reading all data from all disks, opening a chance for a second drive failure and the
loss of the entire array.
RAID 6 consists of block-level striping with double distributed parity. Double parity provides
fault tolerance up to two failed drives. This makes larger RAID groups more practical,
especially for high-availability systems, as large-capacity drives take longer to restore.
RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in
reduced performance of the entire array until the failed drive has been replaced.[11] With a
RAID 6 array, using drives from multiple sources and manufacturers, it is possible to
mitigate most of the problems associated with RAID 5. The larger the drive capacities and
the larger the array size, the more important it becomes to choose RAID 6 instead of
RAID 5.[23] RAID 10 also minimizes these problems.[24]
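
The usable capacity implied by the level descriptions above can be summarized in a few lines; the following Python sketch is an approximation only (it assumes n identical drives of capacity c, models RAID 1 as one n-way mirrored set, and ignores metadata overhead):

```python
def usable_capacity(level, n, c):
    """Approximate usable capacity of n identical drives of capacity c.

    Ignores metadata overhead; RAID 1 is modeled as one n-way mirrored set.
    """
    if level == 0:                # striping only, no redundancy
        return n * c
    if level == 1:                # mirroring
        return c
    if level in (3, 4, 5):        # one drive's worth of parity
        return (n - 1) * c
    if level == 6:                # two drives' worth of parity
        return (n - 2) * c
    raise ValueError("level not covered by this sketch")

# Example: eight 4 TB drives.
for level in (0, 1, 5, 6):
    print(f"RAID {level}: {usable_capacity(level, 8, 4)} TB usable")
```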

Nested (hybrid) RAID


In what was originally termed hybrid RAID,[25] many storage controllers allow RAID levels to be nested.
The elements of a RAID may be either individual drives or arrays themselves. Arrays are rarely nested
more than one level deep.

The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and
RAID 5+0), most vendors omit the "+" (yielding RAID 10 and RAID 50, respectively).

RAID 0+1: creates two stripes and mirrors them. If a single drive fails, one side of the
mirror has failed, and at that point the array is effectively running as RAID 0 with no
redundancy. Significantly higher risk is introduced during a rebuild than with RAID 1+0,
as all the data from all the drives in the remaining stripe has to be read rather than just
from one drive, increasing the chance of an unrecoverable read error (URE) and significantly
extending the rebuild window (see the comparison sketch after this list).[26][27][28]
RAID 1+0: (see: RAID 10) creates a striped set from a series of mirrored drives. The array
can sustain multiple drive losses so long as no mirror loses all its drives.[29]
JBOD RAID N+N: With JBOD (just a bunch of disks), it is possible to concatenate disks, but
also volumes such as RAID sets. With larger drive capacities, write delay and rebuilding
time increase dramatically (especially, as described above, with RAID 5 and RAID 6). By
splitting a larger RAID N set into smaller subsets and concatenating them with linear JBOD,
write and rebuilding time will be reduced. If a hardware RAID controller is not capable of
nesting linear JBOD with RAID N, then linear JBOD can be achieved with OS-level software
RAID in combination with separate RAID N subset volumes created within one, or more,
hardware RAID controller(s). Besides a drastic speed increase, this also provides a
substantial advantage: a linear JBOD can be started with a small set of disks and the total
set can be expanded later with disks of different sizes (over time, larger disks become
available on the market). There is a further advantage in the form of disaster recovery: if
one RAID N subset fails, the data on the other RAID N subsets is not lost, reducing restore
time.
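
The difference in fault tolerance between RAID 0+1 and RAID 1+0 noted above can be checked with a toy model; the Python sketch below assumes four drives grouped into the pairs (0, 1) and (2, 3) and simply counts which two-drive failure combinations each layout survives (rebuild behavior is ignored):

```python
from itertools import combinations

# Four drives arranged as two pairs: (0, 1) and (2, 3).
PAIRS = [(0, 1), (2, 3)]

def raid10_survives(failed):
    # Stripe of mirrors: fails only if some mirror pair loses both of its drives.
    return all(not set(pair) <= failed for pair in PAIRS)

def raid01_survives(failed):
    # Mirror of stripes: survives only if at least one whole stripe has no failed drive.
    return any(not (set(pair) & failed) for pair in PAIRS)

for name, survives in (("RAID 1+0", raid10_survives), ("RAID 0+1", raid01_survives)):
    ok = sum(survives(set(f)) for f in combinations(range(4), 2))
    print(f"{name} survives {ok} of 6 possible two-drive failures")
```

Under this model RAID 1+0 survives four of the six possible two-drive failures, while RAID 0+1 survives only the two cases where both failed drives belong to the same stripe.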

Non-standard levels
Many configurations other than the basic numbered RAID levels are possible, and many companies,
organizations, and groups have created their own non-standard configurations, in many cases designed to
meet the specialized needs of a small niche group. Such configurations include the following:

Linux MD RAID 10 provides a general RAID driver that in its "near" layout defaults to a
standard RAID 1 with two drives, and a standard RAID 1+0 with four drives; however, it can
include any number of drives, including odd numbers. With its "far" layout, MD RAID 10 can
run both striped and mirrored, even with only two drives in f2 layout; this runs mirroring with
striped reads, giving the read performance of RAID 0. Regular RAID 1, as provided by Linux
software RAID, does not stripe reads, but can perform reads in parallel.[29][30][31]
Hadoop has a RAID system that generates a parity file by xor-ing a stripe of blocks in a
single HDFS file.[32]
BeeGFS, the parallel file system, has internal striping (comparable to file-based RAID0) and
replication (comparable to file-based RAID10) options to aggregate throughput and capacity
of multiple servers and is typically based on top of an underlying RAID to make disk failures
transparent.
Declustered RAID scatters dual (or more) copies of the data across all disks (possibly
hundreds) in a storage subsystem, while holding back enough spare capacity to allow for a
few disks to fail. The scattering is based on algorithms which give the appearance of
arbitrariness. When one or more disks fail the missing copies are rebuilt into that spare
capacity, again arbitrarily. Because the rebuild is done from and to all the remaining disks, it
operates much faster than with traditional RAID, reducing the overall impact on clients of the
storage system.

Implementations
The distribution of data across multiple drives can be managed either by dedicated computer hardware or
by software. A software solution may be part of the operating system, part of the firmware and drivers
supplied with a standard drive controller (so-called "hardware-assisted software RAID"), or it may reside
entirely within the hardware RAID controller.

Hardware-based
Hardware RAID controllers can be configured through card BIOS or Option ROM before an operating
system is booted, and after the operating system is booted, proprietary configuration utilities are available
from the manufacturer of each controller. Unlike the network interface controllers for Ethernet, which can
usually be configured and serviced entirely through the common operating system paradigms like
ifconfig in Unix, without a need for any third-party tools, each manufacturer of each RAID controller
usually provides its own proprietary software tooling for each operating system it chooses to
support, which encourages vendor lock-in and contributes to reliability issues.[33]

For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are
required to enable the Linux compatibility layer and use the Linux tooling from Adaptec,[34] potentially
compromising the stability, reliability and security of their setup, especially when taking the long-term
view.[33]

Some other operating systems have implemented their own generic frameworks for interfacing with any
RAID controller, and provide tools for monitoring RAID volume status, as well as facilitation of drive
identification through LED blinking, alarm management and hot spare disk designations from within the
operating system without having to reboot into card BIOS. For example, this was the approach taken by
OpenBSD in 2005 with its bio(4) pseudo-device and the bioctl utility, which provide volume status, and
allow LED/alarm/hotspare control, as well as the sensors (including the drive sensor) for health
monitoring;[35] this approach has subsequently been adopted and extended by NetBSD in 2007 as
well.[36]

Software-based
Software RAID implementations are provided by many modern operating systems. Software RAID can
be implemented as:

A layer that abstracts multiple devices, thereby providing a single virtual device (such as
Linux kernel's md and OpenBSD's softraid)
A more generic logical volume manager (provided with most server-class operating systems
such as Veritas or LVM)
A component of the file system (such as ZFS, Spectrum Scale or Btrfs)
A layer that sits above any file system and provides parity protection to user data (such as
RAID-F)[37]
Some advanced file systems are designed to organize data across multiple storage devices directly,
without needing the help of a third-party logical volume manager:

ZFS supports the equivalents of RAID 0, RAID 1, RAID 5 (RAID-Z1) single-parity, RAID 6
(RAID-Z2) double-parity, and a triple-parity version (RAID-Z3) also referred to as RAID 7.[38]
As it always stripes over top-level vdevs, it supports equivalents of the 1+0, 5+0, and 6+0
nested RAID levels (as well as striped triple-parity sets) but not other nested combinations.
ZFS is the native file system on Solaris and illumos, and is also available on FreeBSD and
Linux. Open-source ZFS implementations are actively developed under the OpenZFS
umbrella project.[39][40][41][42][43]
Spectrum Scale, initially developed by IBM for media streaming and scalable analytics,
supports declustered RAID protection schemes up to n+3. A particularity is the dynamic
rebuilding priority which runs with low impact in the background until a data chunk hits n+0
redundancy, in which case this chunk is quickly rebuilt to at least n+1. In addition, Spectrum
Scale supports metro-distance RAID 1.[44]
Btrfs supports RAID 0, RAID 1 and RAID 10 (RAID 5 and 6 are under development).[45][46]
XFS was originally designed to provide an integrated volume manager that supports
concatenating, mirroring and striping of multiple physical storage devices.[47] However, the
implementation of XFS in Linux kernel lacks the integrated volume manager.[48]
Many operating systems provide RAID implementations, including the following:

Hewlett-Packard's OpenVMS operating system supports RAID 1. The mirrored disks, called
a "shadow set", can be in different locations to assist in disaster recovery.[49]
Apple's macOS and macOS Server natively support RAID 0, RAID 1, and RAID 1+0,[50][51]
which can be created with Disk Utility or its command-line interface, while RAID 4 and
RAID 5 can only be created using the third-party software SoftRAID by OWC,[52] with the
driver for SoftRAID access natively included since macOS 13.3.
FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all nestings via GEOM
modules and ccd.[53][54][55]
Linux's md supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and all nestings.[56] Certain
reshaping/resizing/expanding operations are also supported.[57]
Microsoft Windows supports RAID 0, RAID 1, and RAID 5 using various software
implementations. Logical Disk Manager, introduced with Windows 2000, allows for the
creation of RAID 0, RAID 1, and RAID 5 volumes by using dynamic disks, but this was
limited only to professional and server editions of Windows until the release of Windows
8.[58][59] Windows XP can be modified to unlock support for RAID 0, 1, and 5.[60] Windows 8
and Windows Server 2012 introduced a RAID-like feature known as Storage Spaces, which
also allows users to specify mirroring, parity, or no redundancy on a folder-by-folder basis.
These options are similar to RAID 1 and RAID 5, but are implemented at a higher
abstraction level.[61]
NetBSD supports RAID 0, 1, 4, and 5 via its software implementation, named
RAIDframe.[62]
OpenBSD supports RAID 0, 1 and 5 via its software implementation, named softraid.[63]
If a boot drive fails, the system has to be sophisticated enough to be able to boot from the remaining drive
or drives. For instance, consider a computer whose disk is configured as RAID 1 (mirrored drives); if the
first drive in the array fails, then a first-stage boot loader might not be sophisticated enough to attempt
loading the second-stage boot loader from the second drive as a fallback. The second-stage boot loader
for FreeBSD is capable of loading a kernel from such an array.[64]

Firmware- and driver-based


Software-implemented RAID is not always compatible with the system's boot process, and it is generally
impractical for desktop versions of Windows. However, hardware RAID controllers are expensive and
proprietary. To fill this gap, inexpensive "RAID controllers" were introduced that do not contain a
dedicated RAID controller chip, but simply a standard drive controller chip, or the chipset built-in RAID
function, with proprietary firmware and drivers. During early bootup, the RAID is implemented by the
firmware and, once the operating system has been more completely loaded, the drivers take over control.
Consequently, such controllers may not work when driver support
is not available for the host operating system.[65] An example is
Intel Rapid Storage Technology, implemented on many consumer-
level motherboards.[66][67]

Because some minimal hardware support is involved, this implementation is also called
"hardware-assisted software RAID",[68][69][70] "hybrid model" RAID,[70] or even "fake
RAID".[71] If RAID 5 is supported, the hardware may provide a hardware XOR accelerator. An
advantage of this model over the pure software RAID is that—if using a redundancy mode—the
boot drive is protected from failure (due to the firmware) during the boot process even
before the operating system's drivers take over.[70]

[Image: A SATA 3.0 controller that provides RAID functionality through proprietary firmware and drivers]

Integrity
Data scrubbing (referred to in some environments as patrol read) involves periodic reading and checking
by the RAID controller of all the blocks in an array, including those not otherwise accessed. This detects
bad blocks before use.[72] Data scrubbing checks for bad blocks on each storage device in an array, but
also uses the redundancy of the array to recover bad blocks on a single drive and to reassign the
recovered data to spare blocks elsewhere on the drive.[73]
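
A rough sketch of the scrubbing idea (a toy in-memory model, not any particular controller's patrol-read implementation; read_block and rewrite_block are hypothetical callbacks) might look like this in Python:

```python
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def scrub_stripe(read_block, rewrite_block, drive_count):
    """Scrub one parity-protected stripe (illustrative model only).

    read_block(i) returns drive i's block for this stripe or raises IOError;
    rewrite_block(i, data) remaps/rewrites the recovered block for drive i.
    """
    blocks, bad = {}, []
    for i in range(drive_count):
        try:
            blocks[i] = read_block(i)      # read every block, even ones never accessed
        except IOError:
            bad.append(i)                  # latent sector error / bad block detected
    if len(bad) == 1:                      # a single bad block is recoverable from XOR parity
        recovered = xor_blocks(list(blocks.values()))
        rewrite_block(bad[0], recovered)   # reassign the recovered data to a spare block
    return bad
```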

Frequently, a RAID controller is configured to "drop" a component drive (that is, to assume a component
drive has failed) if the drive has been unresponsive for eight seconds or so; this might cause the array
controller to drop a good drive because that drive has not been given enough time to complete its internal
error recovery procedure. Consequently, using consumer-marketed drives with RAID can be risky, and
so-called "enterprise class" drives limit this error recovery time to reduce risk. Western Digital's desktop
drives used to have a specific fix. A utility called WDTLER.exe limited a drive's error recovery time. The
utility enabled TLER (time limited error recovery), which limits the error recovery time to seven seconds.
Around September 2009, Western Digital disabled this feature in their desktop drives (such as the Caviar
Black line), making such drives unsuitable for use in RAID configurations.[74] However, Western Digital
enterprise class drives are shipped from the factory with TLER enabled. Similar technologies are used by
Seagate, Samsung, and Hitachi. For non-RAID usage, an enterprise class drive with a short error recovery
timeout that cannot be changed is therefore less suitable than a desktop drive.[74] In late 2010, the
Smartmontools program began supporting the configuration of ATA Error Recovery Control, allowing the
tool to configure many desktop class hard drives for use in RAID setups.[74]

While RAID may protect against physical drive failure, the data is still exposed to operator, software,
hardware, and virus destruction. Many studies cite operator fault as a common source of
malfunction,[75][76] such as a server operator replacing the incorrect drive in a faulty RAID, and disabling
the system (even temporarily) in the process.[77]

An array can be overwhelmed by a catastrophic failure that exceeds its recovery capacity, and the entire
array is at risk of physical damage by fire, natural disaster, and human forces; however, backups can be
stored off site. An array is also vulnerable to controller failure, because it is not always possible to migrate
it to a new, different controller without data loss.[78]

Weaknesses

Correlated failures
In practice, the drives are often the same age (with similar wear) and subject to the same environment.
Since many drive failures are due to mechanical issues (which are more likely on older drives), this
violates the assumptions of independent, identical rate of failure amongst drives; failures are in fact
statistically correlated.[11] In practice, the chances for a second failure before the first has been recovered
(causing data loss) are higher than the chances for random failures. In a study of about 100,000 drives,
the probability of two drives in the same cluster failing within one hour was four times larger than
predicted by the exponential statistical distribution—which characterizes processes in which events occur
continuously and independently at a constant average rate. The probability of two failures in the same 10-
hour period was twice as large as predicted by an exponential distribution.[79]
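
To see what the exponential model predicts, the following back-of-envelope Python calculation assumes independent exponential failures and a hypothetical per-drive MTBF of 1,000,000 hours for a ten-drive group; the study cited above found real-world clustering well above such predictions:

```python
import math

def prob_second_failure(drives_remaining, mtbf_hours, window_hours):
    """P(at least one of the remaining drives fails within the window),
    assuming independent exponential failures with the given per-drive MTBF."""
    rate = 1.0 / mtbf_hours                      # per-drive failure rate, failures per hour
    return 1.0 - math.exp(-drives_remaining * rate * window_hours)

# Hypothetical ten-drive group, 1,000,000-hour MTBF per drive (assumed values).
for window in (1, 10):
    p = prob_second_failure(drives_remaining=9, mtbf_hours=1_000_000, window_hours=window)
    print(f"P(another failure within {window} h of the first) = {p:.1e}")
```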

Unrecoverable read errors during rebuild


Unrecoverable read errors (URE) present as sector read failures, also known as latent sector errors (LSE).
The associated media assessment measure, unrecoverable bit error (UBE) rate, is typically guaranteed to
be less than one bit in 10^15 for enterprise-class drives (SCSI, FC, SAS or SATA), and less than one bit in
10^14 for desktop-class drives (IDE/ATA/PATA or SATA). Increasing drive capacities and large RAID 5
instances have led to the maximum error rates being insufficient to guarantee a successful recovery, due
to the high likelihood of such an error occurring on one or more remaining drives during a RAID set
rebuild.[11][80] When rebuilding, parity-based schemes such as RAID 5 are particularly prone to the
effects of UREs as they affect not only the sector where they occur, but also reconstructed blocks using
that sector for parity computation.[81]
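
The consequence of these bit error rates for rebuilds can be estimated with a simplified model; the Python sketch below assumes independent bit errors and a hypothetical RAID 5 set of eight 12 TB drives, in which a rebuild must read the seven surviving drives in full:

```python
import math

def prob_ure_during_rebuild(bytes_to_read, bits_per_error):
    """P(at least one URE while reading bytes_to_read), assuming independent
    bit errors at a rate of one error per bits_per_error bits read."""
    bits = bytes_to_read * 8
    return 1.0 - math.exp(-bits / bits_per_error)

# Hypothetical RAID 5 set of eight 12 TB drives: a rebuild reads the seven survivors.
bytes_read = 7 * 12e12
print(f"desktop-class drives    (1 in 10^14): {prob_ure_during_rebuild(bytes_read, 1e14):.0%}")
print(f"enterprise-class drives (1 in 10^15): {prob_ure_during_rebuild(bytes_read, 1e15):.0%}")
```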

Double-protection parity-based schemes, such as RAID 6, attempt to address this issue by providing
redundancy that allows double-drive failures; as a downside, such schemes suffer from elevated write
penalty—the number of times the storage medium must be accessed during a single write operation.[82]
Schemes that duplicate (mirror) data in a drive-to-drive manner, such as RAID 1 and RAID 10, have a
lower risk from UREs than those using parity computation or mirroring between striped sets.[24][83] Data
scrubbing, as a background process, can be used to detect and recover from UREs, effectively reducing
the risk of them happening during RAID rebuilds and causing double-drive failures. The recovery of
UREs involves remapping of affected underlying disk sectors, utilizing the drive's sector remapping pool;
in case of UREs detected during background scrubbing, data redundancy provided by a fully operational
RAID set allows the missing data to be reconstructed and rewritten to a remapped sector.[84][85]

Increasing rebuild time and failure probability


Drive capacity has grown at a much faster rate than transfer speed, and error rates have only fallen a little
in comparison. Therefore, larger-capacity drives may take hours if not days to rebuild, during which time
other drives may fail or yet undetected read errors may surface. The rebuild time is also limited if the
entire array is still in operation at reduced capacity.[86] Given an array with only one redundant drive
(which applies to RAID levels 3, 4 and 5, and to "classic" two-drive RAID 1), a second drive failure
would cause complete failure of the array. Even though individual drives' mean time between failures
(MTBF) has increased over time, this increase has not kept pace with the increased storage capacity of
the drives. The time to rebuild the array after a single drive failure, as well as the chance of a second
failure during a rebuild, have increased over time.[22]
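
A rough lower bound on rebuild time follows from capacity and sustained transfer rate alone; the figures in this Python snippet are assumed values for illustration, and real rebuilds are slower when the array keeps serving I/O:

```python
def rebuild_hours(capacity_tb, sustained_mb_per_s):
    """Lower bound on rebuild time: the replacement drive must be written in full."""
    return capacity_tb * 1e12 / (sustained_mb_per_s * 1e6) / 3600

# Assumed capacities and a 200 MB/s sustained rate; real arrays rebuild more slowly
# while they continue to serve regular I/O.
for tb in (2, 8, 20):
    print(f"{tb} TB drive at 200 MB/s: roughly {rebuild_hours(tb, 200):.0f} hours")
```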

Some commentators have declared that RAID 6 is only a "band aid" in this respect, because it only kicks
the problem a little further down the road.[22] However, according to the 2006 NetApp study of Berriman
et al., the chance of failure decreases by a factor of about 3,800 (relative to RAID 5) for a proper
implementation of RAID 6, even when using commodity drives.[87] Nevertheless, if the currently
observed technology trends remain unchanged, in 2019 a RAID 6 array will have the same chance of
failure as its RAID 5 counterpart had in 2010.[87]

Mirroring schemes such as RAID 10 have a bounded recovery time as they require the copy of a single
failed drive, compared with parity schemes such as RAID 6, which require the copy of all blocks of the
drives in an array set. Triple parity schemes, or triple mirroring, have been suggested as one approach to
improve resilience to an additional drive failure during this large rebuild time.[87]

Atomicity
A system crash or other interruption of a write operation can result in states where the parity is
inconsistent with the data due to non-atomicity of the write process, such that the parity cannot be used
for recovery in the case of a disk failure. This is commonly termed the write hole, which is a known data
corruption issue in older and low-end RAIDs, caused by interrupted destaging of writes to disk.[88] The
write hole can be addressed in a few ways:

Write-ahead logging.

Hardware RAID systems use an onboard nonvolatile cache for this purpose.[89]
mdadm can use a dedicated journaling device (typically an SSD or NVM, to avoid a
performance penalty) for this purpose.[90][91]
Write intent logging. mdadm uses a "write-intent bitmap". If it finds any locations marked as
incompletely written at startup, it resyncs them. This closes the write hole but does not protect
against loss of in-transit data, unlike a full WAL.[89][92]
Partial parity. mdadm can save a "partial parity" that, when combined with modified chunks,
recovers the original parity. This closes the write hole, but again does not protect against
loss of in-transit data.[93]
Dynamic stripe size. RAID-Z ensures that each block is its own stripe, so every block is
complete. Copy-on-write (COW) transactional semantics guard metadata associated with
stripes.[94] The downside is IO fragmentation.[95]
Avoiding overwriting used stripes. bcachefs, which uses a copying garbage collector,
chooses this option. COW again protects references to striped data.[95]
The write hole is a little-understood and rarely mentioned failure mode for redundant storage systems that do
not utilize transactional features. Database researcher Jim Gray wrote "Update in Place is a Poison Apple"
during the early days of relational database commercialization.[96]
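
To make the failure mode concrete, the following toy Python model (an illustration only, not any particular RAID implementation) shows the stale parity left by an interrupted write and the resync that restores consistency:

```python
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Toy stripe: three data blocks plus parity, all consistent to begin with.
stripe = {"data": [b"AAAA", b"BBBB", b"CCCC"]}
stripe["parity"] = xor_blocks(stripe["data"])

# A write updates data block 0 but "crashes" before the parity update is destaged.
stripe["data"][0] = b"ZZZZ"
assert stripe["parity"] != xor_blocks(stripe["data"])   # the write hole: stale parity

# If drive 1 now fails, reconstruction using the stale parity yields garbage.
wrong = xor_blocks([stripe["data"][0], stripe["data"][2], stripe["parity"]])
assert wrong != b"BBBB"

# A resync of the dirty stripe (as triggered by a write-intent bitmap or journal
# replay) recomputes the parity and restores the invariant.
stripe["parity"] = xor_blocks(stripe["data"])
assert xor_blocks([stripe["data"][0], stripe["data"][2], stripe["parity"]]) == b"BBBB"
```
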
Write-cache reliability
There are concerns about write-cache reliability, specifically regarding devices equipped with a write-
back cache, which is a caching system that reports the data as written as soon as it is written to cache, as
opposed to when it is written to the non-volatile medium. If the system experiences a power loss or other
major failure, the data may be irrevocably lost from the cache before reaching the non-volatile storage.
For this reason good write-back cache implementations include mechanisms, such as redundant battery
power, to preserve cache contents across system failures (including power failures) and to flush the cache
at system restart time.[97]

See also
Disk data format
Network-attached storage (NAS)
Non-RAID drive architectures
Redundant array of independent memory
Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.)

References
1. Patterson, David; Gibson, Garth A.; Katz, Randy (1988). A Case for Redundant Arrays of
Inexpensive Disks (RAID) (https://www2.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-39
1.pdf) (PDF). SIGMOD Conferences. Retrieved 2024-01-03.
2. "Originally referred to as Redundant Array of Inexpensive Disks, the term RAID was first
published in the late 1980s by Patterson, Gibson, and Katz of the University of California at
Berkeley. (The RAID Advisory Board has since substituted the term Inexpensive with
Independent.)" Storage Area Network Fundamentals; Meeta Gupta; Cisco Press; ISBN 978-
1-58705-065-7; Appendix A.
3. Katz, Randy H. (October 2010). "RAID: A Personal Recollection of How Storage Became a
System" (http://web.eecs.umich.edu/~michjc/eecs584/Papers/katz-2010.pdf) (PDF).
eecs.umich.edu. IEEE Computer Society. Retrieved 2015-01-18. "We were not the first to
think of the idea of replacing what Patterson described as a slow large expensive disk
(SLED) with an array of inexpensive disks. For example, the concept of disk mirroring,
pioneered by Tandem, was well known, and some storage products had already been
constructed around arrays of small disks."
4. Hayes, Frank (November 17, 2003). "The Story So Far" (http://www.computerworld.com/arti
cle/2573180/data-center/the-story-so-far.html). Computerworld. Retrieved November 18,
2016. "Patterson recalled the beginnings of his RAID project in 1987. [...] 1988: David A.
Patterson leads a team that defines RAID standards for improved performance, reliability
and scalability."
5. US patent 4092732 (https://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US4092
732), Norman Ken Ouchi, "System for Recovering Data Stored in Failed Memory Unit",
issued 1978-05-30
6. "HSC50/70 Hardware Technical Manual" (https://web.archive.org/web/20160304032213/htt
p://www.textfiles.com/bitsavers/pdf/dec/ci/EK-HS571-TM-001_HSC_hwTech.pdf) (PDF).
DEC. July 1986. pp. 29, 32. Archived from the original (http://www.textfiles.com/bitsavers/pd
f/dec/ci/EK-HS571-TM-001_HSC_hwTech.pdf) (PDF) on 2016-03-04. Retrieved 2014-01-03.
7. US patent 4761785 (https://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US4761
785), Brian E. Clark, et al., "Parity Spreading to Enhance Storage Access", issued 1988-08-
02
8. US patent 4899342 (https://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US4899
342), David Potter et al., "Method and Apparatus for Operating Multi-Unit Array of
Memories", issued 1990-02-06 See also The Connection Machine (1988) (http://www.svisio
ns.com/sv/raid-patent.html)
9. "IBM 7030 Data Processing System: Reference Manual" (http://bitsavers.trailing-edge.com/
pdf/ibm/7030/22-6530-2_7030RefMan.pdf#page=160) (PDF). bitsavers.trailing-edge.com.
IBM. 1960. p. 157. Retrieved 2015-01-17. "Since a large number of bits are handled in
parallel, it is practical to use error checking and correction (ECC) bits, and each 39 bit byte
is composed of 32 data bits and seven ECC bits. The ECC bits accompany all data
transferred to or from the high-speed disks, and, on reading, are used to correct a single bit
error in a byte and detect double and most multiple errors in a byte."
10. "IBM Stretch (aka IBM 7030 Data Processing System)" (http://www.brouhaha.com/~eric/retr
ocomputing/ibm/stretch/). brouhaha.com. 2009-06-18. Retrieved 2015-01-17. "A typical
IBM 7030 Data Processing System might have been comprised of the following units: [...]
IBM 353 Disk Storage Unit – similar to IBM 1301 Disk File, but much faster. 2,097,152
(2^21) 72-bit words (64 data bits and 8 ECC bits), 125,000 words per second"
11. Chen, Peter; Lee, Edward; Gibson, Garth; Katz, Randy; Patterson, David (1994). "RAID:
High-Performance, Reliable Secondary Storage". ACM Computing Surveys. 26 (2): 145–
185. CiteSeerX 10.1.1.41.3889 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4
1.3889). doi:10.1145/176979.176981 (https://doi.org/10.1145%2F176979.176981).
S2CID 207178693 (https://api.semanticscholar.org/CorpusID:207178693).
12. Donald, L. (2003). MCSA/MCSE 2006 JumpStart Computer and Network Basics (2nd ed.).
Glasgow: SYBEX.
13. Howe, Denis (ed.). "Redundant Arrays of Independent Disk" (http://foldoc.org/RAID). Free
On-line Dictionary of Computing (FOLDOC). Imperial College Department of Computing.
Retrieved 2011-11-10.
14. Dawkins, Bill and Jones, Arnold. "Common RAID Disk Data Format Specification" (http://ww
w.snia.org/tech_activities/standards/curr_standards/ddf/SNIA-DDFv1.2.pdf) Archived (http
s://web.archive.org/web/20090824050602/http://www.snia.org/tech_activities/standards/curr
_standards/ddf/SNIA-DDFv1.2.pdf) 2009-08-24 at the Wayback Machine [Storage
Networking Industry Association] Colorado Springs, 28 July 2006. Retrieved on 22 February
2011.
15. "Adaptec Hybrid RAID Solutions" (http://www.adaptec.com/nr/pdfs/hybrid-raid_wp.pdf)
(PDF). Adaptec.com. Adaptec. 2012. Retrieved 2013-09-07.
16. "Common RAID Disk Drive Format (DDF) standard" (http://www.snia.org/tech_activities/stan
dards/curr_standards/ddf/). SNIA.org. SNIA. Retrieved 2012-08-26.
17. "SNIA Dictionary" (http://www.snia.org/education/dictionary). SNIA.org. SNIA. Retrieved
2010-08-24.
18. Tanenbaum, Andrew S. Structured Computer Organization 6th ed. p. 95.
19. Hennessy, John; Patterson, David (2006). Computer Architecture: A Quantitative Approach,
4th ed. p. 362. ISBN 978-0123704900.
20. "FreeBSD Handbook, Chapter 20.5 GEOM: Modular Disk Transformation Framework" (htt
p://www.freebsd.org/doc/handbook/geom-raid3.html). Retrieved 2012-12-20.
21. White, Jay; Lueth, Chris (May 2010). "RAID-DP:NetApp Implementation of Double Parity
RAID for Data Protection. NetApp Technical Report TR-3298" (http://www.netapp.com/us/libr
ary/technical-reports/tr-3298.html). Retrieved 2013-03-02.
22. Newman, Henry (2009-09-17). "RAID's Days May Be Numbered" (http://www.enterprisestor
ageforum.com/technology/features/article.php/3839636). EnterpriseStorageForum.
Retrieved 2010-09-07.
23. "Why RAID 6 stops working in 2019" (https://web.archive.org/web/20100815215636/http://w
ww.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805). ZDNet. 22 February
2010. Archived from the original (https://www.zdnet.com/blog/storage/why-raid-6-stops-worki
ng-in-2019/805) on August 15, 2010.
24. Lowe, Scott (2009-11-16). "How to protect yourself from RAID-related Unrecoverable Read
Errors (UREs). Techrepublic" (https://www.techrepublic.com/blog/datacenter/how-to-protect-
yourself-from-raid-related-unrecoverable-read-errors-ures/1752). Retrieved 2012-12-01.
25. Vijayan, S.; Selvamani, S.; Vijayan, S (1995). "Dual-Crosshatch Disk Array: A Highly
Reliable Hybrid-RAID Architecture" (https://books.google.com/books?id=QliANH5G3_gC&q
=%22hybrid+raid%22). Proceedings of the 1995 International Conference on Parallel
Processing: Volume 1. CRC Press. pp. I–146ff. ISBN 978-0-8493-2615-8 – via Google
Books.
26. "Why is RAID 1+0 better than RAID 0+1?" (http://aput.net/~jheiss/raid10/). aput.net.
Retrieved 2016-05-23.
27. "RAID 10 Vs RAID 01 (RAID 1+0 Vs RAID 0+1) Explained with Diagram" (http://www.thegee
kstuff.com/2011/10/raid10-vs-raid01/). www.thegeekstuff.com. Retrieved 2016-05-23.
28. "Comparing RAID 10 and RAID 01 | SMB IT Journal" (http://www.smbitjournal.com/2014/07/
comparing-raid-10-and-raid-01/). www.smbitjournal.com. 30 July 2014. Retrieved
2016-05-23.
29. Jeffrey B. Layton: "Intro to Nested-RAID: RAID-01 and RAID-10" (https://web.archive.org/we
b/20110615143527/http://www.linux-mag.com/id/7928/?hq_e=el&hq_m=1151565&hq_l=36&
hq_v=3fa9646c7f)[usurped], Linux Magazine, January 6, 2011
30. "Performance, Tools & General Bone-Headed Questions" (http://www.tldp.org/HOWTO/Soft
ware-RAID-0.4x-HOWTO-8.html). tldp.org. Retrieved 2013-12-25.
31. "Main Page – Linux-raid" (https://web.archive.org/web/20080705104645/http://linux-raid.osd
l.org/). osdl.org. 2010-08-20. Archived from the original (http://linux-raid.osdl.org/) on 2008-
07-05. Retrieved 2010-08-24.
32. "Hdfs Raid" (http://hadoopblog.blogspot.com/2009/08/hdfs-and-erasure-codes-hdfs-raid.htm
l). Hadoopblog.blogspot.com. 2009-08-28. Retrieved 2010-08-24.
33. "3.8: "Hackers of the Lost RAID" " (http://www.openbsd.org/lyrics.html#38). OpenBSD
Release Songs. OpenBSD. 2005-11-01. Retrieved 2019-03-23.
34. Long, Scott; Adaptec, Inc (2000). "aac(4) — Adaptec AdvancedRAID Controller driver" (htt
p://bxr.su/f/share/man/man4/aac.4). BSD Cross Reference. FreeBSD., "aac -- Adaptec
AdvancedRAID Controller driver". FreeBSD Manual Pages (https://www.freebsd.org/cgi/ma
n.cgi?query=aac&sektion=4). FreeBSD.
35. Raadt, Theo de (2005-09-09). "RAID management support coming in OpenBSD 3.8" (http
s://marc.info/?l=openbsd-misc&m=112630095818062). misc@ (Mailing list). OpenBSD.
36. Murenin, Constantine A. (2010-05-21). "1.1. Motivation; 4. Sensor Drivers; 7.1. NetBSD
envsys / sysmon". OpenBSD Hardware Sensors — Environmental Monitoring and Fan
Control (http://cnst.su/MMathCS) (MMath thesis). University of Waterloo: UWSpace.
hdl:10012/5234 (https://hdl.handle.net/10012%2F5234). Document ID:
ab71498b6b1a60ff817b29d56997a418.
37. "RAID over File System" (https://web.archive.org/web/20131109055927/http://www.flexraid.
com/faq-items/what-is-raid-over-file-system/). Archived from the original (http://www.flexraid.
com/faq-items/what-is-raid-over-file-system/) on 2013-11-09. Retrieved 2014-07-22.
38. "ZFS Raidz Performance, Capacity and Integrity" (https://calomel.org/zfs_raid_speed_capac
ity.html). calomel.org. Retrieved 26 June 2017.
39. "ZFS -illumos" (https://web.archive.org/web/20190315183042/https://wiki.illumos.org/displa
y/illumos/ZFS). illumos.org. 2014-09-15. Archived from the original (http://wiki.illumos.org/dis
play/illumos/ZFS) on 2019-03-15. Retrieved 2016-05-23.
40. "Creating and Destroying ZFS Storage Pools – Oracle Solaris ZFS Administration Guide" (ht
tp://docs.oracle.com/cd/E23823_01/html/819-5461/gaypw.html). Oracle Corporation. 2012-
04-01. Retrieved 2014-07-27.
41. "20.2. The Z File System (ZFS)" (https://web.archive.org/web/20140703043231/http://www.fr
eebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html). freebsd.org.
Archived from the original (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/fi
lesystems-zfs.html) on 2014-07-03. Retrieved 2014-07-27.
42. "Double Parity RAID-Z (raidz2) (Solaris ZFS Administration Guide)" (http://docs.oracle.com/c
d/E19120-01/open.solaris/817-2271/gcviu/index.html). Oracle Corporation. Retrieved
2014-07-27.
43. "Triple Parity RAIDZ (raidz3) (Solaris ZFS Administration Guide)" (http://docs.oracle.com/cd/
E19120-01/open.solaris/817-2271/givdn/index.html). Oracle Corporation. Retrieved
2014-07-27.
44. Deenadhayalan, Veera (2011). "General Parallel File System (GPFS) Native RAID" (http://w
ww.usenix.org/events/lisa11/tech/slides/deenadhayalan.pdf) (PDF). UseNix.org. IBM.
Retrieved 2014-09-28.
45. "Btrfs Wiki: Feature List" (https://btrfs.wiki.kernel.org/index.php/Main_Page#Features).
2012-11-07. Retrieved 2012-11-16.
46. "Btrfs Wiki: Changelog" (https://btrfs.wiki.kernel.org/index.php/Changelog). 2012-10-01.
Retrieved 2012-11-14.
47. Trautman, Philip; Mostek, Jim. "Scalability and Performance in Modern File Systems" (http
s://web.archive.org/web/20150422201638/http://linux-xfs.sgi.com/projects/xfs/papers/xfs_wh
ite/xfs_white_paper.html). linux-xfs.sgi.com. Archived from the original (http://linux-xfs.sgi.co
m/projects/xfs/papers/xfs_white/xfs_white_paper.html) on 2015-04-22. Retrieved
2015-08-17.
48. "Linux RAID Setup – XFS" (https://raid.wiki.kernel.org/index.php/RAID_setup#XFS).
kernel.org. 2013-10-05. Retrieved 2015-08-17.
49. Hewlett Packard Enterprise. "HPE Support document - HPE Support Center" (https://suppor
t.hpe.com/hpsc/doc/public/display?docId=emr_na-c04619764). support.hpe.com.
50. "Mac OS X: How to combine RAID sets in Disk Utility" (http://support.apple.com/kb/TA2435
9). Retrieved 2010-01-04.
51. "Apple Mac OS X Server File Systems" (https://www.apple.com/server/macosx/technology/fil
e-system.html). Retrieved 2008-04-23.
52. "Other World Computing Launches SoftRAID 8 Setting a New Standard for Reliability,
Speed and Data Safeguards" (https://www.techpowerup.com/320611/other-world-computing
-launches-softraid-8-setting-a-new-standard-for-reliability-speed-and-data-safeguards).
TechPowerUp. 2024-03-20. Retrieved 2024-11-24.
53. "FreeBSD System Manager's Manual page for GEOM(8)" (http://www.freebsd.org/cgi/man.c
gi?query=geom). Retrieved 2009-03-19.
54. "freebsd-geom mailing list – new class / geom_raid5" (http://lists.freebsd.org/pipermail/freeb
sd-geom/2006-July/001356.html). 6 July 2006. Retrieved 2009-03-19.
55. "FreeBSD Kernel Interfaces Manual for CCD(4)" (http://www.freebsd.org/cgi/man.cgi?query
=ccd). Retrieved 2009-03-19.
56. "The Software-RAID HowTo" (http://tldp.org/HOWTO/Software-RAID-HOWTO.html).
Retrieved 2008-11-10.
57. "mdadm(8) – Linux man page" (http://linux.die.net/man/8/mdadm). Linux.Die.net. Retrieved
2014-11-20.
58. "Windows Vista support for large-sector hard disk drives" (https://web.archive.org/web/2007
0703092408/http://support.microsoft.com/kb/923332/). Microsoft. 2007-05-29. Archived from
the original (http://support.microsoft.com/kb/923332/) on 2007-07-03. Retrieved 2007-10-08.
59. "You cannot select or format a hard disk partition when you try to install Windows Vista,
Windows 7 or Windows Server 2008 R2" (http://support.microsoft.com/kb/927520/en-us).
Microsoft. 14 September 2011. Archived (https://web.archive.org/web/20110303111057/htt
p://support.microsoft.com/kb/927520/en-us) from the original on 3 March 2011. Retrieved
17 December 2009.
60. "Using Windows XP to Make RAID 5 Happen" (http://www.tomshardware.com/reviews/wind
owsxp-make-raid-5-happen,925.html). Tom's Hardware. 19 November 2004. Retrieved
24 August 2010.
61. Sinofsky, Steven (January 5, 2012). "Virtualizing storage for scale, resiliency, and efficiency"
(https://web.archive.org/web/20130509100721/http://blogs.msdn.com/b/b8/archive/2012/01/
05/virtualizing-storage-for-scale-resiliency-and-efficiency.aspx). Building Windows 8 blog.
Archived from the original (http://blogs.msdn.com/b/b8/archive/2012/01/05/virtualizing-stora
ge-for-scale-resiliency-and-efficiency.aspx) on May 9, 2013. Retrieved January 6, 2012.
62. Metzger, Perry (1999-05-12). "NetBSD 1.4 Release Announcement" (http://www.netbsd.org/
releases/formal-1.4/NetBSD-1.4.html). NetBSD.org. The NetBSD Foundation. Retrieved
2013-01-30.
63. "OpenBSD softraid man page" (https://man.openbsd.org/softraid.4). OpenBSD.org.
Retrieved 2018-02-03.
64. "FreeBSD Handbook" (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geo
m-mirror.html). Chapter 19 GEOM: Modular Disk Transformation Framework. Retrieved
2009-03-19.
65. "SATA RAID FAQ" (https://ata.wiki.kernel.org/index.php/SATA_RAID_FAQ).
Ata.wiki.kernel.org. 2011-04-08. Retrieved 2012-08-26.
66. "Red Hat Enterprise Linux – Storage Administrator Guide – RAID Types" (https://access.red
hat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administratio
n_Guide/s1-raid-approaches.html). redhat.com.
67. Russel, Charlie; Crawford, Sharon; Edney, Andrew (2011). Working with Windows Small
Business Server 2011 Essentials (https://books.google.com/books?id=R2gJ9kcX2ywC&pg=
PA90). O'Reilly Media, Inc. p. 90. ISBN 978-0-7356-5670-3 – via Google Books.
68. Block, Warren. "19.5. Software RAID Devices" (http://www.freebsd.org/doc/handbook/geom-
graid.html). freebsd.org. Retrieved 2014-07-27.
69. Krutz, Ronald L.; Conley, James (2007). Wiley Pathways Network Security Fundamentals (h
ttps://books.google.com/books?id=Gdux_6ckDYwC&pg=PA422). John Wiley & Sons.
p. 422. ISBN 978-0-470-10192-6 – via Google Books.
70. "Hardware RAID vs. Software RAID: Which Implementation is Best for my Application?
Adaptec Whitepaper" (http://www.adaptec.com/nr/rdonlyres/14b2fd84-f7a0-4ac5-a07a-2141
23ea3dd6/0/4423_sw_hwraid_10.pdf) (PDF). adaptec.com.
71. Smith, Gregory (2010). PostgreSQL 9.0: High Performance (https://books.google.com/book
s?id=OWOAu0GcsqoC&pg=PT72). Packt Publishing Ltd. p. 31. ISBN 978-1-84951-031-8 –
via Google Books.
72. Ulf Troppens, Wolfgang Mueller-Friedt, Rainer Erkens, Rainer Wolafka, Nils Haustein.
Storage Networks Explained: Basics and Application of Fibre Channel SAN, NAS, ISCSI,
InfiniBand and FCoE. John Wiley and Sons, 2009. p.39
73. Dell Computers, Background Patrol Read for Dell PowerEdge RAID Controllers, By Drew
Habas and John Sieber, Reprinted from Dell Power Solutions, February 2006
http://www.dell.com/downloads/global/power/ps1q06-20050212-Habas.pdf
74. "Error Recovery Control with Smartmontools" (https://web.archive.org/web/2011092819004
5/http://www.csc.liv.ac.uk/~greg/projects/erc/). 2009. Archived from the original (http://www.c
sc.liv.ac.uk/~greg/projects/erc/) on September 28, 2011. Retrieved September 29, 2017.
75. Gray, Jim (Oct 1990). "A census of Tandem system availability between 1985 and 1990" (htt
ps://web.archive.org/web/20190220114624/http://pdfs.semanticscholar.org/22a4/ddf4d609c
6e9c8a0a0ea6187af4c3178a7ed.pdf) (PDF). IEEE Transactions on Reliability. 39 (4). IEEE:
409–418. doi:10.1109/24.58719 (https://doi.org/10.1109%2F24.58719). S2CID 2955525 (htt
ps://api.semanticscholar.org/CorpusID:2955525). Archived from the original (http://pdfs.sem
anticscholar.org/22a4/ddf4d609c6e9c8a0a0ea6187af4c3178a7ed.pdf) (PDF) on 2019-02-
20.
76. Murphy, Brendan; Gent, Ted (1995). "Measuring system and software reliability using an
automated data collection process". Quality and Reliability Engineering International. 11 (5):
341–353. doi:10.1002/qre.4680110505 (https://doi.org/10.1002%2Fqre.4680110505).
77. Patterson, D., Hennessy, J. (2009), 574.
78. "The RAID Migration Adventure" (http://www.tomshardware.com/reviews/RAID-MIGRATION
-ADVENTURE,1640.html). 10 July 2007. Retrieved 2010-03-10.
79. Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You? (htt
p://www.usenix.org/events/fast07/tech/schroeder.html) Bianca Schroeder and Garth A.
Gibson
80. Harris, Robin (2010-02-27). "Does RAID 6 stop working in 2019?" (http://storagemojo.com/2
010/02/27/does-raid-6-stops-working-in-2019/). StorageMojo.com. TechnoQWAN. Retrieved
2013-12-17.
81. J.L. Hafner, V. Dheenadhayalan, K. Rao, and J.A. Tomlin. "Matrix methods for lost data
reconstruction in erasure codes. USENIX Conference on File and Storage Technologies (htt
ps://www.usenix.org/legacy/event/fast05/tech/full_papers/hafner_matrix/hafner_matrix_html/
matrix_hybrid_fast05.html), Dec. 13–16, 2005.
82. Miller, Scott Alan (2016-01-05). "Understanding RAID Performance at Various Levels" (htt
p://www.storagecraft.com/blog/raid-performance/). Recovery Zone. StorageCraft. Retrieved
2016-07-22.
83. Kagel, Art S. (March 2, 2011). "RAID 5 versus RAID 10 (or even RAID 3, or RAID 4)" (http
s://web.archive.org/web/20141103162704/http://www.miracleas.com/BAARF/RAID5_versus
_RAID10.txt). miracleas.com. Archived from the original (http://www.miracleas.com/BAARF/
RAID5_versus_RAID10.txt) on November 3, 2014. Retrieved October 30, 2014.
84. Baker, M.; Shah, M.; Rosenthal, D.S.H.; Roussopoulos, M.; Maniatis, P.; Giuli, T.; Bungale,
P (April 2006). "A fresh look at the reliability of long-term digital storage". Proceedings of the
1st ACM SIGOPS/EuroSys European Conference on Computer Systems 2006. pp. 221–
234. doi:10.1145/1217935.1217957 (https://doi.org/10.1145%2F1217935.1217957).
ISBN 1595933220. S2CID 7655425 (https://api.semanticscholar.org/CorpusID:7655425).
85. Bairavasundaram, L.N.; Goodson, G.R.; Pasupathy, S.; Schindler, J. (June 12–16, 2007).
"An analysis of latent sector errors in disk drives" (http://research.cs.wisc.edu/adsl/Publicatio
ns/latent-sigmetrics07.pdf) (PDF). Proceedings of the 2007 ACM SIGMETRICS international
conference on Measurement and modeling of computer systems. pp. 289–300.
doi:10.1145/1254882.1254917 (https://doi.org/10.1145%2F1254882.1254917).
ISBN 9781595936394. S2CID 14164251 (https://api.semanticscholar.org/CorpusID:141642
51).
86. Patterson, D., Hennessy, J. (2009). Computer Organization and Design. New York: Morgan
Kaufmann Publishers. pp 604–605.
87. Leventhal, Adam (2009-12-01). "Triple-Parity RAID and Beyond. ACM Queue, Association
for Computing Machinery" (https://queue.acm.org/detail.cfm?id=1670144). Retrieved
2012-11-30.
88. " "Write Hole" in RAID5, RAID6, RAID1, and Other Arrays" (http://www.raid-recovery-guide.c
om/raid5-write-hole.aspx). ZAR team. Retrieved 15 February 2012.
89. Danti, Gionatan. "write hole: which RAID levels are affected?" (https://serverfault.com/a/100
2509). Server Fault.
90. "ANNOUNCE: mdadm 3.4 - A tool for managing md Soft RAID under Linux [LWN.net]" (http
s://lwn.net/Articles/673953/). lwn.net.
91. "A journal for MD/RAID5 [LWN.net]" (https://lwn.net/Articles/665299/). lwn.net.
92. md(4) (https://manned.org/md.4) – Linux Programmer's Manual – Special Files
93. "Partial Parity Log" (https://www.kernel.org/doc/html/latest/driver-api/md/raid5-ppl.html). The
Linux Kernel documentation.
94. Bonwick, Jeff (2005-11-17). "RAID-Z" (https://web.archive.org/web/20141216015058/https://
blogs.oracle.com/bonwick/en_US/entry/raid_z). Jeff Bonwick's Blog. Oracle Blogs. Archived
from the original (https://blogs.oracle.com/bonwick/en_US/entry/raid_z) on 2014-12-16.
Retrieved 2015-02-01.
95. Overstreet, Kent (18 Dec 2021). "bcachefs: Principles of Operation" (https://bcachefs.org/bc
achefs-principles-of-operation.pdf) (PDF). Retrieved 10 May 2023.
96. Gray, Jim (2008-06-11). "The Transaction Concept: Virtues and Limitations (Invited Paper)"
(https://web.archive.org/web/20080611230227/http://www.informatik.uni-trier.de/~ley/db/con
f/vldb/Gray81.html). VLDB [Very Large Data Bases] 1981. pp. 144–154. Archived from the
original (http://www.informatik.uni-trier.de/~ley/db/conf/vldb/Gray81.html) on 2008-06-11.
97. "Definition of write-back cache at SNIA dictionary" (https://www.snia.org/education/online-dic
tionary/w). www.snia.org.

External links
"Empirical Measurements of Disk Failure Rates and Error Rates" (https://www.microsoft.co
m/en-us/research/publication/empirical-measurements-of-disk-failure-rates-and-error-rate
s/), by Jim Gray and Catharine van Ingen, December 2005
The Mathematics of RAID-6 (https://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf),
by H. Peter Anvin
BAARF: Battle Against Any Raid Five (https://web.archive.org/web/20131216113135/http://
miracleas.com/BAARF/BAARF2.html) (RAID 3, 4 and 5 versus RAID 10)
A Clean-Slate Look at Disk Scrubbing (https://www.usenix.org/legacy/event/fast10/tech/full_
papers/oprea.pdf)
