3 - Disk Performance Parameters
Rotational Delay:
A disk drive generally rotates at 3600 rpm, i.e. it takes around 16.7 ms to make one
revolution. Thus, on average, the rotational delay will be 8.3 ms (half a revolution).
Transfer Time:
The transfer time to or from the disk depends on the rotational speed of the disk and is
estimated as

T = b / (rN)

where
T = transfer time
b = number of bytes to be transferred
N = number of bytes on a track
r = rotational speed, in revolutions per second.
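As a quick worked illustration of both quantities, here is a small Python sketch; the 3600 rpm figure comes from the text above, while the sector size and sectors-per-track values are assumptions chosen only for the example:

```python
# Rough disk timing estimates for a hypothetical drive (example figures only).

RPM = 3600                    # rotational speed from the text
r = RPM / 60                  # revolutions per second
N = 32 * 512                  # bytes per track: 32 sectors of 512 bytes (assumed)

rotation_time_ms = 1000 / r                     # one full revolution: ~16.7 ms
avg_rotational_delay_ms = rotation_time_ms / 2  # on average, half a revolution: ~8.3 ms

def transfer_time_ms(b, r=r, N=N):
    """Transfer time T = b / (r * N), expressed in milliseconds."""
    return 1000 * b / (r * N)

print(f"average rotational delay ~ {avg_rotational_delay_ms:.1f} ms")
print(f"transfer of one 512-byte sector ~ {transfer_time_ms(512):.3f} ms")
```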
Since the striping units are distributed over the several disks in the disk array in round-robin
fashion, large I/O requests spanning many contiguous blocks involve all disks. Such a
request can be processed by all disks in parallel, which increases the transfer rate.
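A minimal sketch of this round-robin placement, assuming a hypothetical array size and striping unit:

```python
NUM_DISKS = 4            # assumed size of the disk array
STRIPE_UNIT = 4096       # assumed striping unit size in bytes

def locate(logical_unit):
    """Map a logical striping unit to (disk, byte offset on that disk), round robin."""
    disk = logical_unit % NUM_DISKS
    offset = (logical_unit // NUM_DISKS) * STRIPE_UNIT
    return disk, offset

# A large request covering many consecutive units touches every disk,
# so all NUM_DISKS disks can transfer their part in parallel.
print([locate(u) for u in range(8)])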
Redundancy:
While having more disks increases storage system performance, it also lowers overall
storage system reliability, because the probability that some disk in the array fails
increases with the number of disks.
The reliability of a disk array can be increased by storing redundant information. If a disk
fails, the redundant information is used to reconstruct the data on the failed disk.
One design issue is where to store the redundant information. There are two
choices: either store the redundant information on a small number of check disks, or
distribute it uniformly over all disks.
In a RAID system, the disk array is partitioned into reliability groups, where a reliability
group consists of a set of data disks and a set of check disks. A common redundancy
scheme is applied to each group.
RAID levels
RAID Level 0: Nonredundant
A RAID level 0 system is not a true member of the RAID family, because it does not
include redundancy, that is, no redundant information is maintained.
It uses data striping to increase I/O performance.
For RAID 0, the user and system data are distributed across all of the disks in the array,
i.e. data are striped across the available disks.
If there are two different I/O requests for two different data blocks, there is a good
probability that the requested blocks reside on different disks. Thus, the two requests can
be issued in parallel, reducing the I/O waiting time.
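A rough sanity check of that claim, assuming blocks are striped round robin and each request picks a block uniformly at random (the array size and block count are assumptions):

```python
import random

NUM_DISKS = 4   # assumed array size

def same_disk_probability(trials=100_000, num_blocks=10_000):
    """Estimate how often two randomly chosen distinct blocks land on the same disk."""
    hits = 0
    for _ in range(trials):
        a, b = random.sample(range(num_blocks), 2)
        if a % NUM_DISKS == b % NUM_DISKS:
            hits += 1
    return hits / trials

# Roughly 1/NUM_DISKS of request pairs collide on a disk, so most pairs of
# requests can be serviced in parallel.
print(same_disk_probability())
```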
RAID level 0 is a low-cost solution, but reliability is a problem since there is no
redundant information to retrieve in case of disk failure.
RAID level 0 has the best write performance of all RAID levels, because there is no need
to update redundant information.
RAID level 1: Mirrored
RAID level 1 is the most expensive solution to achieve redundancy. In this system,
two identical copies of the data are maintained on two different disks. This type of
redundancy is called mirroring.
Data striping is used here, similar to RAID 0.
Every write of a disk block involves two writes because of the mirror image of the disk block.
These writes should not be performed simultaneously, since a global system failure may
occur while writing the blocks and leave both copies in an inconsistent state.
Therefore, the block is written on one disk first, and then the other copy is written on the mirror disk.
A read of a block can be scheduled to the disk that has the smaller access time. Since
full redundant information is maintained, the disk holding the mirror copy may be a less
costly one in order to reduce the overall cost.
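A minimal sketch of the mirrored write ordering and read scheduling described above; the in-memory dictionaries standing in for disks and the access-time figures are assumptions for illustration:

```python
# Two in-memory "disks" stand in for the primary and the mirror (assumed model).
primary, mirror = {}, {}
access_time = {"primary": 8.0, "mirror": 10.0}   # assumed average access times (ms)

def mirrored_write(block_no, data):
    """Write the primary copy first, then the mirror, so a crash between the two
    writes leaves at most one copy stale instead of both in an unknown state."""
    primary[block_no] = data
    mirror[block_no] = data

def mirrored_read(block_no):
    """Schedule the read on whichever disk has the smaller access time."""
    if access_time["primary"] <= access_time["mirror"]:
        return primary[block_no]
    return mirror[block_no]

mirrored_write(7, b"hello")
print(mirrored_read(7))
```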
RAID level 2
RAID levels 2 and 3 make use of a parallel access technique in which all member disks
participate in the execution of every I/O request.
Data striping is used in RAID levels 2 and 3, but the strips are very small, often as
small as a single byte or word.
With RAID 2, an error-correcting code is calculated across corresponding bits on each
data disk, and the bits of the code are stored in the corresponding bit positions on
multiple parity disks.
RAID 2 requires fewer disks than RAID 1. The number of redundant disks is
proportional to the log of the number of data disks. For error correction, it uses a
Hamming code.
On a single read, all disks are simultaneously accessed. The requested data and the
associated error correcting code are delivered to the array controller. If there is a single
bit error, the controller can recognize and correct the error instantly, so that read access
time is not slowed down.
On a single write, all data disks and parity disks must be accessed for the write operation.
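The bit-wise error-correcting idea can be illustrated with a Hamming(7,4) code over a single bit slice; this is a simplified sketch in which each code-word position stands in for one disk, not necessarily the exact code a real RAID 2 controller would use:

```python
# Hamming(7,4): 4 data bits protected by 3 check bits, so a single failed
# bit position (i.e. a single failed disk in the slice) can be located and fixed.

def hamming74_encode(d):
    """d: list of 4 data bits -> list of 7 code bits (positions 1..7)."""
    c = [0] * 8                      # index 0 unused for 1-based positions
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]        # parity over positions with bit 1 set
    c[2] = c[3] ^ c[6] ^ c[7]        # parity over positions with bit 2 set
    c[4] = c[5] ^ c[6] ^ c[7]        # parity over positions with bit 4 set
    return c[1:]

def hamming74_correct(code):
    """Detect and fix a single-bit error; returns the corrected code word."""
    c = [0] + list(code)
    syndrome = 0
    for p in (1, 2, 4):
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= c[i]
        if parity:
            syndrome += p
    if syndrome:                     # the syndrome names the failed position
        c[syndrome] ^= 1
    return c[1:]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                         # simulate one failed bit (position 5)
print(hamming74_correct(word) == hamming74_encode([1, 0, 1, 1]))   # True
```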
RAID level 3
RAID level 3 is organized in a similar fashion to RAID level 2. The difference is that
RAID 3 requires only a single redundant disk.
RAID 3 uses parallel access, with data distributed in small strips.
Instead of an error correcting code, a simple parity bit is computed for the set of
individual bits in the same position on all of the data disks.
In the event of drive failure, the parity drive is accessed and data is reconstructed from
the remaining drives. Once the failed drive is replaced, the missing data can be restored
on the new drive.
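A minimal sketch of the parity computation and the reconstruction of a failed drive, assuming four data disks holding tiny example strips:

```python
from functools import reduce

# Four assumed data disks, each holding one small strip (equal-length bytes).
data_strips = [b"\x0f\x10", b"\xa5\x01", b"\x33\xff", b"\x42\x00"]

def xor_strips(strips):
    """Bitwise XOR of equal-length strips."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

parity = xor_strips(data_strips)          # stored on the dedicated parity disk

# Simulate losing disk 2: XORing the survivors with the parity restores its strip.
survivors = data_strips[:2] + data_strips[3:]
recovered = xor_strips(survivors + [parity])
print(recovered == data_strips[2])        # True
```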
RAID level 4
RAID levels 4 through 6 make use of an independent access technique, where each
member disk operates independently, so that separate I/O requests can be satisfied in
parallel.
Data striping is used in this scheme also, but the data strips are relatively large for
RAID levels 4 through 6.
With RAID 4, a bit-by-bit parity strip is calculated across corresponding strips on each
data disk, and the parity bits are stored in the corresponding strip on the parity disk.
RAID 4 involves a write penalty when a small I/O write request occurs. Each time such
a write occurs, both the user data and the corresponding parity bits must be updated.
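A minimal sketch of the read-modify-write parity update behind this penalty (the strip values are example assumptions): the new parity is the old parity XOR the old data XOR the new data, so a small write costs two reads and two writes.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def small_write(old_data, new_data, old_parity):
    """New parity = old parity XOR old data XOR new data.
    The small write therefore needs two reads (old data, old parity)
    and two writes (new data, new parity)."""
    return xor_bytes(xor_bytes(old_parity, old_data), new_data)

# Example: three data strips and their parity, then strip 1 is rewritten.
strips = [b"\x01", b"\x02", b"\x04"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*strips))

new_strip1 = b"\x07"
new_parity = small_write(strips[1], new_strip1, parity)
strips[1] = new_strip1
print(new_parity == bytes(a ^ b ^ c for a, b, c in zip(*strips)))   # True
```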
RAID level 5:
RAID level 5 is similar to RAID 4; the only difference is that RAID 5 distributes the
parity strips across all disks.
The distribution of parity strips across all drives avoids the potential I/O bottleneck of a
single dedicated parity disk.
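A minimal sketch of one possible rotating parity layout; the placement rule and array size here are assumptions for illustration, not the only valid RAID 5 mapping:

```python
NUM_DISKS = 5   # assumed array size

def raid5_layout(stripe):
    """For a given stripe number, return (parity_disk, data_disks).
    The parity strip rotates one disk per stripe, so no single disk
    becomes a parity hot spot."""
    parity_disk = (NUM_DISKS - 1 - stripe) % NUM_DISKS
    data_disks = [d for d in range(NUM_DISKS) if d != parity_disk]
    return parity_disk, data_disks

for stripe in range(5):
    print(stripe, raid5_layout(stripe))
```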
RAID level 6:
In RAID level 6, two different parity calculations are carried out and stored in separate
blocks on different disks.
The advantage of RAID 6 is its high data availability: data can be regenerated even if
two disks containing user data fail. This is possible due to the use of a Reed-Solomon
code for the parity calculations.
In RAID 6, there is a write penalty, because each write affects two parity blocks.
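A minimal sketch of dual parity in the common P+Q style, where P is plain XOR parity and Q is a weighted sum over GF(2^8); the specific scheme is an assumption (the text only names a Reed-Solomon code), and the two-failure reconstruction itself is omitted for brevity:

```python
def gf_mul2(x):
    """Multiply by 2 in GF(2^8), reducing by the polynomial x^8+x^4+x^3+x^2+1."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
    return x & 0xFF

def pq_parity(strips):
    """Compute the two parity bytes P (plain XOR) and Q (GF(2^8) weighted sum)
    for one byte position across the data disks."""
    p, q = 0, 0
    for byte in reversed(strips):     # Horner's rule: q = d0 + 2*d1 + 4*d2 + ...
        p ^= byte
        q = gf_mul2(q) ^ byte
    return p, q

data = [0x0F, 0xA5, 0x33, 0x42]       # assumed one byte from each of 4 data disks
p, q = pq_parity(data)
print(hex(p), hex(q))

# Because P and Q are two independent equations, any two lost values among the
# data bytes (or one data byte plus one parity) can be solved for; every write
# must update both P and Q, which is the RAID 6 write penalty mentioned above.
```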