
Operating System

UNIT III
Topics
Deadlocks
What is a deadlock?
Every process needs some resources to complete its execution, and resources are granted in a sequential order:
1. The process requests a resource.
2. The OS grants the resource if it is available; otherwise, the process waits.
3. The process uses the resource and releases it on completion.

A deadlock is a situation where each process waits for a resource that is assigned to another process. In this situation, none of the processes can execute, since the resource each one needs is held by some other process that is itself waiting for a resource to be released.
Example:

Let us assume that there are three processes P1, P2 and P3, and three different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 requests R2, which is being used by P2. P1 halts its execution since it can't complete without R2. P2 in turn requests R3, which is being used by P3, so P2 also stops because it can't continue without R3. Finally, P3 requests R1, which is being used by P1, so P3 stops as well.
In this scenario, a cycle is formed among the three processes. None of them makes progress; they are all waiting. The computer becomes unresponsive since all the processes are blocked.
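The same circular wait can be reproduced in a few lines of code. The sketch below is illustrative (the sleep() is only there to force the unlucky interleaving): two POSIX threads each hold one mutex and then wait forever for the other.

```c
/* Minimal deadlock demonstration: two threads acquire two locks
 * in opposite order. Each ends up holding one lock while waiting
 * forever for the other -- a circular wait. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void *worker_a(void *arg) {
    pthread_mutex_lock(&r1);   /* A holds R1 ...              */
    sleep(1);                  /* ... long enough for B to run */
    pthread_mutex_lock(&r2);   /* ... then waits on R2 forever */
    printf("A finished\n");
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *worker_b(void *arg) {
    pthread_mutex_lock(&r2);   /* B holds R2 ...               */
    sleep(1);
    pthread_mutex_lock(&r1);   /* ... then waits on R1 forever */
    printf("B finished\n");
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker_a, NULL);
    pthread_create(&b, NULL, worker_b, NULL);
    pthread_join(a, NULL);     /* never returns: deadlock */
    pthread_join(b, NULL);
    return 0;
}
```

Compiled with `gcc demo.c -lpthread`, both joins block forever: all four Coffman conditions hold.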
System Model
A system model or structure consists of a fixed number of resources to be distributed among competing processes. The resources are partitioned into several types, each consisting of some specific number of identical instances. Memory space, CPU cycles, directories and files, and I/O devices like keyboards, printers and CD-DVD drives are prime examples of resource types. If a system has two CPUs, then the resource type CPU has two instances.
• Under the standard mode of operation, any process may use a resource only in the following sequence:
1. Request: If the request cannot be granted immediately (for example, because another process is using the resource), the requesting process must wait until it can obtain the resource.
2. Use: The process operates on the resource (for example, if the resource is a printer, the process prints on it).
3. Release: The process releases the resource (for example, when it terminates or exits).
Deadlock Characterization
A deadlock occurs if the four Coffman conditions hold simultaneously. These conditions are not mutually exclusive. They are given as follows:

Mutual Exclusion
There should be a resource that can only be held by one process at a
time. In the diagram below, there is a single instance of Resource 1 and it
is held by Process 1 only.
Hold and Wait
A process can hold multiple resources and still request more resources held by other processes. In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.
No Preemption
A resource cannot be preempted from a process by force. A process
can only release a resource voluntarily. In the diagram below, Process 2
cannot preempt Resource 1 from Process 1. It will only be released
when Process 1 relinquishes it voluntarily after its execution is
complete.
Circular Wait
A process is waiting for the resource held by a second process, which is waiting for the resource held by a third process, and so on, until the last process is waiting for a resource held by the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.
Handling Deadlocks
There are three approaches to deal with deadlocks:

1. Deadlock prevention
2. Deadlock avoidance
3. Deadlock detection
Deadlock Prevention
The strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is excluded. Indirect methods prevent the occurrence of one of the three necessary conditions of deadlock: mutual exclusion, no preemption, or hold and wait. Direct methods prevent the occurrence of circular wait.

Prevention techniques:

Mutual exclusion – this condition is inherent in non-sharable resources and is supported by the OS; it generally cannot be denied.

Hold and wait – this condition can be prevented by requiring that a process request all of its required resources at one time, blocking the process until all of the requests can be granted simultaneously. This prevention does not yield good results, however, because:

o long waiting times are required
o allocated resources are used inefficiently
o a process may not know all of its required resources in advance
No preemption – techniques for no preemption are:

• If a process holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released and, if necessary, requested again together with the additional resource.
• If a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources. This works only if the two processes do not have the same priority.

Circular wait – one way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in increasing order of enumeration; i.e., if a process has been allocated resources of type R, then it may subsequently request only resources of types that follow R in the ordering, as the sketch below illustrates.
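In code, the total ordering is typically realized by numbering the locks and always acquiring them in ascending order. A minimal sketch, assuming each resource carries an illustrative rank field (ranks are assumed distinct):

```c
/* Circular-wait prevention by total resource ordering:
 * every thread must acquire mutexes in ascending rank order,
 * so a cycle of waits can never form. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    int rank;                      /* position in the global order */
} ranked_lock_t;

/* Acquire two locks in rank order, regardless of argument order. */
void lock_pair(ranked_lock_t *a, ranked_lock_t *b) {
    if (a->rank > b->rank) { ranked_lock_t *t = a; a = b; b = t; }
    pthread_mutex_lock(&a->m);     /* lower rank first... */
    pthread_mutex_lock(&b->m);     /* ...then higher rank */
}

void unlock_pair(ranked_lock_t *a, ranked_lock_t *b) {
    pthread_mutex_unlock(&a->m);
    pthread_mutex_unlock(&b->m);
}
```

Because every thread climbs the ranks in the same direction, no thread can ever hold a higher-ranked lock while waiting for a lower-ranked one, which rules out the cycle.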
Deadlock Avoidance
This approach allows the three necessary conditions of deadlock but makes judicious choices to ensure that a deadlock state is never reached; it allows more concurrency than prevention. A decision is made dynamically whether the current resource-allocation request will, if granted, potentially lead to deadlock. This requires knowledge of future process requests. There are two techniques to avoid deadlock:

1. Process initiation denial
2. Resource allocation denial (sketched below)
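Resource allocation denial is classically implemented with Banker's algorithm, which grants a request only if the resulting state is safe. Below is a compact sketch of the safety check only; the array sizes and the matrix encoding are illustrative assumptions:

```c
/* Banker's-algorithm safety check: the state is safe if every
 * process can finish in some order using Available plus the
 * resources released by processes that finished before it. */
#include <stdbool.h>

#define NPROC 3
#define NRES  2

bool is_safe(int avail[NRES],
             int alloc[NPROC][NRES],
             int need[NPROC][NRES]) {
    int work[NRES];
    bool finished[NPROC] = { false };
    for (int r = 0; r < NRES; r++) work[r] = avail[r];

    for (int done = 0; done < NPROC; ) {
        bool progress = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {       /* p can finish: reclaim its resources */
                for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;   /* no process can finish: unsafe */
    }
    return true;
}
```

To evaluate a request, one would tentatively subtract it from Available, add it to the process's Allocation, subtract it from its Need, and grant the request only if is_safe() still returns true.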
Advantages of deadlock avoidance techniques:

• No need to preempt and roll back processes
• Less restrictive than deadlock prevention

Disadvantages:

• Future resource requirements must be known in advance
• Processes can be blocked for long periods
• There must be a fixed number of resources to allocate
Deadlock Detection
Deadlock detection employs an algorithm that tracks circular waiting and kills one or more processes so that the deadlock is removed. The system state is examined periodically to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing all the resources that the process held.

• This technique does not limit resource access or restrict process actions.
• Requested resources are granted to processes whenever possible.
• It never delays process initiation and facilitates online handling.
• The disadvantage is the inherent preemption losses.

Detection & Recovery from Deadlock
1. If resources have a single instance –
In this case, deadlock can be detected by running an algorithm that checks for a cycle in the resource-allocation graph. The presence of a cycle in the graph is a sufficient condition for deadlock.

In the diagram, Resource 1 and Resource 2 have single instances, and there is a cycle R1 → P1 → R2 → P2 → R1. So, deadlock is confirmed.
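For single-instance resources, the detector reduces to a cycle search on the graph. A minimal DFS sketch over an adjacency-matrix encoding (the encoding and node count are illustrative assumptions; processes and resources share one node space):

```c
/* Deadlock detection for single-instance resources:
 * DFS cycle search on the resource-allocation graph.
 * Nodes 0..N-1 are processes and resources alike;
 * adj[u][v] = 1 means an assignment or request edge u -> v. */
#include <stdbool.h>

#define N 4   /* e.g. P1, P2, R1, R2 */

static bool dfs(int u, int adj[N][N], int state[N]) {
    state[u] = 1;                          /* on the current DFS path */
    for (int v = 0; v < N; v++) {
        if (!adj[u][v]) continue;
        if (state[v] == 1) return true;    /* back edge: cycle found */
        if (state[v] == 0 && dfs(v, adj, state)) return true;
    }
    state[u] = 2;                          /* fully explored */
    return false;
}

bool has_deadlock(int adj[N][N]) {
    int state[N] = { 0 };                  /* 0 = unvisited */
    for (int u = 0; u < N; u++)
        if (state[u] == 0 && dfs(u, adj, state)) return true;
    return false;
}
```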
2. If there are multiple instances of resources –
Detection of a cycle is a necessary but not a sufficient condition for deadlock; the system may or may not be in deadlock, depending on the situation.

• Deadlock Recovery:
A traditional operating system such as Windows doesn't deal with deadlock recovery, as it is a time- and space-consuming process. Real-time operating systems use deadlock recovery.

1. Killing processes –
Either kill all the processes involved in the deadlock, or kill them one by one: after killing each process, check for deadlock again and keep repeating until the system recovers. Killing the processes one by one helps the system break the circular-wait condition.

2. Resource preemption –
Resources are preempted from the processes involved in the deadlock, and the preempted resources are allocated to other processes, so that there is a possibility of recovering the system from deadlock. In this case, however, the system may go into starvation.
When a deadlock detection algorithm determines that a deadlock has occurred in the system, the system must recover from that deadlock. There are two approaches to breaking a deadlock:

1. Process Termination:
To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods:
• (a) Abort all the deadlocked processes:
Aborting all the processes will certainly break the deadlock, but at great expense. The deadlocked processes may have computed for a long time, and the results of those partial computations must be discarded and probably recomputed later.

• (b) Abort one process at a time until the deadlock is eliminated:
Abort one deadlocked process at a time until the deadlock cycle is eliminated from the system. This method may incur considerable overhead, because after aborting each process we have to run the deadlock detection algorithm again to check whether any processes are still deadlocked.
2. Resource Preemption:
To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. This method raises three issues:

(a) Selecting a victim:
We must determine which resources and which processes are to be preempted, and in what order, so as to minimize the cost.

(b) Rollback:
We must determine what should be done with a process from which resources are preempted. One simple idea is total rollback: abort the process and restart it.

(c) Starvation:
It may happen that the same process is always picked as a victim, so that it never completes its designated task. This situation is called starvation and must be avoided. One solution is to pick a process as a victim only a finite number of times.
Device Management

[Diagram: the operating system's component managers – memory manager, file system manager, process manager, device manager, network manager, and security manager.]
Device Management Technique
An operating system manages communication with devices through their respective drivers. This operating-system component provides a uniform interface to access devices of varied physical attributes. Device management in an operating system involves:

• Keeping track of all devices; the program responsible for this is called the I/O controller.
• Monitoring the status of each device, such as storage drives, printers and other peripheral devices.
• Enforcing preset policies and deciding which process gets the device, when, and for how long.
• Allocating and deallocating devices efficiently. Deallocation happens at two levels: at the process level, when an I/O command has been executed and the device is temporarily released, and at the job level, when the job is finished and the device is permanently released.
• Optimizing the performance of individual devices.
Types of Devices
Peripheral devices can be categorized into three types:

• Dedicated devices
• Shared devices
• Virtual devices

The differences among them are a function of the characteristics of the devices as well as how they are managed by the Device Manager.
Dedicated Devices
These devices are dedicated, or assigned, to only one job at a time, until that job releases them.

Devices like printers, tape drives and plotters demand such an allocation scheme, since it would be awkward for several users to share them at the same time.

The disadvantage of such devices is the inefficiency that results from allocating the device to a single user for the entire duration of job execution, even though the device is not in use 100% of the time.
Shared Devices
These devices can be allocated to several processes. A disk (DASD) can be shared among several processes at the same time by interleaving their requests. The interleaving is carefully controlled by the Device Manager, and all issues must be resolved on the basis of predetermined policies.

Virtual Devices
These devices are a combination of the first two types: they are dedicated devices transformed into shared devices. For example, a printer can be converted into a shareable device via a spooling program that re-routes all print requests to a disk. A print job is not sent straight to the printer; instead, it goes to the disk (spool) until it is fully prepared with all the necessary sequencing and formatting, and then it goes to the printer. This technique can transform one printer into several virtual printers, which leads to better performance and utilization.
Input/Output Devices
Input/output devices are the devices responsible for the input/output operations in a computer system.

There are two basic types of input/output devices:

• Block devices
• Character devices
Block Devices
A block device stores information in fixed-size blocks, each with its own address.

Each block can be read or written independently of the others.

In the case of a disk, it is always possible to seek to another cylinder and then wait for the required block to rotate under the head, no matter where the arm currently is. A disk is therefore a block-addressable device.
Character Devices
A character device accepts or delivers a stream of characters without regard to any block structure.

A character device is not addressable.

A character device does not have a seek operation.

Many character devices are present in a computer system; printers, mice, keyboards and network interfaces are common examples.
Storage Devices
There are two types of storage devices:

Volatile storage device –
It loses its contents when the power to the device is removed.

Non-volatile storage device –
It does not lose its contents when power is removed; it retains all its data.

Secondary storage is used as an extension of main memory, and secondary storage devices can hold data permanently.

Storage systems include registers, cache, main memory, electronic disks, magnetic disks, optical disks and magnetic tapes. Each storage system provides the basic facility of storing a datum and holding it until it is retrieved at a later time. The storage devices differ in speed, cost, size and volatility. The most common secondary-storage device is the magnetic disk, which provides storage for both programs and data.
In this hierarchy, all the storage devices are arranged according to speed and cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time generally increases.

The storage systems above the electronic disk are volatile, whereas those below it are non-volatile.

An electronic disk can be designed to be either volatile or non-volatile. During normal operation, the electronic disk stores data in a large DRAM array, which is volatile. But many electronic-disk devices contain a hidden magnetic hard disk and a battery for backup power. If external power is interrupted, the electronic-disk controller copies the data from RAM to the magnetic disk. When external power is restored, the controller copies the data back into the RAM.

The design of a complete memory system must balance all these factors: it should use only as much expensive memory as necessary, while providing as much inexpensive, non-volatile memory as possible. Caches can be installed to improve performance where a large access-time or transfer-rate disparity exists between two components.
Buffering
A buffer is a memory area that stores data being transferred between two devices or between a device and an application.

Uses of I/O buffering:

• Buffering is done to deal effectively with a speed mismatch between the producer and consumer of a data stream.
• A buffer is set up in main memory to accumulate the bytes received from, say, a modem.
• After the data is received in the buffer, it is transferred to disk in a single operation.
• This transfer is not instantaneous, so the modem needs another buffer in which to store additional incoming data.
• When the first buffer fills up, a request is made to transfer its data to disk.
• The modem then starts filling the second buffer with the additional incoming data while the data in the first buffer is being transferred to disk.
• When both buffers have completed their tasks, the modem switches back to the first buffer while the data from the second buffer is transferred to disk.
• The use of two buffers decouples the producer and the consumer of the data, thus relaxing the timing requirements between them.
• Buffering also accommodates devices that have different data-transfer sizes.
Types of I/O buffering techniques:
1. Single buffer:

The operating system assigns one buffer in the system portion of main memory.

Block-oriented device –

• The system buffer takes the input.
• After taking the input, the block is transferred to user space by the process, and the process then requests another block.
• Two blocks are in flight at once: while one block of data is processed by the user process, the next block is being read in.
• The OS can swap the processes.
• The OS can copy the data from the system buffer to the user process.

Stream-oriented device –

• Line-at-a-time operation is used for scroll-mode terminals. The user inputs one line at a time, with a carriage return signaling the end of the line.
• Byte-at-a-time operation is used on forms-mode terminals, where each keystroke is significant.
2. Double buffer:

Block-oriented –

• There are two buffers in the system.
• One buffer is used by the driver or controller to store data while waiting for it to be taken by a higher level of the hierarchy.
• The other buffer is used to store data from the lower-level module.
• Double buffering is also known as buffer swapping.
• A major disadvantage of double buffering is the increased complexity of the process.
• If the process performs rapid bursts of I/O, double buffering may still be insufficient.
Stream-oriented –

• For line-at-a-time I/O, the user process need not be suspended for input or output unless it runs ahead of the double buffer.
• For byte-at-a-time operation, the double buffer offers no advantage over a single buffer of twice the length.

A sketch of the block-oriented buffer-swapping logic follows.
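The following is a minimal single-threaded sketch of block-oriented double buffering; device_read() and process_block() are hypothetical helpers standing in for the driver and the consumer, and in a real driver the fill and drain would overlap (e.g. DMA alongside CPU processing) rather than run sequentially:

```c
/* Double buffering: the device driver fills one buffer while the
 * consumer drains the other; their roles swap on every iteration. */
#include <stddef.h>

#define BUFSZ 4096

static char buf[2][BUFSZ];
static int filling = 0;           /* index of the buffer being filled */

/* Hypothetical helpers: device_read() returns bytes read (0 at EOF),
 * process_block() consumes one filled buffer. */
extern size_t device_read(char *dst, size_t max);
extern void   process_block(const char *src, size_t len);

void transfer_loop(void) {
    size_t len = device_read(buf[filling], BUFSZ);   /* prime buffer 0 */
    while (len > 0) {
        int draining = filling;
        filling = 1 - filling;    /* swap the roles of the two buffers */
        /* In a real driver these two steps run concurrently;
         * sequential calls here just show the swapping logic. */
        size_t next = device_read(buf[filling], BUFSZ);
        process_block(buf[draining], len);
        len = next;
    }
}
```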
3. Circular buffer:

• When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer.
• Here, data is not passed directly from the producer to the consumer, since without this control a buffer could be overwritten before it had been consumed.
• The producer can fill only up to buffer i-1 while the data in buffer i is waiting to be consumed.
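A minimal sketch of such a circular buffer for a single producer and a single consumer (the slot count, slot size, and true/false interface are illustrative assumptions; a real kernel would add locking or atomic indices):

```c
/* Circular (ring) buffer of NBUF fixed-size slots. The producer
 * may fill up to slot i-1 while slot i still awaits the consumer. */
#include <stdbool.h>

#define NBUF  8
#define BUFSZ 512

static char slots[NBUF][BUFSZ];
static int head = 0;   /* next slot the producer will fill  */
static int tail = 0;   /* next slot the consumer will drain */

bool ring_put(const char *data, int len) {
    if ((head + 1) % NBUF == tail)    /* full: producer must wait */
        return false;
    for (int i = 0; i < len && i < BUFSZ; i++)
        slots[head][i] = data[i];
    head = (head + 1) % NBUF;
    return true;
}

bool ring_get(char *out, int len) {
    if (tail == head)                 /* empty: consumer must wait */
        return false;
    for (int i = 0; i < len && i < BUFSZ; i++)
        out[i] = slots[tail][i];
    tail = (tail + 1) % NBUF;
    return true;
}
```

Keeping one slot permanently empty is what enforces the "fill only up to buffer i-1" rule from the list above.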
Secondary Storage Structure

Secondary storage devices are those whose memory is non-volatile, meaning the stored data remain intact even if the system is turned off. A few things worth noting about secondary storage:

• Secondary storage is also called auxiliary storage.
• Secondary storage is less expensive than primary memory such as RAM.
• The speed of secondary storage is also lower than that of primary storage.
• Hence, data that is accessed less frequently is kept in secondary storage.
• A few examples are magnetic disks, magnetic tapes, removable thumb drives, etc.
Magnetic Disk Structure

In modern computers, most secondary storage is in the form of magnetic disks. Hence, knowing the structure of a magnetic disk is necessary to understand how the data on the disk is accessed by the computer.
Structure of a magnetic disk
A magnetic disk contains several platters. Each platter is divided into circular tracks. The length of the tracks near the centre is less than the length of the tracks farther from the centre. Each track is further divided into sectors, as shown in the figure.

Tracks at the same distance from the centre form a cylinder. A read-write head is used to read data from a sector of the magnetic disk.

The speed of the disk is measured in two parts:

• Transfer rate: the rate at which data moves from the disk to the computer.
• Random access time: the sum of the seek time and the rotational latency.

Seek time is the time taken by the arm to move to the required track. Rotational latency is the time taken for the required sector to rotate under the head.

Even though the disk is physically arranged as sectors and tracks, the data is logically arranged and addressed as an array of fixed-size blocks. The size of a block can be 512 or 1024 bytes. Each logical block is mapped to a sector on the disk, sequentially. In this way, each sector on the disk has a logical address.
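The sequential mapping can be made concrete with the classic logical-block-address (LBA) to cylinder/head/sector arithmetic; the geometry constants below are illustrative assumptions, not values from the text:

```c
/* Logical block address (LBA) to cylinder/head/sector mapping,
 * assuming the classic sequential layout. Sectors are numbered
 * from 0 here; the geometry values are illustrative. */
#include <stdio.h>

#define HEADS             16   /* read-write heads (surfaces) */
#define SECTORS_PER_TRACK 63   /* sectors on each track       */

void lba_to_chs(long lba, long *cyl, long *head, long *sec) {
    *cyl  = lba / (HEADS * SECTORS_PER_TRACK);
    *head = (lba / SECTORS_PER_TRACK) % HEADS;
    *sec  = lba % SECTORS_PER_TRACK;
}

int main(void) {
    long c, h, s;
    lba_to_chs(100000, &c, &h, &s);
    printf("LBA 100000 -> cylinder %ld, head %ld, sector %ld\n", c, h, s);
    return 0;
}
```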

Disk Scheduling Algorithms

On a typical multiprogramming system there will usually be multiple disk-access requests pending at any point in time, so those requests must be scheduled to achieve good efficiency. Disk scheduling is similar to process scheduling. Some of the disk scheduling algorithms are described below.
First Come First Serve (FCFS)
This algorithm serves requests in the order in which they arrive. Let's take an example where the queue holds requests for the following cylinder numbers:

98, 183, 37, 122, 14, 124, 65, 67

Assume the head is initially at cylinder 56. The head then moves in the order given in the queue, i.e., 56→98→183→…→67.
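A quick way to compare the scheduling algorithms is to total the head movement. The self-contained sketch below does this for FCFS with the queue and starting cylinder from the example:

```c
/* FCFS disk scheduling: service requests in arrival order and
 * total up the head movement. For this queue with the head at
 * cylinder 56, the total comes to 637 cylinders. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof queue / sizeof queue[0];
    int head = 56, total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);   /* distance to next request */
        head = queue[i];
    }
    printf("Total head movement: %d cylinders\n", total);
    return 0;
}
```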
Shortest Seek Time First (SSTF)
Here the request closest to the current head position is chosen first. Consider the previous example, where the disk queue looks like:

98, 183, 37, 122, 14, 124, 65, 67

Assume the head is initially at cylinder 56. The closest cylinder to 56 is 65, the next nearest is 67, then 37, then 14, and so on.
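The same harness can be adapted to SSTF by greedily picking the nearest pending request at each step; this sketch reproduces the service order from the example:

```c
/* SSTF disk scheduling: repeatedly pick the pending request
 * closest to the current head position. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int main(void) {
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof queue / sizeof queue[0];
    bool done[8] = { false };
    int head = 56, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)      /* find nearest pending request */
            if (!done[i] && (best < 0 ||
                abs(queue[i] - head) < abs(queue[best] - head)))
                best = i;
        total += abs(queue[best] - head);
        head = queue[best];
        done[best] = true;
        printf("%d ", head);   /* service order: 65 67 37 14 98 122 124 183 */
    }
    printf("\nTotal head movement: %d cylinders\n", total);
    return 0;
}
```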
SCAN algorithm

This algorithm is also called the elevator algorithm because of its behavior. First the head moves in one direction (say backward, toward lower cylinder numbers) and serves all the requests in its path; then it moves in the opposite direction and serves the remaining requests in its path, much like an elevator. Let's take the previous example:

98, 183, 37, 122, 14, 124, 65, 67

Assume the head is initially at cylinder 56. The head moves in the backward direction and accesses 37 and 14. It then reverses and accesses the remaining cylinders as they come in its path.
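A sketch of the two sweeps follows. Note a caveat: strictly, SCAN runs all the way to the end of the disk before reversing; reversing at the last request, as here, is the LOOK variant, which matches the service order described in the example:

```c
/* SCAN/LOOK (elevator): sweep the head toward cylinder 0 first,
 * servicing requests on the way, then reverse and sweep upward. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof queue / sizeof queue[0];
    int head = 56, total = 0, pos = head;

    qsort(queue, n, sizeof queue[0], cmp);   /* sort cylinders ascending */

    /* Sweep down: requests below the head, nearest first. */
    for (int i = n - 1; i >= 0; i--)
        if (queue[i] < head) {
            total += pos - queue[i];
            pos = queue[i];
            printf("%d ", pos);              /* 37 14 */
        }
    /* Sweep up: remaining requests in ascending order. */
    for (int i = 0; i < n; i++)
        if (queue[i] >= head) {
            total += abs(queue[i] - pos);
            pos = queue[i];
            printf("%d ", pos);              /* 65 67 98 122 124 183 */
        }
    printf("\nTotal head movement: %d cylinders\n", total);
    return 0;
}
```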
Disk Management
The operating system is responsible for several aspects of disk management.
Disk Formatting
A new magnetic disk is a blank slate. It is just platters of a magnetic recording material. Before a disk can
store data, it must be divided into sectors that the disk controller can read and write. This process is
called low-level formatting (or physical formatting).

Low-level formatting fills the disk with a special data structure for each sector. The data structure for a
sector consists of a header, a data area, and a trailer. The header and trailer contain information used by
the disk controller, such as a sector number and an error-correcting code (ECC).

To use a disk to hold files, the operating system still needs to record its own data structures on the disk.
It does so in two steps. The first step is to partition the disk into one or more groups of cylinders. The
operating system can treat each partition as though it were a separate disk. For instance, one partition
can hold a copy of the operating system’s executable code, while another holds user files. After
partitioning, the second step is logical formatting (or creation of a file system). In this step, the operating
system stores the initial file-system data structures onto the disk.
Boot block
When a computer is powered up or rebooted, it needs to have an initial program to run. This initial
program is called the bootstrap program. It initializes all aspects of the system (i.e. from CPU registers to
device controllers and the contents of main memory) and then starts the operating system.

To do its job, the bootstrap program finds the operating system kernel on disk, loads that kernel into
memory, and jumps to an initial address to begin the operating-system execution.

For most computers, the bootstrap is stored in read-only memory (ROM). This location is convenient
because ROM needs no initialization and is at a fixed location that the processor can start executing when
powered up or reset. And since ROM is read-only, it cannot be infected by a computer virus. The problem
is that changing this bootstrap code requires changing the ROM hardware chips.

For this reason, most systems store a tiny bootstrap loader program in the boot ROM, whose only job is to bring in a full bootstrap program from disk. The full bootstrap program can be changed easily: a new version is simply written onto the disk. The full bootstrap program is stored at a fixed location on the disk, in a partition called the boot blocks. A disk that has a boot partition is called a boot disk or system disk.
Bad Blocks

Since disks have moving parts and small tolerances, they are prone to failure. Sometimes the
failure is complete, and the disk needs to be replaced, and its contents restored from backup
media to the new disk.

More frequently, one or more sectors become defective. Most disks even come from the
factory with bad blocks. Depending on the disk and controller in use, these blocks are handled
in a variety of ways.

The controller maintains a list of bad blocks on the disk. The list is initialized during the low-
level format at the factory and is updated over the life of the disk. The controller can be told to
replace each bad sector logically with one of the spare sectors. This scheme is known as sector
sparing or forwarding.
Swap-Space Management

Swapping is a memory-management technique used in multiprogramming to increase the number of processes sharing the CPU. It is a technique of removing a process from main memory, storing it in secondary memory, and then bringing it back into main memory for continued execution. Moving a process out of main memory to secondary memory is called swap out, and moving a process from secondary memory back into main memory is called swap in.

Swap space:
The area on the disk where swapped-out processes are stored is called swap space.

Swap-space management:
Swap-space management is another low-level task of the operating system. Disk space is used as an extension of main memory by the virtual-memory system. Since disk access is much slower than memory access, using disk space as swap space significantly decreases system performance. Because we want the best possible throughput, the goal of swap-space implementation is to give the virtual-memory system the best throughput.
Swap-space use:

Swap space is used by different operating systems in various ways. Systems that implement swapping may use swap space to hold an entire process image, including its code and data segments. Paging systems may simply store pages that have been pushed out of main memory. The amount of swap space needed on a system can vary from a few megabytes to gigabytes, depending on the amount of physical memory, the amount of virtual memory it is backing, and the way in which the virtual memory is used.

A swap space can reside in one of two places:

• the normal file system
• a separate disk partition
Note
Suppose the swap space is simply a large file within the file system. Normal file-system routines can be used to create it, name it, and allocate its space. This approach, though easy to implement, is inefficient: navigating the directory structures and the disk-allocation data structures takes time and extra disk accesses, and external fragmentation can greatly increase swapping times by forcing multiple seeks while a process image is read or written.

The alternative is to create the swap space in a separate raw partition. No file system is present there; instead, a swap-space storage manager is used to allocate and deallocate blocks from the raw partition. It uses algorithms optimized for speed rather than storage efficiency, since swap space is accessed much more frequently than the file system. Internal fragmentation may increase, but this is acceptable because data in swap space lives for a much shorter time than files in the file system. The raw-partition approach creates a fixed amount of swap space when the disk is partitioned.

Some operating systems are flexible and can swap both in raw partitions and in file-system space; Linux is one example.
Disk Reliability
Reliability is the ability of the disk system to accommodate a single- or multi-disk failure and still
remain available to the users.

Adding redundancy almost always increases the reliability of the disk system. The most
common way to add redundancy is to implement a Redundant Array of Inexpensive Disks
(RAID).

There are two types of RAID:

• Hardware — The most commonly used hardware RAID levels are: RAID 0, RAID 1, RAID 5,
and RAID 10. The main differences between these RAID levels focus on reliability and
performance as previously defined.

• Software — Software RAID can be less expensive. However, it is almost always much slower
than hardware RAID, because it places a burden on the main system CPU to manage the
extra disk I/O.
The different hardware RAID types are as follows:

RAID 0 (Striping) — RAID 0 has the following characteristics:


High performance — Performance benefit for randomized reads and writes
Low reliability — No failure protection
Increased risk — If one disk fails, the entire set fails
The disks work together to send information to the user. While this arrangement does help
performance, it can cause a potential problem. If one disk fails, the entire file system is
corrupted.
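The striping arrangement can be made concrete with the block-to-disk mapping; the sketch below assumes per-block round-robin striping and a four-disk array, both illustrative (real arrays usually stripe in larger chunks):

```c
/* RAID 0 striping: data blocks are distributed round-robin across
 * the disks in the set, so block N lives on disk N mod ndisks. */
#include <stdio.h>

int main(void) {
    int ndisks = 4;                         /* illustrative array size */
    for (long block = 0; block < 8; block++) {
        int  disk   = block % ndisks;       /* which disk holds it  */
        long offset = block / ndisks;       /* stripe index on disk */
        printf("block %ld -> disk %d, stripe %ld\n", block, disk, offset);
    }
    return 0;
}
```

This mapping also makes the risk plain: every disk holds an interleaved share of the data, so losing any one disk destroys the whole set.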

RAID 1 (Mirroring) — RAID 1 has the following characteristics:


Medium performance — Superior to conventional disks due to "optimistic read"
Expensive — Requires twice as many disks to achieve the same storage, and also requires twice
as many controllers if you want redundancy at that level
High reliability — Can lose a disk without an outage
Good for sequential reads and writes — The layout of the disk and the layout of the data are
sequential, promoting a performance benefit, provided you can isolate a sequential file to a
mirror pair
In a two disk RAID 1 system, the first disk is the primary disk and the second disk acts as the
parity, or mirror disk. The role of the parity disk is to keep an exact synchronous copy of all the
information stored on the primary disk. If the primary disk fails, the information can be retrieved
from the parity disk.

Be sure that your disks are able to be hot swapped so repairs can be made without bringing
down the system. Remember that there is a performance penalty during the resynchronization
period of the disks.

On a read, the disk that has its read/write heads positioned closer to the data will retrieve
information. This data retrieval technique is known as an optimistic read. An optimistic read can
provide a maximum of 15 percent improvement in performance over a conventional disk. When
setting up mirrors, it is important to consider which physical disks are being used for primary and
parity information, and to balance the I/O across physical disks rather than logical disks.

RAID 10 or 1+0 — RAID 10 has the following characteristics:


High reliability — Provides mirroring and striping
High performance — Good for randomized reads and writes.
Low cost — No more expensive than RAID 1 mirroring
RAID 10 resolves the reliability problem of striping by adding mirroring to the equation.
Note: If you are implementing a RAID solution, Progress Software Corporation recommends
RAID 10.

RAID 5 — RAID 5 has the following characteristics:


High reliability — Provides good failure protection
Low performance — Performance is poor for writes due to the parity's construction
Absorbed state — Running in an absorbed state provides diminished performance throughout
the application because the information must be reconstructed from parity

CAUTION: Progress Software Corporation recommends not using RAID 5 for database systems.

It is possible to have both high reliability and high performance. However, the cost of a system that delivers both of these characteristics is higher than that of a system that delivers only one of the two.
Thank you
P.S.
**For further reference and more in-depth knowledge of the topics covered, you can follow up on the following links, from which this material has been taken:
• Unit-4: Device management – B.C.A study (bcastudyguide.com)
• Operating Systems: Device Management (slideshare.net)
• OS Deadlocks Introduction – javatpoint
• Introduction of Deadlock in Operating System - GeeksforGeeks

**This presentation has been created for the sole purpose of education as directed
by our teacher. It was a team effort with all the members having contributed.
From Roll No _ 41 to Roll No_60
