Chapter 5: Input/Output Management

INPUT/OUTPUT

Input/output or I/O is the communication between an information processing system, such as a


computer, and the outside world, possibly a human or another information processing system.
Inputs are the signals or data received by the system and outputs are the signals or data sent from
it. The term can also be used as part of an action; to "perform I/O" is to perform an input or
output operation.

Besides providing abstractions like processes, address spaces and files, an OS also controls all
of the computer's I/O devices.

PRINCIPLES OF I/O HARDWARE


• Different people look at I/O hardware differently
• Electrical engineers look in terms of chips, wires, motors, power supplies and other physical
components that make up the hardware
• Programmers look at the interface presented to the software (the commands the hardware
accepts, the functions it carries out and the errors that can be reported back)
• We are concerned with programming I/O devices, not designing, building or maintaining
them.
• Our interest is restricted to how the hardware is programmed, not how it works inside.

I/O devices

I/O devices are the pieces of hardware used by a human (or other system) to communicate with a
computer. For instance, a keyboard or computer mouse is an input device for a computer, while
monitors and printers are output devices. Devices for communication between computers, such
as modems and network cards, typically perform both input and output operations.
I/O devices can be roughly divided into:

i. Block devices
 stores information in a fixed sized block, each one with its own address.
 common block size ranges from 512 bytes to 32,768 bytes
 data transfer takes place in blocks
 block addressable not byte addressable
 e.g. Hard disks, CD-ROMs, USB sticks etc.

Compiled by : Ravi Nandan Karn


ii. Character devices
 delivers or accepts a stream of characters, without regard to any block structure
 they are not addressable
 e.g. printers, network interfaces, mice etc.

DEVICE CONTROLLERS
• A device controller is a piece of hardware that receives commands from the system bus,
translates them into device actions and reads/writes the data onto the system bus.
• I/O devices typically consist of two components: a mechanical component and an electronic component
• The electronic component is called the device controller or the adapter
• A device controller is a part of a computer system that makes sense of the signals going to,
and coming from the CPU
• There are many device controllers in a computer system
• Any device connected to the computer is connected by a plug and socket, and the socket is
connected to a device controller
• In a personal computer, the device controller usually takes the form of a chip on the motherboard
• Many controllers can handle two, four or even eight identical devices

Memory-mapped I/O

 Memory-mapped I/O uses the same address bus to address both memory and I/O devices
 The memory and registers of the I/O devices are mapped to (associated with) address values.
 When an address is accessed by the CPU, it may refer to a portion of physical RAM, but it
can also refer to memory of the I/O device.
 The CPU instructions used to access the memory can also be used for accessing devices.
 Each I/O device monitors the CPU's address bus and responds to any CPU access of an
address assigned to that device, connecting the data bus to the desired device's hardware
register.
 To accommodate the I/O devices, areas of the addresses used by the CPU must be reserved
for I/O and must not be available for normal physical memory
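The routing described above can be illustrated with a toy model in which one load/store path serves both RAM and a reserved device-register range. All addresses, sizes and names here are invented for illustration, not any real architecture:

```python
# Toy model of memory-mapped I/O: one address space, one load/store path.
# Addresses 0x0000-0x00FF are RAM; 0x0100-0x010F belong to a device.
RAM_SIZE = 0x100
DEV_BASE, DEV_LIMIT = 0x100, 0x110

ram = [0] * RAM_SIZE
device_regs = {}          # stands in for the device controller watching the bus

def store(addr, value):
    """The same CPU 'store' works for memory and for device registers."""
    if DEV_BASE <= addr < DEV_LIMIT:
        device_regs[addr - DEV_BASE] = value   # the device claims this access
    else:
        ram[addr] = value                      # ordinary physical memory

def load(addr):
    if DEV_BASE <= addr < DEV_LIMIT:
        return device_regs.get(addr - DEV_BASE, 0)
    return ram[addr]

store(0x0010, 42)      # lands in RAM
store(0x0102, 7)       # lands in device register 2
```

The point of the sketch is that `store` and `load` never change: whether an access touches RAM or a device is decided purely by the address, exactly as the bullets above describe.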



DMA – Direct Memory Access

Direct Memory Access (DMA) is a method that allows an input/output (I/O) device to send or
receive data directly to or from the main memory, bypassing the CPU to speed up memory
operations. The CPU is involved only at the beginning and end of the transfer, and is interrupted only
after the entire block has been transferred.

Slow devices like keyboards will generate an interrupt to the main CPU after each byte is
transferred. If a fast device such as a disk generated an interrupt for each byte, the operating
system would spend most of its time handling these interrupts. So a typical computer uses direct
memory access (DMA) hardware to reduce this overhead.

Direct Memory Access needs a special hardware called DMA controller (DMAC) that manages
the data transfers and arbitrates access to the system bus. The controllers are programmed with
source and destination pointers (where to read/write the data), counters to track the number of
transferred bytes, and settings, which include the I/O and memory types, interrupts and states for
the CPU cycles.

Advantages
 frees up the CPU
 high transfer rates
 fewer CPU cycles for each transfer

Disadvantages

 DMA transfer requires a DMA controller to carry out the operation, hence
more expensive system
 synchronization mechanisms must be provided in order to avoid accessing non-updated
information from RAM

Use CPU over DMA if


 device speed is fast
 CPU has nothing to do (if cpu is free)



Figure: block diagram of the working of DMA

PRINCIPLES OF I/O SOFTWARE

Goals of I/O software


1. Device independence
• It should be possible to write programs that can access any I/O device without having to
specify the device in advance.
• For example, a program that reads a file as input should be able to read a file on a hard
disk, a CD-ROM, a DVD, or a USB stick without having to modify the program for each
different device.
2. Uniform naming
• The name of a device should simply be a string or an integer and should not depend on the
device in any way
3. Error Handling
• In general, the error should be handled as close to the hardware as possible
• Propagate errors up only when lower layer cannot handle it.
• For example, if controller discovers read error, it should try to correct the error itself. If it
cannot, the device driver should handle it.
4. Synchronous (blocking) Vs. Asynchronous (interrupt driven) transfers



• It is up to the OS to make operations that are interrupt-driven look blocking to user
programs
5. Buffering
• Often, data that comes off a device cannot be stored directly at its final destination for
various reasons.
• A buffer is the intermediate destination
6. Dedicated Vs Shared devices
• Devices that can be used by many users at a time are sharable. E.g. Disk (multiple users
can open files from the same disk at the same time)
• Some devices have to be dedicated to a single person. E.g. tapes
Basically, input/output software is organized in the following four layers:
 Interrupt handlers
 Device drivers
 Device-independent input/output software
 User-space input/output software
In every input/output software system, each of the four layers given above has a well-defined
function to perform and a well-defined interface to the adjacent layers.
The figure below shows all the layers of the input/output software system along with the hardware.

Another figure shows all the layers of the input/output software system along with their
principal functions.



DEVICE DRIVERS

 Device-specific code for controlling a device is called a device driver.
 Usually written by the device's manufacturer and delivered along with the device.
 The device driver is the software component that communicates with the device controller,
giving it commands and accepting responses.
 A driver provides the software interface to hardware devices, enabling the OS and
other programs to access hardware functions without having to know the precise details
of the hardware being used.

FUNCTIONS OF DEVICE DRIVERS

i. Accept read/write request from device independent I/O software above it.
ii. Initialize the device if needed.
iii. Manage its power requirements and log events
iv. Check whether the input parameters are valid
v. Translate parameters from abstract to concrete terms (e.g., a linear block number to
CHS (Cylinder, Head, Sector) form for a disk)
vi. Check whether the device is currently in use. If it is, the request will be queued for later
processing.

Interrupt handlers
An interrupt handler, also known as an interrupt service routine (ISR), is a callback function in an
operating system (more specifically, in a device driver) whose execution is triggered by the
reception of an interrupt.

When an interrupt happens, the interrupt procedure does whatever it has to in order to handle
the interrupt, updates data structures, and wakes up the process that was waiting for the interrupt
to happen.



The interrupt mechanism accepts an address ─ a number that selects a specific interrupt
handling routine/function from a small set. In most architectures, this address is an offset stored
in a table called the interrupt vector table. This vector contains the memory addresses of
specialized interrupt handlers.
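The vector-table idea can be sketched as a table indexed by interrupt number whose entries are handler functions. The interrupt numbers and handler names below are invented for illustration, not any real architecture's assignments:

```python
# Toy interrupt vector table: an interrupt number selects a handler routine.
events = []

def keyboard_isr():
    events.append("keyboard: byte read from controller")

def disk_isr():
    events.append("disk: transfer complete, waking blocked process")

# The "vector table": index = interrupt number, entry = handler address.
interrupt_vector = {1: keyboard_isr, 14: disk_isr}

def raise_interrupt(number):
    """Hardware-side dispatch: look up the vector entry and call the ISR."""
    handler = interrupt_vector[number]
    handler()

raise_interrupt(1)
raise_interrupt(14)
```

Dispatch is a single table lookup, which is why vectored interrupts avoid having to poll every device to find out which one interrupted.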

DEVICE INDEPENDENT I/O SOFTWARE

Although some of the I/O software is device specific, other parts of it are device independent.
The basic function of device-independent software is to perform the I/O functions that are
common to all devices and provide uniform interface to user-level software.

Functions of device independent I/O software


i. Uniform interfacing for device drivers
ii. Buffering
iii. Error Reporting
iv. Allocating and releasing dedicated devices
v. Providing a device-independent block size

User-Space I/O Software

These are libraries that provide a richer and simplified interface for accessing the functionality of
the kernel, and that ultimately interact with the device drivers. Most user-level I/O software
consists of library procedures, with some exceptions such as the spooling system, which is a way of
dealing with dedicated I/O devices in a multiprogramming system.

I/O libraries (e.g., stdio) are in user space and provide an interface to the OS-resident device-
independent I/O software. For example, putchar(), getchar(), printf() and scanf() are part of the
user-level I/O library stdio available in C programming.

Polled I/O and interrupt-driven I/O

Most input and output devices are much slower than the CPU—so much slower that it would be
a terrible waste of the CPU to make it wait for the input devices. For example, compare the speed
you can type with the speed the CPU can execute instructions. Even a very fast typist could
probably type no more than 10 characters per second, while a modern CPU can execute more
than two billion instructions in that same second!



Polled I/O

If the CPU simply waits for the next input character, we call this polled I/O. In this method, the
CPU continuously polls the I/O device in a program loop, waiting for the next input. Think of a
game where a basketball player is asked to make as many free-throws as possible in one minute,
but the clock is down the hall in another room. The player must run down the hall and check if
the minute has passed, and if not, go back to the gym and try to make another shot, then run
down the hall to check the clock and run back to take another shot. The player spends much of
the time simply checking (polling) the clock.

Interrupt driven

An alternative scheme for dealing with I/O is the interrupt-driven method. Here the CPU works
on its given tasks continuously. When an input is available, such as when someone types a key
on the keyboard, then the CPU is interrupted from its work to take care of the input data. In our
example, the basketball player would take shots one after another, while a second person
watched the clock down the hall. When the clock reached one minute, the person watching the
clock would yell down the hallway for the player to stop. This allows the player to take many
more shots, but at the expense of needing someone to watch the clock. Similarly, the CPU can
work continuously on a task without checking the input devices, allowing the devices themselves
to interrupt it as necessary. This requires some extra "smarts" in the form of electronic circuitry
at the I/O devices so that they can interrupt the CPU.
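The two schemes can be contrasted in a small simulation. The device timing (data arriving on the fifth check) and all names here are hypothetical:

```python
# Contrast polled I/O (busy checking) with interrupt-driven I/O (a callback).

def polled_read(device):
    """CPU loops, repeatedly asking the device whether data is ready."""
    wasted_checks = 0
    while not device.ready():
        wasted_checks += 1          # CPU time spent doing nothing useful
    return device.data, wasted_checks

class SlowDevice:
    """Pretend device whose data arrives only on the Nth status check."""
    def __init__(self, arrives_after):
        self.checks, self.arrives_after, self.data = 0, arrives_after, "k"
    def ready(self):
        self.checks += 1
        return self.checks >= self.arrives_after

char, wasted = polled_read(SlowDevice(arrives_after=5))

# Interrupt-driven: the CPU registers a handler and goes back to work;
# the device invokes it when data arrives (the call stands in for the
# hardware interrupt).
received = []
def on_input(ch):
    received.append(ch)

def device_fires_interrupt(handler):
    handler("k")

device_fires_interrupt(on_input)
```

In the polled version the CPU burns four useless status checks before the byte arrives; in the interrupt-driven version it does nothing until the handler is called, which is exactly the basketball-player trade-off described above.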

CUI and GUI

GUI and CUI are two types of user interface. GUI stands for Graphical User Interface, while
CUI stands for Character User Interface. This section discusses the differences
between these two interfaces and the advantages each has over the other.

User Interface: the user interface comprises everything the user can use to interact with the
computer. It is basically the means by which the user and the computer system interact, using
input and output devices.

GUI: GUI stands for Graphical User Interface. This is a type of user interface where the user
interacts with the computer using graphics. Graphics include icons, navigation bars, images, etc.
A mouse can be used with this interface to interact with the graphics. It is a very user-friendly
interface and requires no expertise. E.g., Windows has a GUI.

CUI: CUI stands for Character User Interface. This is a type of user interface where the user
interacts with the computer using only the keyboard. To perform any action, a command is required.
CUI is the precursor of GUI and was used in early computers. Most modern computers use a GUI
rather than a CUI. E.g., MS-DOS has a CUI.

Deadlock



Deadlock is a situation where several processes compete for a finite number of resources available
in the system. Each process holds a resource and waits to acquire a resource that is held by some
other process, and all the processes wait for resources in a circular fashion.
In the figure below, you can see that process P1 has acquired resource R2, which is requested by
process P2, and process P1 is requesting resource R1, which is held by P2.

So processes P1 and P2 form a deadlock.


Deadlock is a common problem in multiprocessing operating systems, distributed systems, and
also in parallel computing systems.
Formal definition:
“A set of processes is deadlocked if each process in the set is waiting for an event that only
another process in the set can cause.”
Because all the processes are waiting, none of them will ever cause any of the events that
could wake up any of the other members of the set, and all the processes continue to wait
forever.
That is, none of the processes can-
• run
• release resources
• be awakened



Types of resource
1. Preemptable resources
 these resources can be taken away from the process owning them with no ill effects
 For example: memory
2. Non-Preemptable resources
 Resources that cannot be taken away from its current owner without causing the
computation to fail.
 For example: scanner, printer etc.
In general, deadlocks involve non-preemptable resources
Conditions of deadlock
A deadlock situation can arise if and only if the following four conditions hold simultaneously in
a system-
• Mutual Exclusion: At least one resource is held in a non-sharable mode that is only one
process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.
• Hold and Wait: There must exist a process that is holding at least one resource and is
waiting to acquire additional resources that are currently being held by other processes.
• No Preemption: Resources cannot be preempted; that is, a resource can only be released
voluntarily by the process holding it, after the process has completed its task.
• Circular Wait: There must exist a set {P0, P1, ..., Pn} of waiting processes such that P0 is
waiting for a resource which is held by P1, P1 is waiting for a resource which is held by
P2, ..., Pn-1 is waiting for a resource which is held by Pn, and Pn is waiting for a resource
which is held by P0.
NOTE: All four of these conditions must be present for a deadlock to occur. If one of them is
absent, no deadlock is possible.

Deadlock modeling
• The four Coffman conditions can be modeled using directed graphs
• The graph has two kind of nodes: circles for processes and squares for resources
• A directed arc from a resource node (square) to a process node (circle) means that the
resource is being held by the process

• A directed arc from a process to a resource means that the process is currently
requesting that resource



Handling Deadlock
• Deadlock detection
• Recovery from deadlock
• Deadlock prevention
• Deadlock avoidance
Deadlock detection and Recovery
Deadlock detection

Deadlock detection with one resource of each type


 For a system with one instance of each resource, we can detect deadlock by constructing
a resource allocation graph.
 If the graph contains one or more cycles, a deadlock exists. Any process that is part of
the cycle is deadlocked.
 If no cycle exists, the system is not deadlocked
Example:
 Consider a complex system with 7 processes, A through G, and 6 resources, R through W.
 The state of which resources are currently owned and which ones are currently
being requested is as follows:

1. Process A holds R and wants S.


2. Process B holds nothing but wants T.
3. Process C holds nothing but wants S.
4. Process D holds U and wants S and T.
5. Process E holds T and wants V.
6. Process F holds W and wants S.
7. Process G holds V and wants U.

From this, we can construct resource graph as follows:



Figure: (a) A resource graph. (b) A cycle extracted from (a).

Processes D, E and G are all deadlocked.


Processes A, C and F are not deadlocked, because S can be allocated to any one of them, which
then finishes and returns it, after which the other two can take it in turn.
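The holds/wants relations listed above can be checked mechanically. A sketch that builds the directed graph (resource→process means "held by", process→resource means "wants") and searches it for a cycle:

```python
# Resource-allocation graph for the 7-process example; a cycle means deadlock.
edges = {
    # process -> resource: "wants"
    "A": ["S"], "B": ["T"], "C": ["S"], "D": ["S", "T"],
    "E": ["V"], "F": ["S"], "G": ["U"],
    # resource -> process: "held by"
    "R": ["A"], "U": ["D"], "T": ["E"], "W": ["F"], "V": ["G"],
}

def find_cycle(graph):
    """Depth-first search; returns the nodes of one cycle, or None."""
    def dfs(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:
                return path[path.index(nxt):]        # cycle found
            found = dfs(nxt, path + [nxt])
            if found:
                return found
        return None
    for start in graph:
        cycle = dfs(start, [start])
        if cycle:
            return cycle
    return None

cycle = find_cycle(edges)
# Processes are A..G; resources are R..W.
deadlocked = sorted(n for n in cycle if n in "ABCDEFG")
```

The search finds the cycle through T, E, V, G, U and D, so the deadlocked processes are D, E and G, matching the conclusion above.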

Deadlock Detection with multiple resource of each type

In this case, we use a matrix-based algorithm for detecting deadlocks among n processes (P1, ..., Pn):
E: Existing resource vector (E1, E2, ..., Em); we have m different resource classes.
For example: if class 1 is printers, then E1 = 2 means we have 2 printers.
A: Available resource vector.
For example: if A1 = 0, no printers are available; both printers have been assigned.
C: Current allocation matrix.
Cij is the number of instances of resource j that are held by process i.
R: Request matrix.
Rij is the number of instances of resource j that process i wants.

Deadlock detection algorithm


1. Look for an unmarked process, Pi, for which the i-th row of R is less than or equal to A.
2. If such a process is found, add the i-th row of C to A, mark the process, and go back to step 1.
3. If no such process exists, the algorithm terminates; any processes left unmarked are deadlocked.



Here, the requests represented by the 1st and 2nd rows of matrix R cannot be satisfied (compare A
with each row of R), but the 3rd one can be.
• So process 3 runs and eventually returns A=(2 2 2 0).
• At this point process 2 can run and return A=(4 2 2 1).
• Now process 1 can run and there is no deadlock.
• What happens if process 2 needs 1 CD-ROM drive, 2 tape drives and 1 plotter (i.e., the 2nd row of
R becomes (2 1 1 1))?
• Does the entire system deadlock or not?
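The matrices behind these numbers come from a figure that is not reproduced here, so the E, C and R values below are a reconstruction chosen to be consistent with the quoted steps (treat them as an assumption). With them, the detection algorithm can be sketched as:

```python
# Deadlock detection with multiple resource instances (matrix algorithm).
# E, A, C, R reconstructed so they reproduce the A values quoted above.
E = (4, 2, 3, 1)                 # existing resources, one entry per class
A = [2, 1, 0, 0]                 # available resources
C = [[0, 0, 1, 0],               # current allocation, one row per process
     [2, 0, 0, 1],
     [0, 1, 2, 0]]
R = [[2, 0, 0, 1],               # requests, one row per process
     [1, 0, 1, 0],
     [2, 1, 0, 0]]

def detect(A, C, R):
    """Mark processes whose requests can be met; unmarked ones are deadlocked."""
    A = list(A)
    marked = [False] * len(C)
    progress = True
    while progress:
        progress = False
        for i, row in enumerate(R):
            if not marked[i] and all(r <= a for r, a in zip(row, A)):
                A = [a + c for a, c in zip(A, C[i])]   # process i finishes
                marked[i] = True
                progress = True
    deadlocked = [i + 1 for i in range(len(C)) if not marked[i]]
    return deadlocked, A

deadlocked, final_A = detect(A, C, R)     # process 3, then 2, then 1 can run

# The question posed above: process 2's request becomes (2, 1, 1, 1).
R2 = [R[0], [2, 1, 1, 1], R[2]]
stuck, _ = detect([2, 1, 0, 0], C, R2)
```

With the original R there is no deadlock and A ends equal to E. With the modified second row, process 3 still finishes but processes 1 and 2 remain blocked, so those two processes (not the whole system's resources, but every remaining process) are deadlocked.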

Deadlock Recovery
Traditional operating systems such as Windows do not deal with deadlock recovery, as it is a time-
and space-consuming process. Real-time operating systems use deadlock recovery.
i. Recovery through preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources
from processes and give these resources to other processes until the deadlock cycle is broken.
Issues to be addressed:
a. Selecting a victim
b. Rollback
c. Starvation
ii. Recovery through rollback
 Processes are checkpointed periodically
 Checkpointing means saving the state of a process by writing the state to a file
 A checkpoint contains not only the memory image but also the resource state (i.e.,
the resources allocated to the process)



 To be more effective, new checkpoints must not replace the old checkpoints but should
be written to a different file
 When a deadlock is detected, we first determine which resource is causing the deadlock and
then roll back the process that currently holds that resource to a point in time before
it acquired the resource, by restarting one of its earlier checkpoints

iii. Recovery through killing process


The simplest way to recover from deadlock is to kill one or more processes.
Which process to kill?
a. Kill all the deadlocked processes
This is sure to break the deadlock, but can be expensive, as some of the processes may have
computed for a long time and killing them forces all those computations to be redone once the
processes are restarted.
b. Abort one process at a time until the deadlock cycle is eliminated.
This method incurs considerable overhead, since, after each process is aborted, a deadlock-
detection algorithm must be invoked to determine whether any processes are still deadlocked.
Deadlock Prevention
Deadlock prevention simply involves techniques to attack the four conditions that may cause
deadlock (Coffman conditions).
i. Prevention from Mutual exclusion
 The resource should not be assigned to a single process until absolutely necessary.
 For example: instead of assigning a printer directly to a process, we can spool its output (send
data intended for printing to an intermediate store), so that several processes can generate
output at the same time
ii. Preventing the Hold and Wait condition
 The process should request all of its required resources at the beginning.
 If every resource is available, the process runs; otherwise it waits, but it does not hold any of
the resources.
 This way the hold-and-wait condition is eliminated, but it leads to low device utilization.
 For example, if a process requires a printer only at a later time but we allocate the printer
before the start of its execution, the printer will remain blocked until the process has
completed its execution.
iii. Preventing the non-preemption condition
 Preempt resources from a process when they are required by another, higher-priority process.
iv. Preventing Circular Wait
 Each resource is assigned a numerical value. A process can request resources only in
increasing order of numbering.
 For example, if process P1 has been allocated resource R5, then a request by P1 for R4 or R3
(numbered lower than R5) will not be granted; only requests for resources numbered higher
than R5 will be granted.
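The ordering rule in (iv) can be sketched as a checked acquisition helper. The class and the resource numbers below are our own illustration of the rule, not a real lock manager:

```python
# Circular-wait prevention: grant requests only in increasing resource order.
class OrderedResources:
    def __init__(self):
        self.held = {}    # process name -> highest resource number it holds

    def request(self, process, resource_num):
        """Grant a request only if it is numbered above everything held."""
        highest = self.held.get(process, 0)
        if resource_num <= highest:
            return False              # out of order: would permit a cycle
        self.held[process] = resource_num
        return True

mgr = OrderedResources()
ok_r5 = mgr.request("P1", 5)    # granted
ok_r4 = mgr.request("P1", 4)    # refused: 4 is lower than the held R5
ok_r7 = mgr.request("P1", 7)    # granted: 7 is higher than R5
```

Because every process climbs the numbering, no cycle of waits can form: the process holding the highest-numbered resource in any would-be cycle can always finish.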



Deadlock Avoidance
 We cannot always assume that a process will ask for all its required resources at once, as we
did in the earlier topic (deadlock detection)
 The system must decide whether granting a resource is safe or not, and make the allocation
only if it is safe
 So we need an algorithm that avoids deadlock by making the right choice every time, given
that some information is available in advance

Safe and unsafe states


 A state is said to be safe if there is some scheduling order in which every process can run
up to completion even if all of them suddenly request their maximum number of resources
immediately
 A state is safe if the system can allocate resources to each process (up to its maximum) in
some order and still avoid a deadlock.
 More formally, a system is in a safe state only if there exists a safe sequence.
 A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state
if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently
available resources plus the resources held by all Pj with j < i.
 A safe state is a non-deadlock state.
 Conversely, a deadlocked state is an unsafe state.
 Not all unsafe states are deadlocks; however, an unsafe state may lead to a deadlock.
 As long as the state is safe, the operating system can avoid unsafe (and deadlocked) states.
Note: Safe and unsafe states can be determined by using the Banker's algorithm.

The Banker's algorithm

The Banker’s algorithm is a resource allocation and deadlock avoidance algorithm developed by
Edsger Dijkstra. Resource allocation state is defined by the number of available and allocated
resources and the maximum demand of the processes. When a process requests an available
resource, system must decide if immediate allocation leaves the system in a safe state.

Algorithm
1) Find a row in the Need matrix that is less than or equal to the Available vector. If such a row
exists, then the process represented by that row may complete with those additional resources. If
no such row exists, eventual deadlock is possible.
2) Double check that granting these resources to the process for the chosen row will result in a
safe state. Looking ahead, pretend that that process has acquired all its needed resources,
executed, terminated, and returned resources to the Available vector. Now the value of the
Available vector should be greater than or equal to the value it was previously.
3) Repeat steps 1 and 2 until
a) all the processes have successfully reached pretended termination (this implies that
the initial state was safe); or
b) deadlock is reached (this implies the initial state was unsafe).



Basic Facts:
If a system is in safe state ⇒ no deadlocks.
If a system is in unsafe state ⇒ possibility of deadlock.

Avoidance ⇒ ensure that a system will never enter an unsafe state.



Some terminologies
a) Available
It represents the number of available resources of each type.
b) Max
It represents the maximum number of instances of each resource that a process can request.
c) Allocation
It represents the number of resources of each type currently allocated to each process.
d) Need
It indicates the remaining resource needs of each process.

Banker's Algorithm for a single resource

Example 1: State whether the given processes are in deadlock or not. Given that resource
instance is 10.
process Allocated Maximum
A 3 9
B 2 4
C 2 7

Solution,
Calculating need resources, using
Need = Maximum - Allocated we get,
process Allocated Maximum Need
A 3 9 6
B 2 4 2
C 2 7 5
Here currently total allocation = 3+2+2 = 7
So free = total available – current allocation = 10 – 7 = 3

 Step 1
With the current free resources, process B can be executed, since need of B ≤ free, i.e. 2 ≤ 3, so B
executes. After execution, B releases the resources allocated to it.
Total free resources become: free = current free + allocation of B = (3+2) = 5



 Step 2
Now, with the current free resources, process C can be executed, since need of C ≤ free, i.e. 5 ≤ 5, so
C executes. After execution, C releases the resources allocated to it.
Total free resources become: free = current free + allocation of C = (5+2) = 7



 Step 3
With the current free resources, process A can be executed, since need of A ≤ free, i.e. 6 ≤ 7, so A
executes. After execution, A releases the resources allocated to it.
Total free resources become: free = current free + allocation of A = (7+3) = 10

Here all the processes run successfully, hence the system is in a safe state and no deadlock occurs.
Safe sequence is: B→C→A

Example 2: State whether the given processes are in deadlock or not. Given that resource
instance is 10.
process Allocated Maximum
A 4 9
B 2 4
C 2 7

Solution,
Calculating need resources, using
Need = Maximum - Allocated we get,
process Allocated Maximum Need
A 4 9 5
B 2 4 2
C 2 7 5
Here currently total allocation = 4+2+2 = 8
So free = total available – current allocation = 10 – 8 = 2
 Step 1
With the current free resources, process B can be executed, since need of B ≤ free, i.e. 2 ≤ 2, so B
executes. After execution, B releases the resources allocated to it.
Total free resources become: free = current free + allocation of B = (2+2) = 4

With the current free resources, none of the remaining processes can be executed; hence the state is
unsafe and may lead to deadlock.
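Both single-resource examples can be re-checked with a short sketch of the procedure used above (the function and variable names are ours):

```python
# Banker's check for a single resource type: repeatedly run any process whose
# remaining need fits in the free pool, then release its allocation.
def safe_sequence(total, allocated, maximum):
    free = total - sum(allocated.values())
    need = {p: maximum[p] - allocated[p] for p in allocated}
    sequence = []
    remaining = set(allocated)
    while remaining:
        runnable = [p for p in sorted(remaining) if need[p] <= free]
        if not runnable:
            return None                      # unsafe: no process can finish
        p = min(runnable, key=lambda q: need[q])
        free += allocated[p]                 # p finishes and releases resources
        sequence.append(p)
        remaining.remove(p)
    return sequence

# Example 1: safe, sequence B -> C -> A
seq1 = safe_sequence(10, {"A": 3, "B": 2, "C": 2}, {"A": 9, "B": 4, "C": 7})
# Example 2: unsafe (after B runs, neither A nor C fits in the free pool)
seq2 = safe_sequence(10, {"A": 4, "B": 2, "C": 2}, {"A": 9, "B": 4, "C": 7})
```

The single extra allocated unit in Example 2 is enough to push the state from safe to unsafe, which is exactly the point the two worked examples make.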

Banker's Algorithm for multiple resources

Example 1: A system has four processes P1, P2, P3 and P4 and three resources R1, R2 and R3
with existing resources E = (15, 9, 5). After allocating resources to all the processes, the available
resources become A = (3, 2, 0). State whether the system is in a safe state or not using the Banker's
algorithm. If safe, write the safe sequence.



Process   Allocation (R1 R2 R3)   Maximum (R1 R2 R3)   Need (R1 R2 R3)
P1        3 0 1                   3 2 2                0 2 1
P2        5 4 1                   6 8 2                1 4 1
P3        2 2 0                   3 2 4                1 0 4
P4        2 1 3                   4 2 3                2 1 0
Note: If Need is not given, it can be calculated using Need = Maximum – Allocation
Solution:
We have A = ( 3, 2, 0)
Step 1: With current available resources A = (3, 2, 0), P4 can be executed,
since need of P4 ≤ A, i.e. (2,1,0) ≤ (3,2,0). So P4 executes.
After P4 completes, it releases the resources allocated to it. The available resources become
A = previous free + allocation of P4 = (3,2,0) + (2,1,3) = (5,3,3)
Step 2: With current available resources A = (5, 3, 3), P1 can be executed,
since need of P1 ≤ A, i.e. (0,2,1) ≤ (5,3,3). So P1 executes.
After P1 completes, it releases the resources allocated to it. The available resources become
A = previous free + allocation of P1 = (5,3,3) + (3,0,1) = (8,3,4)
Step 3: With current available resources A = (8, 3, 4), P3 can be executed,
since need of P3 ≤ A, i.e. (1,0,4) ≤ (8,3,4). So P3 executes.
After P3 completes, it releases the resources allocated to it. The available resources become
A = previous free + allocation of P3 = (8,3,4) + (2,2,0) = (10,5,4)
Step 4: With current available resources A = (10, 5, 4), P2 can be executed,
since need of P2 ≤ A, i.e. (1,4,1) ≤ (10,5,4). So P2 executes.
After P2 completes, it releases the resources allocated to it. The available resources become
A = previous free + allocation of P2 = (10,5,4) + (5,4,1) = (15,9,5)
Here all the processes run, hence the system is in a safe state and no deadlock occurs.
Safe sequence is: P4→P1→P3→P2
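The vector form of the same check, run on the numbers of this example (a sketch; the function and variable names are ours):

```python
# Multi-resource safety check: find any process whose Need fits in Available,
# let it finish and release its Allocation, and repeat until all are done.
def find_safe_sequence(available, allocation, need):
    available = list(available)
    done, order = set(), []
    while len(done) < len(allocation):
        for p in allocation:
            if p not in done and all(n <= a for n, a in zip(need[p], available)):
                available = [a + x for a, x in zip(available, allocation[p])]
                done.add(p)
                order.append(p)
                break
        else:
            return None                       # no process fits: unsafe state
    return order

allocation = {"P1": (3, 0, 1), "P2": (5, 4, 1),
              "P3": (2, 2, 0), "P4": (2, 1, 3)}
need       = {"P1": (0, 2, 1), "P2": (1, 4, 1),
              "P3": (1, 0, 4), "P4": (2, 1, 0)}
order = find_safe_sequence((3, 2, 0), allocation, need)
```

Running it reproduces the worked steps: P4 fits first, and releasing each finished process's allocation lets the next one fit, yielding the safe sequence P4→P1→P3→P2.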



Example 2:
Assume we have the following resources:
 5 tape drives
 2 graphic displays
 4 printers
 3 disks
We can create a vector representing our total resources: Total = (5, 2, 4, 3).
Consider we have already allocated these resources among four processes as demonstrated by the
following matrix named Allocation.

Process Name Tape Drives Graphics Printers Disk Drives


Process A 2 0 1 1
Process B 0 1 0 0
Process C 1 0 1 1
Process D 1 1 0 1
The vector representing the allocated resources is the sum of these columns:
Allocated = (4, 2, 2, 3).
We also need a matrix to show the number of each resource still needed for each process; we
call this matrix Need.
Process Name Tape Drives Graphics Printers Disk Drives
Process A 1 1 0 0
Process B 0 1 1 2
Process C 3 1 0 0
Process D 0 0 1 0
The vector representing the available resources is the Allocated vector subtracted from the Total
vector: Available = (1, 0, 2, 0).
Following the algorithm sketched above,
Iteration 1:
Examine the Need matrix. The only row that is less than or equal to the Available vector is the one
for Process D.
Need(Process D) = (0, 0, 1, 0) ≤ (1, 0, 2, 0) = Available
If we assume that Process D completes, it will turn over its currently allocated resources,
incrementing the Available vector.
(1, 0, 2, 0) Current value of Available
+ (1, 1, 0, 1) Allocation (Process D)
````````````````
(2, 1, 2, 1) Updated value of Available
Iteration 2:

Examine the Need matrix, ignoring the row for Process D. The only row that is less than or
equal to the Available vector is the one for Process A.
Need(Process A) = (1, 1, 0, 0) ≤ (2, 1, 2, 1) = Available
If we assume that Process A completes, it will turn over its currently allocated resources,
incrementing the Available vector.
(2, 1, 2, 1) Current value of Available
+ (2, 0, 1, 1) Allocation (Process A)
----------------
(4, 1, 3, 2) Updated value of Available
Iteration 3:
Examine the Need matrix without the rows for Process D and Process A. The only row that is
less than or equal to the Available vector is the one for Process B.
Need(Process B) = (0, 1, 1, 2) ≤ (4, 1, 3, 2) = Available
If we assume that Process B completes, it will turn over its currently allocated resources,
incrementing the Available vector.
(4, 1, 3, 2) Current value of Available
+ (0, 1, 0, 0) Allocation (Process B)
----------------
(4, 2, 3, 2) Updated value of Available
Iteration 4:
Examine the Need matrix without the rows for Process A, Process B, and Process D. The only
row left is the one for Process C, and it is less than or equal to the Available vector.
Need(Process C) = (3, 1, 0, 0) ≤ (4, 2, 3, 2) = Available
If we assume that Process C completes, it will turn over its currently allocated resources,
incrementing the Available vector.
(4, 2, 3, 2) Current value of Available
+ (1, 0, 1, 1) Allocation (Process C)
----------------
(5, 2, 4, 3) Updated value of Available
Notice that the final value of the Available vector is the same as the original Total vector,
showing the total number of all resources:
Total = (5, 2, 4, 3) = (5, 2, 4, 3) = Available
This means that the initial state represented by the Allocation and Need matrices is a safe state.
The safe sequence that assures this safe state is <D, A, B, C>.
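The iteration-by-iteration walkthrough above can be automated. Below is a minimal sketch of the Banker's safety check, loaded with Example 2's Allocation and Need matrices; the function and variable names are my own, not part of any standard API.

```python
# Banker's safety check: repeatedly find a process whose remaining need
# fits in Available, let it finish, and reclaim its allocation.

def safe_sequence(allocation, need, available):
    """Return a safe sequence of process names, or None if the state is unsafe."""
    available = list(available)
    remaining = dict(allocation)          # processes not yet finished
    sequence = []
    while remaining:
        progress = False
        for name in list(remaining):
            # A process can finish if its remaining need fits in Available.
            if all(n <= a for n, a in zip(need[name], available)):
                # It then releases everything it currently holds.
                available = [a + x for a, x in zip(available, remaining[name])]
                sequence.append(name)
                del remaining[name]
                progress = True
        if not progress:
            return None                   # no process can proceed -> unsafe
    return sequence

# Example 2's matrices (tape drives, graphics, printers, disk drives).
allocation = {"A": (2, 0, 1, 1), "B": (0, 1, 0, 0),
              "C": (1, 0, 1, 1), "D": (1, 1, 0, 1)}
need = {"A": (1, 1, 0, 0), "B": (0, 1, 1, 2),
        "C": (3, 1, 0, 0), "D": (0, 0, 1, 0)}
available = (1, 0, 2, 0)

print(safe_sequence(allocation, need, available))  # ['D', 'A', 'B', 'C']
```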

Note: The Banker's algorithm can also be used in the detection of deadlock.

Disadvantages of the Banker's Algorithm
 It requires the number of processes to be fixed; no additional processes can
start while it is executing.
 It requires that the number of resources remain fixed; no resource may go
down for any reason without the possibility of deadlock occurring.
 It guarantees only that all requests will be granted within a finite time, but that finite
time may be very long (one year is a finite amount of time).
 Similarly, all of the processes guarantee that the resources loaned to them
will be repaid in a finite amount of time. While this prevents absolute
starvation, some pretty hungry processes might develop.
 All processes must know and state their maximum resource need in advance.

 In Shortest Job First scheduling, a process with a large CPU burst might never get the
CPU to complete its execution and starve.
 In Priority Scheduling, a constant stream of high-priority processes might
starve one or more lower-priority process(es), as the CPU will always be
allocated to the process with the highest priority.

RAID

RAID or redundant array of independent disks is a data storage virtualization technology that
combines multiple physical disk drive components into one or more logical units for data
redundancy, performance improvement, or both.
It is a way of storing the same data in different places on multiple hard disks or solid-state drives
to protect data in the case of a drive failure. A RAID system consists of two or more drives
working in parallel. These can be hard discs, but there is a trend to use SSD technology (Solid
State Drives).
RAID combines several independent and relatively small disks into single storage of a large size.
The disks included in the array are called array members. The disks can combine into the array
in different ways, which are known as RAID levels. Each RAID level has its own
characteristics of:
o Fault-tolerance is the ability to survive one or several disk failures.
o Performance shows the change in the read and write speed of the entire array compared
to a single disk.
o The array's capacity is determined by the amount of user data that can be written to the
array. The array capacity depends on the RAID level and does not always match the sum
of the sizes of the RAID member disks. To calculate the capacity of a particular RAID
type and set of member disks, you can use a free online RAID calculator.

RAID systems can be used with several interfaces, including SATA, SCSI, IDE, or FC (Fibre
Channel). Some systems use SATA disks internally but present a FireWire or SCSI interface to
the host system.
Sometimes disks in a storage system are defined as JBOD, which stands for Just a Bunch of
Disks. This means that those disks do not use a specific RAID level and act as stand-alone
disks. This is often done for drives that contain swap files or spooling data.
How RAID Works
RAID works by placing data on multiple disks and allowing input/output operations to overlap in
a balanced way, improving performance. Using multiple disks lowers the mean time between
failures (MTBF) of the array as a whole, so storing data redundantly is what increases fault
tolerance.
RAID arrays appear to the operating system as a single logical drive. RAID employs the
techniques of disk mirroring or disk striping.
o Disk Mirroring will copy identical data onto more than one drive.
o Disk Striping partitions each drive's storage space into units ranging from 512 bytes up
to several megabytes and spreads data over multiple disk drives. The stripes of all the
disks are interleaved and addressed in order.
o Disk mirroring and disk striping can also be combined in a RAID array.
In a single-user system where large records are stored, the stripes are typically set up to be
small (perhaps 512 bytes) so that a single record spans all the disks and can be accessed quickly
by reading all the disks at the same time.
In a multi-user system, better performance requires a stripe wide enough to hold the typical or
maximum size record, allowing overlapped disk I/O across drives.
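To illustrate how striping interleaves data, the sketch below maps a logical block number to a disk and a stripe position, assuming simple round-robin striping; real controllers vary, and the layout here is a simplification.

```python
# Hypothetical round-robin striping: consecutive logical blocks land on
# consecutive disks, so a large read can be serviced by all drives in parallel.

def locate_block(logical_block, num_disks):
    """Return (disk index, stripe index on that disk) for a logical block."""
    disk = logical_block % num_disks        # blocks rotate across the disks
    stripe = logical_block // num_disks     # row of stripes on each disk
    return disk, stripe

# With 4 disks, blocks 0..7 fill the first two stripe rows.
print([locate_block(b, 4) for b in range(8)])
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```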
Levels of RAID
Many different ways of distributing data have been standardized into various RAID levels. Each
RAID level offers a trade-off of data protection, system performance, and storage space. RAID
levels are grouped into three categories: standard, nested, and non-standard.
Standard RAID Levels
Below are the following most popular and standard RAID levels.
1. RAID 0 (striped disks)
RAID 0 takes any number of disks and merges them into one large volume. It increases speed,
since you are reading and writing from multiple disks at a time. An individual file can then use
the speed and capacity of all the drives of the array. The downside, though, is that RAID 0 is
NOT redundant: the loss of any individual disk causes complete data loss. This RAID type is
less reliable than having a single disk.

There is rarely a situation where you should use RAID 0 in a server environment. You can use it
for cache or other purposes where speed is essential, and reliability or data loss does not matter at
all.
2. RAID 1 (mirrored disks)
It duplicates data across two disks in the array, providing full redundancy. Both disks store
exactly the same data, at the same time, and at all times. Data is not lost as long as one disk
survives. The total capacity of the array equals the capacity of the smallest disk in the array. At
any given instant, the contents of both disks in the array are identical.
RAID 1 is capable of a much more complicated configuration. The point of RAID 1 is primarily
for redundancy. If you completely lose a drive, you can still stay up and running off the other
drive.

If either drive fails, you can then replace the broken drive with little to no downtime. RAID 1
also gives you the additional benefit of increased read performance, as data can be read off any
of the drives in the array. The downsides are slightly higher write latency, since the data must
be written to both drives, and that you get only a single drive's usable capacity while needing
two drives.
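The mirroring idea can be shown with a toy sketch (in-memory "drives", purely illustrative): every write goes to both copies, so a read still succeeds after either drive fails.

```python
# Toy RAID 1 mirror: two in-memory "drives" holding identical block copies.

class Mirror:
    def __init__(self):
        self.drives = [{}, {}]              # two dicts stand in for two disks

    def write(self, block, data):
        for drive in self.drives:           # identical copy on each drive
            drive[block] = data

    def read(self, block, failed=None):
        """Read a block, skipping the drive index given as failed (if any)."""
        for i, drive in enumerate(self.drives):
            if i != failed:
                return drive[block]

m = Mirror()
m.write(7, b"payload")
print(m.read(7, failed=0))   # drive 0 lost, data survives on drive 1
```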
3. RAID 5 (striped disks with single parity)
RAID 5 requires the use of at least three drives. It combines these disks to protect data against
loss of any one disk; the array's storage capacity is reduced by one disk. It stripes data across
multiple drives to increase performance, but also adds redundancy by distributing parity
information across the disks.
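The parity trick behind RAID 5 can be illustrated with XOR: the parity block of a stripe is the XOR of its data blocks, so any single lost block can be rebuilt by XOR-ing the survivors. The byte values below are made up for illustration.

```python
# XOR parity: parity = d0 ^ d1 ^ d2, and any one missing block equals the
# XOR of the remaining blocks plus the parity.

def xor_blocks(*blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

d0, d1, d2 = b"\x0f\xaa", b"\xf0\x55", b"\x3c\xc3"
parity = xor_blocks(d0, d1, d2)

# Simulate losing d1: rebuild it from the surviving blocks plus parity.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```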

4. RAID 6 (Striped disks with double parity)
RAID 6 is similar to RAID 5, but the parity data are written to two drives. The use of additional
parity enables the array to continue to function even if two disks fail simultaneously. However,
this extra protection comes at a cost. RAID 6 has a slower write performance than RAID 5.

The chances that two drives break down at the same moment are minimal. However, if a drive in
a RAID 5 system dies and is replaced by a new drive, it takes a long time to rebuild the
swapped drive. If another drive dies during that time, you still lose all of your data. With RAID
6, the array will survive even that second failure.

RAM DISK
A RAM disk (or RAM drive) is a virtual disk drive created using a portion of a computer's
RAM (Random Access Memory). It mimics the functionality of a traditional storage device like
an HDD or SSD but operates at much higher speeds due to the nature of RAM.
Key Features of a RAM Disk:
1. High Speed: RAM is significantly faster than traditional storage devices, so read and
write operations on a RAM disk are exceptionally fast.
2. Volatility: RAM is volatile, meaning the data stored in a RAM disk is lost when the
computer is powered off or restarted.
3. Configurable Size: You can allocate a portion of your system's RAM to act as a disk, but
this reduces the RAM available for other tasks.
Uses of RAM Disk:

1. Temporary Storage: Ideal for caching, temporary file storage, or swap files to improve
performance.
2. Testing Software: Developers use RAM disks to test software that requires fast
read/write operations.
3. Gaming: Some gamers use RAM disks to store game files for faster loading times.
4. Data Processing: Suitable for applications like video editing, where high-speed access to
data is critical.
How to Create a RAM Disk:
1. Operating System Tools:
o Some operating systems (e.g., Linux) natively support RAM disks (e.g., /dev/shm
or tmpfs).
o Windows requires third-party software or specific commands.
2. Third-Party Software:
o Tools like SoftPerfect RAM Disk or ImDisk can simplify the process of creating
and managing RAM disks.
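On Linux, the tmpfs route mentioned above can be done from the shell. A minimal sketch, assuming a Linux system with root access; the mount point, size, and file name are illustrative:

```shell
# Sketch only: create a 512 MB RAM disk on Linux using tmpfs.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk

# Anything written here lives in RAM and disappears on unmount or reboot.
cp some-working-file /mnt/ramdisk/

# Tear the RAM disk down when finished.
sudo umount /mnt/ramdisk
```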
Advantages:
 Extremely fast data access.
 Reduces wear on SSDs by handling frequent read/write operations.
 Can significantly speed up applications that require fast I/O.
Disadvantages:
 Limited by the amount of available RAM.
 Volatile: Data is lost when the system powers off or reboots.
 Misconfiguration can lead to performance issues if too much RAM is allocated.
