Unit 04


Operating System (22516)

Unit No. 4: CPU Scheduling and Algorithms


Marks – 14
Contents:
4.1 Scheduling Types - Scheduling objectives, CPU- and I/O-bound cycles,
preemptive and non-preemptive scheduling, scheduling criteria
4.2 Types of scheduling algorithms: First Come First Served, Shortest Job First,
Shortest Remaining Time, Round Robin algorithm, Priority Scheduling, Multilevel
queue scheduling
4.3 Deadlock: System model, necessary conditions leading to deadlock,
deadlock handling: prevention and avoidance
________________________________________________________________________
CO: Apply scheduling algorithms to calculate turnaround time and average waiting time
________________________________________________________________________
References:
1. "Operating System Concepts" by Abraham Silberschatz, Peter B. Galvin, and Greg Gagne
2. "Operating Systems: Internals and Design Principles" by William Stallings
3. "Operating Systems: A Concept-Based Approach" by D. M. Dhamdhere
4. https://nptel.ac.in/courses/106/105/106105214/
5. https://nptel.ac.in/courses/106/105/106105214/ (for Deadlock)


4.1 Concept:
In a single-processor system, only one process can run at a time; any others must wait
until the CPU is free and can be rescheduled. The objective of multiprogramming is to
have some process running at all times, to maximize CPU utilization. The idea is
relatively simple. A process is executed until it must wait for the completion of some I/O
request. In a simple computer system, the CPU then just sits idle. All this waiting time is
wasted; no useful work is accomplished. With multiprogramming, we try to use this time
productively. Several processes are kept in memory at one time. When one process has to
wait, the operating system takes the CPU away from that process and gives the CPU to
another process. This pattern continues. Every time one process has to wait, another
process can take over use of the CPU.
Scheduling of this kind is a fundamental operating-system function. Almost all computer
resources are scheduled before use. The CPU is one of the primary computer resources.
Thus, its scheduling is central to operating-system design.
CPU-I/O Burst Cycle
The success of CPU scheduling depends on an observed property of processes: process
execution consists of a cycle of CPU execution and I/O wait. Processes alternate between
these two states. Process execution begins with a CPU burst. That is followed by an I/O
burst, which is followed by another CPU burst, then another I/O burst, and so on.
Eventually, the final CPU burst ends with a system request to terminate execution. This is
shown in the figure.


The durations of CPU bursts have been measured extensively. Although they vary
greatly from process to process and from computer to computer, a general pattern holds:
an I/O-bound program typically has many short CPU bursts, while a CPU-bound
program might have a few long CPU bursts.
CPU Scheduler

Whenever the CPU becomes idle, the operating system must select one of the processes
in the ready queue to be executed. The selection process is carried out by the short-
term scheduler (or CPU scheduler). The scheduler selects a process from the
processes in memory that are ready to execute and allocates the CPU to that process.
Note that the ready queue is not necessarily a first-in, first-out (FIFO) queue.
In the various scheduling algorithms, a ready queue can be implemented as a FIFO
queue, a priority queue, a tree, or simply an unordered linked list. All the processes in
the ready queue are lined up waiting for a chance to run on the CPU. The records in the
queues are generally process control blocks (PCBs) of the processes.


Preemptive Scheduling
Even if the CPU is allocated to one process, the CPU can be preempted and given to
another process if that process has a higher priority or satisfies some other scheduling
criterion.
➢ Circumstances for preemption:
• A process switches from the running to the ready state
• A process switches from the waiting to the ready state
e.g., Round Robin, Priority scheduling, Shortest Remaining Time
• Throughput is lower.
• It is suitable for real-time systems (RTS).
• Only the processes having higher priority are scheduled.
• It does not treat all processes as equal.
• Algorithm design is complex.
Non-Preemptive Scheduling
Once the CPU has been allocated to a process, the process keeps the CPU until it
releases the CPU, either by terminating or by switching to the waiting state.
➢ Circumstances for non-preemption:
• A process switches from the running to the waiting state
• A process terminates
e.g., FCFS algorithm, SJF
• Throughput is high.
• It is not suitable for real-time systems.
• Processes of any priority can get scheduled.
• It treats all processes as equal.
• Algorithm design is simple.
Preemptive and Non Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request or an invocation of wait for the
termination of one of the child processes)
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs)


3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O)

4. When a process terminates


When scheduling takes place only under circumstances 1 and 4, we say that the
scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive. Under
nonpreemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the
waiting state. Cooperative scheduling is the only method that can be used on certain
hardware platforms, because it does not require the special hardware (for example, a
timer) needed for preemptive scheduling.
Scheduling Criteria
Different CPU scheduling algorithms have different properties. Many criteria have
been suggested for comparing CPU scheduling algorithms. The criteria include the
following:
CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily used system).

Throughput. If the CPU is busy in executing processes, then work is being done. One
measure of work is the number of processes that are completed per time unit, called
throughput. For long processes, this rate may be one process per hour; for short
transactions, it may be 10 processes per second.

Turnaround time. From the point of view of a particular process, the important criterion
is how long it takes to execute that process. The interval from the time of submission of a
process to the time of completion is the turnaround time. Turnaround time is the sum of
the periods spent waiting to get into memory, waiting in the ready queue, executing on
the CPU, and doing I/O.
Waiting time. The CPU scheduling algorithm does not affect the amount of time during
which a process executes or does I/O; it affects only the amount of time that a process
spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in
the ready queue.


Response time. In an interactive system, turnaround time may not be the best criterion.
Often, a process can produce some output fairly early and can continue computing new
results while previous results are being output to the user. Thus, another measure is the
time from the submission of a request until the first response is produced. This measure,
called response time, is the time it takes to start responding, not the time it takes to output
the response. The turnaround time is generally limited by the speed of the output device.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround
time, waiting time, and response time.
Scheduling Algorithms
First-Come, First-Served Scheduling

• The code for FCFS scheduling is simple to write and understand.


• The process that requests the CPU first is allocated the CPU first.
• It is nonpreemptive algorithm.
• Can easily be implemented with a FIFO queue.
• Poor in performance as average waiting time is high.
• When a process enters the ready queue, its PCB is linked onto the tail of the
queue.
As other jobs come in, they are put onto the end of the queue. When the running process
is blocked for any reason, the first process in the queue is executed next. When a blocked
process becomes ready, it is put at the end of the queue. The average waiting time under
FCFS is long.
1. Solve given problem by Using FCFS to calculate average waiting time and
turnaround Time.
Process Arrival time CPU burst time (ms)
P0 0 5
P1 1 3
P2 2 8
P3 3 6

The Gantt Chart is


P0 P1 P2 P3
0 5 8 16 22

Waiting Time = Start Time – Arrival Time


Turnaround Time = Burst Time + Waiting Time = Finish Time – Arrival Time
Waiting Time of each process is as follows:

Process Waiting Time


P0 0 - 0 = 0
P1 5 - 1 = 4
P2 8 - 2 = 6
P3 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75 ms


Turnaround Time of each process is as follows:

Process Turnaround Time


P0 5 + 0 = 5
P1 3 + 4 = 7
P2 8 + 6 = 14
P3 6 + 13 = 19

Average Turnaround time: (5+7+14+19) / 4 = 11.25 ms


Solve given problem by Using FCFS to calculate average waiting time and turnaround
Time. (Sample QP)
Process Arrival time CPU burst Time (mS)
P0 0 7
P1 1 4
P2 2 9

P3 3 6
P4 4 8

Average waiting time =


Average turnaround Time =

Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:

Process Burst Time
P1 24
P2 3
P3 3

If the processes arrive in the order P1, P2, P3, and this algorithm is used, then the result
is shown in the following Gantt chart:

P1 P2 P3
0 24 27 30

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17
milliseconds. If the processes arrive in the order P2, P3, P1, however, the results will be
as shown in the following Gantt chart:

P2 P3 P1
0 3 6 30

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is


substantial. Thus, the average waiting time under an FCFS policy is generally not
minimal and may vary substantially if the process's CPU burst times vary greatly.
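The FCFS calculations above can be checked mechanically. The following is a minimal Python sketch (not part of the syllabus material; the function name and tuple layout are my own choices) that replays the first solved problem:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), sorted by arrival time.
    Returns {name: (waiting, turnaround)}."""
    time, results = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)        # CPU may sit idle until arrival
        waiting = start - arrival         # Waiting = Start - Arrival
        turnaround = waiting + burst      # Turnaround = Waiting + Burst
        results[name] = (waiting, turnaround)
        time = start + burst
    return results

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
res = fcfs(procs)
avg_wait = sum(w for w, _ in res.values()) / len(res)   # (0+4+6+13)/4 = 5.75
avg_tat = sum(t for _, t in res.values()) / len(res)    # (5+7+14+19)/4 = 11.25
```

Running it reproduces the averages computed by hand: 5.75 ms waiting and 11.25 ms turnaround.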
Shortest-Job-First Scheduling
This algorithm associates with each process the length of the process's next CPU burst.
When the CPU is available, it is assigned to the process that has the smallest next CPU
burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to
break the tie. Note that a more appropriate term for this scheduling method would be the
shortest-next-CPU-burst algorithm, because scheduling depends on the length of the


next CPU burst of a process, rather than its total length. We use the term SJF because
most people and textbooks use this term to refer to this type of scheduling.
Non preemptive SJF
Process Arrival time CPU burst time (ms)
P0 0 5
P1 1 3
P2 2 8
P3 3 6

The Gantt Chart is

P0 P1 P3 P2

0 5 8 14 22
Waiting time of each process is as follows –

Process Waiting Time


P0 0 - 0 = 0
P1 5 - 1 = 4
P2 14 - 2 = 12
P3 8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25ms


Turnaround Time of each process is as follows:

Process Turnaround Time


P0 5 + 0 = 5
P1 3 + 4 = 7
P2 8 + 12 = 20
P3 6 + 5 = 11

Average Turnaround time: (5 + 7 + 20 + 11)/4 = 43/4 = 10.75 ms


As an example of SJF scheduling, consider the following set of processes, with the length
of the CPU burst given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3

Using SJF scheduling, we would schedule these processes according to the following
Gantt chart:

P4 P1 P3 P2
0 3 9 16 24

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting
time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if we were using the FCFS
scheduling scheme, the average waiting time would be 10.25 milliseconds.
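The non-preemptive SJF tables above can likewise be verified with a short simulation. This is an illustrative Python sketch (names are my own), replaying the P0–P3 data from the solved problem:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: (waiting, turnaround)}."""
    pending = list(processes)
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: (p[2], p[1]))  # smallest next CPU burst
        pending.remove(job)
        name, arrival, burst = job
        waiting = time - arrival
        done[name] = (waiting, waiting + burst)
        time += burst
    return done

res = sjf([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)])
avg_wait = sum(w for w, _ in res.values()) / 4  # (0 + 4 + 12 + 5)/4 = 5.25
avg_tat = sum(t for _, t in res.values()) / 4   # (5 + 7 + 20 + 11)/4 = 10.75
```

The tie-break key `(burst, arrival)` implements the rule stated above: equal bursts fall back to FCFS order.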
preemptive SJF or shortest Remaining Time
As an example, consider the following four processes, with the length of the CPU burst
given in milliseconds:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5

If the processes arrive at the ready queue at the times shown and need the indicated burst
times, then the resulting preemptive SJF (shortest remaining time) schedule is as
depicted in the following Gantt chart:

P1 P2 P4 P1 P3
0 1 5 10 17 26


Process P1 is started at time 0, since it is the only process in the queue. Process P2
arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the
time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2
is scheduled. The average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) +
(5 - 3))/4 = 26/4 = 6.5 milliseconds. Nonpreemptive SJF scheduling would result in an
average waiting time of 7.75 milliseconds.
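The preemptive schedule can be simulated by re-evaluating the shortest remaining time at every millisecond. This is a sketch under that 1 ms time-step assumption (function and variable names are illustrative):

```python
def srtf(processes):
    """Preemptive SJF (shortest remaining time first), simulated in 1 ms steps.
    processes: list of (name, arrival, burst). Returns {name: finish_time}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    finish, time = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:                 # nothing has arrived yet: CPU idles
            time += 1
            continue
        p = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[p] -= 1             # run the chosen process for 1 ms
        time += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = time

    return finish

procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
fin = srtf(procs)
waits = {n: fin[n] - a - b for n, a, b in procs}  # waiting = finish - arrival - burst
avg_wait = sum(waits.values()) / 4                # 26/4 = 6.5
```

It reproduces the schedule in the Gantt chart above: P2 finishes at 5, P4 at 10, P1 at 17, and P3 at 26.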
Priority Scheduling
The SJF algorithm is a special case of the general priority scheduling algorithm. A
priority is associated with each process, and the CPU is allocated to the process with the
highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm
is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next
CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
As an example, consider the following set of processes, assumed to have arrived at time
0, in the order P1, P2, P3, P4, and P5, with the length of the CPU burst given in
milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Using priority scheduling, we would schedule these processes according to the following
Gantt chart:

P2 P5 P1 P3 P4
0 1 6 16 18 19

The average waiting time is (6+0+16+18+1)/5 = 8.2 milliseconds.


Priority scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process. A preemptive priority scheduling algorithm will preempt the CPU if the
priority of the newly arrived process is higher than the priority of the currently running


process. A nonpreemptive priority scheduling algorithm will simply put the new process
at the head of the ready queue.
A major problem with priority scheduling algorithms is starvation, or indefinite
blocking. A process that is ready to run but waiting for the CPU can be considered
blocked. This algorithm can leave some low-priority processes waiting indefinitely for
the CPU. In a heavily loaded computer system, higher-priority processes can prevent a
low-priority process from ever getting the CPU.
A solution to the problem of indefinite blockage of low-priority processes is aging.
Aging is a technique of gradually increasing the priority of processes that wait in the
system for a long time.
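The priority example above (all arrivals at time 0, lower number = higher priority) reduces to sorting by priority and accumulating waiting times. A minimal Python sketch, with names of my own choosing:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling; all processes arrive at time 0.
    processes: list of (name, burst, priority); lower number = higher priority.
    Returns {name: waiting_time}."""
    order = sorted(processes, key=lambda p: p[2])  # highest priority first
    time, waits = 0, {}
    for name, burst, _ in order:
        waits[name] = time       # waits for everything scheduled before it
        time += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_schedule(procs)
avg_wait = sum(waits.values()) / 5   # (6 + 0 + 16 + 18 + 1)/5 = 8.2
```

Since `sorted` is stable, equal-priority processes keep their submission order, which is exactly the FCFS tie-break described above. Aging could be added by periodically decreasing the stored priority number of waiting processes.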
Round-Robin Scheduling

The round-robin (RR) scheduling algorithm is designed especially for timesharing


systems. It is similar to FCFS scheduling, but preemption is added to switch between
processes. A small unit of time, called a time quantum or time slice, is defined. A time
quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a
circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to
each process for a time interval of up to 1 time quantum.
To implement RR scheduling, keep the ready queue as a FIFO queue of processes. New
processes are added to the tail of the ready queue. The CPU scheduler picks the first
process from the ready queue, sets a timer to interrupt after 1 time quantum, and
dispatches the process.
One of two things will then happen. The process may have a CPU burst of less than 1
time quantum. In this case, the process itself will release the CPU voluntarily. The
scheduler will then proceed to the next process in the ready queue. Otherwise, if the CPU
burst of the currently running process is longer than 1 time quantum, the timer will go off
and will cause an interrupt to the operating system. A context switch will be executed,
and the process will be put at the tail of the ready queue. The CPU scheduler will then
select the next process in the ready queue.
The average waiting time under the RR policy is often long.
1. Calculate average waiting time with Round Robin for the following processes in
memory
The time quantum is 4 ms

Process Burst time (ms)


P0 3
P1 5
P2 7
P3 4

The Gantt Chart is

P0 P1 P2 P3 P1 P2
0 3 7 11 15 16 19

Waiting times of the processes are

Process Waiting Time


P0 0 - 0 = 0
P1 15 - (4 * 1) = 11
P2 16 - (4 * 1) = 12
P3 11

Average waiting time = (0 +11 +12+ 11)/ 4 = 8.5 ms


*Waiting time = start time of last chance of execution − (time quantum × no. of
previous chances)
2. Consider the following set of processes that arrive at time 0, with the length of the
CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3


If we use a time quantum of 4 milliseconds, then process P1 gets the first 4


milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time
quantum, and the CPU is given to the next process in the queue, i.e., process P2. Since
process P2 does not need 4 milliseconds, it quits before its time quantum expires. The
CPU is then given to the next process, i.e., process P3. Once each process has received 1
time quantum, the CPU is returned to process P1 for an additional time quantum. The
resulting RR schedule is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

The average waiting time is ((10 − 4) + 4 + 7)/3 = 17/3 ≈ 5.66 milliseconds. In the RR
scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in
a row (unless it is the only runnable process). If a process's CPU burst exceeds 1 time
quantum, that process is preempted and is put back in the ready queue. The RR
scheduling algorithm is thus preemptive.
The performance of the RR algorithm depends on the size of the time quantum. At
one extreme, if the time quantum is extremely large, the RR policy is the same as the
FCFS policy. If the time quantum is extremely small (say, 1 millisecond), the RR
approach is called processor sharing. In software, we need also to consider the effect
of context switching on the performance of RR scheduling. Let us assume that we have
only one process of 10 time units. If the quantum is 12 time units, the process finishes
in less than 1 time quantum, with no overhead. If the quantum is 6 time units, however,
the process requires 2 quanta, resulting in a context switch. If the time quantum is 1
time unit, then nine context switches will occur, slowing the execution of the process
accordingly (Figure).


Although the time quantum should be large compared with the context-switch time, it
should not be too large. If the time quantum is too large, RR scheduling degenerates to
FCFS policy. A rule of thumb is that 80 percent of the CPU bursts should be shorter than
the time quantum.
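The RR mechanics described above (FIFO ready queue, preemption after one quantum, preempted process to the tail) can be sketched in a few lines of Python. The example data reuses the P1/P2/P3 problem with a 4 ms quantum; names are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """RR with all processes arriving at time 0.
    processes: list of (name, burst). Returns {name: completion_time}."""
    queue = deque(processes)        # ready queue kept as a FIFO queue
    time, finish = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)     # run for up to one time quantum
        time += run
        if rem > run:
            queue.append((name, rem - run))  # preempted: back to the tail
        else:
            finish[name] = time              # released CPU voluntarily
    return finish

bursts = {"P1": 24, "P2": 3, "P3": 3}
fin = round_robin(list(bursts.items()), quantum=4)
waits = {n: fin[n] - bursts[n] for n in fin}  # waiting = completion - burst
avg_wait = sum(waits.values()) / 3            # 17/3
```

Raising `quantum` above the largest burst makes the same code behave exactly like FCFS, illustrating the degeneration noted above.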

Multilevel Queue Scheduling


A multilevel queue scheduling algorithm partitions the ready queue into several
separate queues (Figure). The processes are permanently assigned to one queue, generally
based on some property of the process, such as memory size, process priority, or process
type. Each queue has its own scheduling algorithm.

Figure: Multilevel queue scheduling (queues ordered from highest to lowest priority).


For example, separate queues might be used for foreground and background processes.
The foreground queue might be scheduled by an RR algorithm, while the background
queue is scheduled by an FCFS algorithm.
An example of a multilevel queue scheduling algorithm with five queues, listed below in
order of priority:

1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive processes,
and interactive editing processes were all empty. If an interactive editing process entered
the ready queue while a batch process was running, the batch process would be preempted.
Another possibility is to time-slice among the queues. Here, each queue gets a certain
portion of the CPU time, which it can then schedule among its various processes. For
instance, in the foreground background queue example, the foreground queue can be given
80 percent of the CPU time for RR scheduling among its processes, whereas the
background queue receives 20 percent of the CPU to give to its processes on an FCFS
basis.
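The absolute-priority variant described first can be sketched very simply: the scheduler always serves the highest non-empty queue. This is an illustrative Python fragment (queue contents and names are invented):

```python
def multilevel_pick(queues):
    """Absolute-priority multilevel queues: serve the first process of the
    highest-priority non-empty queue (index 0 = highest priority)."""
    for q in queues:
        if q:
            return q.pop(0)
    return None    # all queues empty: CPU is idle

# Queue order: system, interactive, interactive editing, batch, student
queues = [[], [], ["edit1"], ["batch1", "batch2"], ["stud1"]]
first = multilevel_pick(queues)   # the batch processes must wait for "edit1"
```

A batch process runs only once the three higher queues are all empty, matching the rule stated above; the time-sliced variant would instead allocate each queue a fixed share of the CPU.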
Deadlocks
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that
time, the process enters a waiting state. Sometimes, a waiting process is never again
able to change state, because the resources it has requested are held by other waiting
processes. This situation is called a deadlock.
System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes. The resources are divided into several types, such as Memory
space, CPU cycles, files, and I/O devices (such as printers and DVD drives).


A process must request a resource before using it and must release the resource after
using it. A process may request as many resources as it requires to carry out its
designated task. The number of resources requested may not exceed the total number of
resources available in the system i.e., a process cannot request three printers if the system
has only two.
Under the normal mode of operation, a process may utilize a resource in only the
following sequence:

1. Request. If the request cannot be granted immediately (for example, if the resource is
being used by another process), then the requesting process must wait until it can acquire
the resource.

2. Use. The process can operate on the resource (for example, if the resource is a printer,
the process can print on the printer).

3. Release. The process releases the resource.

Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a
system:
1. Mutual exclusion. At least one resource must be held in a nonsharable mode; that is,
only one process at a time can use the resource. If another process requests that resource,
the requesting process must be delayed until the resource has been released.

2. Hold and wait. A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.

3. No preemption. Resources cannot be preempted; that is, a resource cannot be forcibly
removed from the process holding it. A resource can be released only voluntarily by the
process holding it, after that process has completed its task.
4. Circular wait. The processes in the system form a circular list or chain in which each
process is waiting for a resource held by the next process in the list.
Deadlock Handling
We can deal with the deadlock problem in one of three ways:


• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlock state.

• We can allow the system to enter a deadlock state, detect it, and recover.

• We can ignore the problem altogether and pretend that deadlocks never occur in the
system.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention
or a deadlock-avoidance scheme. Deadlock prevention provides a set of methods for
ensuring that at least one of the necessary conditions cannot hold. These methods
prevent deadlocks by constraining how requests for resources can be made.

Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring
that at least one of these conditions cannot hold, we can prevent the occurrence of a
deadlock.
1. Mutual Exclusion

The mutual-exclusion condition must hold for nonsharable resources. For example, a
printer cannot be simultaneously shared by several processes. Sharable resources, in
contrast, do not require mutually exclusive access and thus cannot be involved in a
deadlock. Read-only files are a good example of a sharable resource. If several
processes attempt to open a read-only file at the same time, they can be granted
simultaneous access to the file. A process never needs to wait for a sharable resource. In
general, however, we cannot prevent deadlocks by denying the mutual-exclusion
condition, because some resources are intrinsically nonsharable.
2. Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee
that, whenever a process requests a resource, it does not hold any other resources. One
protocol that can be used requires each process to request and be allocated all its
resources before it begins execution.
An alternative protocol allows a process to request resources only when it has none. A
process may request some resources and use them. Before it can request any additional
resources, however, it must release all the resources that it is currently allocated.


Both these protocols have two main disadvantages. First, resource utilization may be low,
since resources may be allocated but unused for a long period.
Second, starvation is possible. A process that needs several popular resources may have
to wait indefinitely, because at least one of the resources that it needs is always allocated
to some other process.
3. No Preemption

The third necessary condition for deadlocks is that there be no preemption of resources
that have already been allocated. To ensure that this condition does not hold, we can use
the following protocol. If a process is holding some resources and requests another
resource that cannot be immediately allocated to it (that is, the process must wait), then
all resources currently being held are preempted. The preempted resources are added to
the list of resources for which the process is waiting. The process will be restarted only
when it can regain its old resources, as well as the new ones that it is requesting.

4. Circular Wait

The fourth and final condition for deadlocks is the circular-wait condition. One way to
ensure that this condition never holds is to impose a total ordering of all resource types
and to require that each process requests resources in an increasing order of enumeration.
A process cannot request a resource that is lower in the ordering than a resource it
already holds. This ensures that no cycle of waiting processes can form.
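The total-ordering rule can be illustrated with a small sketch. The class name, resource names, and rank numbers below are invented for illustration; the point is only the check that each new request must have a higher rank than anything the process already holds:

```python
class OrderedResources:
    """Sketch of circular-wait prevention via a total ordering of resources."""

    def __init__(self, ranking):
        self.rank = ranking      # resource name -> position in the total order
        self.highest = {}        # process -> highest rank it currently holds

    def request(self, process, resource):
        r = self.rank[resource]
        # Requests must come in strictly increasing order of enumeration
        if r <= self.highest.get(process, -1):
            raise RuntimeError(
                f"{process} may not request {resource}: out of order")
        self.highest[process] = r

res = OrderedResources({"tape drive": 1, "disk drive": 5, "printer": 12})
res.request("P1", "tape drive")
res.request("P1", "printer")      # allowed: rank 12 > rank 1
# res.request("P1", "disk drive") # would raise: rank 5 < rank 12
```

Because every process acquires resources in the same global order, no chain of processes can each hold a resource the next one is waiting for, so no cycle can arise.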


Deadlock Avoidance
In deadlock avoidance, the request for any resource will be granted if the resulting state
of the system doesn't cause deadlock in the system. The state of the system will
continuously be checked for safe and unsafe states.
In order to avoid deadlocks, the process must tell OS, the maximum number of resources
a process can request to complete its execution.
The simplest and most useful approach states that the process should declare the
maximum number of resources of each type it may ever need. The Deadlock avoidance
algorithm examines the resource allocations so that there can never be a circular wait
condition.
Deadlock avoidance requires that the operating system be given in advance additional
information concerning which resources a process will request and use during its lifetime.
With this additional knowledge, it can decide for each request whether or not the process
should wait. To decide whether the current request can be satisfied or must be delayed,
the system must consider the resources currently available, the resources currently
allocated to each process, and the future requests and releases of each process.

If a system does not employ either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may arise. In this environment, the system can
provide an algorithm that examines the state of the system to determine whether a
deadlock has occurred and an algorithm to recover from the deadlock.
There are two deadlock-avoidance algorithms.

Banker's Algorithm
The name was chosen because the algorithm could be used in a banking system to ensure
that the bank never allocated its available cash in such a way that it could no longer
satisfy the needs of all its customers.
When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may need. This number may not exceed the total
number of resources in the system. When a user requests a set of resources, the system
must determine whether the allocation of these resources will leave the system in a safe
state. If it will, the resources are allocated; otherwise, the process must wait until some
other process releases enough resources.


Several data structures must be maintained to implement the banker's algorithm. These
data structures encode the state of the resource-allocation system.
This algorithm calculates resources allocated, required and available before allocating
resources to any process to avoid deadlock. It contains two matrices on a dynamic basis.
Matrix A contains resources allocated to different processes at a given time. Matrix B
maintains the resources which are still required by different processes at the same time.
Let F be the vector of free resources. The algorithm proceeds as follows:
Step 1: When a process requests for a resource, the OS allocates it on a trial basis.
Step 2: After trial allocation, the OS updates all the matrices and vectors. This updating
can be done by the OS in a separate work area in the memory.
Step 3: It compares the F vector with each row of Matrix B on a vector-to-vector basis.
Step 4: If F is smaller than every row in Matrix B (i.e., even if all free resources were
allocated, not a single process could complete its task), then the OS concludes that the
system is in an unsafe state.
Step 5: If F is greater than or equal to some row for a process in Matrix B, the OS
allocates all required resources for that process on a trial basis. It assumes that, after
completion, the process will release all the resources allocated to it; these resources can
then be added to the free vector.

Safety Algorithm
We can now present the algorithm for finding out whether or not a system is in a safe
state. This algorithm can be described as follows:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an i such that both
a. Finish[i] == false
b. Need_i ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
Go to step 2.


4. If Finish[i] == true for all i, then the system is in a safe state.

This algorithm may require an order of m × n² operations to determine whether a
state is safe.
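The four steps of the safety algorithm translate directly into code. This is a minimal Python sketch; the example Available/Allocation/Need matrices are a common textbook-style illustration, not data from this unit:

```python
def is_safe(available, allocation, need):
    """Safety algorithm for n processes and m resource types.
    available: length-m vector; allocation, need: n x m matrices."""
    m = len(available)
    work = available[:]                  # step 1: Work = Available
    finish = [False] * len(allocation)   #         Finish[i] = false for all i
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            # step 2: find i with Finish[i] == false and Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):       # step 3: Work = Work + Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    return all(finish)                   # step 4: safe iff Finish is all true

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe = is_safe(available, allocation, need)  # True: P1, P3, P4, P2, P0 finish in turn
```

Each pass "completes" every process whose remaining need fits in Work and reclaims its allocation, which is exactly the trial allocation and release described for the banker's algorithm above.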
