Chapter 2-3: Process Scheduling
Operating System

Scheduling Queues

Job queue:
► Set of all processes in the system.
► Initially, all processes are placed in the job queue.
► It is maintained in secondary memory.
► The long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.
Ready queue:
► Set of all processes residing in main memory.
► The short-term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
Scheduling Queues (Cont’d)
Device queues:
► Set of processes waiting for an I/O device.
► Once a process has been assigned to the CPU and is executing, one of the following events can occur:
► It makes an I/O request and is placed in an I/O (device) queue.
► It creates a new child process and waits for the child process to finish.
► It is forcibly removed from the CPU and returned to the ready queue.
► A timeout occurs.
Scheduler
Schedulers are special system software that handle process scheduling in various ways.
► Their main task is to select the jobs to be submitted into the system and to decide which process to run.
► There are three types of schedulers:
► Long-Term Scheduler
► Short-Term Scheduler
► Medium-Term Scheduler
Scheduler (Cont’d)
Long-Term Scheduler
► Also known as the Job Scheduler.
► Determines which processes from the job queue should be brought into the ready queue.
► Runs less frequently, as it deals with long-term decisions.
► Controls the degree of multiprogramming by regulating the number of processes in the system.
► Allocates resources and memory to processes that are ready to execute.
Scheduler (Cont’d)
Short-Term Scheduler
► Also known as the CPU Scheduler.
► Selects the next process from the ready queue to be executed by the CPU.
► Runs more frequently, as it deals with immediate decisions.
► Aims to minimize waiting time and maximize throughput by efficiently utilizing the CPU.
► Prioritizes processes based on scheduling algorithms (e.g., Round Robin, Priority Scheduling) and their associated parameters.
Types of CPU Scheduling
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running to
the waiting state.
2. When a process switches from the running to
the ready state.
3. When a process switches from the waiting state to
the ready state.
4. When a process terminates.
Types of CPU Scheduling (Cont’d)
► In circumstances 1 and 4, there is no choice in terms of scheduling: a new process must be selected for execution.
► However, in circumstances 2 and 3, there is a choice.
► When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive; otherwise, the scheduling scheme is preemptive.
Types of CPU Scheduling (Cont’d)
Preemptive Scheduling
► Applied when a process switches from the running state to the ready state, or from the waiting state to the ready state.
► The CPU is allocated to a process for a limited amount of time and is then taken away.
► The process is placed back in the ready queue if it still has CPU burst time remaining.
► The process stays in the ready queue until it gets its next chance to execute.
► If a process with a higher priority arrives in the ready queue, it does not have to wait for the current process.
Types of CPU Scheduling (Cont’d)
Advantages
► Flexible.
► Cost effective.
Disadvantages
► If a high-priority process frequently arrives in the ready queue, a low-priority process may starve.
► It incurs the overhead of scheduling processes more frequently.
Types of CPU Scheduling (Cont’d)
Non-preemptive scheduling
► Applied when a process terminates or switches from the running state to the waiting state.
► Once the CPU is allocated to a process, the process holds the CPU until it terminates or reaches a waiting state.
► It does not interrupt a process running in the middle of its execution.
Advantage
► No preemption overheads.
Disadvantages
► Rigid.
► If a process with a long burst time is running on the CPU, another process with a shorter CPU burst time may starve.
CPU Scheduling: Criteria
► There are different criteria to consider when choosing the “best” scheduling algorithm:
CPU Utilization
► Keep the CPU as busy as possible.
► Ideally 100% of the time.
► In practice it ranges from about 40% (lightly loaded system) to 90% (heavily loaded system).
Throughput
► The total number of processes completed per unit of time.
CPU Scheduling: Criteria (Cont’d)
Turnaround Time
► The amount of time taken to execute a particular process.
► The interval from the time of submission of the process to the time of its completion.
► It is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.

Turnaround Time = Completion Time - Arrival Time


CPU Scheduling: Criteria(Cont’d)
Waiting Time
► The total time a process spends waiting in the ready queue before it gets the CPU.

Waiting Time (WT) = Turnaround Time (TAT) - Burst Time

Load Average
► The average number of processes residing in the ready queue, waiting for their turn to get the CPU.


CPU Scheduling: Criteria(Cont’d)
Response Time
► The amount of time from when a request was submitted until the first response is produced.
► N.B.: It is the time until the first response, not the completion of process execution.

Response Time = CPU Allocation Time - Arrival Time

Completion Time
► The time when the process stops executing, i.e., the process has completed its burst time and is fully executed.
Scheduling Algorithms
► In order to maximize CPU utilization, computer scientists
have developed a set of algorithms that determine which
process should run first and which should run last.
► The algorithms used are as follows:
► First Come First Serve
► Shortest Job First(SJF)

► Priority

► Round Robin

► Multilevel Queue

► Multilevel Feedback Queue

► Etc.
Scheduling Algorithms(Cont’d)
First Come First Serve (FCFS)
► The process that arrives first gets executed first.
► In other words, the process that requests the CPU first gets the CPU allocated first.
► It works just like a FIFO queue data structure: the element added to the queue first is the one that leaves the queue first.
► It is used in batch systems.
► A real-life example of FCFS scheduling is a students' queue at the cafeteria.
Scheduling Algorithms(Cont’d)
Example 1: Consider the processes given in the table below, arriving for execution in the order listed, all with arrival time 0 and the given burst times. Find the average waiting time using the FCFS scheduling algorithm.

Process  Burst time
P1       21
P2       3
P3       6
P4       2

Solution: If the processes arrive in the order P1, P2, P3, P4, we get the result shown in the following Gantt chart.
Scheduling Algorithms(Cont’d)

Gantt chart: P1 (0-21), P2 (21-24), P3 (24-30), P4 (30-32)
Waiting time: P1 = 0, P2 = 21, P3 = 24, P4 = 30
Average waiting time (AWT) = (0 + 21 + 24 + 30) / 4 = 18.75 ms
Scheduling Algorithms(Cont’d)
Example 2: Consider the same processes given in Example 1, but arriving in the order P4, P2, P3, P1. Calculate the average waiting time and compare the result.

Process  Burst time
P4       2
P2       3
P3       6
P1       21

Solution: If the processes arrive in the order P4, P2, P3, P1, we get the result shown in the following Gantt chart.
Scheduling Algorithms(Cont’d)

Gantt chart: P4 (0-2), P2 (2-5), P3 (5-11), P1 (11-32)
Waiting time: P4 = 0, P2 = 2, P3 = 5, P1 = 11
Average waiting time (AWT) = (0 + 2 + 5 + 11) / 4 = 4.5 ms
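The effect of arrival order on FCFS can be checked with a short sketch. The following Python snippet is an illustration, not part of the original slides; it assumes all processes arrive at time 0 and reproduces the two averages above.

```python
# FCFS with all arrivals at time 0: each process waits for the sum of the
# burst times of the processes ahead of it in the queue.
def fcfs_awt(bursts):
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)   # time spent waiting before this burst starts
        elapsed += burst
    return sum(waiting) / len(waiting)

print(fcfs_awt([21, 3, 6, 2]))   # order P1, P2, P3, P4 -> 18.75
print(fcfs_awt([2, 3, 6, 21]))   # order P4, P2, P3, P1 -> 4.5
```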
Scheduling Algorithms(Cont’d)
Example 3: Consider 5 processes whose arrival and burst times are given in the table below. Calculate the average waiting time and the average turnaround time if the FCFS scheduling algorithm is followed.

PID  Arrival time  Burst time
1    4             5
2    6             4
3    0             3
4    6             2
5    5             4

(Recall: Turnaround time = Completion time - Arrival time; Waiting time = Turnaround time - Burst time.)
Scheduling Algorithms(Cont’d)
Solution:
Gantt chart: P3 (0-3), idle (3-4), P1 (4-9), P5 (9-13), P2 (13-17), P4 (17-19)

PID  Arrival time  Burst time  Completion time  Turnaround time  Waiting time
1    4             5           9                9 - 4 = 5        5 - 5 = 0
2    6             4           17               17 - 6 = 11      11 - 4 = 7
3    0             3           3                3 - 0 = 3        3 - 3 = 0
4    6             2           19               19 - 6 = 13      13 - 2 = 11
5    5             4           13               13 - 5 = 8       8 - 4 = 4

Average waiting time = (0 + 7 + 0 + 11 + 4) / 5 = 4.4 ms; average turnaround time = (5 + 11 + 3 + 13 + 8) / 5 = 8 ms.
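A minimal sketch of FCFS with arrival times (my own illustration, not from the slides; ties at the same arrival time are assumed to be broken by the order in which the processes are listed). It reproduces the table above, including the idle gap between t = 3 and t = 4.

```python
# FCFS with arrival times: run processes in order of arrival, letting the
# CPU sit idle if the next process has not arrived yet.
def fcfs(processes):                      # processes: list of (pid, arrival, burst)
    time, rows = 0, []
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)         # wait for the process to arrive
        completion = time + burst
        turnaround = completion - arrival
        waiting = turnaround - burst
        rows.append((pid, completion, turnaround, waiting))
        time = completion
    return rows

for row in fcfs([(1, 4, 5), (2, 6, 4), (3, 0, 3), (4, 6, 2), (5, 5, 4)]):
    print(row)   # (pid, completion, turnaround, waiting): (3, 3, 3, 0), (1, 9, 5, 0), ...
```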
Scheduling Algorithms(Cont’d)
Drawbacks of FCFS
► It is a non-preemptive scheduling algorithm, which means process priorities are not taken into account.
► The average waiting time is not optimal.
► Resources (CPU, I/O devices, etc.) cannot be utilized in parallel, which leads to the convoy effect and hence poor resource utilization.
► Question: What is the convoy effect?
► Answer: A situation where many processes that need a resource for a short time are blocked by one process holding that resource for a long time.
Scheduling Algorithms(Cont’d)
Shortest Job First (SJF) Scheduling
► SJF executes the process with the shortest burst time (duration) first.
► This is the best approach to minimize waiting time.
► It is used in batch systems.
► Two types:
► Non-preemptive
► Preemptive
► The burst time of each process should be known to the processor in advance, which is not always practically feasible.
► This scheduling algorithm is optimal if all the processes are available at the same time.


Scheduling Algorithms(Cont’d)
Non Pre-emptive Shortest Job First
► The shortest process is executed first.
► Example 4: Consider the processes below, available in the ready queue for execution, all with arrival time 0 and the given burst times.
PID Burst time
1 21
2 3
3 6
4 2
Scheduling Algorithms(Cont’d)
► The resulting Gantt chart is P4 (0-2), P2 (2-5), P3 (5-11), P1 (11-32): process P4 is picked first as it has the shortest burst time, then P2, followed by P3, and finally P1.
► By comparison, with the FCFS algorithm in Example 1 the average waiting time was 18.75 ms, whereas with SJF it comes out to 4.5 ms.
► Problem with non-preemptive SJF:
► If the arrival times of the processes differ, not all processes are available in the ready queue at time 0 (see the sketch below).
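A minimal sketch of non-preemptive SJF that copes with such staggered arrivals (my own illustration, not from the slides): whenever the CPU becomes free, it picks the shortest job among those that have already arrived, breaking ties by arrival time. Run on the Example 4 data it reproduces the 4.5 ms average.

```python
import heapq

# Non-preemptive SJF: whenever the CPU becomes free, pick the shortest job
# among those that have already arrived; ties broken by arrival time.
def sjf(processes):                                   # list of (pid, arrival, burst)
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    ready, time, waits, i = [], 0, {}, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1] <= time:
            pid, arrival, burst = pending[i]
            heapq.heappush(ready, (burst, arrival, pid))
            i += 1
        if not ready:                                 # CPU idle until next arrival
            time = pending[i][1]
            continue
        burst, arrival, pid = heapq.heappop(ready)
        waits[pid] = time - arrival
        time += burst
    return waits

waits = sjf([(1, 0, 21), (2, 0, 3), (3, 0, 6), (4, 0, 2)])
print(waits, sum(waits.values()) / len(waits))        # waiting times, AWT = 4.5
```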
Scheduling Algorithms(Cont’d)
Pre-emptive Shortest Job First
► Also known as Shortest Remaining Time (SRT) or Shortest Remaining Time First (SRTF) scheduling.
► Jobs are put into the ready queue as they arrive; when a process with a shorter remaining burst time arrives, the currently executing process is preempted (removed from execution) and the shorter job is executed first.
► If two processes have the same remaining time, they are scheduled on an FCFS basis.
Scheduling Algorithms(Cont’d)
Example 5: Consider the following table and calculate the average waiting time (AWT).

PID  Arrival time  Burst time
1    0             6
2    1             4
3    2             2
4    3             3

Gantt chart: P1 (0-1), P2 (1-2), P3 (2-4), P2 (4-7), P4 (7-10), P1 (10-15)
Scheduling Algorithms (Cont’d)
 Waiting time = Completion time - Burst time - Arrival time
 P1 waiting time = (15 - 6 - 0) = 9 ms
 P2 waiting time = (7 - 4 - 1) = 2 ms
 P3 waiting time = (4 - 2 - 2) = 0 ms
 P4 waiting time = (10 - 3 - 3) = 4 ms
 Therefore, the average waiting time is (9 + 2 + 0 + 4)/4 = 15/4 = 3.75 ms
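A minimal sketch of SRTF (my own illustration, not from the slides), simulating one time unit at a time and always running the arrived process with the least remaining burst time, with ties broken on an FCFS basis. It reproduces the waiting times above.

```python
# Preemptive SJF (SRTF), simulated one time unit at a time.
def srtf_waiting_times(processes):            # list of (pid, arrival, burst)
    arrival = {pid: arr for pid, arr, _ in processes}
    burst = {pid: b for pid, _, b in processes}
    remaining = dict(burst)
    completion, time = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:
            time += 1                         # CPU idle, nothing has arrived yet
            continue
        p = min(ready, key=lambda q: (remaining[q], arrival[q]))  # shortest remaining; FCFS on ties
        remaining[p] -= 1
        time += 1
        if remaining[p] == 0:
            completion[p] = time
            del remaining[p]
    return {p: completion[p] - burst[p] - arrival[p] for p in completion}

waits = srtf_waiting_times([(1, 0, 6), (2, 1, 4), (3, 2, 2), (4, 3, 3)])
print(waits, sum(waits.values()) / len(waits))  # P1=9, P2=2, P3=0, P4=4; AWT 3.75
```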
Scheduling Algorithms(Cont’d)

Recall the formula:

Waiting Time = Completion Time - Burst Time - Arrival Time

or, equivalently,

Waiting Time = Turnaround Time - Burst Time
Scheduling Algorithms(Cont’d)
Exercise 1: An operating system uses the Shortest Remaining Time First scheduling algorithm on the processes given in the table below. What is the average waiting time of the processes? (Ans: 5.5 ms. How?)

PID Arrival time Burst time


1 0 12
2 2 4
3 3 6
4 8 5
Scheduling Algorithms(Cont’d)
Solution to Exercise 1 (SRTF order: P1 runs 0-2, P2 runs 2-6, P3 runs 6-12, P4 runs 12-17, P1 resumes 17-27):
► P1 waiting time = (27 - 12 - 0) = 15 ms
► P2 waiting time = (6 - 4 - 2) = 0 ms
► P3 waiting time = (12 - 6 - 3) = 3 ms
► P4 waiting time = (17 - 5 - 8) = 4 ms
⸫ AWT = (15 + 0 + 3 + 4)/4 = 22/4 = 5.5 ms

Problem with SJF
► It cannot be implemented at the level of short-term CPU scheduling.
► There is no way to know the length of the next CPU burst.
Scheduling Algorithms(Cont’d)
Priority CPU Scheduling
► It is one of the most common scheduling algorithms in batch systems.
► Each process is assigned a priority.
► The process with the highest priority is executed first.
► Processes with the same priority are executed on an FCFS basis.
► The lower the priority value, the higher the priority.
► Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
► SJF can be seen as a priority scheduling algorithm in which the lower the CPU burst, the higher the priority.
Scheduling Algorithms(Cont’d)
Priority scheduling can be of two types:
► Preemptive Priority Scheduling:
► If a new process arriving at the ready queue has a higher priority than the currently running process, the CPU is preempted.
► Non-Preemptive Priority Scheduling:
► If a new process arrives with a higher priority than the currently running process, the incoming process is put at the head of the ready queue and is processed after the execution of the current process.
Scheduling Algorithms(Cont’d)
Example 6: Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, P3, P4, P5, with the length of the CPU burst given in milliseconds. Calculate the average waiting time using the priority scheduling algorithm.
PID Priority Burst time
1 3 10
2 1 1
3 4 2
4 5 1
5 2 5
Scheduling Algorithms(Cont’d)
Solution:
Gantt chart: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19)
► Waiting time for P1 = 6 ms
► Waiting time for P2 = 0 ms
► Waiting time for P3 = 16 ms
► Waiting time for P4 = 18 ms
► Waiting time for P5 = 1 ms
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 ms
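A minimal sketch of non-preemptive priority scheduling for processes that all arrive at time 0 (my own illustration, not from the slides; a lower number means a higher priority, and Python's stable sort keeps ties in FCFS order). It reproduces the 8.2 ms average.

```python
# Non-preemptive priority scheduling, all processes arriving at time 0:
# run in order of priority value (lower number = higher priority).
def priority_waiting_times(processes):        # list of (pid, priority, burst)
    time, waits = 0, {}
    for pid, _, burst in sorted(processes, key=lambda p: p[1]):
        waits[pid] = time                      # waits until all higher-priority bursts finish
        time += burst
    return waits

waits = priority_waiting_times([(1, 3, 10), (2, 1, 1), (3, 4, 2), (4, 5, 1), (5, 2, 5)])
print(waits, sum(waits.values()) / len(waits))   # P2=0, P5=1, P1=6, P3=16, P4=18; AWT 8.2
```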
Scheduling Algorithms(Cont’d)
Problems in Priority Scheduling:
► The major problem is indefinite blocking, or starvation.
► A process is considered blocked when it is ready to run but has to wait for the CPU because some other process is currently running.
► Some low-priority processes may be left waiting indefinitely.
► In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
Scheduling Algorithms(Cont’d)
► Solution to Starvation: Aging
► Question: What is Aging? Ans: Aging is a technique of
gradually increasing the priority of processes that wait in the
system for a long time.
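A minimal sketch of the aging idea (my own illustration, not from the slides; the Proc fields, the 10-unit aging step, and the example numbers are hypothetical): the longer a process has been waiting in the ready queue, the more its effective priority number is reduced, i.e. the higher its priority becomes.

```python
from dataclasses import dataclass

AGING_STEP = 10   # every 10 time units of waiting raise the priority by one level

@dataclass
class Proc:
    pid: int
    base_priority: int    # lower number = higher priority
    enqueue_time: int     # when the process entered the ready queue

def effective_priority(proc: Proc, now: int) -> int:
    """Aged priority: the longer the wait, the smaller (better) the value."""
    boost = (now - proc.enqueue_time) // AGING_STEP
    return max(0, proc.base_priority - boost)

ready = [Proc(1, base_priority=8, enqueue_time=0), Proc(2, base_priority=5, enqueue_time=55)]
now = 60
chosen = min(ready, key=lambda p: effective_priority(p, now))
print(chosen.pid)   # 1 -- P1 wins despite its worse base priority, because it has aged
```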
Scheduling Algorithms(Cont’d)
Round Robin Scheduling (RR)
► The RR scheduling algorithm is mainly designed for time-sharing systems.
► It is similar to FCFS scheduling, but preemption is added, which enables the system to switch between processes.
► A fixed amount of time, called a quantum, is allotted to each process for execution.
► Once a process has executed for the given time period, it is preempted and another process executes for its time period.
Scheduling Algorithms(Cont’d)
► Context switching is used to save the states of preempted processes.
► This algorithm is simple and easy to implement.
► It is starvation-free, as all processes get a fair share of the CPU.
► The length of the time quantum is generally 10 to 100 milliseconds.
Some important characteristics of the RR algorithm:
► It belongs to the category of preemptive algorithms.
► It is one of the oldest, simplest, and fairest algorithms.
► It is a real-time algorithm because it responds to events within a specific time limit.
Scheduling Algorithms(Cont’d)
► The time slice should be the minimum amount that is assigned to a specific task that needs to be processed.
► It is a hybrid model and is clock-driven in nature.
► It is a widely used scheduling method in traditional operating systems.
Example 7: There are six processes whose arrival and burst times are given in the table below. The time quantum of the system is 4 units. Calculate the average waiting time of the given processes.
Scheduling Algorithms(Cont’d)
PID  Arrival time  Burst time
1    0             5
2    1             6
3    2             3
4    3             1
5    4             5
6    6             4

After the first round through the ready queue, the remaining burst times are P1: 1, P2: 2, P5: 1, so P1, P2, and P5 re-enter the ready queue.

Gantt chart:
P1 | P2 | P3 | P4 | P5 | P1 | P6 | P2 | P5
0    4    8    11   12   16   17   21   23   24
Scheduling Algorithms(Cont’d)
To recap:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time

PID  Arrival time  Burst time  Completion time  Turnaround time  Waiting time
1    0             5           17               17               12
2    1             6           23               22               16
3    2             3           11               9                6
4    3             1           12               9                8
5    4             5           24               20               15
6    6             4           21               15               11

Average waiting time = (12 + 16 + 6 + 8 + 15 + 11)/6 = 68/6 ≈ 11.33 ms
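A minimal sketch of Round Robin (my own illustration, not from the slides), using a deque as the ready queue; processes that arrive during a time slice are queued ahead of the preempted process, which matches the Gantt chart above. It reproduces the completion times in the table.

```python
from collections import deque

# Round Robin with quantum q: arrivals and preempted processes go to the back
# of the ready queue; a preempted process re-enters behind new arrivals.
def round_robin(processes, q):                 # list of (pid, arrival, burst)
    pending = deque(sorted(processes, key=lambda p: p[1]))
    ready, time, completion = deque(), 0, {}
    while pending or ready:
        while pending and pending[0][1] <= time:
            ready.append(pending.popleft())
        if not ready:
            time = pending[0][1]               # CPU idle until the next arrival
            continue
        pid, arrival, burst = ready.popleft()
        run = min(q, burst)
        time += run
        while pending and pending[0][1] <= time:   # arrivals during this slice queue first
            ready.append(pending.popleft())
        if burst > run:
            ready.append((pid, arrival, burst - run))
        else:
            completion[pid] = time
    return completion

comp = round_robin([(1, 0, 5), (2, 1, 6), (3, 2, 3), (4, 3, 1), (5, 4, 5), (6, 6, 4)], q=4)
print(comp)   # P1: 17, P2: 23, P3: 11, P4: 12, P5: 24, P6: 21
```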
Scheduling Algorithms(Cont’d)
Advantages of RR Scheduling Algorithm
► In terms of average response time, this algorithm gives the best performance.
► All the processes get a fair allocation of the CPU.
► There are no issues of starvation or convoy effect.
► The newly created process is added to the end of the ready queue.
Scheduling Algorithms(Cont’d)

Disadvantages of RR Scheduling Algorithm


► It spends more time on context switches.
► With a small quantum, scheduling itself becomes time-consuming.
► It offers larger waiting and response times.
► Low throughput.
Deadlock
► Deadlock is a situation where a set of processes are
blocked because each process is holding a resource and
waiting for another resource acquired by some other
process.

Fig: Deadlock in an operating system


Deadlock (Cont’d)
► In the figure above, process P1 holds resource R1 and needs to acquire resource R2.
► Similarly, process P2 holds resource R2 and needs to acquire resource R1.
► Therefore, processes P1 and P2 are in deadlock, as each of them needs the other's resource to complete its execution, but neither of them is willing to release its resource.
Deadlock (Cont’d)
► In normal operation, a process utilizes a resource in the following sequence:
► Request the resource
► Use the resource
► Release the resource
Requesting a Resource
► First, the process requests the resource. If the request cannot be granted immediately (e.g., the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
Deadlock (Cont’d)
Using the Resource
► The process can operate on the resource (e.g., if the resource is a printer, the process can print on it).
Releasing the Resource
► The process releases the resource.
Deadlock Characterization
► A deadlock situation can arise if the following four con-
ditions hold simultaneously in a system:
I. Mutual exclusion:
► Only one process at a time can use the resource.
► If another process requests that resource, the requesting process must wait until the resource has been released.
II. Hold and wait:
► There must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently held by other processes.
Deadlock Characterization (Cont’d)
III. No preemption:
► Resources cannot be preempted;
► A resource can be released only voluntarily by the process holding it, after that process has completed its task.
IV. Circular wait:
► There must exist a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource Allocation Graph(RAG)
► Deadlock can be described using a directed graph called a resource-allocation graph.
► The graph consists of a set of vertices V and a set of edges E.
► V is partitioned into two types:
► P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system.
► R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
► The edges in E are also of two types:
► Request edge: a directed edge Pi → Rj
► Assignment (allocation) edge: a directed edge Rj → Pi
RAG (Cont’d)
► Representation:
► A process is represented by a circle.
► A resource type is represented by a square.
► Dots inside a square represent the number of instances of that resource type.

Fig: Resource allocation graph
RAG (Cont’d)

The process states:
► P1 is holding an instance of R2 and is waiting for an instance of R1.
► P2 is holding an instance of R2 and is waiting for an instance of R3.
► P3 is holding an instance of R3 and is waiting for an instance of R2.
Fig: Resource allocation graph with a deadlock
RAG (Cont’d)

The process states:
► P1 is holding an instance of R2 and is waiting for an instance of R1.
► P2 is holding an instance of R1.
► P3 is holding an instance of R1 and is waiting for an instance of R2.
► P4 is holding an instance of R2.
Fig: Resource allocation graph with a cycle but no deadlock
RAG (Cont’d)
Basic facts
► If the graph contains no cycles: no deadlock.
► If the graph contains a cycle:
► if there is only one instance per resource type, then there is a deadlock;
► if there are several instances per resource type, there is a possibility of deadlock.
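With single-instance resources, detecting a deadlock therefore reduces to detecting a cycle in the directed graph. A minimal sketch (my own illustration, not from the slides), with the graph stored as adjacency lists of request edges (P → R) and assignment edges (R → P):

```python
# Depth-first search for a cycle in a resource-allocation graph.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:           # back edge -> cycle found
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# The two-process deadlock from the earlier figure: P1 holds R1 and requests R2,
# P2 holds R2 and requests R1 (single instance of each resource assumed).
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))   # True -> with one instance per resource type, this is a deadlock
```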
Methods for handling Deadlocks

► Deadlock problems can be handled in one of the following three ways:
I. Ensure that the system will never enter a deadlock state:
► Deadlock prevention
► Deadlock avoidance
II. Allow the system to enter a deadlock state and then recover.
III. Ignore the problem and pretend that deadlocks never occur in the system; this is used by most operating systems, including UNIX.
Methods for handling Deadlocks (Cont’d)

Deadlock Prevention
► Prevent a deadlock before it can occur.
► The system checks each resource request before it is granted to make sure it does not lead to deadlock.
► Ensure that at least one of the four necessary conditions cannot hold:
I. Mutual Exclusion: not required for sharable resources; it must hold for non-sharable resources.
Methods for handling Deadlocks (Cont’d)

II. Hold and Wait: ensure that whenever a process requests a resource, it does not hold any other resources.
► Require each process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it holds none.
► Low resource utilization; starvation is possible.
III. No Preemption:
► If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released.
► Preempted resources are added to the list of resources for which the process is waiting.
Methods for handling Deadlocks (Cont’d)

► The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
IV. Circular Wait: impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration (see the sketch below).
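A minimal sketch of the resource-ordering idea (my own illustration, not from the slides; the lock names and ranks are hypothetical): every lock gets a fixed rank and every thread acquires locks in increasing rank order, so no circular wait can form.

```python
import threading

# Circular-wait prevention by total ordering: every resource has a fixed rank,
# and every thread must acquire locks in increasing rank order.
lock_a = threading.Lock()   # rank 1
lock_b = threading.Lock()   # rank 2

def ordered_acquire(*ranked_locks):
    for _, lock in sorted(ranked_locks, key=lambda rl: rl[0]):
        lock.acquire()

def transfer():
    ordered_acquire((1, lock_a), (2, lock_b))   # always a before b, in every thread
    try:
        pass                                    # ... use both resources here ...
    finally:
        lock_b.release()
        lock_a.release()

threads = [threading.Thread(target=transfer) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
```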
Methods for handling Deadlocks (Cont’d)

Deadlock Avoidance
Requires that the system has some additional a priori information available:
► Each process declares the maximum number of resources of each type that it may need.
► The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
► The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
Methods for handling Deadlocks (Cont’d)

Deadlock avoidance: Safe State
► A state is safe if the system can allocate resources to each process (up to its declared maximum) in some order and still avoid a deadlock.
► If a system is in a safe state, then there are no deadlocks.
► If a system is in an unsafe state, then there is a possibility of deadlock.
► The deadlock-avoidance method ensures that a system will never enter an unsafe state.
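A minimal sketch of a safe-state check (my own illustration, not from the slides; this is the idea behind the Banker's algorithm listed as the reading assignment, and the numbers in the example are hypothetical): the state is safe if there is some order in which every process can obtain its remaining maximum demand, finish, and release its resources.

```python
# Safe-state check: repeatedly find a process whose remaining need can be met
# with the currently free resources; when it finishes, its allocation is freed.
def is_safe(available, allocation, need):        # need = max demand - allocation
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]   # process i finishes and frees resources
                finished[i] = True
                progressed = True
    return all(finished)

# Hypothetical single-resource-type example: 3 units free, three processes.
print(is_safe([3], allocation=[[5], [2], [2]], need=[[4], [2], [3]]))   # True -> safe
```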


Methods for handling Deadlocks (Cont’d)

Deadlock Detection
► If a system does not implement either deadlock prevention or deadlock avoidance, a deadlock may occur. Hence the system must provide:
► A deadlock-detection algorithm that examines the state of the system to determine whether a deadlock has occurred.
► An algorithm to recover from the deadlock.
Reading Assignment
Banker’s algorithm
The End
