CSE316 - CPU Scheduling Unit2

The document discusses CPU and process scheduling, focusing on multi-programming objectives to maximize CPU utilization. It covers various scheduling algorithms including non-preemptive and preemptive scheduling, First-Come, First-Served (FCFS), Shortest Job First (SJF), Priority Scheduling, and Round Robin scheduling. Additionally, it highlights scheduling criteria such as CPU utilization, throughput, turnaround time, waiting time, and response time, along with examples and calculations for each algorithm.

Uploaded by

Arunav Tiwary

Unit 2: CPU/ Process Scheduling

Basic Concept
 CPU scheduling is:
 The basis of a multi-programmed OS.

 Objective of multi-programming:
 To have some process running at all times, in order to
maximize CPU utilization.
CPU and I/O Bursts
Sequence of CPU and I/O Bursts
CPU Scheduling Decisions

1. When a process switches from Running to Waiting
   (I/O request or wait) → Non-preemptive
2. When a process switches from Running to Ready
   (interrupt) → Preemptive
3. When a process switches from Waiting to Ready
   (on completion of I/O) → Preemptive
4. When a process terminates → Non-preemptive
Process States
Scheduling

 Non-Preemptive

 Preemptive
Non-Preemptive
 Once the CPU is allocated to a process, the process keeps
the CPU until it releases it:
 when it terminates, or
 when it switches to the waiting state.
 The process runs to completion; it cannot be interrupted.
E.g.: 1. Windows 3.x and the Apple Macintosh operating
systems used non-preemptive scheduling.
(By contrast, modern Windows, including Windows 10, uses a
round-robin technique with a multilevel feedback queue for
priority scheduling.)
E.g.: First-Come, First-Served (FCFS)
Preemptive Scheduling
 The running process is interrupted for some time and
resumed later, when the higher-priority task has finished
its execution.

 The CPU (and other resources) is taken away from the
running process when some higher-priority process needs
to execute.
Dispatcher
 A module that gives control of CPU to the process
selected by short-term scheduler.
 Functions:
 Switching Context
 Switching to User mode
 Jumping to proper location to restart the program.
 The dispatcher should be as fast as possible, given that
it is invoked during every process switch
 Dispatch Latency:
 Time taken for the dispatcher to stop one process and start
another running.
Scheduling Criteria
 Which algorithm to use in a particular situation

1. CPU Utilization: the CPU should be kept as busy as possible.

2. Throughput: the number of processes completed per unit of
time.

3. Turnaround Time: the time interval from submission of a
process to its completion.

Turnaround Time = time spent getting into memory + waiting
in the ready queue + doing I/O + executing on the CPU
(i.e., the total time taken to execute a particular process)
Scheduling Criteria
4. Waiting Time: the time a process spends in the ready
queue, i.e., the amount of time it waits in the ready queue
to acquire the CPU.

5. Response Time: the time from submission of a request to
the first response (not the completion of the output).

6. Load Average: the average number of processes residing
in the ready queue, waiting for their turn on the CPU.
Scheduling Algorithm Optimization Criteria

 Max CPU utilization


 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
Formula

Turnaround Time = Completion Time − Arrival Time

Waiting Time = Turnaround Time − Burst Time

or equivalently:
Turnaround Time = Burst Time + Waiting Time
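These relationships can be expressed directly as code (a minimal sketch; the example process values are illustrative, not from the slides):

```python
def turnaround_time(completion, arrival):
    # Turnaround Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(turnaround, burst):
    # Waiting Time = Turnaround Time - Burst Time
    return turnaround - burst

# Illustrative process: arrives at 1, bursts for 3, completes at 7.
tat = turnaround_time(7, 1)   # 6
wt = waiting_time(tat, 3)     # 3
assert tat == 3 + wt          # equivalently, Turnaround = Burst + Waiting
```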
First-Come, First-Served (FCFS)

 Processes that request the CPU first are allocated the
CPU first.
 It is a non-preemptive scheduling algorithm.
 FCFS is implemented with a FIFO queue.
 A process is allocated the CPU according to its arrival
time.
 When a process enters the ready queue, its PCB is attached
to the tail of the queue; when the CPU is free, it is allocated
to the process at the head/front of the queue.
First-Come, First-Served (FCFS)
 “Run until Completed:” FIFO algorithm
 Example: Consider three processes arrive in order
P1, P2, and P3.
 P1 burst time: 24
 P2 burst time: 3
 P3 burst time: 3
 Draw the Gantt Chart and compute Average Waiting
Time and Average Turn Around Time.

Sol: As arrival times are not given, assume the order of
arrival is P1, P2, P3.

Gantt chart: P1 [0–24], P2 [24–27], P3 [27–30]
Average Waiting Time = (0 + 24 + 27) / 3 = 17
Average Turnaround Time = (24 + 27 + 30) / 3 = 27
Example: First-Come, First-Served (FCFS)
Consider AT(0,1,2,3)
First Come First Serve

Process AT BT CT TAT WT
P1 0 2
P2 3 1
P3 5 6
First Come First Serve (Convoy Effect)
H.W
Process AT BT CT TAT WT
P1 0 4
Calculate avg. P2 1 3
CT, TAT,WT P3 2 1
P4 3 2
P5 4 5
First Come First Serve (Convoy Effect)

Solve: FCFS Arrival Time

Process AT BT CT TAT WT
P1 0 4 4
P2 1 3 7
P3 2 1 8
P4 3 2 10
P5 4 5 15

Turn around time= Completion Time- Arrival Time

Waiting Time= Turn around Time-Burst Time
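The solved table above can be reproduced with a small FCFS simulator (a sketch; the helper name `fcfs` and the dict layout are choices made here, not part of the slides):

```python
def fcfs(processes):
    """FCFS: run processes in arrival order; the CPU may sit idle
    until the next process arrives. processes: {name: (AT, BT)}."""
    results, clock = {}, 0
    for name, (at, bt) in sorted(processes.items(), key=lambda p: p[1][0]):
        clock = max(clock, at) + bt             # completion time
        tat = clock - at                        # turnaround time
        results[name] = (clock, tat, tat - bt)  # (CT, TAT, WT)
    return results

# The table above: P1..P5 with AT 0..4 and BT 4, 3, 1, 2, 5.
r = fcfs({"P1": (0, 4), "P2": (1, 3), "P3": (2, 1),
          "P4": (3, 2), "P5": (4, 5)})
# r["P1"] == (4, 4, 0), ..., r["P5"] == (15, 11, 6)
```

Each result tuple is (CT, TAT, WT), matching the three computed columns of the table.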


(2) Shortest Job First
 The process with the least execution time is selected first.
 The CPU is assigned to the process with the smallest CPU burst time.
 SJF variants:
 Non-preemptive: the CPU is allocated to the process with the least
burst time, and that process keeps the CPU until it completes.

 Preemptive: when a new process enters the queue, the scheduler
compares its burst time with the remaining time of the already
running process.
 If the running process has more remaining time, the CPU is taken
from it and given to the new process.
Shortest Job First(Preemptive)
Q1. Consider the following processes with A.T and B.T
Process A.T B.T
P1 0 4
P2 0 6
P3 0 4
Calculate the completion time, turnaround time and average waiting time.
SJF(Pre-emptive)-> SRTF
Shortest Job First(Preemptive)
Q1. Consider the following processes with A.T and B.T
Process A.T B.T
P1 0 9
P2 1 4
P3 2 9
Calculate the completion time, turnaround time and average waiting time.
SJF(Pre-emptive)-> SRTF
Shortest Job First(Preemptive)
Q1. Consider the following processes with A.T and B.T
Process A.T B.T
P1 0 5
P2 1 7
P3 3 4
Calculate the completion time, turnaround time and average waiting time.
SJF(Pre-emptive)-> SRTF
Shortest Job First(Preemptive)
Shortest Job First(Preemptive) SRTF
Q1. Consider the following processes with A.T and B.T
Process A.T B.T
P1 0 5
P2 1 3
P3 2 3
P4 3 1
Calculate the completion time, turnaround time and average waiting time.
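This example can be checked with a unit-time SRTF simulation (a sketch; ties on remaining time are broken by arrival time, which is one common convention):

```python
def srtf(processes):
    """Shortest Remaining Time First (preemptive SJF), simulated
    one time unit at a time. processes: {name: (AT, BT)}."""
    remaining = {n: bt for n, (at, bt) in processes.items()}
    ct, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= clock]
        if not ready:                 # CPU idle until next arrival
            clock += 1
            continue
        # pick the least remaining time; break ties by arrival time
        n = min(ready, key=lambda p: (remaining[p], processes[p][0]))
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            ct[n] = clock             # record completion time
    return ct

# The table above: P1(0,5), P2(1,3), P3(2,3), P4(3,1)
ct = srtf({"P1": (0, 5), "P2": (1, 3), "P3": (2, 3), "P4": (3, 1)})
# ct == {"P2": 4, "P4": 5, "P3": 8, "P1": 12}
```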
Shortest Job First (Non Preemption)
 P1 burst time: 15
 P2 burst time: 8
 P3 burst time: 10
 P4 burst time: 3
Consider AT(0,1,2,3)

H.W. Practice: Shortest Job First (Non
Preemption)
Q1. Consider the following processes with A.T and B.T
Process A.T B.T
P1 1 7
P2 2 5
P3 3 1
P4 4 2
P5 5 8
Calculate the completion time, turnaround time and average waiting time.
H.W. Practice: Shortest Job First
(Preemption)

Process  Arrival Time  Burst Time  CT  Turnaround Time  Waiting Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

PRE-EMPTIVE SJF IS SHORTEST REMAINING TIME FIRST (SRTF)
H.W. Practice: Shortest Job First
(Preemption)

Process  Arrival Time  Burst Time  CT  Turnaround Time  Waiting Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
SRTF Example
CPU idle time: (idle time/ total time spent) * 100
SRTF Example
Consider three processes, all arriving at time zero, with total
execution time of 10, 20 and 30 units, respectively. Each
process spends the first 20% of execution time doing I/O, the
next 70% of time doing computation, and the last 10% of time
doing I/O again. The operating system uses a shortest
remaining compute time first scheduling algorithm and
schedules a new process either when the running process gets
blocked on I/O or when the running process finishes its
compute burst. Assume that all I/O operations can be
overlapped as much as possible. For what percentage of time
does the CPU remain idle?

(A) 0% (B) 10.6% (C) 30.0% (D) 89.4%


SRTF Example
Explanation: Let three processes be p0, p1 and p2. Their execution time is 10, 20
and 30 respectively. p0 spends first 2 time units in I/O, 7 units of CPU time and
finally 1 unit in I/O. p1 spends first 4 units in I/O, 14 units of CPU time and finally 2
units in I/O. p2 spends first 6 units in I/O, 21 units of CPU time and finally 3 units in
I/O.

idle [0–2], p0 [2–9], p1 [9–23], p2 [23–44], idle [44–47]
Total time spent = 47
Idle time = 2 + 3 = 5
Percentage of idle time = (5/47)*100 = 10.6 %
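The arithmetic above can be verified directly:

```python
# CPU busy intervals from the explanation: p0 [2, 9], p1 [9, 23],
# p2 [23, 44]; the schedule ends at 47 after p2's final I/O.
busy = (9 - 2) + (23 - 9) + (44 - 23)   # 7 + 14 + 21 = 42
total = 47
idle_pct = (total - busy) / total * 100
print(round(idle_pct, 1))   # 10.6 -> answer (B)
```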
Priority Scheduling
 Priority is associated with each process.
 CPU is allocated to the process with highest
priority.
 If two processes have the same priority → FCFS order.

Disadvantage: Starvation (low-priority processes may wait
indefinitely).

Solution to starvation: Aging.
Aging: the priority of a waiting process is increased gradually
(e.g., after every 5 minutes its priority is incremented by 1).
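Aging can be sketched as follows (illustrative only: the function name, the map layout, and the larger-number-is-higher-priority convention are assumptions made here):

```python
def apply_aging(waiting, increment=1):
    """Aging sketch: every aging interval, bump the priority of
    each process still waiting in the ready queue, so that
    low-priority processes eventually get to run.
    `waiting` maps process name -> priority (larger = higher)."""
    return {name: prio + increment for name, prio in waiting.items()}

queue = {"P_low": 1, "P_mid": 3}
queue = apply_aging(queue)   # after one interval: {"P_low": 2, "P_mid": 4}
```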
Priority Scheduling (Preemptive)
(Consider 4 as highest and 7 as lowest priority.)

Process  Arrival Time  Priority  Burst Time  Completion Time
P1       1             5         4
P2       2             7         2
P3       3             4         3
Priority Scheduling (Preemptive)
(Consider 0 as lowest and 3 as highest priority.)

Process  Arrival Time  Priority  Burst Time  Completion Time
P1       0             2         10
P2       2             1         5
P3       3             0         2
P4       5             3         20
Priority Scheduling (Preemptive)
(Consider 3 as lowest and 0 as highest priority.)

Process  Arrival Time  Priority  Burst Time  Completion Time
P1       0             2         10
P2       2             1         5
P3       3             0         2
P4       5             3         20
Priority Scheduling (Preemptive)
(Consider 4 as lowest and 8 as highest priority.)

Process  Arrival Time  Priority  Burst Time  Completion Time
P1       1             4         4
P2       2             5         2
P3       2             7         3
P4       3             8         5
P5       3             5         1
P6       4             6         2
Priority Scheduling (Preemptive)
(Consider 2 as lowest and 10 as highest priority.)

Process  Arrival Time  Priority  Burst Time  Completion Time
P1       1             2         4
P2       1             2         2
P3       2             10        5
P4       3             6         3
Priority Scheduling (Preemptive)
(Consider 2 as lowest and 12 as highest priority.)

Process  Arrival Time  Priority  Burst Time  Completion Time
P1       0             2         4
P2       1             4         2
P3       2             6         3
P4       3             10        5
P5       4             8         1
P6       5             12        4
P7       6             9         6
Priority Scheduling (Non-Preemptive)
(Consider 7 as lowest and 1 as highest priority.)

Process  Arrival Time  Priority  Burst Time  Completion Time
P1       0             4         4
P2       1             5         5
P3       2             7         1
P4       3             2         2
P5       4             1         3
P6       5             6         6
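The non-preemptive table above can be solved with a short simulation (a sketch; smaller number = higher priority per the note, and FCFS breaks ties):

```python
def priority_np(processes):
    """Non-preemptive priority scheduling. processes maps
    name -> (AT, priority, BT), smaller priority number = higher
    priority. Returns completion times."""
    pending, ct, clock = dict(processes), {}, 0
    while pending:
        ready = [n for n in pending if pending[n][0] <= clock]
        if not ready:                       # idle until the next arrival
            clock = min(at for at, _, _ in pending.values())
            continue
        # highest priority first; FCFS (arrival time) breaks ties
        n = min(ready, key=lambda p: (pending[p][1], pending[p][0]))
        clock += pending[n][2]              # runs to completion
        ct[n] = clock
        del pending[n]
    return ct

ct = priority_np({"P1": (0, 4, 4), "P2": (1, 5, 5), "P3": (2, 7, 1),
                  "P4": (3, 2, 2), "P5": (4, 1, 3), "P6": (5, 6, 6)})
# ct == {"P1": 4, "P5": 7, "P4": 9, "P2": 14, "P6": 20, "P3": 21}
```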
Round Robin Scheduling
 A time quantum is associated with all processes.

 Time Quantum: the maximum amount of time for which a
process can run once it is scheduled.

 RR scheduling is always preemptive.

Round Robin

TQ = 2

Process  Arrival Time  Burst Time  Completion Time
P1       0             5
P2       1             7
P3       2             1

Process  Arrival Time  Burst Time  Completion Time
P1       0             3
P2       3             4
P3       4             6
Round Robin

TQ = 2

Process  Arrival Time  Burst Time  Completion Time
P1       0             4           8
P2       1             5           18
P3       2             2           6
P4       3             1           9
P5       4             6           21
P6       6             3           19
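The completion times in this table can be reproduced with a small RR simulator (a sketch; it uses the convention that a process arriving at time t joins the ready queue before a process preempted at t, which is what these numbers assume):

```python
from collections import deque

def round_robin(processes, tq=2):
    """Round Robin sketch. processes: {name: (AT, BT)}."""
    order = sorted(processes, key=lambda n: processes[n][0])
    rem = {n: bt for n, (at, bt) in processes.items()}
    q, ct, clock, i = deque(), {}, 0, 0
    while len(ct) < len(processes):
        # admit everything that has arrived by `clock`
        while i < len(order) and processes[order[i]][0] <= clock:
            q.append(order[i]); i += 1
        if not q:                            # idle until the next arrival
            clock = processes[order[i]][0]
            continue
        n = q.popleft()
        run = min(tq, rem[n])                # one quantum (or less)
        clock += run
        rem[n] -= run
        # arrivals during this slice enter before the preempted process
        while i < len(order) and processes[order[i]][0] <= clock:
            q.append(order[i]); i += 1
        if rem[n] == 0:
            ct[n] = clock
        else:
            q.append(n)
    return ct

ct = round_robin({"P1": (0, 4), "P2": (1, 5), "P3": (2, 2),
                  "P4": (3, 1), "P5": (4, 6), "P6": (6, 3)}, tq=2)
# matches the table: {"P1": 8, "P2": 18, "P3": 6, "P4": 9, "P5": 21, "P6": 19}
```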
Multilevel Queue
A multilevel queue scheduling algorithm partitions
the ready queue into several separate queues.

For Example: a multilevel queue scheduling algorithm


with five queues, listed below in order of priority:

1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student/ user processes
Multilevel Queue
 A process may be allowed to move between queues (this is the
multilevel feedback variant; in a plain multilevel queue,
assignment is permanent).
 Multilevel Queue Scheduler defined by the following
parameters:
 No. of queues
 Scheduling algorithms for each queue
 Method used to determine when to upgrade / demote
a process
 Method used to determine which queue a process
will enter and when that process needs service.
Multilevel Queue
Multilevel Queue
Multilevel Queue
1. System Processes: These are programs that belong to OS
(System Files)

2. Interactive Processes: Real-Time Processes e.g. playing


games online, listening to music online, etc.

3. Batch Processes: Lots of processes are pooled and one


process at a time is selected for execution. I/O by Punch Cards

4. Student/User Processes
Multilevel Queue

 No process in the batch queue can run unless the queues
for system processes, interactive processes, and interactive
editing processes are all empty.
 If an interactive editing process enters the ready queue
while a batch process is running, the batch process is
preempted.
Multilevel Queue
 Processes can be:
 Foreground (interactive) processes → RR scheduling is applied.
 Background processes, which run in the background and whose
effects are not directly visible to the user → FCFS.
 Multilevel queue scheduling divides the ready queue into
several queues.
 Processes are permanently assigned to one queue based on
some property such as memory size, process priority, or
process type.
 Each queue has its own scheduling algorithm.
Multilevel Queue
 Since there are different types of processes, they cannot
all be put into the same queue with the same scheduling
algorithm.

Disadvantages:
1. Until the higher-priority queues are empty, no process from
a lower-priority queue is selected.
2. Starvation of lower-priority processes.

Advantage:
A separate scheduling algorithm can be applied to each queue.
Multilevel Feedback Queue
 The solution is the multilevel feedback queue:
 If a process is taking too long to execute, preempt it and
move it to a lower-priority queue.
 Do not allow a low-priority process to wait indefinitely:
after some time, move a long-waiting low-priority process to
a higher-priority queue → Aging.
Multilevel Feedback Queue
Allows a process to move between queues.

The idea is to separate processes according to the


characteristics of their CPU bursts.

If a process uses too much CPU time, it will be moved to a


lower-priority queue.
This scheme leaves I/O-bound and interactive processes in
the higher-priority queues.

In addition, a process that waits too long in a lower-priority


queue may be moved to a higher-priority queue. This form of
aging prevents starvation.
Multilevel Feedback Queue
Multilevel Feedback Queue
Multi-processor Scheduling
Concerns:
 If multiple CPUs are available, load sharing becomes
possible.
 Concentration is on systems in which the processors
are identical—homogeneous in terms of their
functionality.
 Use any available processor to run any process in the
queue
There are also different types of limitations:
 A system with an I/O device attached to a private bus of
one processor. Processes that wish to use that device
must be scheduled to run on that processor
Approaches to Multiple-Processor Scheduling

 1. Asymmetric multiprocessing

All scheduling decisions, I/O processing, and other system


activities handled by a single processor—the master
server. The other processors execute only user code.

 Only one processor accesses the system data structures,


reducing the need for data sharing.
Approaches to Multiple-Processor Scheduling

 2. Symmetric multiprocessing (SMP)


• Each processor is self-scheduling.

• All processes may be in a common ready queue, or each


processor may have its own private queue of ready
processes.

• Scheduling proceeds by having the scheduler for each


processor examine the ready queue and select a process to
execute.

Virtually all modern operating systems support SMP, including


Windows, Linux, and Mac OS X
Issues concerning SMP systems

 1. Processor Affinity
(a process has an affinity for the processor on which it is currently
running.)
 Consider what happens to cache memory when a process
has been running on a specific processor?

The data most recently accessed by the process populate


the cache for the processor. As a result, successive memory
accesses by the process are often satisfied in cache
memory.
Issues concerning SMP systems

1. Processor Affinity

 If the process migrates to another processor.


 The contents of cache memory must be invalidated for the
first processor, and the cache for the second processor must
be repopulated.
Issues concerning SMP systems

 1. Processor Affinity

Because of the high cost of invalidating and repopulating


caches, most SMP systems try to avoid migration of
processes from one processor to another and instead
attempt to keep a process running on the same
processor.

This is known as processor affinity—that is, a process


has an affinity for the processor on which it is currently
running.
Issues concerning SMP systems

Forms of Processor Affinity


1. Soft affinity
When an operating system has a policy of attempting to keep a
process running on the same processor—but not
guaranteeing that it will do so—we have a situation known
as soft affinity.

2. Hard affinity
Some systems allow a process to specify the set of processors
on which it may run; the operating system guarantees that the
process will not migrate off that set.
Issues concerning SMP systems

Hard affinity

Example:

Many systems provide both soft and hard affinity.

For example, Linux implements soft affinity, but it also provides


the sched_setaffinity() system call, which supports hard
affinity.
Issues concerning SMP systems

2. Load Balancing

On SMP systems, it is important to keep the workload


balanced among all processors to fully utilize the benefits of
having more than one processor.

Need of Load Balancing


One or more processors may sit idle while other processors
have high workloads, along with lists of processes awaiting the
CPU.
Issues concerning SMP systems

2. Load Balancing
Load balancing is necessary only on systems where each
processor has its own private queue of eligible processes to
execute.

On systems with a common run queue, load balancing is often


unnecessary, because once a processor becomes idle, it
immediately extracts a run able process from the common run
queue.

In most operating systems that support SMP, each processor
has a private queue of eligible processes.
Issues concerning SMP systems

2. Load Balancing
Two approaches to load balancing: push migration and pull
migration.

1. Push migration: a specific task periodically checks the


load on each processor and—if it finds an imbalance—evenly
distributes the load by moving (or pushing) processes from
overloaded to idle or less-busy processors.

2. Pull migration: occurs when an idle processor pulls a


waiting task from a busy processor.
Issues concerning SMP systems

3. Multicore Processors

A multi-core processor is a single computing component


with two or more independent processing units called cores,
which read and execute program instructions.
A processor, or more commonly a CPU, is an individual
processing device. It may contain multiple cores.
A core contains its own set of registers, an ALU, and
typically a dedicated cache; it performs all of a processor's
computational tasks but is not an entire processor by itself.
Issues concerning SMP systems

3. Multicore Processors

Multicore processors may complicate scheduling issues:

When a processor accesses memory, it spends a significant


amount of time waiting for the data to become available. This
situation, known as a memory stall.

Memory Stall may occur due to a cache miss (accessing


data that are not in cache memory).
Issues concerning SMP systems

Memory Stall

The processor can spend up to 50 percent of its time waiting
for data to become available from memory.
Real Time Scheduling
Analysis and testing of the scheduler system and the
algorithms used in real-time applications

a) soft real-time systems b) hard real-time systems

a)Soft real-time systems provide no guarantee as to when


a critical real-time process will be scheduled. They
guarantee only that the process will be given preference over
noncritical processes.
b)Hard real-time systems have stricter requirements. A
task must be serviced by its deadline;
Service after the deadline has expired is the same as no
service at all.
Rate-Monotonic Scheduling
The rate-monotonic scheduling algorithm schedules
periodic tasks using a static priority policy with preemption.

rate-monotonic scheduling (RMS) is a priority assignment


algorithm used in real-time operating systems (RTOS) with a
static-priority scheduling class.

The static priorities are assigned according to the cycle


duration of the job, so a shorter cycle duration results in a
higher job priority
Rate-Monotonic Scheduling

If a lower-priority process is running and a higher-priority


process becomes available to run, it will preempt the lower-
priority process.

Upon entering the system, each periodic task is assigned a


priority inversely based on its period. The shorter the period, the
higher the priority; the longer the period, the lower the priority.

Assign a higher priority to tasks that require the CPU more
often (i.e., tasks with shorter periods).
Rate-Monotonic Scheduling
rate-monotonic scheduling assumes that the processing time of
a periodic process is the same for each CPU burst.

For example
Consider P1, P2 with time period 50,100 resp. and B.T 20,35
resp.
Cal. CPU utilization of each process and total CPU
utilization.
Sol:
CPU utilization=(Burst time/Time period)= (Ti/Pi)
For P1: (20/50)=0.40 i.e 40%
For P2: (35/100)=0.35 i.e 35%
Total CPU utilization is 75%
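The computation generalizes; as a check, the total can also be compared against the Liu & Layland bound n(2^(1/n) − 1), under which an RMS task set is guaranteed schedulable (the helper name is a choice made here, not from the slides):

```python
def cpu_utilization(tasks):
    # tasks: list of (burst_time, period) pairs; U_i = T_i / P_i
    utils = [bt / period for bt, period in tasks]
    return utils, sum(utils)

utils, total = cpu_utilization([(20, 50), (35, 100)])
# utils ~ [0.40, 0.35]; total ~ 0.75, i.e. 40%, 35%, 75% as above

# Liu & Layland bound for RMS: n tasks are guaranteed schedulable
# if total utilization <= n * (2**(1/n) - 1)  (~0.828 for n = 2)
n = 2
bound = n * (2 ** (1 / n) - 1)
print(total <= bound)   # True: this task set is RMS-schedulable
```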
Earliest Deadline First Scheduling
Earliest-deadline-first (EDF) scheduling dynamically assigns
priorities according to deadline.

The earlier the deadline, the higher the priority; the later
the deadline, the lower the priority.

Under the EDF policy, when a process becomes runnable, it


must announce its deadline requirements to the system. Priorities
may have to be adjusted to reflect the deadline of the newly
runnable process.
