
CPU Scheduling

Operating System - Spring-2019


CPU Scheduler

 A CPU scheduler is responsible for


 Removal of the running process from the CPU
 Selection of the next process to run
 Both based on a particular strategy/policy/algorithm
Goals for a Scheduler

 Maximize
 CPU utilization: keep the CPU as busy as
possible
 Throughput: the number of processes
completed per unit time
Goals for a Scheduler

 Minimize
 Response time: the time from submission of a
request (to the OS) until the first response
is produced
 Wait time: total time spent waiting in the ready
queue
 Turnaround time: the time from submission of a
process (to the OS) until its completion
(when the process terminates)
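To make these three metrics concrete, here is a minimal sketch (illustrative Python, not from the slides; the function name and example numbers are made up) that computes them for a single process, assuming the process does no I/O so that every non-running moment is spent in the ready queue:

def timing_metrics(arrival, first_run, completion, burst):
    # arrival:    time the process was submitted to the OS
    # first_run:  time the process first received the CPU
    # completion: time the process terminated
    # burst:      total CPU time the process actually used
    response = first_run - arrival       # delay until the first response
    turnaround = completion - arrival    # submission to termination
    wait = turnaround - burst            # time in the ready queue (no I/O assumed)
    return response, wait, turnaround

# A process submitted at t=0, first scheduled at t=3, finished at t=10, using 5 units of CPU:
print(timing_metrics(0, 3, 10, 5))       # (3, 5, 10)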
Goals for a Scheduler

 Suppose we have processes A, B, and C,


submitted at time 0
 We want to know the response time, waiting
time, and turnaround time of process A
Timeline: A B C A B C A C A C
For A: response time = 0 (it runs first); wait time is the sum of the
gaps A spends in the ready queue between its CPU bursts; turnaround
time runs from submission at time 0 until A's last burst completes.
Goals for a Scheduler

 Suppose we have processes A, B, and C,


submitted at time 0
 We want to know the response time, waiting
time, and turnaround time of process B
Timeline: A B C A B C A C A C
For B: response time is the delay until B first runs; wait time is the
total time B spends in the ready queue; turnaround time runs from
submission at time 0 until B completes.
Goals for a Scheduler

 Suppose we have processes A, B, and C,


submitted at time 0
 We want to know the response time, waiting
time, and turnaround time of process C
Timeline: A B C A B C A C A C
For C: response time is the delay until C first runs; wait time sums the
separate periods C spends in the ready queue; turnaround time is the
longest of the three, since C finishes last.
Goals for a Scheduler

 Achieve fairness
 There are tensions among these goals
 Pending Discussion
Assumptions

 Each user runs one process


 Each process is single threaded
 Processes are independent

 They are not realistic assumptions; they


serve to simplify analysis
Scheduling Types

 Non-Preemptive Scheduling
 Preemptive Scheduling
Non-Preemptive Scheduling
 A scheduling discipline is said to be non-preemptive if,
once a process has been given the CPU, it cannot be
taken away from that process (i.e. the process cannot be
preempted).
 All jobs are given equal, fair treatment; thus short
jobs (i.e. those requiring less time to
complete) are made to wait by longer jobs.
 Response times are more predictable because incoming
high priority jobs cannot displace (i.e. shift) the waiting
jobs.
Preemptive Scheduling

 A scheduling discipline is said to be preemptive if, once a


process has been given the CPU, it can be taken away
from that process (i.e. the process can be preempted).
 +ve: Preemption is useful in those systems where high
priority processes require immediate attention, e.g.
RTOS.
 -ve: The direct result of preemption is context switching,
which is in turn an overhead for the CPU.
Scheduling Policies

 FIFO (first in, first out)


 Round Robin
 SJF (shortest job first)
 Multilevel feedback queues
 Lottery scheduling
FIFO

 FIFO: assigns the CPU based on the order


of requests
 Nonpreemptive: A process keeps running on a
CPU until it is blocked or terminated
 Also known as FCFS (first come, first served)

+ Simple
- Short jobs can get stuck behind long jobs
First-Come, First-Served (FCFS) Scheduling

Process CPU Burst Time


P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30
 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
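The waiting times above follow mechanically from the arrival order; here is a small sketch (illustrative Python, not part of the original slides) that reproduces them, assuming every process arrives at time 0:

def fcfs_waiting_times(bursts):
    # FCFS with all arrivals at time 0: each process waits exactly as long
    # as the total burst time of everything queued ahead of it.
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])       # arrival order P1, P2, P3
print(waits, sum(waits) / len(waits))        # [0, 24, 27] 17.0
print(fcfs_waiting_times([3, 3, 24]))        # order P2, P3, P1 (next slide) -> [0, 3, 6]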

FCFS Scheduling (Cont.)

Suppose that the processes arrive in the order


P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect: short processes get stuck behind a long process
 FCFS favors CPU-bound processes. Why?

Round Robin

 Round Robin (RR) periodically releases the


CPU from long-running jobs
 Based on timer interrupts so short jobs can get a
fair share of CPU time
 Preemptive: a process can be forced to leave its
running state and replaced by another process
(when its time quantum expires)
 Time slice: interval between timer interrupts
More on Round Robin

 If time slice is too long


 Scheduling degrades to FIFO
 If time slice is too short
 Throughput suffers
 Context switching cost dominates
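As a rough illustration (numbers invented for the example, not taken from the slides): if a context switch costs 1 ms and the time slice is 4 ms, about 1 / (4 + 1) = 20% of the CPU is spent switching; with a 100 ms slice the overhead drops below 1%, but scheduling then starts to degrade toward FIFO.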
Example of RR with Time Quantum = 20

Process CPU Burst Time


P1 53
P2 17
P3 68
P4 24
 The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

 Typically, higher average turnaround than SJF, but better response
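The Gantt chart above can be reproduced with a short simulation; the sketch below (illustrative Python, not from the slides) assumes all four processes arrive at time 0 and ignores context-switch cost:

from collections import deque

def round_robin(bursts, quantum):
    # Simulate RR for processes that all arrive at time 0.
    # bursts: list of (name, burst). Returns (name, start, end) slices.
    ready = deque(bursts)
    time, schedule = 0, []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)            # run for at most one quantum
        schedule.append((name, time, time + run))
        time += run
        if remaining > run:                      # not finished: rejoin the queue
            ready.append((name, remaining - run))
    return schedule

for name, start, end in round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20):
    print(name, start, end)
# P1 0-20, P2 20-37, P3 37-57, P4 57-77, P1 77-97, P3 97-117,
# P4 117-121, P1 121-134, P3 134-154, P3 154-162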

Time Quantum and Context Switch Time

(figure omitted: the smaller the time quantum, the more context switches a given process incurs)
FIFO vs. Round Robin

 With zero-cost context switch, is RR always


better than FIFO?
FIFO vs. Round Robin

 Suppose we have three jobs of equal length


Round Robin timeline: A B C A B C A B C
All three turnaround times stretch nearly to the end of the schedule.
FIFO timeline: A B C
A finishes after the first third, B after the second, and only C's
turnaround reaches the end, so average turnaround time is lower.
FIFO vs. Round Robin

 Round Robin
+ Shorter response time
+ Fair sharing of CPU
- Not all jobs are preemptible
- Not good when jobs are all the same length
Shortest Job First (SJF)

 SJF runs whatever job puts the least demand


on the CPU, also known as STCF (shortest
time to completion first)
+ Provably optimal
+ Great for short jobs
+ Small degradation for long jobs
 Real life example: supermarket express
checkouts
SJF Illustrated

Timeline (Shortest Job First): A B C
A (the shortest job) runs first: its response time and wait time are 0.
B and C wait for everything scheduled ahead of them, so their response,
wait, and turnaround times grow in that order, with C's the largest.
Shortest Remaining Time First
(SRTF)
 SRTF: a preemptive version of SJF
 If a job arrives with a shorter time to completion,
SRTF preempts the CPU for the new job
 Also known as SRTCF (shortest remaining time
to completion first)
 Generally used as the base case for comparisons
Shortest-Job-First (SJF) Scheduling
 Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time
 Two schemes:
 Non-preemptive – once CPU given to the process it cannot be
preempted until completes its CPU burst
 Preemptive – if a new process arrives with a CPU burst length
less than the remaining time of the currently executing process,
preempt. This scheme is known as
Shortest-Remaining-Time-First (SRTF)
 SJF is optimal – gives minimum average waiting time for a given
set of processes
 Decision is based on expected processor burst time
 SJF favors short jobs over long ones

Example of Non-Preemptive SJF

Process Arrival Time Burst Time


P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 SJF (non-preemptive)

P1 P3 P2 P4

0 7 8 12 16

 Average waiting time = (0 + 6 + 3 + 7)/4 = 4
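A sketch of the non-preemptive decision rule used above (illustrative Python, not from the slides): whenever the CPU falls idle, choose the shortest burst among the processes that have already arrived.

def sjf_nonpreemptive(procs):
    # procs: list of (name, arrival, burst). Returns completion time per name.
    pending = list(procs)
    time, finish = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # nothing has arrived yet: jump ahead
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest next CPU burst
        pending.remove(job)
        time += job[2]                         # non-preemptive: run to completion
        finish[job[0]] = time
    return finish

procs = [("P1", 0.0, 7), ("P2", 2.0, 4), ("P3", 4.0, 1), ("P4", 5.0, 4)]
finish = sjf_nonpreemptive(procs)
waits = [finish[n] - a - b for n, a, b in procs]
print(finish)                      # {'P1': 7, 'P3': 8, 'P2': 12, 'P4': 16}
print(sum(waits) / len(waits))     # 4.0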

Example of Preemptive SJF

Process Arrival Time Burst Time


P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 SJF (preemptive) P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

 Average waiting time = (9 + 1 + 0 +2)/4 = 3
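The same example can be checked with a unit-by-unit SRTF simulation (illustrative Python, not from the slides; integer time steps assumed): at every tick, run whichever arrived process has the least remaining work.

def srtf(procs):
    # procs: list of (name, arrival, burst) with integer times.
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    time, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                                   # CPU idle until the next arrival
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])   # least remaining work
        remaining[name] -= 1                            # run the chosen process for one unit
        time += 1
        if remaining[name] == 0:
            finish[name] = time
            del remaining[name]
    return finish

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
finish = srtf(procs)
waits = [finish[n] - a - b for n, a, b in procs]
print(finish)                   # {'P3': 5, 'P2': 7, 'P4': 11, 'P1': 16}
print(sum(waits) / len(waits))  # 3.0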

Drawbacks of Shortest Job First

- Starvation: constant arrivals of short jobs


can keep long ones from running.
- There is no way to know the completion time
of jobs accurately (most of the time).
 Some solutions
 Aging: increase the priority of the process that gets
preempted most of the time
 Set a threshold to determine when to apply this
SJF and SRTF vs. FIFO and Round
Robin
 If all jobs are the same length, SJF reduces to FIFO
 FIFO is the best you can do
 If jobs have varying length
 Short jobs do not get stuck behind long jobs under
SRTF
A More Complicated Scenario (Arrival
Times = 0)
 Process A (6 units of CPU request): 100% CPU, 0% I/O
 Process B (6 units of CPU request): 100% CPU, 0% I/O
 Process C (infinite loop): 33% CPU, 67% I/O
(timelines omitted: A and B are solid CPU bursts; C alternates short CPU bursts with longer stretches of I/O)

A More Complicated Scenario

 FIFO
 CPU: A B C
 I/O: C
Poor response and wait time for process C

 Round Robin with time slice = 3 units
 CPU: A B C A B C
 I/O: C C
Disk utilization: 29% (2 out of 7 units)


A More Complicated Scenario

 Round Robin with time slice = 1 unit
 CPU: A B C A B C A B C A B C A B C A B
 I/O: C C C C C
Disk utilization: 66% (2 out of 3 units)

 SRTCF
 CPU: C A C A C A C B C B
 I/O: C C C C C
Disk utilization: 66% (2 out of 3 units)


Priority Scheduling
 A priority number (integer) is associated with each
process
 The CPU is allocated to the process with the highest
priority (smallest integer = highest priority)
 Preemptive

 Non-preemptive

 SJF is a priority scheduling policy where priority is the


predicted next CPU burst time
 Problem  Starvation – low priority processes may never
execute
 Solution  Aging – as time progresses increase the
priority of the process
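A non-preemptive sketch of this policy with aging (illustrative Python, not from the slides; the aging rate is an arbitrary choice): each time a process is passed over, its priority number is reduced slightly so it cannot starve forever.

def priority_schedule(procs, aging_rate=1):
    # procs: list of (name, priority, burst); smaller number = higher priority.
    # Non-preemptive: run the best-priority job to completion, then age the rest.
    ready = [[prio, name, burst] for name, prio, burst in procs]
    order = []
    while ready:
        ready.sort()                      # smallest priority number first
        prio, name, burst = ready.pop(0)
        order.append(name)
        for entry in ready:               # aging: every waiting job gains priority
            entry[0] -= aging_rate
    return order

# Matches the example on the next slide: C has priority 0, A has 1, B has 2.
print(priority_schedule([("A", 1, 3), ("B", 2, 3), ("C", 0, 3)]))   # ['C', 'A', 'B']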
Priority Scheduling (Multilevel Queues)

 Priority scheduling: The process with the


highest priority runs first
 Priority 0: C

 Priority 1: A
 Priority 2: B

 Assume that low numbers represent high


priority
Execution order: C, then A, then B
Priority Scheduling

+ Generalization of SJF
 With SJF, priority = 1/requested_CPU_time
- Starvation: high priority jobs keep on
preempting low priority jobs
Multilevel Feedback Queue
 A process can move between the various queues;
aging can be implemented this way
 Multilevel-feedback-queue scheduler defined by the
following parameters:
 number of queues

 scheduling algorithms for each queue

 method used to determine when to upgrade a

process
 method used to determine when to demote a

process
 method used to determine which queue a process

will enter when that process needs service


Example of Multilevel Feedback Queue

 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR with time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling
 A new job enters queue Q0, which is served FCFS. When it
gains the CPU, the job receives 8 milliseconds. If it does not finish
in 8 milliseconds, the job is moved to queue Q1.
 At Q1 the job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted
and moved to queue Q2.
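A sketch of this three-queue scheme (illustrative Python, not from the slides; it ignores I/O and assumes every job arrives at time 0): a job that uses up its quantum is demoted, and the bottom queue runs jobs to completion.

from collections import deque

QUANTA = [8, 16, None]        # Q0, Q1, Q2; None means FCFS (run to completion)

def mlfq(jobs):
    # jobs: list of (name, burst). Returns (name, start, end, queue) slices.
    queues = [deque(jobs), deque(), deque()]    # new jobs enter Q0
    time, trace = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        quantum = QUANTA[level]
        run = remaining if quantum is None else min(quantum, remaining)
        trace.append((name, time, time + run, f"Q{level}"))
        time += run
        if remaining > run:                      # used the whole quantum: demote
            queues[level + 1].append((name, remaining - run))
    return trace

for row in mlfq([("A", 30), ("B", 5)]):
    print(row)
# ('A', 0, 8, 'Q0'), ('B', 8, 13, 'Q0'), ('A', 13, 29, 'Q1'), ('A', 29, 35, 'Q2')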
Multilevel Feedback Queues

 Approximates SRTF
 A CPU-bound job drops like a rock
 I/O-bound jobs stay near the top
 Still unfair for long running jobs
 Counter-measure: Aging
 Increase the priority of long running jobs if they are not
serviced for a period of time
 Tricky to tune aging
Multilevel Feedback Queues

 Multilevel feedback queues use multiple


queues with different priorities
 Round robin at each priority level
 Run highest priority jobs first
 Once those finish, run next highest priority, etc
 Jobs start in the highest priority queue
 If time slice expires, drop the job by one level
 If time slice does not expire, push the job up by
one level
Multilevel Feedback Queues
time = 0

 Priority 0 (time slice = 1): A B C

 Priority 1 (time slice = 2):


 Priority 2 (time slice = 4):

Timeline: (nothing has run yet)
Multilevel Feedback Queues
time = 1

 Priority 0 (time slice = 1): B C

 Priority 1 (time slice = 2): A

 Priority 2 (time slice = 4):

Timeline so far: A
Multilevel Feedback Queues
time = 2

 Priority 0 (time slice = 1): C

 Priority 1 (time slice = 2): A B

 Priority 2 (time slice = 4):

Timeline so far: A B
Multilevel Feedback Queues
time = 3

 Priority 0 (time slice = 1):


 Priority 1 (time slice = 2): A B C

 Priority 2 (time slice = 4):

Timeline so far: A B C
Multilevel Feedback Queues
time = 3

 Priority 0 (time slice = 1):


 Priority 1 (time slice = 2): A B C

 Priority 2 (time slice = 4):

suppose process A is blocked on an I/O

Timeline so far: A B C
Multilevel Feedback Queues
time = 3

 Priority 0 (time slice = 1): A

 Priority 1 (time slice = 2): B C

 Priority 2 (time slice = 4):

suppose process A is blocked on an I/O

Timeline so far: A B C
Multilevel Feedback Queues
time = 5

 Priority 0 (time slice = 1): A

 Priority 1 (time slice = 2): C

 Priority 2 (time slice = 4):

suppose process A is returned from an I/O

Timeline so far: A B C B
Multilevel Feedback Queues
time = 6

 Priority 0 (time slice = 1):


 Priority 1 (time slice = 2): C

 Priority 2 (time slice = 4):

Timeline so far: A B C B A
Multilevel Feedback Queues
time = 8

 Priority 0 (time slice = 1):


 Priority 1 (time slice = 2):
 Priority 2 (time slice = 4): C

Timeline so far: A B C B A C
Multilevel Feedback Queues
time = 9

 Priority 0 (time slice = 1):


 Priority 1 (time slice = 2):
 Priority 2 (time slice = 4):

Timeline so far: A B C B A C C
Multilevel Queue Scheduling
 Ready queue is partitioned into separate queues:
 foreground queue (interactive) – 80%
 background queue (batch) – 20%
 Each queue has its own scheduling algorithm
 foreground – RR
 background – FCFS
 Scheduling must be done between the queues
 Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR
 20% to background in FCFS

Multilevel Queue Scheduling

(figure omitted: the ready queue partitioned into separate priority queues, e.g. foreground/interactive and background/batch processes)
Lottery Scheduling
(Homework Reading Assignment)
 Lottery scheduling is an adaptive
scheduling approach to address the fairness
problem
 Each process owns some tickets
 On each time slice, a ticket is randomly picked
 On average, the allocated CPU time is
proportional to the number of tickets given to each
job
Lottery Scheduling

 To approximate SJF, short jobs get more


tickets
 To avoid starvation, each job gets at least
one ticket
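A minimal lottery-scheduler sketch (illustrative Python, not from the slides; job names and ticket counts are made up): on each time slice, draw a random ticket and run whichever job holds it.

import random
from collections import Counter

def lottery(tickets, slices=10000, seed=0):
    # tickets: dict of job -> number of tickets held.
    # Returns the fraction of time slices each job wins over 'slices' drawings.
    rng = random.Random(seed)
    jobs = list(tickets)
    wins = Counter(rng.choices(jobs, weights=[tickets[j] for j in jobs], k=slices))
    return {job: wins[job] / slices for job in jobs}

# One short job (10 tickets) vs one long job (1 ticket):
print(lottery({"short": 10, "long": 1}))
# roughly {'short': 0.91, 'long': 0.09}, matching the first row of the table on the next slide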
Lottery Scheduling Example

 short jobs: 10 tickets each


 long jobs: 1 ticket each
# short jobs / # long jobs    % of CPU for each short job    % of CPU for each long job
1/1                           91%                             9%
0/2                           0%                              50%
2/0                           50%                             0%
10/1                          10%                             1%
1/10                          50%                             5%
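Each percentage is simply the job's share of the ticket pool: with one short and one long job there are 10 + 1 = 11 tickets, so the short job wins about 10/11 ≈ 91% of the slices and the long job about 1/11 ≈ 9%; with 10 short jobs and 1 long job the pool has 101 tickets, giving each short job roughly 10% and the long job roughly 1%.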
