
Part 3: Process Scheduling

• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
  – Uniprocessor (single-core) scheduling
  – Multiprocessor (multi-core) scheduling



Basic Concepts
• CPU–I/O Bursts: Process execution alternates between CPU execution and I/O operations
  – CPU burst: Duration of one CPU execution cycle
  – I/O burst: Duration of one I/O operation (wait time)

• CPU scheduling objective: Keep the CPU as busy as possible (i.e., by using multiprogramming)



Alternating Sequence of CPU and I/O Bursts



CPU Scheduler (Short-Term)
• Goal: Select one or more processes from the ready queue and allocate them to the CPU (one process per core)

• Scheduler may run whenever a process changes state (transitions 1–5 in the process state diagram)
  – Question: In which case(s) does CPU scheduling have to occur in order to keep the CPU cores busy?
  – Answer: Only 1 and 5



Types of CPU Schedulers

1. Nonpreemptive means that once the CPU has been allocated to a process, the process keeps the CPU until it voluntarily releases it, either by terminating or by requesting an I/O/event wait
   – Scheduling is nonpreemptive if it happens only for transitions 1 and 5 (also 4, when a CPU core is idle)

2. Preemptive means that the CPU can be taken away from a running process at any time by the scheduler
   – Scheduling is preemptive if it also happens for transitions 2, 3, 4 or at any other time
Scheduling Objectives

System-wide Objectives:
1. Max. CPU Utilization: Keep the CPU as busy as possible
   – Percentage of time during which the CPU cores are busy executing processes
   – (Sum of execution time on each core) divided by (total time multiplied by number of cores); see the formula below
2. Max. Throughput: Number of processes that complete their execution per unit time
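Written as a formula, assuming an observation interval of length T and m cores (T, m and busy_i are our notation, not the slides'):

  \text{CPU utilization} = \frac{\sum_{i=1}^{m} \text{busy}_i}{m \cdot T}

For example, if two cores are busy for 8 s and 6 s during a 10 s interval, utilization = (8 + 6) / (2 × 10) = 70%.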
Scheduling Objectives (Cont.)
Individual Process Objectives:
1. Min. Turnaround Time: Amount of time to execute a particular process, i.e., from the time of creation to the time of termination
   – Time elapsed between the “admitted” and “exit” transitions for a process
   – The average over all processes is denoted the average turnaround time



Scheduling Objectives (Cont.)
2. Min. Waiting Time: Amount of time a process has been waiting in the “ready” state
   – We do not consider time in the “waiting” state; why?
   – Turnaround time components: CPU bursts (running), I/O bursts (waiting) and waiting time (ready)
   – If all processes have a single CPU burst (no I/O), then waiting time = turnaround time – CPU burst (see the relation spelled out after this list)

3. Min. Response Time: Time from the submission of a request until the first response is produced (not the time it takes to output that response)
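Spelling out the decomposition in the waiting-time bullet above (per-process notation is ours):

  T_{\text{turnaround}} = T_{\text{ready}} + T_{\text{CPU bursts}} + T_{\text{I/O bursts}}

so for a process with a single CPU burst and no I/O, T_{\text{waiting}} = T_{\text{ready}} = T_{\text{turnaround}} - T_{\text{CPU burst}}.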
Scheduling Algorithms: Uniprocessor System

1. First-Come, First-Served (FCFS)
2. Shortest Job First (SJF)
3. Priority-based Scheduling
4. Round Robin (RR)
5. Multilevel Queue Scheduling

We assume a single CPU burst for each process and a single CPU core, and focus on the average waiting time
First-Come, First-Served (FCFS) Scheduling
• Example:   Process   CPU Burst Length
             P1        24
             P2        3
             P3        3
• Suppose that the processes arrive in the order P1, P2, P3, all at time 0

• The Gantt chart for the schedule is:

  P1 (0–24) | P2 (24–27) | P3 (27–30)

• Waiting time for P1 = 0; P2 = 24; P3 = 27

• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
• Suppose that the processes arrive in the order P2, P3, P1, again all at time 0

• The Gantt chart for the schedule is:

  P2 (0–3) | P3 (3–6) | P1 (6–30)

• Waiting time for P1 = 6; P2 = 0; P3 = 3

• Average waiting time: (6 + 0 + 3)/3 = 3
  – Much better than the previous case
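A minimal sketch of the waiting-time calculation behind the two FCFS examples above (the function name and burst lists are ours, not from the slides):

# Illustrative sketch: FCFS with all arrivals at time 0. Each process
# waits for the total burst time of everything ahead of it in the queue.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)          # wait = finish time of all predecessors
        elapsed += burst
    return waits

print(sum(fcfs_waiting_times([24, 3, 3])) / 3)   # order P1, P2, P3 -> 17.0
print(sum(fcfs_waiting_times([3, 3, 24])) / 3)   # order P2, P3, P1 -> 3.0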



FCFS Scheduling: Properties

• Is FCFS preemptive or nonpreemptive?
  – Nonpreemptive, because processes have to voluntarily release the CPU once it is allocated

• Main problem with FCFS is the convoy effect
  – Short processes suffer increased waiting times because of an earlier-arrived long process
  – We saw this effect in the previous example
Shortest-Job-First (SJF) Scheduling
• Prioritize processes based on their CPU burst lengths
  – Shorter burst implies higher priority
  – Intuitive way to handle the convoy effect of FCFS

• Two schemes:
  1. Nonpreemptive: Once a core is given to a process, it cannot be preempted until it completes its CPU burst
  2. Preemptive (also known as Shortest-Remaining-Time-First or SRTF): If a newly created process has a CPU burst length less than the remaining CPU burst of the currently running process, the running process is preempted
Example of Nonpreemptive SJF
       Process   Arrival Time   Burst Time
       P1        0.0            7
       P2        2.0            4
       P3        4.0            1
       P4        5.0            4

• SJF (nonpreemptive) Gantt chart:

  P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16)

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
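A small sketch of nonpreemptive SJF under the slides' assumptions (single core, burst lengths known in advance); the function and variable names are ours:

# Illustrative sketch: when the core becomes free, pick the arrived
# process with the shortest CPU burst and run it to completion.
def sjf_nonpreemptive(procs):          # procs: list of (name, arrival, burst)
    pending, t, waits = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                  # core idles until the next arrival
            t = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = t - arrival      # time spent in the ready queue
        t += burst
        pending.remove((name, arrival, burst))
    return waits

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
waits = sjf_nonpreemptive(procs)
print(waits, sum(waits.values()) / len(waits))   # average waiting time = 4.0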



Example of Preemptive SJF (SRTF)
       Process   Arrival Time   Burst Time
       P1        0.0            7
       P2        2.0            4
       P3        4.0            1
       P4        5.0            4

• SJF (preemptive) Gantt chart:

  P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16)

• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
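A unit-time simulation sketch of SRTF that reproduces the schedule above (integer times assumed; function and variable names are ours):

# Illustrative sketch: every time unit, run the arrived process with the
# shortest remaining CPU burst; a new shorter arrival preempts the current one.
def srtf(procs):                       # procs: list of (name, arrival, burst)
    remaining = {n: b for n, _, b in procs}
    finish, t = {}, 0
    while remaining:
        ready = [(remaining[n], a, n) for n, a, _ in procs
                 if n in remaining and a <= t]
        if not ready:                  # nothing has arrived yet: idle
            t += 1
            continue
        _, _, name = min(ready)        # shortest remaining time first
        remaining[name] -= 1           # run it for one time unit
        t += 1
        if remaining[name] == 0:
            finish[name] = t
            del remaining[name]
    return finish

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
finish = srtf(procs)
wait = {n: finish[n] - a - b for n, a, b in procs}
print(wait, sum(wait.values()) / len(wait))   # average waiting time = 3.0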



Priority-based Scheduling

• A priority number (integer) is associated with each process

• The CPU is allocated to the process with the highest priority
  – Usually, the smallest integer implies the highest priority
  – Two schemes: Preemptive and nonpreemptive
  – FCFS: Priority based on arrival order
  – SJF: Priority based on CPU burst length

• Problem of Starvation: Lower-priority processes may never execute in a heavily loaded system
Round Robin (RR) Scheduling

• Fixed time quantum for scheduling (q): A process is allocated the CPU for q time units, preempted thereafter, and inserted at the end of the ready queue
  – Processes are scheduled cyclically based on q
  – Usually q is small, around 10–100 milliseconds

• Performance:
  – With n processes in the ready queue, no process waits more than (n − 1)·q time units until its next time quantum
Example: RR with Time Quantum = 20
       Process   Burst Time
       P1        53
       P2        17
       P3        68
       P4        24

• All processes arrive at time 0 in the order P1, P2, P3, P4

• RR scheduling Gantt chart:

  P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162)

• Typically, RR gives higher average waiting/turnaround time than SJF, but better response time
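A queue-based sketch of RR that reproduces the schedule above (q = 20; the deque-based structure and names are ours):

from collections import deque

# Illustrative sketch: run each process for at most q time units,
# then move it to the back of the ready queue.
def round_robin(procs, q):             # procs: list of (name, burst)
    queue, t, slices = deque(procs), 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)
        slices.append((name, t, t + run))
        t += run
        if remaining > run:            # quantum expired, rejoin the queue
            queue.append((name, remaining - run))
    return slices

for name, start, end in round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20):
    print(f"{name}: {start}-{end}")    # matches the Gantt chart above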
Smaller Time Quantum Increases Context Switches



Turnaround Time Varies With Time Quantum

• There is no well-defined relationship between average turnaround time and quantum size
• Average turnaround time remains fixed once the quantum size is larger than the maximum CPU burst length

(Figure assumes arrival order P1, P2, P3, P4, all at time 0)
Multilevel Queue Scheduling

• Different processes have different requirements
  – Foreground processes, like those handling I/O, need to be interactive (RR is preferred)
  – Background processes need NOT be interactive; scheduling overhead can be low (FCFS)

• Solution: Multilevel Queue Scheduling
  – The ready queue is partitioned into several queues
Multilevel Queue Scheduling (Cont.)
• Two schemes for inter-queue scheduling
  – Fixed-priority scheduling
    • Queues served in priority order (e.g., foreground before background)
    • Starvation for lower-priority queues

  – Time-slice based scheduling
    • Each queue gets a fixed time quantum on the CPU (e.g., 80 ms for the foreground queue, 20 ms for the background queue)
Scheduling Algorithms: Multiprocessor System

1. Partitioned Scheduling (Asymmetric Multiprocessing, AMP)
2. Global Scheduling (Symmetric Multiprocessing, SMP)

We assume a multi-core CPU and that each process may only execute on one CPU core at any time instant



Partitioned Scheduling (AMP)

• Processes are partitioned a priori (at process creation time) among CPU cores
  – Each process is mapped to one core
  – Separate uniprocessor CPU scheduling for each core (asymmetric scheduling)

• Advantages
  – A core-specific scheduling strategy is feasible (FCFS, SJF, RR, multilevel, etc.)
  – Easy and simple extension of uniprocessor CPU scheduling to multiprocessor CPU scheduling
Partitioned Scheduling: Issues

• How to map processes to cores?
  – A poor mapping can lead to unequal loads: one core idles while another is overloaded
    • May reduce system/process performance
    • NP-hard problem: similar to the well-known knapsack problem!
    • Heuristics such as best-fit could be used if CPU burst lengths are known (see the sketch below)
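As an illustration of such a heuristic, here is a greedy "assign to the least-loaded core" sketch; this particular rule and all names are our example, not necessarily the best-fit variant the slide refers to:

import heapq

# Illustrative sketch: sort processes by burst length (longest first) and
# repeatedly give the next one to the core with the smallest total load.
def partition(procs, num_cores):       # procs: list of (name, burst)
    cores = [(0, i, []) for i in range(num_cores)]   # (load, core id, names)
    heapq.heapify(cores)
    for name, burst in sorted(procs, key=lambda p: -p[1]):
        load, i, names = heapq.heappop(cores)
        names.append(name)
        heapq.heappush(cores, (load + burst, i, names))
    return sorted(cores, key=lambda c: c[1])

for load, i, names in partition([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 2):
    print(f"core {i}: {names} (load {load})")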
Global Scheduling (SMP)

• One or more ready queues for the entire system (no mapping of queues to CPU cores)
  – When any core becomes idle, the CPU scheduler runs and allocates the next process to execute
  – Process selection is based on a strategy applied globally, or symmetrically, across all cores (FCFS, SJF, etc.)
  – It is possible that the same process may execute on different cores at different times
Example of SRTF (dual-core)
       Process   Arrival Time   Burst Time
       P1        0.0            7
       P2        2.0            4
       P3        4.0            1
       P4        5.0            4

• SJF (preemptive) on dual-core Gantt chart:

  Core 1:  P1 (0–4) | P3 (4–5) | P1 (5–8)
  Core 2:  P2 (2–6) | P4 (6–10)

• Average waiting time = (1 + 0 + 0 + 1)/4 = 0.5
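A unit-time sketch of global SRTF on m cores that reproduces the average above; it tracks only which processes run in each time unit, not which physical core they occupy (function and variable names are ours):

# Illustrative sketch: every time unit, run the (up to) m arrived,
# unfinished processes with the shortest remaining CPU bursts.
def srtf_multicore(procs, m):          # procs: list of (name, arrival, burst)
    remaining = {n: b for n, _, b in procs}
    finish, t = {}, 0
    while remaining:
        ready = sorted((remaining[n], a, n) for n, a, _ in procs
                       if n in remaining and a <= t)
        for _, _, name in ready[:m]:   # at most one process per core
            remaining[name] -= 1
            if remaining[name] == 0:
                finish[name] = t + 1
                del remaining[name]
        t += 1
    return finish

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
finish = srtf_multicore(procs, 2)
wait = {n: finish[n] - a - b for n, a, b in procs}
print(wait, sum(wait.values()) / len(wait))   # average waiting time = 0.5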


Example of RR with q = 20 (dual-core)
       Process   Burst Time
       P1        53
       P2        17
       P3        68
       P4        24

• All processes arrive at time 0 in the order P1, P2, P3, P4

• RR scheduling dual-core Gantt chart:

  Core 1:  P1 (0–20) | P4 (20–40) | P3 (40–60) | P1 (60–73)
  Core 2:  P2 (0–17) | P3 (17–37) | P1 (37–57) | P4 (57–61) | P3 (61–89)
Global Scheduling: Issues

• Implementation complexity is high when compared to partitioned scheduling
  – Need to synchronize clocks across cores
  – Global strategy for process selection combining all the cores
    • Implementation overhead is high when compared to partitioned scheduling
  – A process could context switch off one core and be scheduled on another (its private, core-specific cache data may be lost)
