Chapter 5 Slides

The document discusses CPU scheduling concepts, including various scheduling criteria and algorithms such as FCFS, SJF, and Round-Robin. It highlights the importance of maximizing CPU utilization and minimizing waiting time, turnaround time, and response time for processes. Additionally, it covers advanced topics like multilevel feedback queues and scheduling in multiprocessor systems.


School of Engineering

LE QUOC HUY
Outline of Chapter 5
! 5.1. Basic Concepts
! 5.2. Scheduling Criteria
! 5.3. Scheduling Algorithms
! 5.4. Thread Scheduling
! 5.5. Multi-Processor Scheduling
! 5.7. Operating Systems Examples
! Several processes are kept in memory at one time.
! When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process.
! Almost all computer resources are scheduled before use.
! Maximum CPU utilization is obtained with multiprogramming.
! CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
! CPU burst followed by I/O burst
! CPU burst distribution is of main concern
Typical Histogram of CPU-burst Times
[Figure: histogram showing a large number of short bursts and a small number of longer bursts.]
! An I/O-bound program typically has many short CPU bursts.
! A CPU-bound program might have a few long CPU bursts.
! The CPU scheduler selects from among the processes in the ready queue, and allocates a CPU core to one of them
! The ready queue may be ordered in various ways:
• a FIFO queue,
• a priority queue,
• a tree,
• an unordered linked list.
! CPU scheduling decisions may take place when a process:
1. Running state to waiting state
2. Running state to ready state
3. Waiting state to ready state
4. Terminates
! Scheduling under 1 and 4 is nonpreemptive
! All other scheduling is preemptive
! Consider access to shared data
! Consider preemption while in kernel mode
! Consider interrupts occurring during crucial OS activities
! The dispatcher gives control of the CPU to the process selected by the CPU scheduler.
! It involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program
! Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
! CPU utilization – keep the CPU as busy as possible → Maximize
! Throughput – # of processes that complete their execution per time unit → Maximize
! Turnaround time – amount of time to execute a particular process → Minimize
! Waiting time – amount of time a process has been waiting in the ready queue → Minimize
! Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environments) → Minimize
• The process that requests the CPU first is allocated the CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
• When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue.
Process   Burst Time (milliseconds)
P1        24
P2        3
P3        3

! Suppose that the processes arrive in the order: P1, P2, P3
! The Gantt chart for the schedule is:

    | P1 | P2 | P3 |
    0   24   27   30

! Waiting time for P1 = 0; P2 = 24; P3 = 27
! Average waiting time: (0 + 24 + 27)/3 = 17
Process   Burst Time (milliseconds)
P1        24
P2        3
P3        3

! Suppose that the processes arrive in the order: P2, P3, P1
! The Gantt chart for the schedule is:

    | P2 | P3 | P1 |
    0    3    6   30

! Waiting time for P1 = 6; P2 = 0; P3 = 3
! Average waiting time: (6 + 0 + 3)/3 = 3
! Much better than the previous case
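
The two averages above can be checked with a short calculation. Below is a minimal sketch (Python, not part of the original slides) that walks the ready queue in arrival order and accumulates each process's waiting time; the process names and burst lengths are taken from the example.

    # FCFS waiting-time calculation for the example above (a sketch, not from the slides).
    def fcfs_waiting_times(procs):
        """procs: list of (name, burst_time) in arrival order."""
        waiting = {}
        clock = 0
        for name, burst in procs:
            waiting[name] = clock      # time already spent waiting in the ready queue
            clock += burst             # the CPU runs this process to completion
        return waiting

    for order in ([("P1", 24), ("P2", 3), ("P3", 3)],
                  [("P2", 3), ("P3", 3), ("P1", 24)]):
        w = fcfs_waiting_times(order)
        print(w, "average =", sum(w.values()) / len(w))   # averages: 17.0 and 3.0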
Convoy effect
! All the other processes wait for a big process to get off the CPU.
! This results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.
! Consider one CPU-bound and many I/O-bound processes.
! Associate with each process the length of its next CPU burst
! Use these lengths to schedule the process with the shortest time
! SJF is optimal – gives minimum average waiting time for a given set of processes
! The difficulty is knowing the length of the next CPU request
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

! Gantt chart:

    | P4 | P1 | P3 | P2 |
    0    3    9   16   24

! Average waiting time = (3 + 16 + 9 + 0)/4 = 7
Determining the Length of the Next CPU Burst
! Can only estimate the length – then pick the process with the shortest predicted next CPU burst
! Exponential averaging: use the lengths of previous CPU bursts to guess the next length
  1. t_n = actual length of the n-th CPU burst
  2. τ_(n+1) = predicted value for the next CPU burst
  3. α, with 0 ≤ α ≤ 1
  4. Define: τ_(n+1) = α·t_n + (1 − α)·τ_n
! Commonly, α is set to 1/2
! The preemptive version is called shortest-remaining-time-first
[Figure: Prediction of the length of the next CPU burst (α = 1/2, τ_0 = 10)]
Examples of Exponential Averaging
! α = 0: τ_(n+1) = τ_n → recent history does not count
! α = 1: τ_(n+1) = t_n → only the actual last CPU burst counts
! If we expand the formula, we get:
  τ_(n+1) = α·t_n + (1 − α)·α·t_(n−1) + … + (1 − α)^j·α·t_(n−j) + … + (1 − α)^(n+1)·τ_0
! Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
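
As a concrete check of the recurrence, here is a minimal sketch (Python, not from the slides); the burst history and the α = 1/2, τ_0 = 10 starting values are illustrative only.

    # Exponential averaging: tau_(n+1) = alpha * t_n + (1 - alpha) * tau_n  (a sketch).
    def predict_next_burst(burst_history, alpha=0.5, tau0=10.0):
        """burst_history: measured CPU-burst lengths t_0, t_1, ... (oldest first)."""
        tau = tau0                          # initial prediction before any burst is seen
        for t in burst_history:
            tau = alpha * t + (1 - alpha) * tau
        return tau

    # Hypothetical burst history used only for illustration:
    print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))   # prints 12.0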
" Now we add the concepts of varying arrival
times and preemption to the analysis.

Example of Preemptive SJF (Shortest-Remaining-Time-First)

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

! Preemptive SJF Gantt chart:

    | P1 | P2 | P4 | P1 | P3 |
    0    1    5   10   17   26

! Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 msec
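
The preemptive schedule above can be reproduced with a small step-by-step simulation. The sketch below (Python, not part of the slides) re-evaluates the remaining burst times one time unit at a time; it favors simplicity over efficiency.

    # Shortest-remaining-time-first, simulated one time unit at a time (a sketch).
    def srtf(procs):
        """procs: list of (name, arrival, burst); returns waiting time per process."""
        remaining = {name: burst for name, _, burst in procs}
        finish, clock = {}, 0
        while remaining:
            ready = [(remaining[n], n) for n, a, _ in procs
                     if n in remaining and a <= clock]
            if not ready:                       # no process has arrived yet: CPU idles
                clock += 1
                continue
            _, current = min(ready)             # process with the shortest remaining time
            remaining[current] -= 1
            clock += 1
            if remaining[current] == 0:
                del remaining[current]
                finish[current] = clock
        return {n: finish[n] - a - b for n, a, b in procs}   # waiting = turnaround - burst

    w = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
    print(w, "average =", sum(w.values()) / len(w))          # average = 6.5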
! Each process gets a small unit of CPU time (time quantum q), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
! If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units.
! A timer interrupts every quantum to schedule the next process.
Example of RR with Time Quantum = 4

Process   Burst Time
P1        24
P2        3
P3        3

! Gantt chart:

    | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
    0    4    7   10   14   18   22   26   30

! Typically, higher average turnaround than SJF, but better response
! q should be large compared to context-switch time: q is usually 10 ms to 100 ms, while a context switch takes < 10 microseconds
Time Quantum and Context Switch Time
Performance of RR:
! q large ⇒ RR = FCFS
! q small ⇒ large number of context switches
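
A short simulation (Python, not part of the slides) reproduces the q = 4 chart above; all three processes are assumed to arrive at time 0.

    from collections import deque

    # Round-robin with a fixed time quantum; all processes arrive at time 0 (a sketch).
    def round_robin(procs, quantum):
        """procs: list of (name, burst); returns completion time per process."""
        remaining = dict(procs)
        queue = deque(name for name, _ in procs)
        clock, completion = 0, {}
        while queue:
            name = queue.popleft()
            run = min(quantum, remaining[name])   # run for one quantum or until done
            clock += run
            remaining[name] -= run
            if remaining[name] == 0:
                completion[name] = clock
            else:
                queue.append(name)                # preempted: back to the tail of the queue
        return completion

    print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
    # {'P2': 7, 'P3': 10, 'P1': 30} -- matches the Gantt chart above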
! A priority number (integer) is associated with each process
! The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
! Can be preemptive or nonpreemptive
! SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time
! Equal-priority processes are scheduled in FCFS order
! Problem: starvation – low-priority processes may never execute
! Solution: aging – as time progresses, increase the priority of the process
Example of Priority Scheduling

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

! Average waiting time = 8.2 msec
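
The Gantt chart for this example appears only as a figure in the slides; the sketch below (Python, not from the slides) rebuilds the schedule by sorting on priority (smallest number = highest priority) and running each process to completion, with all processes assumed to arrive at time 0.

    # Nonpreemptive priority scheduling, all processes arriving at time 0 (a sketch).
    def priority_schedule(procs):
        """procs: list of (name, burst, priority); a smaller priority number runs first."""
        order = sorted(procs, key=lambda p: p[2])       # highest priority first
        clock, waiting = 0, {}
        for name, burst, _ in order:
            waiting[name] = clock                       # time spent waiting before starting
            clock += burst
        return [name for name, _, _ in order], waiting

    run_order, w = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                                      ("P4", 1, 5), ("P5", 5, 2)])
    print(run_order)                                       # ['P2', 'P5', 'P1', 'P3', 'P4']
    print("average waiting =", sum(w.values()) / len(w))   # 8.2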


Priority Scheduling with Round-Robin

Process   Burst Time   Priority
P1        4            3
P2        5            2
P3        8            2
P4        7            1
P5        3            3

! Run the process with the highest priority; processes with the same priority run round-robin.
! [Figure: Gantt chart with a 2 ms time quantum]
Multilevel Queue
! With priority scheduling, have separate queues for each priority.
! Schedule the process in the highest-priority queue!
! Prioritization based upon process type

Multilevel Feedback Queue
! A process can move between the various queues; aging can be implemented this way
! Idea: separate processes according to the characteristics of their CPU bursts
! If a process uses too much CPU time, it will be moved to a lower-priority queue
! An MFQ scheduler is defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
! Suppose:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR with time quantum 16 milliseconds
• Q2 – FCFS
! Scheduling:
• A new job enters queue Q0, which is served FCFS
  - When it gains the CPU, the job receives 8 milliseconds
  - If it does not finish in 8 milliseconds, the job is moved to queue Q1
• At Q1 the job is again served FCFS and receives 16 additional milliseconds
  - If it still does not complete, it is preempted and moved to queue Q2
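
A compact simulation of this three-queue configuration is sketched below (Python, not part of the slides). The job names and burst lengths are made-up values used only to show the demotion path Q0 → Q1 → Q2; arrivals after time 0 and preemption of lower queues are not modeled.

    from collections import deque

    # Multilevel feedback queue: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS) -- a sketch.
    def mlfq(jobs, quanta=(8, 16)):
        """jobs: list of (name, burst); returns completion time per job."""
        queues = [deque(), deque(), deque()]
        for name, burst in jobs:
            queues[0].append((name, burst))            # new jobs enter Q0
        clock, completion = 0, {}
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
            name, remaining = queues[level].popleft()
            run = remaining if level == 2 else min(quanta[level], remaining)
            clock += run
            remaining -= run
            if remaining == 0:
                completion[name] = clock
            else:
                queues[level + 1].append((name, remaining))      # quantum expired: demote
        return completion

    # Hypothetical workload: J1 needs 30 ms, J2 needs 6 ms, J3 needs 20 ms of CPU time.
    print(mlfq([("J1", 30), ("J2", 6), ("J3", 20)]))
    # {'J2': 14, 'J3': 50, 'J1': 56}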
! When threads are supported, threads are scheduled, not processes.
! In many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP
! Known as process-contention scope (PCS) since scheduling competition takes place among threads belonging to the same process
! Typically done via a priority set by the programmer
! A kernel thread scheduled onto an available CPU uses system-contention scope (SCS) – competition among all threads in the system
! Turnaround time (TAT): the time interval from when a process enters the ready queue to its completion.
! TAT = Burst time + Waiting time = Exit time − Arrival time
! For example, in the first FCFS chart above, P1 arrives at 0, waits 0, and runs for 24, so TAT = 24 + 0 = 24 − 0 = 24.
Exercise: the processes P1, P2, P3, P4, P5 all arrive at time 0, in that order.
[Process table (burst times and priorities) shown as a figure in the slides.]
(a) Draw four Gantt charts for: FCFS, SJF, nonpreemptive priority (a larger priority number implies a higher priority), and RR (time quantum = 2).
(b) What is the turnaround time of each process for each of the scheduling algorithms in part (a)?
(c) What is the waiting time of each process for each of these scheduling algorithms?
(d) Which of the algorithms results in the minimum average waiting time (over all processes)?
Exercise: the following processes are being scheduled using a preemptive, priority-based, round-robin scheduling algorithm, with a higher priority number indicating a higher relative priority.
[Process table (arrival times, burst times, priorities) shown as a figure in the slides.]
The scheduler will execute the highest-priority process. For processes with the same priority, a round-robin scheduler is used with a time quantum of 10 units. If a process is preempted by a higher-priority process, the preempted process is placed at the end of the queue.
(a) Show the scheduling order of the processes using a Gantt chart.
(b) What is the turnaround time for each process?
(c) What is the waiting time for each process?

Answer to (a) – Gantt chart:

    | P1 | P2 | P3 | P4 | P3 | P5 | P4 | P6 | P3 | P4 | P2 |
    0   15   20   30   40   45   50   55   70   75   80   95
! CPU scheduling is more complex when multiple CPUs are available
! Multiprocessors may be any one of the following architectures:
• Multicore CPUs
• Multithreaded cores
• NUMA systems
• Heterogeneous multiprocessing
SMP vs. ASMP
! Symmetric multiprocessing (SMP): each processor is self-scheduling. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a thread to run.
! Asymmetric multiprocessing (ASMP): all scheduling decisions, I/O processing, and other system activities are handled by a single processor (the master server); the other processors execute only user code.
! Symmetric multiprocessing (SMP) is where each processor is self-scheduling.
! All threads may be in a common ready queue (a)
! Each processor may have its own private queue of threads (b)
! Recent trend: place multiple processor cores on the same physical chip → faster and consumes less power
! Multiple threads per core is also a growing trend
! Takes advantage of memory stall to make progress on another thread while the memory retrieval happens
! Memory stall: when a processor accesses memory, it spends a significant amount of time waiting for the data to become available.
Multithreaded Multicore System
! Each core has more than one hardware thread.
! If one thread has a memory stall, switch to another thread!
! [Figure: a dual-threaded processing core on which the execution of thread 0 and the execution of thread 1 are interleaved]
Chip Multithreading
! Chip multithreading (CMT) assigns each core multiple hardware threads. (Intel refers to this as hyperthreading or simultaneous multithreading, SMT.)
! Example: on a quad-core system with 2 hardware threads per core, the operating system sees 8 logical processors.
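
The count of logical processors is what a program sees from user space; a quick check using Python's standard library (a minimal sketch, assuming a Python 3 environment):

    import os

    # Number of logical CPUs the OS exposes (cores x hardware threads per core).
    print("logical processors:", os.cpu_count())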
Two Levels of Scheduling
! A multithreaded, multicore processor actually requires two different levels of scheduling:
1. The operating system decides which software thread to run on a logical CPU.
2. Each core decides which hardware thread to run on the physical core.
! If SMP, it is important to keep all CPUs loaded for efficiency
! Load balancing attempts to keep the workload evenly distributed across all processors in an SMP system.
! Push migration: a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs
! Pull migration: an idle processor pulls a waiting task from a busy processor
! Warm cache: the data most recently accessed by the thread populate the cache for the processor. As a result, successive memory accesses by the thread are often satisfied in cache memory.
! Most SMP operating systems attempt to keep a thread running on the same processor to take advantage of a warm cache.
! Processor affinity: a process has an affinity for the processor on which it is currently running.
! Load balancing may affect processor affinity, as a thread may be moved from one processor to another to balance loads; that thread then loses the contents of what it had in the cache of the processor it was moved off of.
Forms of processor affinity:
! Soft affinity: the operating system attempts to keep a thread running on the same processor, but makes no guarantees.
! Hard affinity: allows a process to specify the set of processors it may run on.
! Linux uses the Completely Fair Scheduler (CFS), which assigns a proportion of CPU processing time to each task. The proportion is based on the virtual runtime (vruntime) value associated with each task.
! Windows scheduling uses a preemptive, 32-level priority scheme to determine the order of thread scheduling.
! Solaris identifies six unique scheduling classes that are mapped to a global priority. CPU-intensive threads are generally assigned lower priorities (and longer time quantums), and I/O-bound threads are usually assigned higher priorities (with shorter time quantums).
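
To make the vruntime idea concrete, here is a toy model (Python, not Linux's actual implementation): the runnable task with the smallest virtual runtime is picked next, and a task's vruntime grows more slowly the larger its weight, so heavier tasks receive a larger share of the CPU. The task names and weights are hypothetical.

    # Toy proportional-share selection by virtual runtime (CFS-like, heavily simplified).
    tasks = {"A": {"weight": 2, "vruntime": 0.0},    # hypothetical weights: A gets 2x B's share
             "B": {"weight": 1, "vruntime": 0.0}}

    TICK = 10.0                                      # ms of real CPU time handed out per decision
    schedule = []
    for _ in range(6):
        name = min(tasks, key=lambda t: tasks[t]["vruntime"])    # smallest vruntime runs next
        schedule.append(name)
        tasks[name]["vruntime"] += TICK / tasks[name]["weight"]  # heavier tasks age more slowly

    print(schedule)   # ['A', 'B', 'A', 'A', 'B', 'A'] -- A runs about twice as often as B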
Thank You
Quoc Huy LE
097 487 7148
[email protected]
https://huyle84.github.io
