Module 3 - CPU Scheduling
College of Computing and Informatics
Operating Systems
Module 3
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multi-Processor Scheduling
WEEKLY LEARNING OUTCOMES
Describe CPU scheduling algorithms and their differences.
FCFS SCHEDULING
Suppose that the processes arrive in the order: P1, P2, P3
(burst times: P1 = 24, P2 = 3, P3 = 3)
The Gantt chart for the schedule is:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
FCFS SCHEDULING (CONT.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
The Gantt chart for the schedule is:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3 – much better than the previous case
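Below is a minimal C sketch (not from the slides) that computes FCFS waiting times for the two arrival orders above; the process names and burst times (P1 = 24, P2 = 3, P3 = 3) are taken from the example, everything else is illustrative.

```c
#include <stdio.h>

/* FCFS: processes run to completion in arrival order, so a process's
 * waiting time is simply the sum of the bursts that ran before it. */
static double fcfs_average_wait(const char *names[], const int bursts[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("%s: waits %d, runs %d-%d\n",
               names[i], elapsed, elapsed, elapsed + bursts[i]);
        total_wait += elapsed;
        elapsed += bursts[i];
    }
    return (double)total_wait / n;
}

int main(void) {
    /* Arrival order P1, P2, P3 (bursts 24, 3, 3) -> average wait 17 */
    const char *order1[] = { "P1", "P2", "P3" };
    int bursts1[] = { 24, 3, 3 };
    printf("average wait = %.2f\n\n", fcfs_average_wait(order1, bursts1, 3));

    /* Arrival order P2, P3, P1 -> average wait 3 */
    const char *order2[] = { "P2", "P3", "P1" };
    int bursts2[] = { 3, 3, 24 };
    printf("average wait = %.2f\n", fcfs_average_wait(order2, bursts2, 3));
    return 0;
}
```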
EXAMPLE OF SJF
(burst times: P1 = 6, P2 = 8, P3 = 7, P4 = 3)
SJF Gantt chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
Average waiting time: (3 + 16 + 9 + 0)/4 = 7
DETERMINING LENGTH OF NEXT CPU BURST
• Can only estimate the length – should be similar to the previous one
• Then pick the process with the shortest predicted next CPU burst
• Can be done by using the length of previous CPU bursts, using exponential averaging
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α t_n + (1 − α) τ_n
Commonly, α set to ½
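The recurrence above can be sketched in a few lines of C; the burst history and the initial guess τ_0 = 10 used here are only illustrative values, not part of the slides.

```c
#include <stdio.h>

/* One step of exponential averaging: the new prediction is a weighted
 * mix of the observed burst t_n and the previous prediction tau_n. */
static double predict_next(double alpha, double t_n, double tau_n) {
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

int main(void) {
    double alpha = 0.5;                           /* common choice: 1/2 */
    double tau = 10.0;                            /* initial guess tau_0 (assumed) */
    double bursts[] = { 6, 4, 6, 4, 13, 13, 13 }; /* observed CPU bursts (illustrative) */
    int n = (int)(sizeof bursts / sizeof bursts[0]);

    for (int i = 0; i < n; i++) {
        printf("predicted %.2f, observed %.0f\n", tau, bursts[i]);
        tau = predict_next(alpha, bursts[i], tau);
    }
    printf("next prediction: %.2f\n", tau);
    return 0;
}
```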
PREDICTION OF THE LENGTH OF THE NEXT CPU BURST
EXAMPLES OF EXPONENTIAL AVERAGING
• α = 0
  • τ_{n+1} = τ_n
  • Recent history does not count
• α = 1
  • τ_{n+1} = t_n
  • Only the actual last CPU burst counts
• If we expand the formula, we get:
  τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^(n+1) τ_0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
EXAMPLE OF SHORTEST-REMAINING-TIME-FIRST
Now we add the concepts of varying arrival times and preemption to the analysis
Process   Arrival Time   Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Preemptive SJF Gantt Chart
| P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) |
Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 msec
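A small C simulation of shortest-remaining-time-first for the four processes above (a one-tick-at-a-time sketch, not the textbook's implementation); it reproduces the Gantt chart and the 6.5 msec average waiting time.

```c
#include <stdio.h>

struct proc { const char *name; int arrival, burst, remaining; };

/* SRTF: at every time unit, run the already-arrived process with the
 * least remaining burst time (preemption falls out of re-picking each tick). */
int main(void) {
    struct proc p[] = {
        { "P1", 0, 8, 8 }, { "P2", 1, 4, 4 },
        { "P3", 2, 9, 9 }, { "P4", 3, 5, 5 },
    };
    int n = 4, done = 0, time = 0, total_wait = 0;

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (p[i].arrival <= time && p[i].remaining > 0 &&
                (pick < 0 || p[i].remaining < p[pick].remaining))
                pick = i;
        if (pick < 0) { time++; continue; }      /* CPU idle */
        p[pick].remaining--;
        time++;
        if (p[pick].remaining == 0) {
            /* waiting time = completion - arrival - burst */
            int wait = time - p[pick].arrival - p[pick].burst;
            printf("%s finishes at %d, waited %d\n", p[pick].name, time, wait);
            total_wait += wait;
            done++;
        }
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 6.50 */
    return 0;
}
```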
EXAMPLE OF ROUND ROBIN (TIME QUANTUM q = 4)
(burst times: P1 = 24, P2 = 3, P3 = 3)
RR Gantt chart: | P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |
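A minimal C sketch of Round Robin with a time quantum of 4, assuming the burst times recovered from the chart (P1 = 24, P2 = 3, P3 = 3) and that all three processes arrive at time 0; since they all arrive together, a simple cycle over the processes matches the real ready-queue order.

```c
#include <stdio.h>

struct proc { const char *name; int remaining; };

/* Round Robin: cycle through the ready processes, letting each one run
 * for at most `quantum` time units per turn until all bursts finish. */
int main(void) {
    struct proc p[] = { { "P1", 24 }, { "P2", 3 }, { "P3", 3 } };
    int n = 3, quantum = 4, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (p[i].remaining == 0) continue;
            int slice = p[i].remaining < quantum ? p[i].remaining : quantum;
            printf("| %s (%d-%d) ", p[i].name, time, time + slice);
            time += slice;
            p[i].remaining -= slice;
            if (p[i].remaining == 0) left--;
        }
    }
    printf("|\n");   /* prints the same slice boundaries as the chart above */
    return 0;
}
```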
PRIORITY SCHEDULING
• The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
• Preemptive
• Nonpreemptive
• SJF is priority scheduling where priority is the inverse of predicted next CPU burst time
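A short C sketch of nonpreemptive priority scheduling with "smallest integer = highest priority"; the five processes, their bursts, and their priorities are illustrative values, not taken from the slides.

```c
#include <stdio.h>

struct proc { const char *name; int burst, priority; };

/* Nonpreemptive priority scheduling: repeatedly dispatch the not-yet-run
 * process with the highest priority (smallest integer). All processes
 * are assumed to arrive at time 0. */
int main(void) {
    struct proc p[] = {
        { "P1", 10, 3 }, { "P2", 1, 1 }, { "P3", 2, 4 },
        { "P4", 1, 5 },  { "P5", 5, 2 },
    };
    int n = 5, time = 0, total_wait = 0;
    int ran[5] = { 0 };

    for (int scheduled = 0; scheduled < n; scheduled++) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!ran[i] && (pick < 0 || p[i].priority < p[pick].priority))
                pick = i;
        printf("%s: priority %d, waits %d\n", p[pick].name, p[pick].priority, time);
        ran[pick] = 1;
        total_wait += time;        /* waiting time = its start time here */
        time += p[pick].burst;
    }
    printf("average waiting time = %.1f\n", (double)total_wait / n);
    return 0;
}
```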
MULTI-PROCESSOR SCHEDULING – PROCESSOR AFFINITY
• When a thread has been running on one processor, the cache contents of that processor store the memory accesses by that thread.
• We refer to this as a thread having affinity for a processor (i.e., “processor affinity”)
• Load balancing may conflict with processor affinity: a thread may be moved from one processor to another to balance loads, but it then loses the contents it had built up in the cache of the processor it was moved from.
• Soft affinity – the operating system attempts to keep a thread running on the
same processor, but no guarantees.
• Hard affinity – allows a process to specify a set of processors it may run on.
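Hard affinity can be illustrated with the Linux sched_setaffinity() call; this is a Linux-specific sketch and is not part of the slides. Soft affinity, by contrast, needs no code: it is simply the scheduler's default attempt to keep a thread where its cache contents are still warm.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Hard affinity on Linux: pin the calling process to CPU 0 only, so the
 * scheduler will never migrate it to another processor. */
int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                                      /* allow only CPU 0 */

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {   /* pid 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0; now running on CPU %d\n", sched_getcpu());
    return 0;
}
```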
NUMA AND CPU SCHEDULING
If the operating system is NUMA-aware, it will assign memory closest to the CPU on which the thread is running.
Required Reading
1. Chapter 5: CPU Scheduling (Operating System Concepts by Silberschatz, Abraham, et al., 10th ed., ISBN: 978-1-119-32091-3, 2018)
Recommended Reading
1. Chapter 2.4 (Modern Operating Systems by Andrew S. Tanenbaum and Herbert Bos, 4th ed., ISBN-10: 0-13-359162-X, ISBN-13: 978-0-13-359162-0, 2015)
This presentation is mainly based on the textbook: Operating System Concepts by Silberschatz, Abraham, et al., 10th ed.
Thank You