
Chapter 6: CPU Scheduling

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne


Chapter 6: CPU Scheduling
 Basic Concepts
 Scheduling Criteria
 Scheduling Algorithms
 Thread Scheduling
 Multiple-Processor Scheduling
 Real-Time CPU Scheduling
 Operating Systems Examples
 Algorithm Evaluation

Objectives
 To introduce CPU scheduling, which is
the basis for multiprogrammed
operating systems
 To describe various CPU-scheduling
algorithms
 To discuss evaluation criteria for
selecting a CPU-scheduling algorithm
for a particular system
 To examine the scheduling algorithms
of several operating systems

Basic Concepts
 In a single-processor system, only one process can
run at a time. Others must wait until the CPU is free
 The objective of multiprogramming is to have some
process running at all times, to maximize CPU
utilization.
 CPU scheduling is the basis of multiprogramming
systems
 By switching the CPU among processes, the OS can make
the computer system more productive
 The idea of multiprogramming:
 Several processes are kept in memory at one time
 A process is executed until it must wait (e.g., for an I/O request)
 The OS then takes the CPU away from that process and gives it to
another process
 The selected process is executed until it must wait
 This pattern continues

CPU–I/O Burst Cycle
 Process execution consists of
a cycle of CPU execution and
I/O
 Processes alternate between CPU
execution state and I/O wait state
 Process execution begins with a CPU burst, followed by an I/O
burst, which is followed by another CPU burst, and so on
 The final CPU burst ends with a request to terminate
execution (the exit() system call)
 Scheduling is a fundamental
OS function. Almost all
computer resources are
scheduled before use. The
CPU is one of the primary
computer resources.
Histogram of CPU-burst Times

CPU Scheduler
 Whenever the CPU becomes idle,
 OS selects one of the processes in the ready queue to be
executed,
 OS allocates the CPU to it
 The selection is carried out by the CPU scheduler (short-term
scheduler)
 The ready queue is not necessarily a first-in, first-out (FIFO)
queue
 It may be implemented as a FIFO queue, a priority queue,
an unordered linked list, …
 The records in the queue are PCBs of the processes
 CPU scheduling decisions may take place when a process
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
 Scheduling under 1 and 4 is nonpreemptive
 Scheduling under 2 and 3 is preemptive
Nonpreemptive Scheduling
 Under nonpreemptive scheduling, once the CPU
has been allocated to a process, the process
keeps the CPU until it releases the CPU either
 By terminating or
 By switching to the waiting state
 Nonpreemptive scheduling does not require
special hardware (such as a timer)
 Nonpreemptive scheduling was used by
 Windows 3.1
 Apple Macintosh OS (older versions)
 Windows 95 and all subsequent versions of
Windows use preemptive scheduling
 Preemptive scheduling can result in race
conditions
Dispatcher
 Dispatcher module gives control of the CPU to
the process selected by the short-term
scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to
restart that program
 Dispatch latency – time it takes for the
dispatcher to stop one process and start
another running

Scheduling Criteria
 Different CPU-scheduling algorithms have different
properties, and the choice of a particular algorithm may
favor one class of processes over another.
 Many criteria have been suggested for comparing CPU-
scheduling algorithms.
 CPU utilization – keep the CPU as busy as possible
In a real system, it should range from 40% (a lightly
loaded system) to 90% (a heavily loaded system)
 Turnaround time – From the point of view of a particular
process, the important criterion is how long it takes to
execute that process.
 The interval from the time of submission of a process to
the time of completion is the turnaround time.
 Turnaround time is the sum of the periods spent waiting to
get into memory, waiting in the ready queue, executing on
the CPU, and doing I/O.
Scheduling Criteria (Cont.)
 Throughput – # of processes that complete their
execution per time unit
 For long processes  1 process / hour
 For short transactions  10 processes / second
 Waiting time – amount of time a process has been
waiting in the ready queue
 The CPU scheduling algorithm does not affect the
amount of time during which a process executes
or does I/O
 The CPU scheduling algorithm affects only the
amount of time that a process spends waiting in
the ready queue
 Response time – amount of time from when a
request is submitted until the first response is
produced, not the time to output that response
(relevant for time-sharing environments)
Scheduling Algorithm Optimization Criteria

 Max CPU utilization


 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
 Optimize the average measure
 Optimize the minimum and maximum
values

First-Come, First-Served (FCFS) Scheduling
 The simplest CPU-scheduling algorithm. The process
that requests the CPU first is allocated the
CPU first. Also, called First-In First-Out (FIFO).
 Implementation
 FIFO queue: When a process enters the
ready queue, its PCB is linked onto the tail
of the queue
 When the CPU is free, it is allocated to the
process at the head of the queue
 The running process is removed from the
queue
 The code for FCFS scheduling is simple to
write and understand

FCFS Scheduling: Example

 Consider the following set of processes that arrive at time 0, with the
length of the CPU-burst time given in milliseconds:
Process CPU-Burst Time
P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0 ms; P2 = 24 ms; P3 = 27 ms


 Average waiting time: (0 + 24 + 27)/3 = 17 milliseconds
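As a sanity check on the arithmetic above, here is a minimal C sketch that reproduces the FCFS calculation for processes that all arrive at time 0. The function name fcfs_average_wait and the hard-coded burst values are illustrative, not part of any real scheduler.

#include <stdio.h>

/* FCFS with all arrivals at time 0: each process waits for the
 * total burst time of every process ahead of it in the queue. */
static double fcfs_average_wait(const int burst[], int n, int wait[])
{
    int clock = 0, total = 0;
    for (int i = 0; i < n; i++) {
        wait[i] = clock;      /* time spent waiting in the ready queue */
        clock  += burst[i];   /* CPU is busy for this burst */
        total  += wait[i];
    }
    return (double)total / n;
}

int main(void)
{
    int burst[] = {24, 3, 3};   /* P1, P2, P3 from the example */
    int wait[3];
    double avg = fcfs_average_wait(burst, 3, wait);
    for (int i = 0; i < 3; i++)
        printf("P%d waits %d ms\n", i + 1, wait[i]);
    printf("average waiting time = %.2f ms\n", avg);   /* 17.00 */
    return 0;
}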

FCFS Scheduling (Cont.)
Process CPU-Burst Time
P2 3
P3 3
P1 24
Suppose that the processes arrive in the order
P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30
 Waiting time for P1 = 6 ms; P2 = 0 ms; P3 = 3 ms
 Average waiting time: (6 + 0 + 3)/3 = 3 milliseconds
 Much better than previous case

Convoy Effect
 Convoy effect: short processes stuck behind a long process
 Assume we have one CPU-bound and many I/O-bound
processes
 The CPU bound process will get the CPU and hold it
During this long time all the other processes will finish
their I/O and move into the ready queue, waiting for the CPU
 I/O devices are idle
 After the CPU-bound process finishes its CPU burst, it
moves to an I/O device
 All the I/O-bound processes (which have short CPU bursts)
execute quickly and move back to the I/O queues  the CPU
sits idle
 The FCFS scheduling algorithm is nonpreemptive
 Once the CPU has been allocated to a process, that
process keeps the CPU until it releases the CPU
 By terminating
 By requesting I/O
Shortest-Job-First (SJF) Scheduling
(Shortest Next CPU Burst First)
 This algorithm associates with each process the
length of its next CPU burst.
 It uses these lengths to schedule the process with
the shortest time
 Two schemes:
 nonpreemptive – once the CPU is given to the process it
cannot be preempted until it completes its CPU
burst
 preemptive – if a new process arrives with a CPU
burst length less than the remaining time of the currently
executing process, preempt. This scheme is
known as Shortest-Remaining-Time-First (SRTF)
 SJF is optimal – gives minimum average waiting time
for a given set of processes
 It is difficult to predict the length of a process's next CPU burst.

Determining Length of Next CPU Burst
 Can only estimate the length
 Can be done by using the length of previous
CPU bursts, using exponential averaging
1. t_n = actual length of the nth CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α·t_n + (1 − α)·τ_n

 α = 0  τ_{n+1} = τ_n : recent history does not count
 α = 1  τ_{n+1} = t_n : only the actual last CPU burst
counts
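The exponential average is easy to compute incrementally, as the sketch below shows. It assumes α = 1/2 and an initial estimate τ0 = 10; the sample burst values and the helper name next_burst_estimate are illustrative.

#include <stdio.h>

/* One step of exponential averaging: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n.
 * alpha and the initial estimate are tuning choices (alpha = 1/2 is common). */
static double next_burst_estimate(double alpha, double actual_tn, double predicted_tau_n)
{
    return alpha * actual_tn + (1.0 - alpha) * predicted_tau_n;
}

int main(void)
{
    double tau = 10.0;                           /* initial guess, tau_0 */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};  /* observed CPU bursts (illustrative) */
    for (int n = 0; n < 7; n++) {
        printf("predicted %.2f, actual %.0f\n", tau, bursts[n]);
        tau = next_burst_estimate(0.5, bursts[n], tau);
    }
    return 0;
}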
Prediction of the Length of the Next CPU Burst

Example
 Consider the following set of processes, with the length of
the CPU-burst time given in milliseconds:
Process CPU-Burst Time
P1 6
P2 8
P3 7
P4 3

P4 P1 P3 P2

0 3 9 16 24
 The waiting time is
 3 milliseconds for P1 16 milliseconds for P2
 9 milliseconds for P3 0 milliseconds for P4
 The average waiting time is (3+16+9+0)/4 = 7 milliseconds in
case of SJF
 The average waiting time is (0+6+14+21)/4 = 10.25 milliseconds
if FCFS scheduling is used instead
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 SJF (non-preemptive)

P1 P3 P2 P4

0 3 7 8 12 16

 The waiting time is


 0 milliseconds for P1 (8-2) milliseconds for P2
 (7-4) milliseconds for P3 (12-5) milliseconds for P4
 Average waiting time = (0 + 6 + 3 + 7)/4 = 4 milliseconds
Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

 The waiting time is


 (11-2) milliseconds for P1 (5-4) milliseconds for P2
 (4-4) milliseconds for P3 (7-5) milliseconds for P4
 Average waiting time = (9 + 1 + 0 +2)/4 = 3 milliseconds
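For the preemptive case, a millisecond-by-millisecond simulation is the simplest way to verify the waiting times above. The following C sketch hard-codes the four processes from this example; it is a demonstration, not an efficient scheduler.

#include <stdio.h>
#include <limits.h>

#define N 4

/* Tick-by-tick SRTF (preemptive SJF): at every millisecond, run the arrived
 * process with the smallest remaining burst. O(time * N), fine for a demo. */
int main(void)
{
    int arrival[N] = {0, 2, 4, 5};
    int burst[N]   = {7, 4, 1, 4};
    int remaining[N], finish[N];
    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    int done = 0;
    for (int t = 0; done < N; t++) {
        int pick = -1, best = INT_MAX;
        for (int i = 0; i < N; i++)
            if (arrival[i] <= t && remaining[i] > 0 && remaining[i] < best) {
                best = remaining[i];
                pick = i;
            }
        if (pick < 0) continue;          /* CPU idle this millisecond */
        if (--remaining[pick] == 0) {
            finish[pick] = t + 1;
            done++;
        }
    }

    int total_wait = 0;
    for (int i = 0; i < N; i++) {
        int wait = finish[i] - arrival[i] - burst[i];
        total_wait += wait;
        printf("P%d: waiting time %d ms\n", i + 1, wait);
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / N);  /* 3.00 */
    return 0;
}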
Priority Scheduling
 A priority number (integer) is associated with
each process
 The CPU is allocated to the process with the
highest priority (smallest integer or highest
integer).
 A priority scheduling algorithm could be,
 Preemptive
 nonpreemptive
 Two types of priority are available:
 Internal: criteria controlled by the operating system
 External: criteria outside the operating system
 Problem  Starvation (indefinite blocking)– low
priority processes may never execute
 Solution  Aging – as time progresses increase
the priority of the process
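One possible aging rule is sketched below: on every clock tick, processes waiting in the ready queue accumulate credit, and past a threshold their priority is raised one step. The struct fields and the AGING_INTERVAL constant are illustrative assumptions, not taken from any particular OS.

/* Aging sketch: periodically bump the priority of processes that have been
 * waiting, so low-priority work eventually runs. Field and constant names
 * (priority, wait_ticks, AGING_INTERVAL) are illustrative. */
#define AGING_INTERVAL 100   /* bump after this many ticks of waiting */
#define MIN_PRIORITY   0     /* smaller number = higher priority here */

struct pcb {
    int priority;
    int wait_ticks;
};

void age_ready_queue(struct pcb ready[], int n)
{
    for (int i = 0; i < n; i++) {
        ready[i].wait_ticks++;
        if (ready[i].wait_ticks >= AGING_INTERVAL && ready[i].priority > MIN_PRIORITY) {
            ready[i].priority--;      /* raise priority one step */
            ready[i].wait_ticks = 0;
        }
    }
}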
Example of Priority Scheduling

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

 Priority scheduling Gantt Chart

P2 P5 P1 P3 P4
0 1 6 16 18 19

 Average waiting time = 8.2 msec

Round Robin (RR)
 The round-robin (RR) scheduling algorithm is designed
especially for time-sharing systems.
 It is similar to FCFS scheduling, but preemption is added
 It is similar to FCFS scheduling, but preemption is added
to switch between processes.
 Each process gets a small unit of CPU time (time
quantum),
usually 10-100 milliseconds.
 After this time has elapsed, the process is preempted
and added to the end of the ready queue.
 RR implementation: ready queue  FIFO queue
 If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU
time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
 Performance
 q large  FIFO
 q small  processor sharing
Time Quantum and Context Switch Time
 q must be large with respect to context
switch, otherwise overhead is too high

Turnaround Time Varies With The Time Quantum

Example of RR with Time Quantum = 4
 Consider the following set of processes that arrive at time 0, with
the length of CPU-burst time given in millisecond
Process Burst Time
P1 24
P2 3
P3 3
 The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

 The waiting time is
 (10-4) = 6 milliseconds for P1 4 milliseconds for P2
 7 milliseconds for P3
 Average waiting time = (6 + 4 + 7)/3 = 17/3 = 5.66 milliseconds
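The schedule above can be reproduced with a small simulation. The sketch below assumes all three processes arrive at time 0 and cycles through them in order, which yields the same waiting times as the Gantt chart (≈5.67 ms on average).

#include <stdio.h>

#define N 3
#define QUANTUM 4

/* Round-robin with all arrivals at time 0: cycle through the processes,
 * giving each unfinished process at most QUANTUM ms per turn. */
int main(void)
{
    int burst[N] = {24, 3, 3};   /* P1, P2, P3 */
    int remaining[N], finish[N];
    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    int clock = 0, done = 0;
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                finish[i] = clock;
                done++;
            }
        }
    }

    int total = 0;
    for (int i = 0; i < N; i++) {
        int wait = finish[i] - burst[i];   /* turnaround minus CPU time */
        total += wait;
        printf("P%d waits %d ms\n", i + 1, wait);
    }
    printf("average waiting time = %.2f ms\n", (double)total / N);
    return 0;
}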

Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
 The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

 Typically, higher average turnaround than SJF, but


better response

The slice lengths in the chart above are 20, 17, 20, 20, 20, 20, 4, 13, 20, and 8 milliseconds.


 The waiting time is
 ( (77-20) + (121-97) ) milliseconds for P1
 (20) milliseconds for P2
 (37 + (97-57) + (134-117) ) milliseconds for P3
 (57 + (117-77) ) milliseconds for P4
 Average waiting time = (81 + 20 + 94 + 97)/4 = 292/4 = 73
milliseconds

 The turnaround time is


 134 milliseconds for P1, 37 milliseconds for P2,
 162 milliseconds for P3, 121 milliseconds for P4
 Average turnaround time = (134 + 37 + 162 + 121)/4 = 454/4
= 113.5 ms
Multilevel Queue
 Another class of scheduling algorithms has been created for
situations in which processes are easily classified into different
groups.
A common division is made between foreground (interactive)
processes and background (batch) processes.
 These two types of processes have different response-time
requirements and so may have different scheduling needs.
 A multilevel queue scheduling algorithm partitions the ready
queue into several separate queues. Each queue has its own
scheduling algorithm:
 foreground – RR
 background – FCFS
 Scheduling must be done between the queues:
1. Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
2. Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR and 20% to background in FCFS
Multilevel Queue Scheduling

 Processes are permanently assigned to a queue on entry to
the system
 Processes do not move between queues
• Each queue has absolute priority over the lower-priority queues
• No process in the batch queue could run unless the queues for
system processes, interactive processes, and interactive
editing processes were empty
• If an interactive editing process entered the ready queue while
a batch process was running, the batch process would be preempted
Multilevel Feedback Queue
 Normally, when the multilevel queue scheduling algorithm is used,
processes are permanently assigned to a queue when they enter the
system.
 This setup has the advantage of low scheduling overhead, but it is inflexible.
 The multilevel feedback queue scheduling algorithm, in contrast, allows a
process to move between queues.
 A process can move between the various queues.
 Separate processes according to the characteristics of their CPU bursts. If a
process uses too much CPU time, it will be moved to a lower-priority queue.
 This scheme leaves I/O-bound and interactive processes in the higher-
priority queues. In addition, a process that waits too long in a lower-priority
queue may be moved to a higher-priority queue.
 Multilevel-feedback-queue scheduler defined by the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter when that
process needs service

Example of Multilevel Feedback Queue
 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling
 A new job enters queue Q0 which is served FCFS.
When it gains CPU, job receives 8 milliseconds. If it
does not finish in 8 milliseconds, job is moved to
queue Q1.
 At Q1 job is again served FCFS and receives 16
additional milliseconds. If it still does not complete,
it is preempted and moved to queue Q2.
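A minimal sketch of the demotion rule for this three-queue example is shown below; the job structure, the QUANTA table, and the run_turn helper are illustrative names, and queue management (FIFO order within each level) is omitted for brevity.

#include <stdbool.h>

enum { Q0, Q1, Q2 };
static const int QUANTA[] = {8, 16, 0};   /* 0 means "run to completion" (FCFS) */

struct job {
    int level;        /* which queue the job currently lives in */
    int remaining;    /* CPU time still needed, in ms */
};

/* Run one scheduling turn for a job; returns true when the job is finished. */
bool run_turn(struct job *j)
{
    int q = QUANTA[j->level];
    int slice = (q == 0 || j->remaining < q) ? j->remaining : q;

    j->remaining -= slice;          /* job used the CPU for `slice` ms */
    if (j->remaining == 0)
        return true;                /* finished within its allowance */

    if (j->level < Q2)
        j->level++;                 /* used the full quantum: demote */
    return false;
}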

Multilevel Feedback Queues

Thread Scheduling

 Distinction between user-level and kernel-level threads


 When threads supported, threads scheduled, not
processes
 Many-to-one and many-to-many models, thread library
schedules user-level threads to run on LWP
 Known as process-contention scope (PCS) since
scheduling competition is within the process
 Typically done via priority set by programmer
 Kernel thread scheduled onto available CPU is system-
contention scope (SCS) – competition among all threads
in system

Pthread Scheduling

 API allows specifying either PCS or SCS during


thread creation
 PTHREAD_SCOPE_PROCESS schedules threads
using PCS scheduling
 PTHREAD_SCOPE_SYSTEM schedules threads
using SCS scheduling
 Can be limited by OS – Linux and Mac OS X only
allow PTHREAD_SCOPE_SYSTEM

Pthread Scheduling API
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);   /* thread function, defined on the next slide */

int main(int argc, char *argv[]) {
    int i, scope;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);

    /* first inquire on the current scope */
    if (pthread_attr_getscope(&attr, &scope) != 0)
        fprintf(stderr, "Unable to get scheduling scope\n");
    else {
        if (scope == PTHREAD_SCOPE_PROCESS)
            printf("PTHREAD_SCOPE_PROCESS");
        else if (scope == PTHREAD_SCOPE_SYSTEM)
            printf("PTHREAD_SCOPE_SYSTEM");
        else
            fprintf(stderr, "Illegal scope value.\n");
    }

Pthread Scheduling API
    /* set the scheduling algorithm to PCS or SCS */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);

    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
    /* do some work ... */
    pthread_exit(0);
}

Multiple-Processor Scheduling
 CPU scheduling more complex when multiple CPUs are available
 If multiple CPUs are available, load sharing becomes possible
 Assume Homogeneous processors within a multiprocessor
 Asymmetric multiprocessing – all scheduling decisions, I/O
processing, and other system activities handled by a single
processor—the master server. The other processors execute only
user code.
 Symmetric multiprocessing (SMP) – each processor is self-
scheduling, all processes in common ready queue, or each has its
own private queue of ready processes. (access and update a
common data structure)
 Currently, most common in all operating systems (Windows,
Linux, Mac)
 Processor affinity – process has affinity for processor on which it is
currently running
 soft affinity
 hard affinity
 Variations including processor sets

NUMA and CPU Scheduling

Note that memory-placement algorithms can also consider affinity

Multiple-Processor Scheduling – Load Balancing
 If SMP, need to keep all CPUs loaded for efficiency. Otherwise,
one or more processors may sit idle while other processors
have high workloads
 Load balancing attempts to keep workload evenly distributed
 On systems with a common run queue, it is often unnecessary
 There are two general approaches to load balancing:
 Push migration – periodic task checks load on each processor,
and if found pushes task from overloaded CPU to other CPUs
 Pull migration – idle processors pulls waiting task from busy
processor
 Push and pull migration need not be mutually exclusive and are
in fact often implemented in parallel on load-balancing
systems.
 Linux scheduler and the FreeBSD systems implement both
techniques.

Multicore Processors
 Recent trend to place multiple processor cores on same physical chip
 Each core maintains its architectural state and thus appears to the
operating system to be a separate physical processor.
 Faster and consumes less power
 when a processor accesses memory, it spends a significant amount of
time waiting for the data to become available.
 The processor can spend up to 50 percent of its time waiting for data
to become available from memory.
 Many recent hardware designs have implemented multithreaded
processor cores in which two (or more) hardware threads are
assigned to each core.
 From an operating-system perspective, each hardware thread appears
as a logical processor
 There are two ways to multithread a processing core
 Coarse-grained multithreading, a thread executes on a processor until
a memory stall occurs. Processor must switch to another thread.
 Fine-grained (or interleaved) multithreading switches between threads
at the boundary of an instruction cycle.

Multithreaded Multicore System

Multithreaded Multicore System
 Notice that a multithreaded multicore processor actually
requires two different levels of scheduling.
 On one level are the scheduling decisions that must be
made by the operating system as it chooses which
software thread to run on each hardware thread (logical
processor).
 For this level of scheduling, the operating system may
choose any scheduling algorithm, described in Section
6.3.
 A second level of scheduling specifies how each core
decides which hardware thread to run. There are several
strategies to adopt in this situation.
 The UltraSPARC T3 uses a simple round-robin algorithm
to schedule the eight hardware threads on each core.

Real-Time CPU Scheduling
 Can present obvious
challenges
 Soft real-time systems – no
guarantee as to when critical
real-time process will be
scheduled
 Hard real-time systems – task
must be serviced by its
deadline
 Two types of latencies affect
performance
1. Interrupt latency – time from arrival
of interrupt to start of routine that
services interrupt
2. Dispatch latency – time for
schedule to take current process
off CPU and switch to another

Real-Time CPU Scheduling (Cont.)

 Conflict phase of
dispatch latency:
1. Preemption of
any process
running in
kernel mode
2. Release by
low-priority
process of
resources
needed by
high-priority
processes

Priority-based Scheduling
 For real-time scheduling, scheduler must support
preemptive, priority-based scheduling
 But only guarantees soft real-time
 For hard real-time must also provide ability to meet
deadlines
 Processes have new characteristics: periodic ones
require CPU at constant intervals
 Has processing time t, deadline d, period p
 0≤t≤d≤p
 Rate of periodic task is 1/p

Virtualization and Scheduling
 Virtualization software schedules multiple
guests onto CPU(s)
 Each guest doing its own scheduling
 Not knowing it doesn’t own the CPUs
 Can result in poor response time
 Can affect time-of-day clocks in guests
 Can undo good scheduling algorithm efforts of
guests

Rate Monotonic Scheduling
 A priority is assigned to each task based on the inverse of its
period

 Shorter periods = higher priority;

 Longer periods = lower priority

 P1 is assigned a higher priority than P2.

Missed Deadlines with Rate Monotonic Scheduling

Earliest Deadline First Scheduling (EDF)

 Priorities are assigned according to deadlines:
the earlier the deadline, the higher the priority;
the later the deadline, the lower the priority
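The selection rule can be sketched in a few lines of C: among the ready jobs, pick the one with the earliest absolute deadline. The rt_job structure and the edf_pick_next name are illustrative, not taken from a real kernel.

/* EDF pick-next sketch: among ready jobs, choose the one whose absolute
 * deadline is earliest. Structure and field names are illustrative. */
struct rt_job {
    int  ready;             /* 1 if the job is ready to run */
    long abs_deadline;      /* absolute deadline, e.g. in ms since boot */
};

int edf_pick_next(const struct rt_job jobs[], int n)
{
    int pick = -1;
    for (int i = 0; i < n; i++) {
        if (!jobs[i].ready)
            continue;
        if (pick < 0 || jobs[i].abs_deadline < jobs[pick].abs_deadline)
            pick = i;       /* earlier deadline wins */
    }
    return pick;            /* -1 if nothing is ready */
}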

Proportional Share Scheduling

 T shares are allocated among all processes in the system
 An application receives N shares where N < T
 This ensures each application will receive N / T of the total
processor time

POSIX Real-Time Scheduling
 The POSIX.1b standard
 API provides functions for managing real-time threads
 Defines two scheduling classes for real-time threads:
 SCHED_FIFO – threads are scheduled using a FCFS
strategy with a FIFO queue. There is no time-slicing for
threads of equal priority
 SCHED_RR – similar to SCHED_FIFO except that time-slicing
occurs for threads of equal priority
 Defines two functions for getting and setting the
scheduling policy:
 pthread_attr_getschedpolicy(pthread_attr_t *attr,
int *policy)
 pthread_attr_setschedpolicy(pthread_attr_t *attr,
int policy)

POSIX Real-Time Scheduling API
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);   /* thread function, defined on the next slide */

int main(int argc, char *argv[])
{
    int i, policy;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);

    /* get the current scheduling policy */
    if (pthread_attr_getschedpolicy(&attr, &policy) != 0)
        fprintf(stderr, "Unable to get policy.\n");
    else {
        if (policy == SCHED_OTHER) printf("SCHED_OTHER\n");
        else if (policy == SCHED_RR) printf("SCHED_RR\n");
        else if (policy == SCHED_FIFO) printf("SCHED_FIFO\n");
    }

POSIX Real-Time Scheduling API (Cont.)

    /* set the scheduling policy - FIFO, RR, or OTHER */
    if (pthread_attr_setschedpolicy(&attr, SCHED_FIFO) != 0)
        fprintf(stderr, "Unable to set policy.\n");

    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);

    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
    /* do some work ... */
    pthread_exit(0);
}

Operating System Examples

 Linux scheduling

 Windows scheduling

 Solaris scheduling

Linux Scheduling Through Version 2.5
 Prior to version 2.5, ran variation of standard UNIX scheduling
algorithm
 It did not adequately support systems with multiple processors.
 With Version 2.5 of the kernel, the scheduler was overhauled to
include a scheduling algorithm—known as O(1)—that ran in constant
time regardless of the number of tasks in the system.
 It delivered excellent performance on SMP systems but it led to poor
response times for the interactive processes
 During development of the 2.6 kernel, the scheduler was again
revised; and in release 2.6.23 of the kernel, the Completely Fair
Scheduler (CFS) became the default Linux scheduling algorithm.
 Scheduling in the Linux system is based on scheduling classes.
 Each class is assigned a specific priority. By using different
scheduling classes, the kernel can accommodate different
scheduling algorithms based on the needs of the system and its
processes.
 The scheduling criteria for a Linux server, for example, may be
different from those for a mobile device running Linux.

Linux Scheduling in Version 2.6.23 +
 Scheduler selects the highest-priority task belonging to the highest-
priority scheduling class
 Standard Linux kernels implement two scheduling classes:
 Default scheduling class using the CFS scheduling algorithm
 Real-time scheduling class.
 CFS scheduler assigns a proportion of CPU time to each task.
 This proportion is calculated based on the nice value assigned to
each task. Nice values range from −20 to +19, where a numerically
lower nice value indicates a higher relative priority. Tasks with
lower nice values receive a higher proportion of CPU processing
time.
 CFS doesn’t use discrete values of time slices and instead
identifies a targeted latency, which is an interval of time during
which every runnable task should run at least once.
 Proportions of CPU time are allocated from the value of targeted
latency. In addition to having default and minimum values, targeted
latency can increase if the number of active tasks in the system
grows beyond a certain threshold.

Linux Scheduling in Version 2.6.23 +
 The CFS scheduler doesn’t directly assign priorities.
 It records how long each task has run by maintaining the virtual
run time of each task using the per-task variable vruntime.
 The virtual run time is associated with a decay factor based on
the priority of a task: lower-priority tasks have higher rates of
decay than higher-priority tasks.
 if a task with default priority runs for 200 milliseconds, its
vruntime will also be 200 ms. However, if a lower-priority task
runs for 200 ms, its vruntime will be higher than 200
milliseconds. Similarly, if a higher-priority task runs for 200 ms,
its vruntime will be less than 200 ms.
 To decide which task to run next, the scheduler simply selects
the task that has the smallest vruntime value.
 In addition, a higher-priority task that becomes available to run
can preempt a lower-priority task.
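The vruntime idea can be illustrated with a toy sketch (this is not the actual CFS code, which keeps runnable tasks in a red-black tree and uses a precomputed nice-to-weight table). Here, lower-priority tasks are simply charged virtual time at a faster rate, roughly 1.25× per nice step, and the task with the smallest vruntime is picked next; the weight() helper is an illustrative stand-in.

/* Toy sketch of the CFS idea, not the kernel implementation: vruntime grows
 * faster for lower-priority (higher nice) tasks, and the scheduler picks the
 * task with the smallest vruntime. The weight() mapping here is a stand-in
 * for the kernel's nice-to-weight table. */
struct task {
    int    nice;        /* -20 (highest priority) .. +19 (lowest) */
    double vruntime;    /* virtual runtime in ms */
};

static double weight(int nice)
{
    /* Illustrative only: each nice step scales the charge by ~1.25x. */
    double w = 1.0;
    for (int i = 0; i < (nice > 0 ? nice : -nice); i++)
        w = (nice > 0) ? w * 1.25 : w / 1.25;
    return w;
}

/* Charge a task for `ran_ms` of real CPU time. */
void account(struct task *t, double ran_ms)
{
    t->vruntime += ran_ms * weight(t->nice);
}

/* Pick the runnable task with the smallest vruntime. */
int pick_next(const struct task tasks[], int n)
{
    int pick = 0;
    for (int i = 1; i < n; i++)
        if (tasks[i].vruntime < tasks[pick].vruntime)
            pick = i;
    return pick;
}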

Linux Scheduling (Cont.)
 Assume that two tasks have the same nice values.
One task is I/O-bound and the other is CPU-bound.
 Typically, the I/O-bound task will run only for short
periods before blocking for additional I/O, and the
CPU-bound task will exhaust its time period
whenever it has an opportunity to run on a processor.
 Therefore, the value of vruntime will eventually be
lower for the I/O-bound task than for the CPU-bound
task, giving the I/O-bound task higher priority than
the CPU-bound task.
 At that point, if the CPU-bound task is executing
when the I/O-bound task becomes eligible to run, the
I/O-bound task will preempt the CPU-bound task.

Windows Scheduling
 The Windows scheduler uses priority-based, preemptive
scheduling.
 The scheduler ensures that the highest-priority thread will
always run.
 A thread selected to run by the dispatcher will run until it
blocks, uses its time slice, is preempted by a higher-priority
thread, or terminates.
 The dispatcher uses a 32-level priority scheme to determine the
order of thread execution. Priorities are divided into two classes.
 Variable class is 1-15, real-time class is 16-31
 Priority 0 is memory-management thread
 The dispatcher uses a queue for each scheduling priority and
traverses the set of queues from highest to lowest until it finds a
thread that is ready to run.
 If there is no runnable thread, the idle thread is run
 There is a relationship between the numeric priorities of the
Windows kernel and the Windows API.
Windows Priority Classes
 Win32 API identifies several priority classes to
which a process can belong
 REALTIME_PRIORITY_CLASS,
HIGH_PRIORITY_CLASS,
ABOVE_NORMAL_PRIORITY_CLASS,
NORMAL_PRIORITY_CLASS,
BELOW_NORMAL_PRIORITY_CLASS,
IDLE_PRIORITY_CLASS
 All are variable except REALTIME
 A thread within a given priority class has a relative
priority
 TIME_CRITICAL, HIGHEST, ABOVE_NORMAL,
NORMAL, BELOW_NORMAL, LOWEST, IDLE
 Priority class and relative priority combine to give
numeric priority
Windows Priorities

Windows Priority Classes (Cont.)
 Processes are typically members of the
NORMAL_PRIORITY_CLASS unless another class was
specified when the process was created.
 The priority class of a process can be altered with
the SetPriorityClass() function in the Windows API.
 If a thread’s time quantum runs out, it is
interrupted. If it is in the variable-priority class, its
priority is lowered. The priority is never lowered
below the base priority
 If a wait occurs, the thread's priority is boosted; the size of
the boost depends on what was waited for, which helps give
good response times to interactive threads.
 Windows increases the scheduling quantum by a factor of 3
for the foreground process

Solaris
 Priority-based scheduling. Six classes available and each
class has its own scheduling algorithm.
 Time sharing (default) (TS), Interactive (IA), Real time
(RT), System (SYS), Fair Share (FSS), Fixed priority (FP)
 The default scheduling class for a process is time
sharing. The scheduling policy for the time-sharing class
assigns time slices of different lengths using a multilevel
feedback queue.
 By default, the higher the priority, the smaller the time
slice; and the lower the priority, the larger the time slice.
 Interactive processes typically have a higher priority;
CPU-bound processes, a lower priority.
 The interactive class uses the same scheduling policy as
the time-sharing class, but it gives windowing
applications a higher priority for better performance.

Solaris Dispatch Table

Solaris Scheduling

Solaris Scheduling (Cont.)
 Threads in the real-time class are given the highest priority.
 Solaris uses the System class to run kernel
threads
 Each scheduling class includes a set of
priorities.
 The scheduler converts the class-specific
priorities into global priorities and selects the
thread with the highest global priority to run.
 The selected thread runs on the CPU until it
 blocks, uses its time slice, is preempted by a higher-
priority thread, or terminates.
 Multiple threads at same priority selected via
RR
Homework
 6.2, 6.3, 6.4, 6.6, 6.10, 6.11,
6.13, 6.16, 6.17, 6.19, 6.21,
6.24, 6.25, 6.27

Algorithm Evaluation
 How to select CPU-scheduling algorithm for an OS?
 Determine criteria, then evaluate algorithms
 Deterministic modeling
 Type of analytic evaluation
 Takes a particular predetermined workload and
defines the performance of each algorithm for that
workload
 Consider 5 processes arriving at time 0:

Deterministic Evaluation

 For each algorithm, calculate minimum average waiting


time
 Simple and fast, but requires exact numbers for input,
applies only to those inputs
 FCFS is 28ms:

 Non-preemptive SJF is 13ms:

 RR is 23ms:

Queueing Models
 Describes the arrival of processes, and CPU and I/O
bursts probabilistically
 Commonly exponential, and described by mean
 Computes average throughput, utilization,
waiting time, etc
 Computer system described as network of servers,
each with queue of waiting processes
 Knowing arrival rates and service rates
 Computes utilization, average queue length,
average wait time, etc

Little’s Formula
 n = average queue length
 W = average waiting time in queue
 λ = average arrival rate into queue
 Little’s law – in steady state, processes leaving
queue must equal processes arriving, thus:
n=λxW
 Valid for any scheduling algorithm and arrival
distribution
 For example, if on average 7 processes arrive per
second, and normally 14 processes in queue, then
average wait time per process = 2 seconds
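The example is just the formula rearranged, W = n / λ, as the trivial check below shows; the variable names are illustrative.

#include <stdio.h>

/* Little's law: n = lambda * W. Given any two quantities, the third follows. */
int main(void)
{
    double lambda = 7.0;   /* processes arriving per second */
    double n      = 14.0;  /* average number of processes in the queue */
    printf("average wait W = n / lambda = %.1f seconds\n", n / lambda);  /* 2.0 */
    return 0;
}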

Simulations
 Queueing models limited
 Simulations more accurate
 Programmed model of computer system
 Clock is a variable
 Gather statistics indicating algorithm performance
 Data to drive simulation gathered via
 Random number generator according to probabilities
 Distributions defined mathematically or empirically
 Trace tapes record sequences of real events in real
systems

Evaluation of CPU Schedulers by Simulation

Implementation
 Even simulations have limited accuracy
 Just implement new scheduler and test in real
systems
 High cost, high risk
 Environments vary
 Most flexible schedulers can be modified per-site or
per-system
 Or APIs to modify priorities
 But again environments vary

End of Chapter 6

