Chapter 6: Process Scheduling
Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 6: CPU Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Objectives
To introduce CPU scheduling, which is the basis for
multiprogrammed operating systems
To describe various CPU-scheduling algorithms
To discuss evaluation criteria for selecting a CPU-scheduling
algorithm for a particular system
To examine the scheduling algorithms of several operating
systems
Basic Concepts
Maximum CPU utilization is obtained with multiprogramming
CPU–I/O Burst Cycle – process execution consists of a cycle of
CPU execution and I/O wait
Each CPU burst is followed by an I/O burst
The distribution of CPU-burst lengths is of main concern
Histogram of CPU-burst Times
CPU Scheduler
Short-term scheduler selects from among the processes in
ready queue, and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive (cooperative)
Example: Microsoft Windows 3.x used nonpreemptive scheduling – the
CPU waits for the process to yield voluntarily
Windows 95 and later use preemptive scheduling
CPU Scheduler
Scheduling in cases 2 and 3 is preemptive
Case 2 (in the book): a process switches from the running state to
the ready state (for example, when an interrupt occurs)
Case 3: a process switches from the waiting state to the ready
state (for example, at completion of I/O)
CPU Scheduler
Points to consider
Access to shared data: preemption while one process is updating
shared data can leave the data inconsistent
Interrupts occurring during crucial OS activities: interrupts
cannot be ignored, so sections of code affected by interrupts
must be guarded from simultaneous use
To ensure these sections are not accessed concurrently by
several processes, the OS disables interrupts at entry and
re-enables interrupts at exit
Preemption while in kernel mode can cause chaos
Solution: wait for the system call to complete or reach a safe
checkpoint before switching
This solution is not suitable for real-time operating systems
Dispatcher
Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to
restart that program
Dispatch latency – time it takes for the dispatcher to stop
one process and start another running
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per
time unit (processes/hour for long processes, processes/second
for short processes)
Turnaround time – amount of time to execute a particular
process
Waiting time – amount of time a process has been waiting in the
ready queue
Response time – amount of time it takes from when a request
was submitted until the first response is produced, not output (for
time-sharing environment)
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
The Gantt chart for the schedule is:
P2 P3 P1
0 3 6 30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect - short process behind long process
Consider one CPU-bound and many I/O-bound processes
FCFS Scheduling (Cont.)
Given the burst times (BT) of the processes, for FCFS:
Turnaround time (TT) for a process i: TT(i) = WT(i) + BT(i)
Wait time from the Gantt chart, without arrival times:
WT(1) = 0 (the wait time (WT) of the first process is 0)
For a process i, i > 1: draw the Gantt chart and read off GT(i),
the time at which process i begins in the chart; then WT(i) = GT(i)
Wait time from the Gantt chart, with arrival times (AT):
WT(1) = 0
For a process i, i > 1: WT(i) = GT(i) − AT(i)
First-Come, First-Served (FCFS) Scheduling
Arrival Time  Process  Burst Time  Wait Time
0             P1       24          0
1             P2       3           0 + 24 − (1 − 0) = 23
2             P3       3           23 + 3 − (2 − 1) = 25
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30
Waiting time for P1 = 0; P2 = 23; P3 = 25
First-Come, First-Served (FCFS) Scheduling
Arrival Time  Process  Burst Time
0             P1       2
3             P2       24
4             P3       3
Suppose that the processes arrive in the order: P1, P2, P3
The Gantt chart for the schedule is:
P1 | idle | P2 | P3
0    2      3    27   30
Process  GT  WT
P1       0   0   (0)
P2       3   0   (0 + 2 − (3 − 0) = −1, so 0)
P3       27  23  (0 + 24 − (4 − 3) = 23)
FCFS Scheduling (Cont.)
Wait time using a formula for FCFS
WT(1) = 0 (the wait time (WT) of the first process is 0)
For a process i, i > 1:
When no arrival time is given: WT(i) = WT(i−1) + BT(i−1)
When arrival times (AT) are given:
val = WT(i−1) + BT(i−1) − [AT(i) − AT(i−1)]
WT(i) = val if val ≥ 0, else 0
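The recurrence above can be sketched in Python (the helper name and layout are illustrative, not from the text); processes are assumed to be listed in FCFS order, with arrival[i] and burst[i] standing for AT(i) and BT(i):

```python
def fcfs_wait_times(arrival, burst):
    """FCFS waiting times via WT(i) = max(WT(i-1) + BT(i-1) - (AT(i) - AT(i-1)), 0).

    Processes are assumed to be listed in arrival (FCFS) order.
    With all arrivals at 0, this reduces to WT(i) = WT(i-1) + BT(i-1).
    """
    wt = [0]  # the first process never waits
    for i in range(1, len(burst)):
        val = wt[i - 1] + burst[i - 1] - (arrival[i] - arrival[i - 1])
        wt.append(max(val, 0))  # a wait time can never be negative
    return wt

# The three slide examples:
print(fcfs_wait_times([0, 0, 0], [24, 3, 3]))  # [0, 24, 27]
print(fcfs_wait_times([0, 1, 2], [24, 3, 3]))  # [0, 23, 25]
print(fcfs_wait_times([0, 3, 4], [2, 24, 3]))  # [0, 0, 23]
```

The max(…, 0) clamp covers the case where a process arrives after the CPU has gone idle, as in the third example.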
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest
time
SJF is optimal – gives minimum average waiting time for a given
set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
Example of SJF
Process  Arrival Time  Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
SJF scheduling chart
P4 P1 P3 P2
0 3 9 16 24
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
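The chart above can be reproduced with a short nonpreemptive SJF sketch (hypothetical helper; as in the slide's chart, all four processes are treated as available when scheduling begins, so arrival times are ignored):

```python
def sjf_wait_times(burst):
    """Nonpreemptive SJF with every process available at time 0.

    Returns per-process waiting times in the original input order.
    """
    order = sorted(range(len(burst)), key=lambda i: burst[i])  # shortest first
    wt, clock = [0] * len(burst), 0
    for i in order:
        wt[i] = clock      # process i waits for everything scheduled before it
        clock += burst[i]
    return wt

wt = sjf_wait_times([6, 8, 7, 3])  # bursts of P1..P4
print(wt, sum(wt) / len(wt))       # [3, 16, 9, 0] 7.0
```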
Determining Length of Next CPU Burst
Can only estimate the length
Then pick process with shortest predicted next CPU burst
Can be done by using the length of previous CPU bursts, using
exponential averaging
1. tn = actual length of the nth CPU burst
2. τn+1 = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τn+1 = α·tn + (1 − α)·τn
Commonly, α is set to ½
Preemptive version called shortest-remaining-time-first
Examples of Exponential Averaging
α = 0
τn+1 = τn
Recent history does not count
α = 1
τn+1 = tn
Only the actual last CPU burst counts
If we expand the formula, we get:
τn+1 = α·tn + (1 − α)·α·tn−1 + … + (1 − α)^j·α·tn−j + … + (1 − α)^(n+1)·τ0
Since both α and (1 − α) are less than or equal to 1, each
successive term has less weight than its predecessor
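The recurrence τn+1 = α·tn + (1 − α)·τn is easy to tabulate in Python (illustrative helper; the sample burst sequence and τ0 = 10 follow the book's prediction figure):

```python
def predict_bursts(bursts, tau0, alpha=0.5):
    """Exponentially averaged predictions: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n."""
    tau, preds = tau0, [tau0]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
        preds.append(tau)
    return preds

# Observed bursts t_n with initial guess tau_0 = 10 and alpha = 1/2
print(predict_bursts([6, 4, 6, 4, 13, 13, 13], 10))
# [10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```

Note how the prediction lags behind the jump from 4 to 13: with α = ½ each new burst only pulls the estimate halfway toward the observed value.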
Prediction of the Length of the Next CPU Burst
Example of Shortest-remaining-time-first
Now we add the concepts of varying arrival times and preemption to
the analysis
Process  Arrival Time  Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26
Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5
msec
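A small simulation reproduces this result (illustrative sketch; it steps the clock one time unit at a time and always runs the ready process with the least remaining burst):

```python
import heapq

def srtf_wait_times(arrival, burst):
    """Shortest-remaining-time-first (preemptive SJF), simulated in unit steps."""
    n = len(burst)
    remaining, finish = list(burst), [0] * n
    clock, done, ready = 0, 0, []   # ready: min-heap of (remaining time, index)
    while done < n:
        for i in range(n):          # admit processes arriving at this instant
            if arrival[i] == clock:
                heapq.heappush(ready, (remaining[i], i))
        if ready:
            _, i = heapq.heappop(ready)
            remaining[i] -= 1       # run the shortest job for one time unit
            if remaining[i] == 0:
                done, finish[i] = done + 1, clock + 1
            else:
                heapq.heappush(ready, (remaining[i], i))
        clock += 1
    # waiting time = turnaround - burst = (finish - arrival) - burst
    return [finish[i] - arrival[i] - burst[i] for i in range(n)]

wt = srtf_wait_times([0, 1, 2, 3], [8, 4, 9, 5])
print(wt, sum(wt) / len(wt))  # [9, 0, 15, 2] 6.5
```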
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
Preemptive
Nonpreemptive
SJF is priority scheduling where priority is the inverse of predicted
next CPU burst time
Problem Starvation – low priority processes may never execute
Solution Aging – as time progresses increase the priority of the
process
Example of Priority Scheduling
Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2
Priority scheduling Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Average waiting time = 8.2 msec
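A nonpreemptive priority scheduler for this example can be sketched as follows (hypothetical helper; smaller integer means higher priority, as above):

```python
def priority_wait_times(burst, priority):
    """Nonpreemptive priority scheduling, all processes available at time 0.

    Smaller integer = higher priority; ties are broken by input order.
    """
    order = sorted(range(len(burst)), key=lambda i: priority[i])
    wt, clock = [0] * len(burst), 0
    for i in order:
        wt[i] = clock      # process i waits for all higher-priority work
        clock += burst[i]
    return wt

wt = priority_wait_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])  # P1..P5
print(wt, sum(wt) / len(wt))  # [6, 0, 16, 18, 1] 8.2
```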
Round Robin (RR)
Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum
is q, then each process gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits more than (n-1)q
time units.
Timer interrupts every quantum to schedule next process
Performance
q large ⇒ behaves like FCFS (FIFO)
q small ⇒ overhead dominates: q must be large with respect to
the context-switch time, otherwise the overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Typically, higher average turnaround than SJF, but better
response
q should be large compared to context switch time
q usually 10ms to 100ms, context switch < 10 usec
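The q = 4 chart above can be generated with a small round-robin sketch (illustrative helper; all processes available at time 0, context-switch time ignored):

```python
from collections import deque

def rr_schedule(burst, q):
    """Round robin: returns the Gantt chart as (process index, start, end) slices."""
    remaining = list(burst)
    queue = deque(range(len(burst)))     # FIFO ready queue
    chart, clock = [], 0
    while queue:
        i = queue.popleft()
        run = min(q, remaining[i])       # run one quantum, or until completion
        chart.append((i, clock, clock + run))
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)              # unfinished: back to the tail
    return chart

for i, start, end in rr_schedule([24, 3, 3], 4):
    print(f"P{i + 1}: {start}-{end}")
# P1: 0-4, P2: 4-7, P3: 7-10, then P1 alone runs 10-14, ..., 26-30
```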
Time Quantum and Context Switch Time
If there is a single process requiring 10 units of time, what is the
number of context switches required with a quantum size of 12, 6,
or 1 time units?
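For a single process with no competitors, a switch occurs only when the quantum expires before the process finishes, so the count is ⌈burst/q⌉ − 1 (a sketch under that assumption; the helper name is illustrative):

```python
import math

def context_switches(burst, q):
    """Context switches for one process of length `burst` run alone:
    one switch per expired quantum that leaves work remaining."""
    return math.ceil(burst / q) - 1

for q in (12, 6, 1):
    print(q, context_switches(10, q))  # quantum 12 -> 0, 6 -> 1, 1 -> 9
```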
Turnaround Time Varies With The Time Quantum
Rule of thumb: 80% of CPU bursts should be shorter than q
Multiple-Processor Scheduling
CPU scheduling more complex when multiple CPUs are
available
We concentrate on homogeneous processors within a
multiprocessor – systems in which the processors are identical.
Limitations on scheduling: For Example: A system with an I/O
device attached to a private bus of one processor => Processes
that wish to use that device must be scheduled to run on that
processor.
Approaches to Multiple-Processor Scheduling
Asymmetric multiprocessing – only one processor accesses
the system data structures, alleviating the need for data sharing
Multiple-Processor Scheduling
Symmetric multiprocessing (SMP) – each processor is self-
scheduling, all processes in common ready queue, or each has
its own private queue of ready processes
Currently, most common: Windows, Linux, and Mac OS X
Processor affinity – process has affinity for processor on which
it is currently running
When a process has been running on a specific processor, that
processor's cache is populated with the process's data, which is
then used in successive memory accesses
If the process migrates to another processor, the cache of the
first processor must be invalidated and the cache of the second
processor must be repopulated → EXPENSIVE
Soft affinity – the OS attempts to keep a process running on the same
processor but does not guarantee that it will do so, e.g., Linux
NUMA and CPU Scheduling
NUMA: non-uniform memory access
An architecture in which a CPU has faster access to some parts of
main memory than to other parts.
Typically found in systems containing combined CPU
and memory boards.
The CPUs on a board can access the memory on that
board faster than they can access memory on other boards in
the system.
If the CPU scheduler and memory-placement algorithms
work together---> a process that is assigned affinity to a
particular CPU can be allocated memory on the same board.
NUMA and CPU Scheduling
Note that memory-placement algorithms can also consider affinity
NUMA: Non uniform memory access
Multiple-Processor Scheduling – Load Balancing
If SMP, need to keep all CPUs loaded for efficiency
Load balancing attempts to keep workload evenly distributed
Systems with a common ready queue: load balancing is often
unnecessary, because once a processor becomes idle, it
immediately extracts a runnable process from the common run
queue.
Systems where each processor has its own private ready
queue: Load balancing is necessary- Normal scenario in
today's systems
2 approaches:
Push migration – periodic task checks load on each
processor, and if found moves (pushes) task from
overloaded CPU to other CPUs
Pull migration – idle processors pulls waiting task from
busy processor
Multiple-Processor Scheduling – Load Balancing
Push and pull migration need not be mutually exclusive and are
in fact often implemented in parallel on load-balancing systems.
For example, the Linux scheduler and the ULE scheduler
available for FreeBSD systems implement both techniques.
Load balancing often counteracts the benefits of processor
affinity
Multicore Processors
Recent trend to place multiple processor cores on same
physical chip
Faster and consumes less power
Memory stall: When a processor accesses memory, it spends
a significant amount of time waiting for the data to become
available. The processor can spend up to 50 percent of its time
waiting for data to become available from memory.
Multicore Processors
Multithreaded processor cores in which two (or more) hardware threads
are assigned to each core.
If one thread stalls, core switches to another thread.
Example: the UltraSPARC T3 has 16 cores per chip and eight
hardware threads per core, scheduled using RR.
Multithreaded multicore processor
Requires two different levels of scheduling
1. Which software thread to run on each
hardware thread (logical processor).
2. How each core decides which hardware
thread to run.
Multithreaded Multicore System
2 ways to multithread a processing core: coarse-grained and fine-
grained multithreading.
Coarse-grained multithreading:
A thread executes on a processor until a long-latency event such as
a memory stall occurs.
The cost of switching between threads is high, since the
instruction pipeline must be flushed before the other thread can
begin execution on the processor core.
Once this new thread begins execution, it begins filling the pipeline
with its instructions.
Fine-grained (or interleaved) multithreading switches between
threads at a much finer level of granularity—typically at the boundary of
an instruction cycle.
End of Chapter 6