Silberschatz ch05 Cpu Scheduling

The document discusses different CPU scheduling algorithms used in operating systems including first come first served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It covers basic concepts of CPU scheduling such as processes alternating between CPU and I/O bursts. Evaluation criteria for scheduling algorithms like CPU utilization, throughput, response time, and waiting time are also presented.

Chapter 5

CPU Scheduling (Algorithms)


Chapter 5: CPU Scheduling
 5.1 Basic Concepts
 5.2 Scheduling Criteria
 5.3 Scheduling Algorithms
 5.4 Multiple-Processor Scheduling
 5.5 Thread Scheduling (skip)
 5.6 Operating System Examples
 5.7 Algorithm Evaluation

Operating System Concepts – 7th Edition, Feb 2, 2005, Silberschatz, Galvin and Gagne
5.1 Basic Concepts
 Maximum CPU utilization is obtained with multiprogramming
 Several processes are kept in memory at one time
 Every time a running process has to wait, another process
can take over use of the CPU
 Scheduling of the CPU is fundamental to operating system
design
 Process execution consists of a cycle of a CPU time burst and an
I/O time burst (i.e. wait) as shown on the next slide
 Processes alternate between these two states (i.e., CPU
burst and I/O burst)
 Eventually, the final CPU burst ends with a system request to
terminate execution

Alternating Sequence of CPU and I/O Bursts

Histogram of CPU-burst Times

CPU bursts tend to have a frequency curve similar to the exponential
curve shown above. It is characterized by a large number of short CPU
bursts and a small number of long CPU bursts. An I/O-bound program
typically has many short CPU bursts; a CPU-bound program might have
a few long CPU bursts.
CPU Scheduler
 The CPU scheduler selects from among the processes in memory
that are ready to execute and allocates the CPU to one of them
 CPU scheduling is affected by the following set of circumstances:
1. (N) A process switches from running to waiting state
2. (P) A process switches from running to ready state
3. (P) A process switches from waiting to ready state
4. (N) A process switches from running to terminated state
 Circumstances 1 and 4 are non-preemptive; they offer no schedule
choice
 Circumstances 2 and 3 are pre-emptive; they can be scheduled

Dispatcher

 The dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to
restart that program
 The dispatcher needs to run as fast as possible, since it is
invoked during process context switch
 The time it takes for the dispatcher to stop one process and
start another process is called dispatch latency

5.2 Scheduling Criteria
 Different CPU scheduling algorithms have different properties
 The choice of a particular algorithm may favor one class of processes over
another
 In choosing which algorithm to use, the properties of the various algorithms
should be considered
 Criteria for comparing CPU scheduling algorithms may include the following:
 CPU utilization – percent of time that the CPU is busy executing a
process
 Throughput – number of processes that are completed per time unit
 Response time – amount of time it takes from when a request was
submitted until the first response occurs (but not the time it takes to
output the entire response)
 Waiting time – the amount of time before a process starts after first
entering the ready queue (or the sum of the amount of time a process
has spent waiting in the ready queue)
 Turnaround time – amount of time to execute a particular process from
the time of submission through the time of completion

Optimization Criteria

 It is desirable to
 Maximize CPU utilization
 Maximize throughput
 Minimize turnaround time
 Minimize start time
 Minimize waiting time
 Minimize response time
 In most cases, we strive to optimize the average measure of
each metric
 In other cases, it is more important to optimize the minimum
or maximum values rather than the average

5.3a Single-Processor Scheduling Algorithms

 First Come, First Served (FCFS)
 Shortest Job First (SJF)
 Priority
 Round Robin (RR)

First-Come, First-Served (FCFS) Scheduling

Process  Burst Time
P1       24
P2       3
P3       3
 With FCFS, the process that requests the CPU first is allocated the CPU
first
 Case #1: Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 [0–24] | P2 [24–27] | P3 [27–30]
 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
 Average turn-around time: (24 + 27 + 30)/3 = 27

FCFS Scheduling (Cont.)
 Case #2: Suppose that the processes arrive in the order: P2 , P3 , P1

 The Gantt chart for the schedule is:

P2 [0–3] | P3 [3–6] | P1 [6–30]
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3 (Much better than Case
#1)
 Average turn-around time: (3 + 6 + 30)/3 = 13
 Case #1 is an example of the convoy effect; all the other processes
wait for one long-running process to finish using the CPU
 This problem results in lower CPU and device utilization; Case #2
shows that higher utilization might be possible if the short
processes were allowed to run first
 The FCFS scheduling algorithm is non-preemptive
 Once the CPU has been allocated to a process, that process keeps
the CPU until it releases it either by terminating or by requesting
I/O
 It is a troublesome algorithm for time-sharing systems
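The two cases above are easy to check mechanically. A minimal sketch (function name and structure are mine, not from the slides), assuming all processes arrive at time 0 and run in submission order:

```python
# A minimal sketch of FCFS metrics; illustrative only, not from the slides.
def fcfs_metrics(bursts):
    """Return (average waiting time, average turnaround time)."""
    time, waits, turnarounds = 0, [], []
    for burst in bursts:
        waits.append(time)        # waits until every earlier process finishes
        time += burst             # non-preemptive: run the whole burst
        turnarounds.append(time)  # arrival is 0, so turnaround = completion
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

print(fcfs_metrics([24, 3, 3]))   # Case #1: (17.0, 27.0)
print(fcfs_metrics([3, 3, 24]))   # Case #2: (3.0, 13.0)
```

Reordering the same bursts shortest-first reproduces the Case #2 improvement directly.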
Shortest-Job-First (SJF) Scheduling
 The SJF algorithm associates with each process the length
of its next CPU burst
 When the CPU becomes available, it is assigned to the
process that has the smallest next CPU burst (in the case of
matching bursts, FCFS is used)
 Two schemes:
 Nonpreemptive – once the CPU is given to the process, it
cannot be preempted until it completes its CPU burst
 Preemptive – if a new process arrives with a CPU burst
length less than the remaining time of the current
executing process, preempt. This scheme is known as the
Shortest-Remaining-Time-First (SRTF)

Example #1: Non-Preemptive SJF
(simultaneous arrival)
Process  Arrival Time  Burst Time
P1 0.0 6
P2 0.0 4
P3 0.0 1
P4 0.0 5
 SJF (non-preemptive, simultaneous arrival)
P3 [0–1] | P2 [1–5] | P4 [5–10] | P1 [10–16]

 Average waiting time = (0 + 1 + 5 + 10)/4 = 4
 Average turn-around time = (1 + 5 + 10 + 16)/4 = 8

Example #2: Non-Preemptive SJF
(varied arrival times)
Process  Arrival Time  Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 SJF (non-preemptive, varied arrival times)

P1 [0–7] | P3 [7–8] | P2 [8–12] | P4 [12–16]

 Average waiting time
= ( (0 – 0) + (8 – 2) + (7 – 4) + (12 – 5) )/4

= (0 + 6 + 3 + 7)/4 = 4
 Average turn-around time:
= ( (7 – 0) + (12 – 2) + (8 - 4) + (16 – 5))/4
= ( 7 + 10 + 4 + 11)/4 = 8
Waiting time : sum of time that a process has spent waiting in the ready queue
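Example #2 can be reproduced with a short simulation. A sketch (structure assumed, not from the slides) that picks the shortest available burst whenever the CPU becomes free:

```python
# Non-preemptive SJF with arrival times; illustrative sketch only.
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns waiting time per process."""
    remaining, time, waits = list(procs), 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])   # smallest next burst wins
        name, arrival, burst = job
        waits[name] = time - arrival           # time spent in the ready queue
        time += burst                          # run to completion (non-preemptive)
        remaining.remove(job)
    return waits

waits = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(waits)                     # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7}
print(sum(waits.values()) / 4)   # average waiting time = 4.0
```

Ties on burst length fall back to list order here, which matches the FCFS tie-break the slides describe when arrivals are in submission order.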
Example #3: Preemptive SJF
(Shortest-remaining-time-first)
Process  Arrival Time  Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 SJF (preemptive, varied arrival times)

P1 [0–2] | P2 [2–4] | P3 [4–5] | P2 [5–7] | P4 [7–11] | P1 [11–16]
 Average waiting time
= ( [(0 – 0) + (11 – 2)] + [(2 – 2) + (5 – 4)] + (4 – 4) + (7 – 5) )/4
= (9 + 1 + 0 + 2)/4
= 3
 Average turn-around time = ( (16 – 0) + (7 – 2) + (5 – 4) + (11 – 5) )/4 = (16 + 5 + 1 + 6)/4 = 7

Waiting time : sum of time that a process has spent waiting in the ready queue
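A unit-time simulation sketch of SRTF (illustrative only, not from the slides) reproduces Example #3's waiting times:

```python
# Preemptive SJF (shortest-remaining-time-first), simulated one time unit
# at a time; illustrative sketch only.
def srtf(procs):
    """procs: dict name -> (arrival, burst). Returns waiting time per process."""
    remaining = {name: burst for name, (_, burst) in procs.items()}
    finish, time = {}, 0
    while remaining:
        ready = {n: r for n, r in remaining.items() if procs[n][0] <= time}
        if not ready:                    # nothing has arrived yet
            time += 1
            continue
        run = min(ready, key=ready.get)  # shortest remaining time wins
        remaining[run] -= 1              # run for one time unit
        time += 1
        if remaining[run] == 0:
            finish[run] = time
            del remaining[run]
    # waiting = turnaround - burst = (finish - arrival) - burst
    return {n: finish[n] - a - b for n, (a, b) in procs.items()}

waits = srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
print(waits)                     # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2}
print(sum(waits.values()) / 4)   # average waiting time = 3.0
```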
Priority Scheduling
 The SJF algorithm is a special case of the general priority
scheduling algorithm
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority
(smallest integer = highest priority)
 Priority scheduling can be either preemptive or non-preemptive
 A preemptive approach will preempt the CPU if the priority
of the newly-arrived process is higher than the priority of
the currently running process
 A non-preemptive approach will simply put the new process
(with the highest priority) at the head of the ready queue
 SJF is a priority scheduling algorithm where priority is the
predicted next CPU burst time
 The main problem with priority scheduling is starvation, that is,
low priority processes may never execute
 A solution is aging; as time progresses, the priority of a
process in the ready queue is increased
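Aging can be sketched as a periodic adjustment applied to everything still waiting in the ready queue. In this illustration (consistent with these slides, where a smaller number means higher priority), raising a process's priority means lowering its number:

```python
# Aging sketch: every clock tick, each waiting process's priority number
# is decreased toward the highest priority; illustrative only.
def age(ready_queue, step=1, floor=0):
    """ready_queue: dict name -> priority number; aged in place."""
    for name in ready_queue:
        ready_queue[name] = max(floor, ready_queue[name] - step)
    return ready_queue

queue = {"P1": 127, "P2": 10}
for _ in range(100):             # 100 ticks spent waiting
    age(queue)
print(queue)                     # {'P1': 27, 'P2': 0}
```

Even a process that started at a very low priority eventually reaches the top and cannot starve.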

Round Robin (RR) Scheduling

 In the round robin algorithm, each process gets a small unit of
CPU time (a time quantum), usually 10-100 milliseconds. After
this time has elapsed, the process is preempted and added to
the end of the ready queue.
 If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once. No process waits more
than (n-1)q time units.
 Performance of the round robin algorithm
 q large ⇒ RR behaves like FCFS
 q small ⇒ q must be greater than the context switch time;
otherwise, the overhead is too high
 One rule of thumb is that 80% of the CPU bursts should be
shorter than the time quantum

Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
 The Gantt chart is:

P1 [0–20] | P2 [20–37] | P3 [37–57] | P4 [57–77] | P1 [77–97] | P3 [97–117] | P4 [117–121] | P1 [121–134] | P3 [134–154] | P3 [154–162]
 Typically, RR gives higher average turnaround than SJF, but better response time
 Average waiting time
= ( [(0 – 0) + (77 – 20) + (121 – 97)] + (20 – 0) + [(37 – 0) + (97 – 57) + (134 – 117)] + [(57 – 0) + (117 – 77)] ) / 4
= ( (0 + 57 + 24) + 20 + (37 + 40 + 17) + (57 + 40) ) / 4
= (81 + 20 + 94 + 97)/4
= 292 / 4 = 73
 Average turn-around time = (134 + 37 + 162 + 121) / 4 = 113.5
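The quantum-by-quantum bookkeeping above can be reproduced with a small simulation (structure assumed, not from the slides), assuming all four processes arrive at time 0 in order:

```python
from collections import deque

# Round robin sketch: a FIFO queue of (name, remaining burst) pairs;
# illustrative only, not from the slides.
def round_robin(bursts, quantum):
    """bursts: dict name -> burst time. Returns (waiting, turnaround) dicts."""
    queue, time, finish = deque(bursts.items()), 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)               # run one quantum, or less to finish
        time += run
        if left > run:
            queue.append((name, left - run))   # preempted: back of the queue
        else:
            finish[name] = time
    turnaround = finish                        # arrival 0, so turnaround = finish
    waiting = {n: turnaround[n] - bursts[n] for n in bursts}
    return waiting, turnaround

waiting, turnaround = round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, 20)
print(sum(waiting.values()) / 4)     # 73.0
print(sum(turnaround.values()) / 4)  # 113.5
```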

Time Quantum and Context Switches

Turnaround Time Varies With The Time Quantum

As can be seen from this graph, the average turnaround time of a set
of processes does not necessarily improve as the time quantum size
increases. In general, the average turnaround time can be improved
if most processes finish their next CPU burst in a single time quantum.
5.3b Multi-level Queue Scheduling
 Multi-level queue scheduling is used when processes can be classified into
groups
 For example, foreground (interactive) processes and background (batch)
processes
 The two types of processes have different response-time requirements and so
may have different scheduling needs
 Also, foreground processes may have priority (externally defined) over
background processes
 A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues
 The processes are permanently assigned to one queue, generally based on some
property of the process such as memory size, process priority, or process type
 Each queue has its own scheduling algorithm
 The foreground queue might be scheduled using an RR algorithm
 The background queue might be scheduled using an FCFS algorithm
 In addition, there needs to be scheduling among the queues, which is commonly
implemented as fixed-priority pre-emptive scheduling
 The foreground queue may have absolute priority over the background queue

Multi-level Queue Scheduling
 One example of a multi-level queue are the five queues shown
below
 Each queue has absolute priority over lower priority queues
 For example, no process in the batch queue can run unless the
queues above it are empty
 However, this can result in starvation for the processes in the
lower priority queues

Multilevel Queue Scheduling
 Another possibility is to time slice among the queues
 Each queue gets a certain portion of the CPU time, which it
can then schedule among its various processes
 The foreground queue can be given 80% of the CPU time
for RR scheduling
 The background queue can be given 20% of the CPU time
for FCFS scheduling

5.3c Multi-level Feedback Queue Scheduling

 In multi-level feedback queue scheduling, a process can
move between the various queues; aging can be
implemented this way
 A multilevel-feedback-queue scheduler is defined by the
following parameters:
 Number of queues
 Scheduling algorithms for each queue
 Method used to determine when to promote a process
 Method used to determine when to demote a process
 Method used to determine which queue a process will
enter when that process needs service

Example of Multilevel Feedback Queue Scheduling
 A new job enters queue Q0 (RR) and is placed at the end.
When it gains the CPU, the job receives 8 milliseconds. If
it does not finish in 8 milliseconds, the job is moved to
the end of queue Q1.
 A Q1 (RR) job receives 16 milliseconds. If it still does not
complete, it is preempted and moved to queue Q2 (FCFS).

(Figure: three queues, Q0 above Q1 above Q2; jobs are demoted downward.)
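The path a single job takes through the three queues can be sketched as follows (illustrative; the 8 ms and 16 ms quanta come from the example above):

```python
# Which queues does a job of a given CPU burst visit, and for how long?
# A sketch of the Q0 -> Q1 -> Q2 demotion path; illustrative only.
def queue_path(burst, quanta=(8, 16)):
    """Return the list of (queue, ms used there) visited by a job of `burst` ms."""
    path, used = [], 0
    for level, q in enumerate(quanta):
        slice_ = min(q, burst - used)
        path.append((f"Q{level}", slice_))
        used += slice_
        if used == burst:                # finished within this queue's quantum
            return path
    path.append(("Q2", burst - used))    # whatever is left runs FCFS in Q2
    return path

print(queue_path(5))    # [('Q0', 5)]
print(queue_path(20))   # [('Q0', 8), ('Q1', 12)]
print(queue_path(40))   # [('Q0', 8), ('Q1', 16), ('Q2', 16)]
```

Short interactive bursts finish in Q0; only long CPU-bound jobs sink to the FCFS queue.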

5.4 Multiple-Processor Scheduling

 If multiple CPUs are available, load sharing among them
becomes possible; the scheduling problem becomes more
complex
 We concentrate in this discussion on systems in which the
processors are identical (homogeneous) in terms of their
functionality
 We can use any available processor to run any process in
the queue
 Two approaches: Asymmetric processing and symmetric
processing (see next slide)

Multiple-Processor Scheduling
 Asymmetric multiprocessing (ASMP)
 One processor handles all scheduling decisions, I/O
processing, and other system activities
 The other processors execute only user code
 Because only one processor accesses the system data
structures, the need for data sharing is reduced
 Symmetric multiprocessing (SMP)
 Each processor schedules itself
 All processes may be in a common ready queue or each
processor may have its own ready queue
 Either way, each processor examines the ready queue and
selects a process to execute
 Efficient use of the CPUs requires load balancing to keep the
workload evenly distributed
 In a Push migration approach, a specific task regularly
checks the processor loads and redistributes the waiting
processes as needed
 In a Pull migration approach, an idle processor pulls a
waiting job from the queue of a busy processor
 Virtually all modern operating systems support SMP, including
Windows XP, Solaris, Linux, and Mac OS X
Symmetric Multithreading
 Symmetric multiprocessing systems allow several threads to run
concurrently by providing multiple physical processors
 An alternative approach is to provide multiple logical rather than
physical processors
 Such a strategy is known as symmetric multithreading (SMT)
 This is also known as hyperthreading technology
 The idea behind SMT is to create multiple logical processors on
the same physical processor
 This presents a view of several logical processors to the
operating system, even on a system with a single physical
processor
 Each logical processor has its own architecture state, which
includes general-purpose and machine-state registers
 Each logical processor is responsible for its own interrupt
handling
 However, each logical processor shares the resources of its
physical processor, such as cache memory and buses
 SMT is a feature provided in the hardware, not the software
 The hardware must provide the representation of the
architecture state for each logical processor, as well as
interrupt handling (see next slide)
A typical SMT architecture

SMT = Symmetric Multi-threading

5.6 Operating System Examples

 Solaris scheduling
 Windows XP scheduling
 Linux scheduling

Solaris
Solaris Scheduling
 Solaris uses priority-based thread scheduling
 It has defined four classes of scheduling (in order of
priority)
 Real time
 System (kernel use only)
 Time sharing (the default class)
 Interactive
 Within each class are different priorities and
scheduling algorithms

Solaris Scheduling
Solaris uses priority-based thread scheduling and has four scheduling classes

Solaris Scheduling
 The default scheduling class for a process is time sharing
 The scheduling policy for time sharing dynamically alters
priorities and assigns time slices of different lengths
using a multi-level feedback queue
 By default, there is an inverse relationship between
priorities and time slices
 The higher the priority, the lower the time slice (and
vice versa)
 Interactive processes typically have a higher priority
 CPU-bound processes have a lower priority
 This scheduling policy gives good response time for
interactive processes and good throughput for CPU-bound
processes
 The interactive class uses the same scheduling policy as
the time-sharing class, but it gives windowing
applications a higher priority for better performance

Solaris Dispatch Table
 The figure below shows the dispatch table for
scheduling interactive and time-sharing threads
 In the priority column, a higher number indicates a
higher priority

Windows XP
Windows XP Scheduling
 Windows XP schedules threads using a priority-based,
preemptive scheduling algorithm
 The scheduler ensures that the highest priority thread
will always run
 The portion of the Windows kernel that handles
scheduling is called the dispatcher
 A thread selected to run by the dispatcher will run until it
is preempted by a higher-priority thread, until it
terminates, until its time quantum ends, or until it calls a
blocking system call such as I/O
 If a higher-priority real-time thread becomes ready while
a lower-priority thread is running, lower-priority thread
will be preempted
 This preemption gives a real-time thread preferential
access to the CPU when the thread needs such
access

Windows XP Scheduling
 The dispatcher uses a 32-level priority scheme to
determine the order of thread execution
 Priorities are divided into two classes
 The variable class contains threads having priorities 1
to 15
 The real-time class contains threads with priorities
ranging from 16 to 31
 There is also a thread running at priority 0 that is used
for memory management
 The dispatcher uses a queue for each scheduling priority
and traverses the set of queues from highest to lowest
until it finds a thread that is ready to run
 If no ready thread is found, the dispatcher will execute a
special thread called the idle thread

Windows XP Scheduling
 There is a relationship between the numeric priorities of
the Windows XP kernel and the Win32 API
 The Windows Win32 API identifies six priority classes to
which a process can belong as shown below
 Real-time priority class
 High priority class
 Above normal priority class
 Normal priority class
 Below normal priority class
 Low priority class
 Priorities in all classes except the real-time priority
class are variable
 This means that the priority of a thread in one of these
classes can change

Windows XP Scheduling
 Within each of the priority classes is a relative priority
as shown below
 The priority of each thread is based on the priority
class it belongs to and its relative priority within that
class

Windows XP Scheduling

 The initial priority of a thread is typically the base priority of
the process that the thread belongs to
 When a thread’s time quantum runs out, that thread is
interrupted
 If the thread is in the variable-priority class, its priority is
lowered
 However, the priority is never lowered below the base
priority
 Lowering the thread’s priority tends to limit the CPU
consumption of compute-bound threads

Windows XP Scheduling
 When a variable-priority thread is released from a wait
operation, the dispatcher boosts the priority
 The amount of boost depends on what the thread was
waiting for
 A thread that was waiting for keyboard I/O would get a
large increase
 A thread that was waiting for a disk operation would
get a moderate increase
 This strategy tends to give good response time to interactive
threads that are using the mouse and windows
 It also enables I/O-bound threads to keep the I/O devices
busy while permitting compute-bound threads to use spare
CPU cycles in the background
 This strategy is used by several time-sharing operating
systems, including UNIX
 In addition, the window with which the user is currently
interacting receives a priority boost to enhance its response
time

Windows XP Scheduling
 When a user is running an interactive program, the system
needs to provide especially good performance for that
process
 Therefore, Windows XP has a special scheduling rule for
processes in the normal priority class
 Windows XP distinguishes between the foreground process
that is currently selected on the screen and the background
processes that are not currently selected
 When a process moves into the foreground, Windows XP
increases the scheduling quantum by some factor – typically
by 3
 This increase gives the foreground process three times
longer to run before a time-sharing preemption occurs

Linux
Linux Scheduling
 Linux does not distinguish between processes and threads;
thus, we use the term task when discussing the Linux
scheduler
 The Linux scheduler is a preemptive, priority-based algorithm
with two separate priority ranges
 A real-time range from 0 to 99
 A nice value ranging from 100 to 140
 These two ranges map into a global priority scheme whereby
numerically lower values indicate higher priorities
 Unlike Solaris and Windows, Linux assigns higher-priority
tasks longer time quanta and lower-priority tasks shorter
time quanta
 The relationship between priorities and time-slice length is
shown on the next slide

Linux Scheduling

Linux Scheduling
 A runnable task is considered eligible for execution on the
CPU as long as it has time remaining in its time slice
 When a task has exhausted its time slice, it is considered
expired and is not eligible for execution again until all other
tasks have also exhausted their time quanta
 The kernel maintains a list of all runnable tasks in a runqueue
data structure
 Because of its support for SMP, each processor maintains its
own runqueue and schedules itself independently
 Each runqueue contains two priority arrays
 The active array contains all tasks with time remaining in
their time slices
 The expired array contains all expired tasks
 Each of these priority arrays contains a list of tasks indexed
according to priority
 The scheduler selects the task with the highest priority from
the active array for execution on the CPU
 When all tasks have exhausted their time slices (that is, the
active array is empty), then the two priority arrays exchange
roles
(See the next slide)
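The active/expired mechanics can be sketched with two dictionaries (a toy model for illustration, not Linux kernel code):

```python
# Toy model of the two priority arrays described above: the scheduler
# drains the active array, then the two arrays swap roles.
class Runqueue:
    def __init__(self):
        self.active = {}                 # priority -> list of task names
        self.expired = {}

    def add(self, task, prio, expired=False):
        target = self.expired if expired else self.active
        target.setdefault(prio, []).append(task)

    def pick_next(self):
        if not self.active:              # active array empty: arrays swap roles
            self.active, self.expired = self.expired, self.active
        best = min(self.active)          # numerically lowest = highest priority
        task = self.active[best].pop(0)
        if not self.active[best]:
            del self.active[best]
        return task

rq = Runqueue()
rq.add("editor", 110)
rq.add("batch", 135)
print(rq.pick_next())                    # editor (higher priority)
print(rq.pick_next())                    # batch
```

Tasks that exhaust their slices would be re-added with `expired=True`, becoming runnable again only after the swap.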
List of Tasks Indexed According to Priorities

Linux Scheduling
 Linux implements real-time POSIX scheduling
 Real-time tasks are assigned static priorities
 All other tasks have dynamic priorities that are based on their nice
values plus or minus the value 5
 The interactivity of a task determines whether the value 5 will be
added to or subtracted from the nice value
 A task’s interactivity is determined by how long it has been
sleeping while waiting for I/O
 Tasks that are more interactive typically have longer sleep times
and are adjusted by –5 as the scheduler favors interactive tasks
 Such adjustments result in higher priorities for these tasks
 Conversely, tasks with shorter sleep times are often more CPU-
bound and thus will have priorities lowered
 The recalculation of a task’s dynamic priority occurs when the task
has exhausted its time quantum and is moved to the expired array
 Thus, when the two arrays switch roles, all tasks in the new active
array have been assigned new priorities and corresponding time
slices
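The ±5 adjustment can be sketched directly (the 100–140 nice range is from the slides; the clamping detail is my assumption):

```python
# Dynamic priority sketch: interactive tasks get -5 (favored, since lower
# numbers mean higher priority); CPU-bound tasks get +5. Illustrative only.
def dynamic_priority(nice, interactive, lo=100, hi=140):
    """Clamp the nice value into [lo, hi] after the interactivity bonus."""
    bonus = -5 if interactive else +5
    return max(lo, min(hi, nice + bonus))

print(dynamic_priority(120, interactive=True))   # 115 (priority raised)
print(dynamic_priority(120, interactive=False))  # 125 (priority lowered)
```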

5.7 Algorithm Evaluation
Techniques for Algorithm Evaluation

 Deterministic modeling – takes a particular
predetermined workload and defines the
performance of each algorithm for that workload
 Queueing models
 Simulation
 Implementation

Deterministic Modeling:
Using FCFS scheduling

Process  Burst Time
P1       10
P2       29
P3       3
P4       7
P5       12

Deterministic Modeling:
Using nonpreemptive SJF scheduling

Process  Burst Time
P1       10
P2       29
P3       3
P4       7
P5       12

Deterministic Modeling:
Using round robin scheduling
(Time quantum = 10ms)

Process  Burst Time
P1       10
P2       29
P3       3
P4       7
P5       12
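The three slides above originally showed Gantt charts for this workload. The average waiting time each chart implies can be computed directly (a sketch with names of my own; the RR quantum of 10 ms is from the heading, and all processes are assumed to arrive at time 0):

```python
from collections import deque

# Deterministic modeling sketch: average waiting time of one fixed
# workload under FCFS, non-preemptive SJF, and RR. Illustrative only.
BURSTS = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait_fcfs(bursts):
    time, total = 0, 0
    for burst in bursts.values():        # run in submission order
        total += time
        time += burst
    return total / len(bursts)

def avg_wait_sjf(bursts):
    # With simultaneous arrival, non-preemptive SJF is FCFS on the
    # burst-sorted order.
    return avg_wait_fcfs(dict(sorted(bursts.items(), key=lambda kv: kv[1])))

def avg_wait_rr(bursts, quantum):
    queue, time, finish = deque(bursts.items()), 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        time += run
        if left > run:
            queue.append((name, left - run))
        else:
            finish[name] = time
    return sum(finish[n] - bursts[n] for n in bursts) / len(bursts)

print(avg_wait_fcfs(BURSTS))     # 28.0
print(avg_wait_sjf(BURSTS))      # 13.0
print(avg_wait_rr(BURSTS, 10))   # 23.0
```

For this particular workload, SJF gives less than half the average wait of FCFS, which is exactly the kind of concrete comparison deterministic modeling is meant to produce.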

Queueing Models
 Queueing network analysis
 The computer system is described as a network of servers
 Each server has a queue of waiting processes
 The CPU is a server with its ready queue, as is the I/O system with its device
queues
 Knowing the arrival rates and service rates, we can compute utilization,
average queue length, average wait time, etc.
 If the system is in a steady state, then the number of processes leaving the
queue must be equal to the number of processes that arrive
 Little’s formula (n = λ x W)
 # processes in the queue = #processes/time * average waiting time
 Formula is valid for any scheduling algorithm or arrival distribution
 It can be used to compute one of the variables given the other two
 Example
 7 processes arrive on the average of every second
 There are normally 14 processes in the queue
 Therefore, the average waiting time per process is 2 seconds
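The claim that the formula yields any one variable from the other two can be exercised directly (a trivial helper of my own, for illustration):

```python
# Little's formula n = lam * W: supply any two of queue length n,
# arrival rate lam, and average wait w; get the third back.
def littles(n=None, lam=None, w=None):
    if n is None:
        return lam * w
    if lam is None:
        return n / w
    return n / lam                        # solve for the average wait W

# 7 processes arrive per second and 14 sit in the queue on average:
print(littles(n=14, lam=7))               # average waiting time = 2.0 seconds
```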
 Queueing models are often only approximations of real systems
 The classes of algorithms and distributions that can be handled are fairly
limited
 The mathematics of complicated algorithms and distributions can be difficult
to work with
 Arrival and service distributions are often defined in unrealistic, but
mathematically tractable ways

Evaluation of CPU Schedulers by Simulation

Implementation
 The only completely accurate way to evaluate a scheduling algorithm is to code it
up, put it in the operating system and see how it works
 The algorithm is put to the test in a real system under real operating
conditions
 The major difficulty with this approach is high cost
 The algorithm has to be coded and the operating system has to be modified to
support it
 The users must tolerate a constantly changing operating system that greatly
affects job completion time
 Another difficulty is that the environment in which the algorithm is used will
change
 New programs will be written and new kinds of problems will be handled
 The environment will change as a result of performance of the scheduler
 If short processes are given priority, then users may break larger
processes into sets of smaller processes
 If interactive processes are given priority over non-interactive processes,
then users may switch to interactive use

 The most flexible scheduling algorithms are those that can be altered by the
system managers or by the users so that they can be tuned for a specific
application or set of applications
 For example, a workstation that performs high-end graphical applications may
have scheduling needs different from those of a web server or file server
 Another approach is to use APIs (POSIX, WinAPI, Java) that modify the priority of
a process or thread
 The downfall of this approach is that performance tuning a specific system or
application most often does not result in improved performance in more
general situations

End of Chapter 5
