
Module 4 - Process scheduling

Reading: Chapter 6 (Silberschatz)

Ch. 6 1
Module overview
Basic concepts
Scheduling criteria
Scheduling algorithms
Multiprocessor scheduling
Algorithm evaluation

2 Ch. 6
State transition diagram of a process

3 Ch. 6
Process queues for scheduling

[Figure: ready queue and other resource queues]

We will assume that the first process in a queue is the one that uses the resource: here, proc7 executes.
4 Ch. 6
Basic concepts
Multiprogramming is designed to achieve maximum use of resources, especially the CPU
The CPU scheduler is the part of the OS that decides which process in the ready queue gets the CPU when it becomes free
it should aim for optimal use of the CPU
The CPU is the most valuable resource in a computer, which is why we focus on it
However, the principles that we will see also apply to the scheduling of other resources (I/O units, etc.)
We must understand process behavior to make the right scheduling decisions

5 Ch. 6
The cycles of a process

CPU and I/O cycles (bursts): the execution of a process consists of execution sequences on the CPU and I/O waits

6 Ch. 6
When to invoke the scheduler
The scheduler must make its decision every time the running process is interrupted, i.e. when
a process arrives as new, or a process terminates
a running process blocks (switches to waiting)
a process switches from running to ready
a process switches from waiting to ready
In conclusion, almost any event in the system interrupts the CPU and invokes the scheduler, which must decide which process or thread gets the CPU next
Preemption: the last two cases are preemptive, since we take the CPU away from a process that holds it and could continue to use it
In the first two cases, there is no preemption
Preemption introduces several problems that must be resolved

7 Ch. 6
Dispatcher

The code of the OS that gives control of the CPU to the process chosen by the scheduler. It must be concerned with:
switching the context
switching to user mode
restarting the chosen process
Dispatcher latency
the time required to perform the duties of the dispatcher
it is often neglected; it must be assumed to be small compared to the length of a CPU burst

8 Ch. 6
Scheduling criteria
There will normally be several processes in the ready queue
When the CPU becomes available, which one should be chosen?
The general idea is to make the choice in the interest of efficient use of the machine
But efficiency can be judged according to different criteria ...

9 Ch. 6
Scheduling criteria
Main reason for scheduling:
Percentage of use (CPU utilization): keep the CPUs and I/O modules busy
Time-sharing systems?
Response time (for interactive systems): the time between a request and the response
Servers?
Throughput: the number of processes that complete per unit of time
Batch processing systems?
Turnaround time: the time taken by a process from its arrival to its end
Loaded systems?
Waiting time: time spent waiting in the ready queue (the sum of all time spent there)

10 Ch. 6
Scheduling criteria: maximize / minimize
To maximize
CPU use: percentage of use
Throughput: number of processes that complete in the
unit of time
To minimize
Response time (for interactive systems): the time
between a request and the response
Turnaround time: completion time minus arrival time
Waiting time: waiting in ready queue

11 Ch. 6
Scheduling Criteria Example
[Figure: timeline showing the arrivals of P1–P4 and a Gantt chart of the resulting schedule P1, P2, P3, P4, P1, P2 over 24 time units]
CPU utilization:
100%
Throughput:
4/24
Turnaround time (P3, P2):
P3: 5
P2: 20
Waiting time (P2):
P2: 13
Response time (P3, P2):
P3: 3
P2: 1
12
Now let's take a look at several scheduling
methods and see how they behave against
these criteria.

we will study specific cases


the study of the general case would require recourse to probabilistic or
simulation techniques

13 Ch. 6
First come, first served (FCFS)

• Note: no preemption

Example: Process Cycle time


P1 24
P2 3
P3 3
If the processes arrive at time 0 in the order: P1, P2, P3
The Gantt chart is:

P1 P2 P3

0 24 27 30

Waiting time for P1 = 0; P2 = 24; P3 = 27


Average waiting time: (0 + 24 + 27) / 3 = 17

14 Ch. 6
First come, first served
CPU utilisation = 100%
Throughput = 3/30 = 0.1
3 processes completed in 30 units of time
Average turnaround time: (24 + 27 + 30) / 3 = 27

P1 P2 P3

0 24 27 30

15 Ch. 6
FCFS scheduling (continued)
If the same processes arrive at 0 but in order
P2 , P3 , P1 .
The Gantt chart is:

P2 P3 P1

0 3 6 30

Waiting time for P1 = 6; P2 = 0; P3 = 3


Average waiting time: (6 + 0 + 3) / 3 = 3
Much better!
So for this technique, the average wait time can
vary greatly
Exercise: also calculate the average turnaround time, throughput, etc.

16 Ch. 6
Take into account the arrival time!
If the processes arrive at different times, subtract each process's arrival time when computing its waiting and turnaround times
Exercise: repeat the calculations (a helper sketch follows below) if:
P2 arrives at time 0
P1 arrives at time 2
P3 arrives at time 5
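A minimal sketch (not from the slides; the function name fcfs and the tuple layout are ours) of how FCFS waiting and turnaround times can be computed once arrival times are taken into account. Running it on the exercise data above gives the answers to check against.

```python
# Hypothetical helper for the exercise above: FCFS with arrival times.
# Each process is described as (name, arrival_time, cpu_burst); ties are broken
# by arrival order, and the whole burst runs without preemption.

def fcfs(processes):
    """Return {name: (waiting_time, turnaround_time)} under FCFS."""
    clock = 0
    results = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(clock, arrival)          # the CPU may sit idle until the process arrives
        finish = start + burst               # no preemption: run the whole burst
        results[name] = (start - arrival,    # time spent waiting in the ready queue
                         finish - arrival)   # turnaround time
        clock = finish
    return results

# Exercise data: P2 arrives at 0, P1 at 2, P3 at 5 (bursts 3, 24 and 3 as before).
print(fcfs([("P2", 0, 3), ("P1", 2, 24), ("P3", 5, 3)]))
```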

17 Ch. 6
Convoy Effect with the FCFS
Consider a single CPU-bound process and many I/O-bound processes (a fairly common situation).
The I/O-bound processes wait for the CPU: the I/O devices are underutilized (*).
The CPU-bound process requests I/O: the other processes rapidly complete their CPU bursts and go back to I/O: the CPU is underutilized.
The CPU-bound process finishes its I/O, and so do the other processes: back to (*).
Solutions?

18
FCFS Scheduling Discussion
Is it simple and easy to program?
Yes!
Can we find other benefits?
It costs little time to make a scheduling decision
Does it provide low waiting time?
Not at all, can be quite awful
Does it provide good CPU utilization?
No – the convoy effect
OK, forget about it.

Now, let’s try to reduce the average waiting time


Idea: It is better to execute ready processes with
short CPU bursts
19
Shorter First = Shortest Job First (SJF)

The shortest process starts first

Optimal in principle in terms of average waiting time
(see the last example)
But how do we know the length of the next CPU burst in advance?

20 Ch. 6
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU
burst. Use these lengths to schedule the process with the
shortest time
Two schemes:
nonpreemptive – once CPU given to the process it cannot be
preempted until completes its CPU burst
preemptive – if a new process arrives with CPU burst length
less than remaining time of current executing process,
preempt. This scheme is also known as the
Shortest-Remaining-Time-First (SRTF)
Just call it preemptive SJF
Preemptive SJF is optimal – gives minimum average waiting
time for a given set of processes
Moving a process with short CPU burst in front of a process
with longer CPU burst reduces average waiting time

21
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
SJF (non-preemptive)

P1 P3 P2 P4

0 7 8 12 16
P2 arr. (t=2)  P3 arr. (t=4)  P4 arr. (t=5)
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
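A small sketch of non-preemptive SJF (an assumed implementation, not the book's): whenever the CPU becomes free, the already-arrived process with the shortest next burst is selected and runs to the end of its burst. The function name and data layout are ours.

```python
# Assumed sketch of non-preemptive SJF; each entry is (name, arrival, burst).

def sjf_nonpreemptive(processes):
    """Return {name: waiting_time} when the shortest arrived job runs to completion."""
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival time
    clock, waiting = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                        # CPU idle: jump to the next arrival
            clock = pending[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest next burst
        waiting[name] = clock - arrival
        clock += burst
        pending.remove((name, arrival, burst))
    return waiting

# Slide data: P1(0,7), P2(2,4), P3(4,1), P4(5,4) -> average waiting time 4
w = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / len(w))
```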

22
Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16
P2 arr. P3 arr. P4 arr.
Average waiting time = (9 + 1 + 0 +2)/4 = 3
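A time-step sketch of preemptive SJF (shortest-remaining-time-first), again an assumed implementation: at every time unit the scheduler picks the arrived process with the least remaining work, so a newly arrived short job preempts a longer running one.

```python
# Assumed time-step sketch of preemptive SJF (SRTF).

def srtf(processes):
    """processes: list of (name, arrival, burst). Returns {name: waiting_time}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1                       # CPU idle for this time unit
            continue
        current = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        remaining[current] -= 1              # run it for one time unit
        clock += 1
        if remaining[current] == 0:
            finish[current] = clock
            del remaining[current]
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - burst for n, _, burst in processes}

# Slide data: the average waiting time should come out to 3
w = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / len(w))
```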

23
Small technical detail with SJF
How do we know the length of the next CPU burst?
If you know that, you probably know also tomorrow’s stock
market prices…
You will be rich and won’t need to waste time in CSI3131 class
So, we can only estimate it

Any idea how to estimate the length of the next CPU burst?
Probably similar to the previous bursts of this process
Makes sense to give more weight to the more recent bursts, not
just straightforward averaging
Use exponential averaging

24
Exponential averaging with SJF

1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1 = relative weight of recent vs. past history
4. Define:
τ_{n+1} = α·t_n + (1 − α)·τ_n
Expanding the recurrence:
τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + (1 − α)²·α·t_{n−2} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^n·τ_1

25
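The recurrence above translates directly into code; this sketch uses an arbitrary α = 0.5, an arbitrary initial guess τ_1 = 10 and made-up burst lengths, purely for illustration.

```python
# Sketch of exponential averaging for predicting the next CPU burst length.

def next_prediction(tau_n, t_n, alpha=0.5):
    """tau_n: previous prediction; t_n: measured length of the burst that just ended."""
    return alpha * t_n + (1 - alpha) * tau_n

tau = 10                                   # arbitrary initial guess (tau_1)
for burst in [6, 4, 6, 4, 13, 13, 13]:     # illustrative observed burst lengths
    tau = next_prediction(tau, burst)
    print(round(tau, 2))                   # prediction for the following burst
```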
The shortest first SJF: review
Difficulty: the length of the next burst must be estimated in advance
Long processes will suffer from starvation when there is a constant supply of short processes
Preemption is required for time-shared environments
A long process could otherwise monopolize the CPU if it is the first to enter the system and does not do I/O
There is an implicit assignment of priorities: shorter jobs are preferred

26 Ch. 6
SJF Discussion

Does it ensure low average waiting time?


Yes, it was designed that way
As long as our burst-length predictions more-or-less work
Does it provide low response time?
Not necessarily, if there is a steady stream of short
CPU bursts, the longer bursts will not be scheduled
This is called starvation
A process is blocked forever, always overtaken by other
processes (well, or at least while the system is busy)
Let’s see Priority Scheduling.

27
Priority Scheduling
A priority number (usually integer) is associated with each process
On some systems (Windows), the higher number has higher priority
On others (Unix), the smaller number has higher priority
The CPU is allocated to the process with the highest priority
Can be preemptive or non-preemptive
but usually you want to preempt low-priority process when a high
priority process becomes ready
Priority can be explicit
Assigned by the sysadmin/programmer, often for political reasons
Professor jobs are of higher priority
But also for technical reasons
This device has to be serviced really fast, otherwise the vital data will be lost
(real-time processing)
Priority can also be implicit (computed by the OS)
SJF can be seen as priority scheduling where priority is the predicted
next CPU burst time

28
Priority Scheduling Discussion
Good properties
Professor jobs will be scheduled before student jobs
Allows support of real-time processing
Bad properties
Professor jobs will be scheduled before student jobs
OK, give me something else
starvation – low priority processes may never execute
How to resolve the starvation problem?
aging – keep increasing the priority of a process that has not been scheduled for a long time (a sketch follows this list)
What to do with the processes of the same priority level?
FCFS
Might as well add preemption = Round Robin
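As a sketch of the aging idea referenced above, the fragment below uses Unix-style priorities where a smaller number means higher priority; the threshold, the boost amount and the field names are invented for illustration.

```python
# Assumed sketch of priority selection with aging (smaller number = higher priority).

def pick_next(ready, aging_threshold=10, boost=1):
    """Age processes that have waited too long, then return the highest-priority name."""
    for proc in ready:
        if proc["waited"] >= aging_threshold:
            proc["priority"] = max(0, proc["priority"] - boost)  # raise its priority
            proc["waited"] = 0
    return min(ready, key=lambda p: p["priority"])["name"]

ready = [
    {"name": "professor_job", "priority": 1, "waited": 0},
    {"name": "student_job",   "priority": 5, "waited": 12},  # starved: will be aged
]
print(pick_next(ready))  # the professor job still wins here, but aging has lifted the student job from 5 to 4
```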

29
Round-Robin (RR)
Most used in practice

Each process is allocated a quantum of time (e.g. 10-100 milliseconds) to run
(book terminology: time slice)
If it runs for an entire quantum without other interruptions, it is interrupted by the timer and the CPU is given to another process
The interrupted process becomes ready again (at the end of the ready queue)

Preemptive method
The ready queue is treated as a circle (in RR)
[Figure: ready queue drawn as a circle of processes P[0] through P[7]]
30 Ch. 6
Round-Robin performance
If there are n processes in the ready queue and the quantum is q, then each process receives 1/n of the CPU time in chunks of at most q time units
If q is large → behaves like FCFS
If q is small ... we will see

31 Ch. 6
Example: RR Quantum = 20

Process Cycle
P1 53
P2 17
P3 68
P4 24

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

Typically, RR gives a higher average turnaround time than SJF, but a better response time
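A quantum-by-quantum sketch of round robin, assuming (as in the example above) that all processes are ready at time 0; the function name and data layout are ours. It reproduces the completion times of the Gantt chart above.

```python
from collections import deque

# Assumed sketch of round-robin scheduling with a fixed quantum.

def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns {name: completion_time}."""
    queue = deque(processes)
    clock, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run one quantum, or less if the burst ends
        clock += run
        remaining -= run
        if remaining == 0:
            completion[name] = clock
        else:
            queue.append((name, remaining))  # back to the end of the ready queue
    return completion

# Slide data, quantum = 20: P2 finishes at 37, P4 at 121, P1 at 134, P3 at 162
print(round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20))
```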
32 Ch. 6
A small quantum increases context switches (OS
time)

33 Ch. 6
Example to see the importance of a good choice
of quantum (to be developed as an exercise)
Three processes: A, B, C, each with a CPU burst of 10
Try with:
q = 1
q = 10
In the second case, round robin behaves like FCFS and the average turnaround time is better

34 Ch. 6
Algorithms we have seen so far
First Come First Serve
simple, little overhead, but poor properties
Shortest Job First
needs to know CPU burst times
exponential averaging of the past
Priority Scheduling
This is actually a class of algorithms
Round Robin
FCFS with preemption

35
Scheduling exercises
Consider three processes P1, P2, P3
Burst times for P1: 14,12,17
Burst times for P2: 2,2,2,3,2,2,2,3,2,2,2,3,2,2,2,3
Burst times for P3: 6,3,8,2,1,3,4,1,2,9,7
All three arrive at time 0, in order P1, P2, P3
Each CPU burst is followed by an I/O operation
taking 6 time units
Let’s simulate the scheduling algorithms
FCFS
Round Robin (quantum=5)
Non-preemptive SJF or Preemptive SJF (your choice)
Round robin (quantum=5) with Priority scheduling,
priorities are P2=P3>P1
36
Multilevel Queue
Idea: Partition the ready queue into several
queues, and handle each queue separately.
Example:
foreground queue (interactive processes)
background (batch processes)
Each queue might have its own scheduling
algorithm
foreground – RR (for low response time)
background – FCFS (for simplicity and low context-
switch overhead)

37
Multilevel Queue
How to schedule from among the queues?
Fixed priority scheduling
i.e. the processes from foreground queue get the
CPU, the background processes get the CPU only if
the foreground queue is empty
Possibility of starvation.
Time slice – each queue gets a certain amount of
CPU time which it can schedule amongst its
processes
e.g., 80% to the foreground queue, 20% to the background queue (see the sketch below)
not necessarily optimal
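A toy sketch of the time-slice idea just mentioned: out of every 10 quanta, 8 go to the foreground queue (round robin among its processes) and 2 to the background queue (FCFS). The queue contents and the 8/2 pattern are illustrative assumptions, not a real OS policy.

```python
import itertools

# Hypothetical queue contents for illustration.
foreground = ["interactive_1", "interactive_2"]   # scheduled round-robin among themselves
background = ["batch_1"]                          # scheduled FCFS

pattern = ["fg"] * 8 + ["bg"] * 2                 # roughly an 80% / 20% split of CPU quanta
fg_rr = itertools.cycle(foreground)

for slot, which in zip(range(20), itertools.cycle(pattern)):
    if which == "fg" and foreground:
        print(slot, "->", next(fg_rr))            # round robin inside the foreground queue
    elif background:
        print(slot, "->", background[0])          # FCFS: always the head of the background queue
```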

38
Multilevel Queue Scheduling

39
Multilevel Feedback Queue
Idea:
Use multilevel queues
A process can move up and down queue
hierarchy
Why would a process move to a lower priority
queue?
It is using too much CPU time
Why would a process move to a higher priority
queue?
Has been starved of CPU for long time
A way to implement aging
40
Multilevel Feedback Queue
Multilevel-feedback-queue scheduler defined by the
following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will
enter when that process needs service
This algorithm is the most general one
It can be adapted to specific systems
But it is also the most complex algorithm

41
Example of Multilevel Feedback Queue
Scheduler selects processes in Q0 first (highest priority)
If Q0 is empty, the processes from Q1 are selected.
If both Q0 and Q1 are empty, processes from Q2 are selected
If a process arrives in a higher priority queue when another
from a lower priority queue is running, the running process will
be preempted, to allow the arriving process to run.

[Figure: three queues, Q0 (highest priority), Q1, Q2 (lowest priority)]
▪ When a process exhausts its quantum in either Q0 or Q1, it is preempted and moved to the lower-priority queue.

42
Example of Multilevel Feedback Queue

Scheduling example
A new job enters queue Q0 which is served FCFS. When it
gains CPU, job receives 8 milliseconds.
If it does not finish in 8 milliseconds, job is preempted and
moved to queue Q1 and served again FCFS to receive
another 16 additional milliseconds.
If it still does not complete, it is preempted and moved to
queue Q2.
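A simplified sketch of the three-level scheme just described (quanta of 8 and 16 milliseconds, then FCFS at the lowest level). Preemption of a running job by a new arrival in a higher queue is not modelled here, and the names and the step-by-step driver are ours.

```python
from collections import deque

# Assumed sketch: Q0 (quantum 8), Q1 (quantum 16), Q2 (FCFS, run to completion).
queues = [deque(), deque(), deque()]
quanta = [8, 16, None]                       # None = no quantum (FCFS)

def admit(name, burst):
    queues[0].append((name, burst))          # new jobs always enter Q0

def run_one_step():
    """Serve the highest-priority non-empty queue; demote a job that used up its quantum."""
    for level, q in enumerate(queues):
        if not q:
            continue
        name, remaining = q.popleft()
        slice_ = remaining if quanta[level] is None else min(quanta[level], remaining)
        remaining -= slice_
        if remaining > 0:                    # did not finish: move one level down
            queues[min(level + 1, 2)].append((name, remaining))
        return name, slice_
    return None

admit("short_job", 5)                        # finishes within its first quantum in Q0
admit("long_job", 40)                        # gets demoted to Q1 and then to Q2
while (step := run_one_step()):
    print(step)
```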

43
Multilevel Feedback Queue Discussion
Exact properties depend on the parameters
Flexible enough to accommodate most requirements
The convoy example:
One process with long CPU burst time
Several I/O bound processes with short CPU burst time
Even if all the processes start at the same level, the
CPU-intensive process will be soon demoted to a low
priority queue
The I/O bound processes will remain at high priority and
will be swiftly serviced, keeping the I/O devices busy

44
In practice...
The methods we have seen are all used in
practice (except pure SJF which is
impossible)
Sophisticated OSs provide the system manager with a library of methods, from which the manager can choose and combine as needed after observing the behavior of the system.

45 Ch. 6
Also…
Our study of scheduling methods is theoretical; it does not consider in detail all the problems that arise in CPU scheduling.
E.g., CPU schedulers cannot give the CPU to a process for all
Because in practice, the CPU will often be interrupted by
some external event before the end of its cycle.

Also, this study does not consider the execution times of the
scheduler at all.

46 Ch. 6
Summary of scheduling algorithms
First come, first served (FCFS)
simple, low system time (overhead), low quality
Shortest Job First (SJF)
Must know processing times (not practical)
Must predict using the exponential mean of the past
Scheduling with priority
A class of algorithms
Round-Robin
FCFS with preemption
Multilevel Queues
Possible to use different algorithms with each queue
Multilevel Feedback Queues
Combines several techniques

47 Ch. 6
Overview of advanced scheduling topics
Scheduling with several identical CPUs
Evaluation model

48 Ch. 6
Multiple-Processor Scheduling
Good news:
With multiple CPUs, we can share the load
We can also share the OS overhead

Bad news:
We are expected to efficiently share the
load
Managing the OS structures gets more
complicated
The scheduling gets more complex
49
Multiple-Processor Scheduling - Approaches
We assume homogeneous processors
i.e. all processors have the same functionality
Still, the scheduler (and possibly the whole OS) might
be run on a single CPU
Asymmetric multiprocessing – only one CPU
accesses the system data structures and makes
scheduling decisions
alleviates the need for data sharing
but might become bottleneck
Symmetric multiprocessing (SMP) – each CPU is
self-scheduling
Either from a common ready queue, or from a private ready
queue
Care must be taken when CPUs access common data
structures
Virtually all modern operating systems support SMP
including Windows XP, Solaris, Linux, Mac OS X.
50
SMP Scheduling Issues
Processor Affinity
When a process runs on a physical CPU, the cache memory is populated with the contents of that process
If the process is moved to another CPU, the benefits of caching are lost.
SMP systems try to keep each process running on the same physical CPU – known as processor affinity
Load Balancing
Required when each CPU has its own Ready Queue
Push migration: a task is run periodically to redistribute
load among CPUs (their ready queue)
Pull migration: an idle CPU (i.e. one with an empty ready queue) will pull processes from other CPUs (sketched below)
Linux supports both techniques
Can counteract the benefits of processor affinity, since it
moves processes from one CPU to another
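A toy sketch of pull migration between per-CPU ready queues, as referenced above; the CPU names, queue contents and the "steal from the busiest queue" rule are illustrative assumptions.

```python
from collections import deque

# Hypothetical per-CPU ready queues.
run_queues = {
    "cpu0": deque(["p1", "p2", "p3", "p4"]),
    "cpu1": deque(),                          # idle CPU with an empty ready queue
}

def pull_if_idle(cpu):
    """If this CPU's queue is empty, steal a process from the busiest other queue."""
    if run_queues[cpu]:
        return None                           # not idle: nothing to do
    busiest = max(run_queues, key=lambda c: len(run_queues[c]))
    if run_queues[busiest]:
        stolen = run_queues[busiest].pop()    # take from the tail of the busiest queue
        run_queues[cpu].append(stolen)        # note: this works against processor affinity
        return stolen
    return None

print(pull_if_idle("cpu1"))                   # cpu1 pulls a process from cpu0
```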

51
Evaluation methods and comparison of
algorithms

Deterministic modeling

Queuing models

Simulation

52 Ch. 6
Deterministic modeling
Essentially, what we have already done by
studying the behavior of several
algorithms on several examples

53 Ch. 6
Use of queuing theory
Analytical method based on probability
theory
Simplified model: in particular, the OS
times are ignored
However, it makes estimates possible

54 Ch. 6
Queuing theory: Little's formula
An important result:
n = λ · W
n: average length of the queue
λ: process arrival rate into the queue
W: average waiting time in the queue
E.g., if processes arrive at λ = 3 per second and each stays in the queue for W = 2 seconds, the average queue length n will be ???
Exercise: solve also for λ and for W
Observe that for n to remain stable, λ · W must remain stable
If n must remain 6 and λ goes up to 4, what must W be?
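For reference, a worked instance with the numbers quoted above:
n = λ · W = 3 processes/second × 2 seconds = 6 processes
and, for the last question, if n must stay at 6 while λ rises to 4 processes/second, then W = n / λ = 6 / 4 = 1.5 seconds.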

55 Ch. 6
Simulation
Build a (simplified...) model of the
sequence of events in the OS
Assign a duration of time to each event
Assume a certain sequence of external
events (e.g. arrival of process, etc.)
Run the model for this sequence to get
stats

56 Ch. 6
Important points in this chapter
Ready Queue for CPU
Scheduling criteria
Scheduling algorithms
FCFS: simple, not optimal
SJF: optimal, difficult implementation
exponential averaging
Priorities
Round-Robin: selection of the quantum
Algorithm evaluation, queuing theory,
Little's formula
57 Ch. 6
Thank You!

