
Chapter 6:

Scheduling Algorithms
Process Scheduling
 Process execution consists of a cycle of CPU bursts and I/O bursts
 CPU burst distribution is of main concern

 Goal is to maximize CPU utilization
 Quickly switch processes onto the CPU for time sharing
 Select among available processes for next execution on the CPU
Process Scheduling Diagram
 OS maintains scheduling queues of processes
 Job queue – set of all processes in the system
 Ready queue – set of all processes residing in main memory, ready and waiting to execute
 Device queues – set of processes waiting for an I/O device
 Processes migrate among the various queues
Ready Queue And Various I/O Device Queues
Schedulers
 Short-term scheduler (or CPU scheduler)
 Selects which process should be executed next and allocates the CPU
 Sometimes the only scheduler in a system
 Invoked frequently (milliseconds), so it must be fast
 Dispatcher module gives control of the CPU to the process selected by the short-term scheduler
 Dispatch latency is the time it takes for the dispatcher to stop one process and start another running

 Long-term scheduler (or job scheduler)
 Selects which processes should be brought into the ready queue
 Invoked infrequently (seconds, minutes), so it may be slow
 Controls the degree of multiprogramming
 Strives for a good process mix (I/O-bound and CPU-bound processes)
Physical Memory & Multiprogramming
 Memory is a scarce resource
 Want to run many programs
 Programs need memory to run

 What happens when M(a) + M(b) + M(c) > physical memory?


Medium Term Scheduler
 A medium-term scheduler can be added if the degree of multiprogramming needs to decrease
 Remove a process from memory, store it on disk, and bring it back in from disk to continue execution: known as swapping
Histogram of CPU-burst Times
Types of Scheduling
 Types of process scheduling

 Preemptive
The running process can be taken off the processor when its time slice expires

 Non-preemptive
The running process cannot be taken off the processor until it completes its execution (or blocks)
Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible

 Throughput – # of processes that complete their execution per unit time

 Turnaround time – amount of time to execute a particular process

 Waiting time – amount of time a process has been waiting in the ready queue

 Response time – amount of time it takes from when a request was submitted
until the first response is produced, not output (for time-sharing environment)
Scheduling Algorithm Optimization Criteria
 Maximize CPU utilization
 Maximize throughput
 Minimize turnaround time
 Minimize waiting time
 Minimize response time
Process Scheduling
 Multiprogramming needs CPU scheduling
 Without any hardware support, what can the OS do to a running
process?

 System calls that trigger the scheduler


 Block – wait on some event/resource
 Network packet arrival (e.g. recv( ))
 Keyboard, mouse input (e.g. getchar( ))
 Disk activity completion (e.g. read( ))
 Yield – give up running for now
Non-Preemptive Scheduler
 A non-preemptive scheduler is invoked only by explicit block/yield calls or by process termination
 The only method when there is no timer!

 The simplest form


 Scheduler:
 save current process state (into PCB)
 choose next process to run
 dispatch (load PCB and run)

 Used in Windows 3.1, Mac OS


Timesharing Systems
 Timesharing systems support interactive use
 Each user feels as if he/she has the entire machine
 Optimize response time
 Based on time-slicing

 Without hardware support, can we do anything other than non-preemptive scheduling?

 Timer interrupt
 Generated by the hardware
 Setting requires privilege
 Delivered to the OS
Preemptive Scheduler
 Using interrupts for scheduling

 Basic Idea
 Before moving process to running, OS sets timer
 If process yields/blocks, clear timer, go to scheduler
 If timer expires, go to scheduler
 Recall Context Switching!

 Considerations: Timer granularity


 Finer timers = more responsive
 Coarser timers = more efficient
OS as a Resource Manager:
Allocation vs. Scheduling
 Allocation (spatial)
 Who gets what? Given a set of requests for resources, which
processes should be given which resources for best utilization?

 Scheduling
 How long can they keep it? When more resources are requested than
can be granted, in what order can they be serviced?
[Lecture 1] Separating Policy from Mechanism

A fundamental design principle in Computer Science

 Mechanism – tool/implementation to achieve some effect


 Policy – decisions on what effect should be achieved
 Examples:
 All users treated equally
 All program instances treated equally
 Preferred users treated better

Separation leads to flexibility


Preemptive CPU Scheduling
 What is in it?
 Mechanism + Policy
 Mechanisms fairly simple
 Policy choices harder
Challenges in Policy
 Flexibility – variability in job types
 Long vs. short
 Interactive vs. non-interactive
 I/O bound vs. compute-bound

 Issues?
 Short jobs shouldn’t suffer
 Users shouldn’t be annoyed
Challenges in Policy (cont.)
 Fairness
 All users should get access to CPU
 Amount of CPU should be roughly even?

 Issues?
 Short-term vs. long-term fairness
Goals and Assumptions
 Goals (Performance metric)
 Minimize turnaround time: average time to complete a job
 Maximize throughput: operations (jobs) per second
 Minimize overhead of context switches: large quanta
 Efficient utilization (CPU, memory, disk etc.)
 Short response time: type on a keyboard
 Small quanta
 Fairness (fair, no starvation, no deadlock)
 Goals often conflict
 Fairness vs. average turnaround time?

 Assumptions
 One process/program per user
 Programs are independent
Scheduling Policies
 Is there an optimal scheduling policy?
 Even if we narrow down to one goal?

 But we don’t know the future


 Offline vs. online
Scheduling Algorithms
 Non-preemptive
 First Come First Served
 Shortest Job First
 Priority Scheduling
 Preemptive
 Round Robin
 Shortest Remaining Time First
 Priority Scheduling
Example: FCFS Scheduling Policy

Process Burst Time


P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1, P2, P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
Example: FCFS Scheduling Policy
Suppose that the processes arrive in the order:
P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect - short process behind long process
o Consider one CPU-bound and many I/O-bound processes
Class Exercise : FCFS with Arrival Time

Process Execution Time Arrival Time


P1 6 2
P2 3 5
P3 8 1
P4 3 0
FCFS advantages and disadvantages
 Advantages
 Simple and easy to program

 Disadvantages
 Non-preemptive scheduling algorithm, so after the CPU has been allocated to a process, the process never releases the CPU until it finishes executing or blocks.
 The Average Waiting Time is high.
 Short processes at the back wait for the long process at the front.
 Not an ideal technique for time-sharing systems.
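
As an illustration (not from the slides), here is a minimal Python sketch of FCFS waiting-time computation; assuming all processes arrive at time 0, it reproduces the averages of 17 and 3 from the two examples above.

```python
# Minimal FCFS waiting-time calculation (all processes arrive at time 0).
def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for burst in bursts:          # processes served strictly in arrival order
        waits.append(clock)       # time spent waiting before the first run
        clock += burst            # CPU is held until the burst finishes
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3) -> waits [0, 24, 27], average 17
# Order P2, P3, P1 (bursts 3, 3, 24) -> waits [0, 3, 6],   average 3
for order in ([24, 3, 3], [3, 3, 24]):
    w = fcfs_waiting_times(order)
    print(order, w, sum(w) / len(w))
```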
Shortest-Job-First (SJF) Scheduling Policy
 Associate with each process the length of its next CPU burst
 Use these lengths to schedule the process with the shortest time
 Non-preemptive

 Advantage
 SJF is optimal – gives minimum average waiting time and
turnaround time for a given set of processes

 Disadvantage
 Can cause starvation if there is a steady supply of shorter processes
 Difficult to know the future (length of the next CPU request)
Example: SJF Scheduling Policy
Process Burst Time
P1 6
P2 8
P3 7
P4 3

 SJF scheduling chart

P4 P1 P3 P2

0 3 9 16 24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
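
A minimal sketch (added for illustration) of non-preemptive SJF with all processes arriving at time 0; it reproduces the average waiting time of 7 for the example above.

```python
# Minimal non-preemptive SJF (all processes arrive at time 0):
# always run the shortest remaining job next.
def sjf_waiting_times(bursts):
    waits, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock
        clock += burst
    return waits

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
w = sjf_waiting_times(bursts)
print(w, sum(w.values()) / len(w))   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}, average 7.0
```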


Class Exercise : SJF Scheduling Policy

Process Execution Time


P1 7
P2 8
P3 4
P4 3
Class Exercise : SJF Scheduling Policy

Process Execution Time


P1 7
P2 8
P3 4
P4 3

P4 P3 P1 P2

0 3 7 14 22

Average waiting time = (7 + 14 + 3 + 0) / 4 = 6


SJF with Arrival time
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 7.0 3
SJF with Arrival time
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 7.0 3

P1 P3 P4 P2

0 6 13 16 24

Average waiting time = (0 + 14 + 2 + 6) / 4 = 5.5


Determining Length of Next CPU Burst
 Can only estimate the length – should be similar to the previous ones
 Then pick the process with the shortest predicted next CPU burst
 Can be done by using the length of previous CPU bursts, using exponential averaging
1. t(n) = actual length of the n-th CPU burst
2. τ(n+1) = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ(n+1) = α·t(n) + (1 − α)·τ(n)
 Commonly, α set to ½
 α = 0: τ(n+1) = τ(n) (recent history does not count)
 α = 1: τ(n+1) = t(n) (only the actual last CPU burst counts)
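
A small sketch of the exponential-averaging formula above; the burst history and the initial guess τ(0) = 10 are made-up illustration values, not from the slides.

```python
# Exponential averaging of CPU-burst lengths: tau_next = alpha*t_n + (1 - alpha)*tau_n.
def predict_next_burst(actual_bursts, tau0=10.0, alpha=0.5):
    tau = tau0
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau   # blend the newest burst with the old prediction
    return tau

# Illustration only: a short history of observed burst lengths.
print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))
```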
Prediction of the Length of the Next CPU Burst
SRTF Scheduling Policy
 Shortest Remaining Time First scheduling
 The preemptive version of SJF.
 Calculate the remaining execution time for each process
 Schedule the process with the minimum remaining time

 Advantage
 Minimal average waiting and turnaround time for a given set of
processes

 Disadvantage
 Like SJF, can cause starvation for longer processes
Example: SRTF Scheduling Policy
 Now we add the concepts of varying arrival times and preemption to the
analysis

Process Arrival Time Burst Time


P1 0 8
P2 1 4
P3 2 9
P4 3 5
 Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3

0 1 5 10 17 26

 Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec
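
A minimal tick-by-tick SRTF simulation (added for illustration) using the arrival and burst times of the example above; it reproduces the 6.5 average waiting time.

```python
# Minimal SRTF simulation, one time unit at a time.
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}   # name: (arrival, burst)
remaining = {p: b for p, (a, b) in procs.items()}
finish, clock = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= clock]
    if not ready:                                 # CPU idle until the next arrival
        clock += 1
        continue
    p = min(ready, key=lambda x: remaining[x])    # shortest remaining time first
    remaining[p] -= 1
    clock += 1
    if remaining[p] == 0:
        finish[p] = clock
        del remaining[p]
waits = {p: finish[p] - procs[p][0] - procs[p][1] for p in procs}
print(waits, sum(waits.values()) / len(waits))    # average 6.5
```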


Class Exercise: SRTF Scheduling Policy

Process Arrival Time Execution Time


P1 0 7
P2 1 3
P3 2 8
P4 3 4
Class Exercise: SRTF Scheduling Policy

Process Arrival Time Execution Time


P1 0 7
P2 1 3
P3 2 8
P4 3 4

P1 P2 P4 P1 P3

0 1 4 8 14 22

P1 = 8 – 1 = 7
P2 = 1 – 1 = 0
P3 = 14 – 2 = 12
P4 = 4 – 3 = 1
Average waiting time = (7 + 0 + 12 + 1) / 4 = 5
Priority Scheduling
 Assign each process a priority (integer) number (smallest integer ⇒ highest priority)
 Run the process with highest priority in the ready queue first
 Preemptive
 Non-preemptive
 Use FIFO for processes with equal priority
 SJF is priority scheduling where priority is the inverse of predicted next
CPU burst time

 Advantage
 Flexibility: Not all processes are born equal

 Problem ⇒ Starvation – low priority processes may never execute

 Solution ⇒ Aging – adjust priority dynamically (increase priority over time)
Example: Priority Scheduling
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

 Priority scheduling Gantt Chart

P2 P5 P1 P3 P4

0 1 6 16 18 19

 Average waiting time = 8.2 msec
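
A minimal sketch of non-preemptive priority scheduling (all processes arriving at time 0, smaller number = higher priority), added for illustration; it reproduces the 8.2 msec average from the example above.

```python
# Minimal non-preemptive priority scheduling, all processes arriving at time 0.
procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}  # name: (burst, priority)
clock, waits = 0, {}
for name, (burst, _prio) in sorted(procs.items(), key=lambda kv: kv[1][1]):
    waits[name] = clock               # waits until all higher-priority bursts finish
    clock += burst
print(waits, sum(waits.values()) / len(waits))   # average 8.2
```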


Priority (preemptive)
 A number is assigned to each process that indicates its priority
level.

 If a newer process arrives,


 having higher priority than the currently running process,
 then the currently running process is preempted.
Important Points
 The waiting time for the process having the highest priority
 will always be zero in preemptive mode.
 may not be zero in non-preemptive mode.
 Preemptive and non-preemptive modes behave exactly the same if
 The arrival time of all the processes is the same
 All the processes are available at the start
Exercise

Process Id   Arrival time   Burst time   Priority

P1 0 4 4

P2 1 3 3

P3 2 1 2

P4 3 5 1

P5 4 2 1
Exercise

Process Id   Arrival time   Burst time   Priority

P1 0 4 4

P2 1 3 3

P3 2 1 2

P4 3 5 1

P5 4 2 1

P1=11 P2=8 P3=0 P4=0 P5=4


Avg. waiting time=4.6
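
For illustration, a minimal tick-by-tick preemptive priority simulation over the exercise's workload; breaking ties by arrival time stands in for FIFO among equal priorities, which is sufficient for this workload, and it reproduces the 4.6 average.

```python
# Minimal preemptive priority simulation (smaller number = higher priority).
procs = {"P1": (0, 4, 4), "P2": (1, 3, 3), "P3": (2, 1, 2),
         "P4": (3, 5, 1), "P5": (4, 2, 1)}          # name: (arrival, burst, priority)
remaining = {p: b for p, (a, b, pr) in procs.items()}
finish, clock = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= clock]
    if not ready:
        clock += 1
        continue
    # highest priority wins; earlier arrival breaks ties (FIFO among equals here)
    p = min(ready, key=lambda x: (procs[x][2], procs[x][0]))
    remaining[p] -= 1
    clock += 1
    if remaining[p] == 0:
        finish[p] = clock
        del remaining[p]
waits = {p: finish[p] - procs[p][0] - procs[p][1] for p in procs}
print(waits, sum(waits.values()) / len(waits))      # average 4.6
```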
Round Robin (RR) Scheduling Policy

 Each process runs a time slice or quantum q.


 For n processes in the ready queue and time quantum is q, each process gets
1/n of the CPU time in chunks of at most q time units. Each process waits
no longer than (n-1)q time units until its next time quantum.
 Timer interrupts every quantum to schedule next process
 Performance
 q large ⇒ behaves like FIFO
 q small ⇒ context-switch overhead dominates; q must be large with respect to the context-switch time, otherwise overhead is too high
Example: RR Scheduling Policy
Process Burst Time
P1 24
P2 3
P3 3
Time slice = 4
 The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

 Typically, higher average turnaround time than SJF, but better response
time
 q should be large compared to context switch time
 q usually 10ms to 100ms, context switch < 10 usec
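
A minimal round-robin sketch (added for illustration) that reproduces the Gantt chart above; since all three processes arrive at time 0, the sketch does not model arrivals during execution.

```python
# Minimal round-robin simulation, all processes arriving at time 0.
from collections import deque

def round_robin(bursts, q):
    queue = deque(bursts.items())            # (name, remaining) in arrival order
    clock, slices = 0, []
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)
        slices.append((name, clock, clock + run))
        clock += run
        if rem > run:
            queue.append((name, rem - run))  # unfinished: back of the queue
    return slices

for name, start, end in round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4):
    print(f"{name}: {start}-{end}")          # P1 0-4, P2 4-7, P3 7-10, P1 10-14, ...
```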
RR
 Advantages
 best performance in terms of average response time.
 best suited for
 time sharing system,
 client server architecture and
 interactive system.
 Disadvantages
 long-burst processes are penalized and can effectively starve, as they have to go around the ready queue many times.
 Its performance heavily depends on time quantum.
Important Notes
 With decreasing value of time quantum,
 Number of context switch increases
 Response time decreases
 Chances of starvation decreases
 When time quantum tends to infinity, Round Robin Scheduling becomes
FCFS Scheduling.
 The value of time quantum should be
 neither too big nor too small.
Turnaround Time Varies With The Time Quantum

 80% of CPU bursts should be shorter than q.
Exercise
Process Id Arrival time Burst time

P1 0 5

P2 1 3

P3 2 1

P4 3 2

P5 4 3

Time slice = 2 sec
Exercise
Process Id Arrival time Burst time

P1 0 5

P2 1 3

P3 2 1

P4 3 2

P5 4 3

Time slice = 2 sec

Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8


Multilevel Queue Scheduling
 When processes can be readily categorized
 then multiple separate queues can be established
 each implementing whatever scheduling algorithm is most
appropriate for that type of job, and/or with different
parametric adjustments.

 Scheduling must also be done between queues


Multilevel Queue Scheduling
 Two common options are
 strict priority
no job in a lower priority queue runs until all higher
priority queues are empty
 round-robin
each queue gets a time slice in turn, possibly of different
sizes
 Note that under this algorithm jobs cannot switch from queue
to queue
 Once they are assigned a queue, that is their queue until
they finish.
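
A minimal sketch (illustration only) of two queues under strict priority between queues: a foreground queue served round-robin and a background queue served FCFS, run only when the foreground queue is empty. The process names and burst values are made up, and jobs never move between queues.

```python
# Multilevel queue sketch: strict priority between a foreground RR queue (q = 2)
# and a background FCFS queue. Workload values are illustrative assumptions.
from collections import deque

foreground = deque([("I1", 3), ("I2", 2)])   # (name, remaining burst)
background = deque([("B1", 6)])
clock, q = 0, 2

while foreground or background:
    if foreground:                           # strict priority: foreground first
        name, rem = foreground.popleft()
        run = min(q, rem)
        if rem > run:
            foreground.append((name, rem - run))
    else:                                    # background runs only when foreground is empty
        name, rem = background.popleft()
        run = rem                            # FCFS: run to completion
    print(f"{clock}-{clock + run}: {name}")
    clock += run
```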
MULTILEVEL FEEDBACK-QUEUE SCHEDULING
Multilevel Feedback-Queue Scheduling
 Similar to the multilevel queue scheduling
 except jobs may be moved from
 one queue to another for a variety of reasons

 Most flexible
 because it can be tuned for any situation.

 Most complex to implement


 because of all the adjustable parameters.
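
A minimal sketch (illustration only) of a two-level feedback queue: new jobs start in the top queue with a short quantum, and a job that consumes its whole quantum is demoted to the bottom queue, which has a longer quantum. The job names, burst values, and quanta are made-up assumptions.

```python
# Two-level feedback queue sketch: level 0 (q = 2) demotes to level 1 (q = 6).
from collections import deque

queues = [deque([("A", 3), ("B", 8)]), deque()]   # (name, remaining burst)
quanta = [2, 6]
clock = 0

while any(queues):
    level = 0 if queues[0] else 1                 # serve the highest non-empty level
    name, rem = queues[level].popleft()
    run = min(quanta[level], rem)
    print(f"{clock}-{clock + run}: {name} (queue {level})")
    clock += run
    if rem > run:                                 # used the full quantum: demote (or stay at bottom)
        queues[min(level + 1, 1)].append((name, rem - run))
```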
Evaluation of Scheduling Policies
 How to select CPU-scheduling policy for an OS?
 Determine criteria, then evaluate algorithms
 Deterministic modeling
 Type of analytic evaluation
 Takes a particular predetermined workload and defines the performance
of each algorithm for that workload
 Consider 5 processes arriving at time 0:
 FCFS, SJF, and RR
Deterministic Evaluation
 For each algorithm, calculate minimum average waiting time
 Simple and fast, but requires exact numbers for input, applies only to
those inputs
 FCFS is 28ms:

 Non-preemptive SJF is 13ms:

 RR is 23ms:
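
The burst times behind these figures are not shown in the text; the sketch below assumes the classic textbook workload (bursts 10, 29, 3, 7, 12 ms, all arriving at time 0, RR quantum 10 ms), which reproduces the 28 ms, 13 ms, and 23 ms averages.

```python
# Deterministic evaluation sketch: average waiting time of FCFS, non-preemptive
# SJF, and RR (q = 10) on one fixed workload (burst times are an assumption).
from collections import deque

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait_serial(order):
    clock, total = 0, 0
    for name in order:            # non-preemptive: each job waits for all jobs before it
        total += clock
        clock += bursts[name]
    return total / len(order)

def avg_wait_rr(q=10):
    queue = deque(bursts.items())
    clock, finish = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)
        clock += run
        if rem > run:
            queue.append((name, rem - run))
        else:
            finish[name] = clock
    return sum(finish[n] - bursts[n] for n in bursts) / len(bursts)

print("FCFS:", avg_wait_serial(list(bursts)))                    # 28.0
print("SJF :", avg_wait_serial(sorted(bursts, key=bursts.get)))  # 13.0
print("RR  :", avg_wait_rr())                                    # 23.0
```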
