cs231 ch5

Chapter 5 discusses CPU scheduling in operating systems, covering basic concepts, scheduling criteria, and various algorithms such as First-Come, First-Served, Shortest-Job-First, and Round Robin. It highlights the importance of CPU utilization, throughput, turnaround time, waiting time, and response time in scheduling decisions. Additionally, it addresses multilevel queues and feedback queues for managing process scheduling effectively.

Chapter 5: CPU Scheduling

Operating System Concepts – 8th Edition Silberschatz, Galvin and Gagne ©2009
Chapter 5: CPU Scheduling

 Basic Concepts
 Scheduling Criteria
 Scheduling Algorithms
 Thread Scheduling
 Multiple-Processor Scheduling

Basic Concepts

 Maximum CPU utilization obtained with multiprogramming
 CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
 CPU burst distribution

Alternating Sequence of CPU and I/O Bursts

Histogram of CPU-burst Times

CPU Scheduler
 Which scheduler selects from among the processes in the ready queue and allocates the CPU to one of them?
A) Long-term
B) Short-term
(Answer: B – the short-term scheduler, also called the CPU scheduler)

 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates

 Scheduling under 1 and 4 is nonpreemptive
 All other scheduling is preemptive; the following situations then have to be handled:
 Consider access to shared data
 Consider preemption while in kernel mode
 Consider interrupts occurring during crucial OS activities

Dispatcher

 Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program
 Dispatch latency – time it takes for the dispatcher to stop one process and start another running

Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution
per time unit
 Turnaround time – amount of time to execute a particular
process
 Waiting time – amount of time a process has been waiting
in the ready queue
 Response time – amount of time from when a request was submitted until the first response is produced, not until the output is complete (relevant in time-sharing environments)
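To make these definitions concrete, here is a minimal Python sketch (not from the slides; values and names are illustrative) that derives turnaround and waiting time for one process from its arrival, burst, and completion times.

    def turnaround_and_waiting(arrival, burst, completion):
        # Turnaround time = total time from submission to completion.
        turnaround = completion - arrival
        # Waiting time = time spent in the ready queue.
        # Assumes the process does no I/O, so turnaround = waiting + burst.
        waiting = turnaround - burst
        return turnaround, waiting

    # Illustrative values: a process arriving at 0 with a 24-unit burst that finishes at 24.
    print(turnaround_and_waiting(0, 24, 24))   # -> (24, 0)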

Scheduling Algorithm Optimization Criteria

 Max CPU utilization


 Max throughput
 Min turnaround time
 Min waiting time
 Min response time

First-Come, First-Served (FCFS) Scheduling

Process  Burst Time
P1       24
P2       3
P3       3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time: (0 + 24 + 27)/3 = 17

FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect – short processes stuck waiting behind one long process
 Consider one CPU-bound and many I/O-bound processes
 FCFS scheduling is non-preemptive. A small sketch of the waiting-time calculation follows.
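Both FCFS cases above can be reproduced with a short Python sketch. The function name and the assumption that all processes arrive at time 0 are ours; the burst times come from the slide.

    # FCFS: run processes in arrival order; a process's waiting time is the sum
    # of the bursts of everything that ran before it.
    def fcfs_waiting_times(bursts):
        waits, elapsed = [], 0
        for b in bursts:
            waits.append(elapsed)   # waits for everything queued ahead of it
            elapsed += b
        return waits

    order1 = [24, 3, 3]   # arrival order P1, P2, P3
    order2 = [3, 3, 24]   # arrival order P2, P3, P1
    print(sum(fcfs_waiting_times(order1)) / 3)   # 17.0
    print(sum(fcfs_waiting_times(order2)) / 3)   # 3.0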
Shortest-Job-First (SJF) Scheduling

 Associate with each process the length of its next CPU burst
 Use these lengths to schedule the process with the shortest time
 SJF is optimal – gives minimum average waiting time for a given set of processes
 The difficulty is knowing the length of the next CPU request
 Could ask the user

Example of SJF
Process  Arrival Time  Burst Time
P1       0.0           6
P2       2.0           8
P3       4.0           7
P4       5.0           3
 SJF scheduling chart (non-preemptive; the arrival times above are ignored in this example – the schedule is built as if all four processes were available at time 0):

P4 P1 P3 P2

0 3 9 16 24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
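A minimal sketch of the same non-preemptive SJF computation, assuming (as the example does) that all processes are available at time 0; the helper name and data structure are illustrative.

    # Non-preemptive SJF: repeatedly pick the shortest remaining burst and run it
    # to completion. All processes are assumed available at time 0.
    def sjf_waiting_times(bursts):
        waits, elapsed = {}, 0
        for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
            waits[name] = elapsed
            elapsed += burst
        return waits

    bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
    w = sjf_waiting_times(bursts)
    print(w)                           # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
    print(sum(w.values()) / len(w))    # 7.0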

Determining Length of Next CPU Burst
 Can only estimate the length – should be similar to the previous
one
 Then pick process with shortest predicted next CPU burst
 Can be done by using the length of previous CPU bursts, using
exponential averaging

1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α·t_n + (1 − α)·τ_n
 Commonly, α set to ½
 Preemptive version called shortest-remaining-time-first
Prediction of the Length of the Next CPU Burst

Examples of Exponential Averaging
 α = 0
 τ_{n+1} = τ_n
 Recent history does not count
 α = 1
 τ_{n+1} = t_n
 Only the actual last CPU burst counts
 If we expand the formula, we get:
τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0

 Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
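The recurrence is easy to code. In this sketch the sample burst sequence, α, and the initial guess τ_0 are illustrative values, not taken from the slides.

    # Exponential averaging of CPU-burst lengths: tau_next = alpha*t + (1-alpha)*tau.
    def predict_bursts(actual_bursts, alpha=0.5, tau0=10.0):
        tau = tau0
        predictions = []
        for t in actual_bursts:
            predictions.append(tau)              # prediction made before observing t
            tau = alpha * t + (1 - alpha) * tau  # fold in the newly observed burst
        return predictions

    print(predict_bursts([6, 4, 6, 4, 13, 13, 13], alpha=0.5, tau0=10.0))
    # -> [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0]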

Example of Shortest-remaining-time-first
 Now we add the concepts of varying arrival times and preemption to the
analysis
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

 Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3

0 1 5 10 17 26

 Average waiting time = [(10−1) + (1−1) + (17−2) + (5−3)]/4 = 26/4 = 6.5 msec
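The preemptive schedule can be checked with a simple unit-time simulation; this is an illustrative sketch, not the book's algorithm, using the arrival and burst times from the example.

    # Shortest-remaining-time-first: at every time unit, run the arrived process
    # with the least remaining work.
    def srtf_waiting_times(procs):
        # procs: {name: (arrival, burst)}
        remaining = {n: b for n, (a, b) in procs.items()}
        finish, t = {}, 0
        while remaining:
            ready = [n for n in remaining if procs[n][0] <= t]
            if not ready:            # CPU idle until the next arrival
                t += 1
                continue
            n = min(ready, key=lambda x: remaining[x])
            remaining[n] -= 1
            t += 1
            if remaining[n] == 0:
                del remaining[n]
                finish[n] = t
        # waiting = completion - arrival - burst
        return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

    procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
    w = srtf_waiting_times(procs)
    print(w, sum(w.values()) / len(w))   # waits 9, 0, 15, 2 -> average 6.5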

Practice
 Consider three processes, all arriving at time zero, with total execution times of 10, 20
and 30 units respectively. Each process spends the first 20% of its execution time
doing I/O, the next 70% of the time doing computation, and the last 10% of the time doing
I/O again. The operating system uses a shortest-remaining-compute-time-first
scheduling algorithm and schedules a new process either when the running process
gets blocked on I/O or when the running process finishes its compute burst. Assume
that all I/O operations can be overlapped as much as possible. For what percentage
of time does the CPU remain idle?
 Homework
Priority Scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
 Preemptive
 Nonpreemptive
 SJF is priority scheduling where priority is the inverse of
predicted next CPU burst time
 Problem ≡ Starvation – low-priority processes may never execute
 Solution ≡ Aging – as time progresses, increase the priority of the process

Example of Priority Scheduling

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2
 Priority scheduling Gantt Chart

P2 P5 P1 P3 P4

0 1 6 16 18 19

 Average waiting time = 8.2 msec
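A minimal non-preemptive priority-scheduling sketch, assuming all five processes arrive at time 0 as in the example; the helper name and data layout are ours.

    # Non-preemptive priority scheduling: smallest priority number runs first.
    def priority_waiting_times(procs):
        # procs: {name: (burst, priority)}
        waits, elapsed = {}, 0
        for name, (burst, _) in sorted(procs.items(), key=lambda kv: kv[1][1]):
            waits[name] = elapsed
            elapsed += burst
        return waits

    procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
    w = priority_waiting_times(procs)
    print(w)                           # P2: 0, P5: 1, P1: 6, P3: 16, P4: 18
    print(sum(w.values()) / len(w))    # 8.2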

Round Robin (RR)

 Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, no process waits more than (n−1)q time units.
 Timer interrupts every quantum to schedule the next process
 Performance
 q large ⇒ RR behaves like FIFO (FCFS)
 q small ⇒ q must be large with respect to the context-switch time, otherwise overhead is too high

Time Quantum and Context Switch Time

Turnaround Time Varies With The Time Quantum

 80% of CPU bursts should be shorter than q

Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
 Typically, higher average turnaround than SJF, but better
response
 q should be large compared to context switch time
 q usually 10ms to 100ms, context switch < 10 usec
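The Gantt chart above can be reproduced with a small queue-based simulation. The sketch below assumes all processes arrive at time 0 and ignores context-switch cost.

    from collections import deque

    # Round robin with quantum q: take the process at the head of the ready queue,
    # run it for at most q units, and requeue it if unfinished.
    def rr_completion_times(bursts, q):
        queue = deque(bursts)        # bursts: list of (name, burst)
        t, finish = 0, {}
        while queue:
            name, rem = queue.popleft()
            run = min(q, rem)
            t += run
            if rem - run > 0:
                queue.append((name, rem - run))
            else:
                finish[name] = t
        return finish

    print(rr_completion_times([("P1", 24), ("P2", 3), ("P3", 3)], q=4))
    # -> {'P2': 7, 'P3': 10, 'P1': 30}, matching the Gantt chart above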

Multilevel Queue
 Ready queue is partitioned into separate queues, e.g.:
 foreground (interactive)
 background (batch)
 Processes are permanently assigned to a given queue
 Each queue has its own scheduling algorithm:
 foreground – RR
 background – FCFS

 Scheduling must be done between the queues:
 Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to the foreground queue in RR and 20% to the background queue in FCFS

Multilevel Queue Scheduling

Multilevel Feedback Queue

 A process can move between the various queues; aging can be implemented this way

 Multilevel-feedback-queue scheduler defined by the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter
when that process needs service

Example of Multilevel Feedback Queue
 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling
 A new job enters queue Q0 which is served FCFS
 When it gains CPU, job receives 8 milliseconds
 If it does not finish in 8 milliseconds, job is moved to queue
Q1
 At Q1 job is again served FCFS and receives 16 additional
milliseconds
 If it still does not complete, it is preempted and moved to
queue Q2
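A minimal sketch of the three-queue policy just described (Q0: RR with q = 8, Q1: RR with q = 16, Q2: FCFS), assuming all jobs arrive at time 0 and ignoring preemption of lower queues by later arrivals; the job names and burst lengths are illustrative.

    from collections import deque

    def mlfq(bursts, quanta=(8, 16, None)):      # None = run to completion (FCFS)
        queues = [deque(), deque(), deque()]
        for name, burst in bursts:
            queues[0].append((name, burst))      # new jobs enter Q0
        t, finish = 0, {}
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
            name, rem = queues[level].popleft()
            q = quanta[level] if quanta[level] is not None else rem
            run = min(q, rem)
            t += run
            if rem - run > 0:
                queues[min(level + 1, 2)].append((name, rem - run))  # demote
            else:
                finish[name] = t
        return finish

    print(mlfq([("A", 30), ("B", 6), ("C", 20)]))
    # -> {'B': 14, 'C': 50, 'A': 56} for these illustrative bursts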
Multilevel Feedback Queues

Thread Scheduling
 Distinction between user-level and kernel-level threads
 When threads are supported, it is threads that are scheduled, not processes
 Many-to-one and many-to-many models, thread library schedules
user-level threads to run on LWP
 Known as process-contention scope (PCS) (also called
process local scheduling) since scheduling competition is within
the process
 Typically done via priority set by programmer
 Kernel thread scheduled onto available CPU is system-contention
scope (SCS) (also called system global scheduling) – competition
among all threads in system

Multiple-Processor Scheduling

 CPU scheduling more complex when multiple CPUs are available
 Homogeneous processors within a multiprocessor
 Asymmetric multiprocessing – only one processor accesses the system
data structures, alleviating the need for data sharing
 Symmetric multiprocessing (SMP) – each processor is self-scheduling,
 all processes in common ready queue, or
 each has its own private queue of ready processes
 Currently, most common

 Processor affinity – a process has affinity for the processor on which it is currently running
 soft affinity: the OS tries to run a process on the same processor where it was previously running, but does not guarantee it
 hard affinity: allows a process to specify that it must not migrate to other processors
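On Linux, hard affinity can be requested from user space. A minimal sketch using Python's os.sched_setaffinity (available on Linux only; the choice of CPU 0 is just an example):

    import os

    # Pin the calling process to CPU 0 so the scheduler will not migrate it.
    if hasattr(os, "sched_setaffinity"):          # Linux-only API
        os.sched_setaffinity(0, {0})              # pid 0 = the calling process
        print("now restricted to CPUs:", os.sched_getaffinity(0))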

NUMA and CPU Scheduling

 Note that memory-placement algorithms can also consider affinity

Multicore Processors
 Recent trend to place multiple processor cores on same
physical chip

 Faster and consumes less power

 Multiple threads per core is also a growing trend
 Takes advantage of a memory stall to make progress on another thread while the memory retrieval completes

Multithreaded Multicore System

End of Chapter 5
