
Chapter 5: CPU Scheduling


► Basic Concepts
► Scheduling Criteria
► Scheduling Algorithms
► Thread Scheduling
► Multiple-Processor Scheduling
► Operating Systems Examples
► Algorithm Evaluation
Objectives

► To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
► To describe various CPU-scheduling algorithms
► To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system
Basic Concepts

► Maximum CPU utilization obtained with multiprogramming
► CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
► CPU burst distribution

Histogram of CPU-burst Times
Alternating Sequence of CPU And I/O Bursts
CPU Scheduler

► Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
► CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates

► Scheduling under 1 and 4 is nonpreemptive

► All other scheduling is preemptive


Dispatcher

► Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
► switching context
► switching to user mode
► jumping to the proper location in the user program to restart that program
► Dispatch latency – time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria

► CPU utilization – keep the CPU as busy as possible
► Throughput – number of processes that complete their execution per time unit
► Turnaround time – amount of time to execute a particular process
► Waiting time – amount of time a process has been waiting in the ready queue
► Response time – amount of time from when a request was submitted until the first response is produced, not output (for time-sharing environments)
Scheduling Algorithm Optimization Criteria

► Max CPU utilization

► Max throughput

► Min turnaround time

► Min waiting time

► Min response time


First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

► Suppose that the processes arrive in the order: P1, P2, P3
► The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

► Waiting time for P1 = 0; P2 = 24; P3 = 27
► Average waiting time: (0 + 24 + 27)/3 = 17


FCFS Scheduling (Cont.)

► Suppose that the processes arrive in the order: P2, P3, P1
► The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

► Waiting time for P1 = 6; P2 = 0; P3 = 3
► Average waiting time: (6 + 0 + 3)/3 = 3
► Much better than the previous case
► Convoy effect – short processes wait behind one long process
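As a rough illustration of how these waiting times come about, the following Python sketch simulates non-preemptive FCFS under the assumption that all processes arrive at time 0; the function name fcfs_waiting_times and the tuple representation are illustrative, not from the slides.

```python
# Minimal FCFS sketch: all processes are assumed to arrive at time 0, so a
# process's waiting time is simply the sum of the bursts that run before it.
def fcfs_waiting_times(processes):
    """processes: list of (name, burst) tuples in arrival order."""
    waiting = {}
    clock = 0
    for name, burst in processes:
        waiting[name] = clock   # time spent in the ready queue before running
        clock += burst          # run to completion (non-preemptive)
    return waiting

# Order P1, P2, P3 -> {'P1': 0, 'P2': 24, 'P3': 27}, average 17
print(fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)]))
# Order P2, P3, P1 -> {'P2': 0, 'P3': 3, 'P1': 6}, average 3
print(fcfs_waiting_times([("P2", 3), ("P3", 3), ("P1", 24)]))
```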


Shortest-Job-First (SJF) Scheduling

► Associate with each process the length of its next CPU burst; use these lengths to schedule the process with the shortest time
► SJF is optimal – gives minimum average waiting time for a given set of processes
► The difficulty is knowing the length of the next CPU request
Example of SJF

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

► SJF scheduling chart (for this chart the arrival times are ignored; all four processes are assumed to be in the ready queue at time 0):

| P4 | P1 | P3 | P2 |
0    3    9    16   24

► Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
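The chart above can be reproduced with a small non-preemptive SJF sketch. As in the chart, arrival times are ignored and all four processes are assumed to be in the ready queue at time 0; the function name sjf_waiting_times is illustrative only.

```python
# Non-preemptive SJF sketch: whenever the CPU becomes free, pick the ready
# process with the shortest next CPU burst. All jobs assumed ready at time 0.
def sjf_waiting_times(bursts):
    """bursts: dict mapping process name -> predicted next CPU burst length."""
    waiting = {}
    clock = 0
    # Sorting by burst length yields the SJF execution order: P4, P1, P3, P2.
    for name, burst in sorted(bursts.items(), key=lambda item: item[1]):
        waiting[name] = clock
        clock += burst
    return waiting

w = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w)                          # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / len(w))   # 7.0 – the average waiting time above
```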


Priority Scheduling

► A priority number (integer) is associated with each process
► The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
► Preemptive
► Nonpreemptive
► SJF is priority scheduling where the priority is the predicted next CPU burst time
► Problem ≡ Starvation – low-priority processes may never execute
► Solution ≡ Aging – as time progresses, increase the priority of the process (see the sketch below)
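A minimal sketch of priority scheduling with aging follows; the tick-based aging step, the process representation, and the AGE_INTERVAL value are illustrative assumptions, not something specified in the slides.

```python
# Priority-scheduling sketch with aging (smaller number = higher priority).
# Each ready process is represented as a dict with a 'priority' and the
# number of ticks it has already 'waited'; AGE_INTERVAL is an assumed value.
AGE_INTERVAL = 50   # every 50 ticks spent waiting, raise priority one level

def age(ready):
    """Aging: gradually raise the priority of processes stuck in the ready queue."""
    for p in ready:
        p["waited"] += 1
        if p["waited"] % AGE_INTERVAL == 0 and p["priority"] > 0:
            p["priority"] -= 1   # lower number = higher priority

def pick_next(ready):
    """Dispatch the highest-priority (lowest priority number) ready process."""
    return min(ready, key=lambda p: p["priority"])
```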
Round Robin (RR)

► Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
► If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
► Performance
► q large ⇒ behaves like FIFO
► q small ⇒ context-switch overhead dominates; q must be large with respect to the context-switch time
Example of RR with Time Quantum = 4

Process   Burst Time
P1        24
P2        3
P3        3

► The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

► Typically, higher average turnaround than SJF, but better response
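The Gantt chart above can be reproduced with a short round-robin sketch; all processes are assumed ready at time 0, and rr_schedule is an illustrative name.

```python
from collections import deque

# Round-robin sketch with time quantum q: run the head of the ready queue for
# at most q time units, then move it to the back if it still has work left.
def rr_schedule(processes, q):
    """processes: list of (name, burst); returns the Gantt chart as (name, start, end) tuples."""
    queue = deque(processes)
    clock, chart = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)
        chart.append((name, clock, clock + run))
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted, back of the queue
    return chart

# Matches the chart above: P1 P2 P3 P1 P1 P1 P1 P1, finishing at time 30
print(rr_schedule([("P1", 24), ("P2", 3), ("P3", 3)], q=4))
```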
Turnaround Time Varies With the Time Quantum
Multilevel Queue

► Ready queue is partitioned into separate queues:
► foreground (interactive)
► background (batch)
► Each queue has its own scheduling algorithm:
► foreground – RR
► background – FCFS
► Scheduling must also be done between the queues:
► Fixed-priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation (see the sketch below)
► Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to the foreground queue in RR and 20% to the background queue in FCFS
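A minimal sketch of the fixed-priority case between two queues; the queue names and the pick_queue helper are assumptions for illustration. The background queue is served only when the foreground queue is empty, which is exactly why starvation is possible.

```python
from collections import deque

# Two separate ready queues; each is scheduled internally by its own algorithm
# (RR for foreground, FCFS for background). Between the queues we use fixed
# priority: background runs only when no foreground process is ready.
foreground = deque()   # interactive processes
background = deque()   # batch processes

def pick_queue():
    """Always prefer the foreground queue; background work can therefore starve."""
    if foreground:
        return foreground
    return background if background else None
```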
Multilevel Queue Scheduling
Example of Multilevel Feedback Queue

► Three queues:
► Q0 – RR with time quantum 8 milliseconds
► Q1 – RR with time quantum 16 milliseconds
► Q2 – FCFS
► Scheduling:
► A new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
► At Q1 the job is again served in FCFS order and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
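A minimal sketch of the demotion rule described above; the data representation and the run_one helper are illustrative assumptions, while the 8 ms and 16 ms quanta and the three-level structure come from the example.

```python
# Multilevel feedback queue sketch: Q0 (RR, q=8 ms), Q1 (RR, q=16 ms), Q2 (FCFS).
# A job that uses up its whole quantum at one level is demoted to the next one.
QUANTA = [8, 16, None]   # None = FCFS: run to completion

def run_one(job, level):
    """job: dict with 'name' and 'remaining' ms. Returns the next level, or None when done."""
    q = QUANTA[level]
    slice_ms = job["remaining"] if q is None else min(q, job["remaining"])
    job["remaining"] -= slice_ms
    if job["remaining"] == 0:
        return None                 # job finished within this slice
    return min(level + 1, 2)        # quantum exhausted: demote (Q2 stays at Q2)

job = {"name": "J", "remaining": 30}
level = 0
while level is not None:
    level = run_one(job, level)     # 8 ms in Q0, 16 ms in Q1, the rest in Q2
```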
Multiple-Processor Scheduling

► CPU scheduling is more complex when multiple CPUs are available
► Homogeneous processors within a multiprocessor
► Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
► Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes are in a common ready queue, or each processor has its own private queue of ready processes
► Processor affinity – a process has affinity for the processor on which it is currently running
► soft affinity
► hard affinity
Multicore Processors

► Recent trend is to place multiple processor cores on the same physical chip
► Faster and consumes less power
► Multiple threads per core is also a growing trend
► Takes advantage of memory stalls to make progress on another thread while the memory retrieval happens
Operating System Examples

► Solaris scheduling

► Windows XP scheduling

► Linux scheduling
End of Lecture
