Chapter-3 CPU Scheduling

The document discusses CPU scheduling in operating systems. It covers key concepts like processes alternating between CPU and I/O bursts, the role of the CPU scheduler and dispatcher in selecting processes from ready queues for execution. It also describes criteria for comparing scheduling algorithms like CPU utilization, turnaround time, waiting time. Finally it outlines common scheduling algorithms like FCFS, SJF, priority scheduling and examples of how they work.


Chapter 5: CPU Scheduling

Maximum CPU utilization is obtained with multiprogramming.
CPU scheduling
▪ The process of determining which process will own the CPU for execution while other processes are on hold.
▪ It is the process of selecting a process from the ready queue and putting it on the CPU for execution.
▪ The main task of CPU scheduling is to make sure that whenever the CPU would otherwise remain idle, the OS selects one of the processes available in the ready queue for execution.
▪ The selection process will be carried out by the CPU
scheduler. It selects one of the processes in
memory/ready queue for execution.
Basic Concepts
▪ In a system with a single CPU core, only one
process can run at a time. Others must wait until the
CPU’s core becomes free.
▪ The objective of multiprogramming is to have some
process running at all times, to maximize CPU
utilization.
▪ CPU–I/O Burst Cycle – Process execution consists
of a cycle of CPU execution and I/O wait.
▪ Processes alternate between these two states.
Basic Concepts
▪ Process execution begins with a CPU burst, is followed by an I/O burst, and the cycle continues.
▪ Eventually, the final CPU burst ends with a system
request to terminate process execution.
Alternating sequence of CPU and I/O bursts
• A CPU burst: performing calculations.
• An I/O burst: waiting for data transfer in or out of the system.
• CPU bursts vary from process to process, and from program to program.
CPU Scheduler
▪ The CPU scheduler selects a process from the ready queue and allocates the CPU to it for execution.
▪ The records in the queues are generally process
control blocks (PCBs) of the processes.
CPU scheduling decisions
1. When a process switches from the running state to
the waiting state (for example, as the result of an
I/O request or an invocation of wait() for the
termination of a child process)
2. When a process switches from the running state to
the ready state (for example, when an interrupt
occurs)
3. When a process switches from the waiting state to
the ready state (for example, at completion of I/O)
4. When a process terminates
CPU scheduling decisions
▪ For conditions 1 and 4 there is no choice in terms of
scheduling - A new process must be selected from
ready queue.
▪ For conditions 2 and 3 there is a choice - To either
continue running the current process, or select a
different one.
Types of CPU Scheduling
Preemptive and Nonpreemptive Scheduling
▪ When scheduling takes place only under
circumstances 1 and 4, the scheduling scheme is
nonpreemptive.
▪ Otherwise, it is preemptive.
▪ In preemptive scheduling, tasks are usually assigned priorities. Sometimes it is important to run a higher-priority task before a lower-priority one, even if the lower-priority task is still running. The lower-priority task is put on hold for some time and resumes when the higher-priority task finishes its execution.
Preemptive and Nonpreemptive Scheduling
▪ Under Nonpreemptive scheduling, once the CPU
has been allocated to a process, the process keeps
the CPU until it releases it either by terminating or by
switching to the waiting state.
▪ Virtually all modern operating systems: Windows,
MacOS, Linux, and UNIX use preemptive scheduling
algorithms.
Dispatcher
▪ It is the module that gives control of the CPU to the process selected by the CPU scheduler.
▪ Functions performed by Dispatcher:
• Switching context
• Switching to user mode
• Moving to the proper location in the newly loaded
program.
▪ Dispatch latency – the amount of time the dispatcher takes to stop one process and start another running.
Scheduling Criteria
▪ Used for comparing CPU-scheduling algorithms.
The criteria include the following:
▪ CPU utilization – keep the CPU as busy as
possible. It can range from 0 to 100 percent.
▪ Throughput – # of processes that complete their
execution per unit time. Work completed per unit
time is called Throughput.
▪ Turnaround time(TAT) – It is the time interval from
the time of submission (AT) of a process to the time
of the completion (CT) of the process.
▪ TAT = CT - AT
Scheduling Criteria
▪ Waiting time – the time a process spends waiting in the ready queue to get the CPU. The difference between turnaround time and burst time is called waiting time.
▪ WT = TAT – BT
▪ BT: the time required by the process for its execution.
▪ Response time – is the time from the submission
of a request until the first response is produced.
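The two formulas above can be applied directly. A minimal sketch (the helper name `metrics` and the sample numbers are illustrative; the values match the FCFS example later in this chapter):

```python
def metrics(at, bt, ct):
    """Turnaround and waiting time from arrival, burst, and completion times."""
    tat = ct - at   # TAT = CT - AT
    wt = tat - bt   # WT = TAT - BT
    return tat, wt

# Illustrative values: a process arrives at 0, runs 3 ms, completes at 27 ms.
tat, wt = metrics(0, 3, 27)
print(tat, wt)  # 27 24
```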
Important CPU scheduling Terminologies
▪ Burst Time/Execution Time: It is a time required
by the process to complete execution.
▪ Arrival Time: the time when a process enters the ready state
▪ Finish Time/Completion Time: the time when a process completes and exits the system
▪ CPU/IO burst cycle: Characterizes process
execution, which alternates between CPU and I/O
activity. CPU times are usually shorter than the time
of I/O.
Scheduling Algorithms
▪ CPU scheduling deals with the problem of deciding
which of the processes in the ready queue is to be
allocated the CPU’s core.
• First-Come First-Serve Scheduling, FCFS
• Shortest-Job-First Scheduling, SJF
• Priority Scheduling
• Round Robin Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback-Queue Scheduling
First-Come, First-Served (FCFS) Scheduling

▪ FCFS is very simple - just a FIFO queue, like customers waiting in line at the bank, the post office, or a copying machine.
▪ With this scheme, the process that requests the
CPU first is allocated the CPU first.
▪ Unfortunately, however, FCFS can yield some very
long average wait times, particularly if the first
process to get there takes a long time.
▪ FCFS scheduling algorithm is non-preemptive.
Once the CPU has been allocated to a process, that
process keeps the CPU until it releases the CPU,
either by terminating or by requesting I/O.
First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3

▪ Suppose that the processes arrive in the order: P1, P2, P3
▪ All processes arrive at the same time (0)

First- Come, First-Served (FCFS) Scheduling
▪ The Gantt Chart for the schedule is:

P1 P2 P3
0 24 27 30

▪ Waiting time for P1 = 0; P2 = 24; P3 = 27


▪ Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
▪ Suppose that the processes arrive in the order:
P2 , P3 , P1
▪ The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

▪ Waiting time for P1 = 6; P2 = 0; P3 = 3


▪ Average waiting time: (6 + 0 + 3)/3 = 3
▪ Much better than previous case
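The two arrival orders can be checked with a short sketch (a simplified model assuming all processes arrive at t = 0; the helper name `fcfs_waits` is illustrative):

```python
def fcfs_waits(bursts):
    """Waiting time of each process under FCFS, all arriving at t = 0.

    Each process waits for the total burst time of everything ahead of it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

print(fcfs_waits([24, 3, 3]))  # order P1, P2, P3 -> [0, 24, 27], AWT 17
print(fcfs_waits([3, 3, 24]))  # order P2, P3, P1 -> [0, 3, 6],  AWT 3
```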
Shortest-Job-First (SJF) Scheduling
▪ The idea behind the SJF algorithm is to pick the shortest job that needs to be done, and then pick the next shortest job to do next.
▪ Associate with each process the length of its CPU
burst time.
• Use these lengths to schedule the process with the
shortest time
▪ SJF is optimal – gives minimum average waiting time
for a given set of processes
▪ Preemptive version called shortest-remaining-time-
first
Example of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3

▪ Assumption: all jobs arrive at the same time.


Example of SJF

SJF scheduling chart:

P4 P1 P3 P2
0  3  9  16 24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
• Exercise: what is the AWT under FCFS?
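The SJF order and waiting times above can be sketched as follows (a simplified model assuming all arrivals at t = 0; the helper name `sjf_waits` is illustrative):

```python
def sjf_waits(bursts):
    """Nonpreemptive SJF with all arrivals at t = 0.

    bursts maps process name -> burst time; the shortest burst runs first."""
    waits, elapsed = {}, 0
    for name in sorted(bursts, key=bursts.get):  # shortest burst first
        waits[name] = elapsed
        elapsed += bursts[name]
    return waits

w = sjf_waits({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w)                         # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / len(w))  # 7.0
```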
Shortest Remaining Time First Scheduling
▪ SJF can be either preemptive or non-preemptive.
▪ Preemption occurs when a new process arrives in the ready queue with a predicted burst time shorter than the remaining time of the process currently running on the CPU.
▪ Preemptive SJF is referred to as shortest-remaining-time-first (SRTF); nonpreemptive SJF is also known as shortest job next (SJN).
Example of Shortest-remaining-time-first

Process Arrival Time Burst Time

P1 0 8

P2 1 4

P3 2 9

P4 3 5
Example of Shortest-remaining-time-first

Preemptive SJF Gantt Chart:

P1 P2 P4 P1 P3
0  1  5  10 17 26
Example of Shortest-remaining-time-first

Process AT BT TAT=CT-AT WT=TAT-BT
P1      0  8  17        9
P2      1  4  4         0
P3      2  9  24        15
P4      3  5  7         2

• Average waiting time = (9 + 0 + 15 + 2)/4 = 26/4 = 6.5
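This example can be reproduced with a small unit-time simulation (a sketch under the assumption that burst predictions are exact; the helper name `srtf` is illustrative):

```python
def srtf(procs):
    """Preemptive SJF (SRTF), simulated one time unit at a time.

    procs maps name -> (arrival, burst); returns name -> completion time."""
    remaining = {n: b for n, (a, b) in procs.items()}
    completion, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:          # nothing has arrived yet; idle one tick
            t += 1
            continue
        n = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = t
    return completion

ct = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
# Completion times reproduce the table: P2 at 5, P4 at 10, P1 at 17, P3 at 26.
```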


Priority Scheduling
▪ The SJF algorithm is a special case of the general
priority-scheduling algorithm.
▪ A priority is associated with each process, and the
CPU is allocated to the process with the highest
priority first.
▪ Equal-priority processes are scheduled in FCFS
order.
▪ SJF uses the inverse of the next expected burst time
as its priority - The smaller the expected burst, the
higher the priority.
Priority Scheduling
▪ Priorities are generally indicated by some fixed range of integers, such as 0 to 7 or 0 to 4,095.
▪ But there is no agreed-upon convention as to
whether "high" priorities use large numbers or small
numbers.
▪ There is no general agreement on whether 0 is the
highest or lowest priority
▪ In this chapter, low numbers represent high priorities, with 0 being the highest possible priority.
Priority Scheduling Example

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority Scheduling - Example

P2 P5 P1 P3 P4
0  1  6  16 18 19

• The AWT = (0 + 1 + 6 + 16 + 18)/5 = 8.2 milliseconds
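The schedule for the table above can be sketched as follows (a simplified model assuming all arrivals at t = 0; the helper name `priority_waits` is illustrative):

```python
def priority_waits(procs):
    """Nonpreemptive priority scheduling, all arrivals at t = 0.

    procs maps name -> (burst, priority); lower number = higher priority,
    following the convention used in this chapter."""
    waits, elapsed = {}, 0
    for name in sorted(procs, key=lambda n: procs[n][1]):  # best priority first
        waits[name] = elapsed
        elapsed += procs[name][0]
    return waits

w = priority_waits({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                    "P4": (1, 5), "P5": (5, 2)})
print(sum(w.values()) / len(w))  # 8.2
```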


NOTE
▪ Priorities can be assigned either internally or
externally.
▪ Internal priorities are assigned by the OS using
criteria such as average burst time, ratio of CPU to
I/O activity, memory requirements, number of open
files and time limits.
▪ External priorities are assigned by users, based on the importance of the job/process - for example, scheduling of a back-up process, memory and system scanning for viruses, EMI deduction, and updating of virus patches for anti-virus software.
▪ Priority scheduling can be either preemptive or non-
preemptive.
NOTE
▪ Priority scheduling can suffer from a major problem
known as indefinite blocking, or starvation, in
which a low-priority task can wait forever because
there are always some other processes that have
higher priority. This prevents a low-priority process from ever getting the CPU (starvation).
▪ (Rumor: when they shut down the IBM 7094 at MIT
in 1973, they found a low-priority process that had
been submitted in 1967 and had not yet been run.)
Solution for Starvation
▪ A solution to the problem of indefinite blockage of
low-priority processes is aging.
▪ Aging involves gradually increasing the priority of
processes that wait in the system for a long time.
▪ Under this scheme, a low-priority job will eventually get its priority raised high enough to be executed.
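One way aging can work is sketched below (all names, numbers, and the boost rate of one priority level per 10 time units waited are illustrative assumptions, not a standard):

```python
def pick_next(base_priority, waited, boost_every=10):
    """Choose the next process when aging is applied.

    Effective priority improves (the number drops) by one level for every
    `boost_every` time units a process has waited; lower number = higher
    priority, as in the rest of this chapter."""
    def effective(name):
        return base_priority[name] - waited[name] // boost_every
    return min(base_priority, key=effective)

prios = {"batch": 7, "interactive": 2}
print(pick_next(prios, {"batch": 0, "interactive": 0}))   # interactive
print(pick_next(prios, {"batch": 60, "interactive": 0}))  # batch (aged: 7 - 6 = 1)
```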
Round Robin (RR)
▪ Round robin scheduling is similar to FCFS
scheduling, except that CPU bursts are assigned
with limits called time quantum/time slice.
▪ Each process gets a small unit of CPU time (time
quantum q), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and
added to the end of the ready queue.
▪ The ready queue is maintained as a circular queue.
The CPU scheduler goes around the ready queue,
allocating the CPU to each process for a time
interval of up to 1 time quantum.
Round Robin (RR)
▪ RR scheduling can give the effect of all processes sharing the CPU equally.
▪ Timer interrupts every quantum to schedule next
process
▪ Performance
• q large  FIFO (FCFS)
• q small  RR
▪ Note that q must be large with respect to context
switch, otherwise overhead is too high
Example of RR with Time Quantum = 4

Process Burst Time(ms)

P1 24

P2 3

P3 3
Example of RR with Time Quantum = 4

The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0  4  7  10 14 18 22 26 30

• Waiting time for P1 = 6; P2 = 4; P3 = 7
• AWT = (6 + 4 + 7)/3 ≈ 5.66 ms


Example of RR with Time Quantum = 4
▪ q should be large compared to context switch time
• q usually 10 milliseconds to 100 milliseconds,
• Context switch < 10 microseconds
▪ The performance of RR is sensitive to the time
quantum selected.
▪ If the quantum is large enough, then RR reduces to
the FCFS algorithm;
▪ If it is very small, then each process gets 1/nth of the processor time and all processes share the CPU equally.
▪ Note that q must be large with respect to context
switch, otherwise overhead is too high
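The RR example with q = 4 can be replayed with a small sketch (a simplified model assuming all arrivals at t = 0 and zero context-switch cost; the helper name `rr_waits` is illustrative):

```python
from collections import deque

def rr_waits(bursts, q):
    """Round robin with quantum q, all arrivals at t = 0.

    bursts maps name -> burst time; returns name -> waiting time
    (WT = completion - burst, since every arrival time is 0)."""
    queue = deque(bursts)           # FIFO ready queue, circular in effect
    remaining = dict(bursts)
    completion, t = {}, 0
    while queue:
        n = queue.popleft()
        run = min(q, remaining[n])  # run for at most one quantum
        t += run
        remaining[n] -= run
        if remaining[n] == 0:
            completion[n] = t
        else:
            queue.append(n)         # preempted: back to the tail
    return {n: completion[n] - bursts[n] for n in bursts}

w = rr_waits({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(w)  # {'P1': 6, 'P2': 4, 'P3': 7}; AWT = 17/3 ≈ 5.66
```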
Time Quantum and Context Switch Time

A real system incurs overhead for every context switch, and a smaller time quantum leads to more context switches.
Priority Scheduling with Round-Robin
▪ Run the process with the highest priority. Processes
with the same priority run round-robin (quantum = 2
ms)
▪ Example:
Process Burst Time Priority
P1 4 3
P2 5 2
P3 8 2
P4 7 1
P5 3 3
Priority Scheduling with Round-Robin

▪ Gantt Chart with time quantum = 2 ms:

P4 P2 P3 P2 P3 P2 P3 P3 P1 P5 P1 P5
0  7  9  11 13 15 16 18 20 22 24 26 27
Multilevel Queue
▪ When processes can be readily categorized, multiple separate queues can be established, each implementing an appropriate scheduling algorithm for its processes.
▪ The ready queue consists of multiple queues
▪ Multilevel queue scheduler defined by the following
parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine which queue a process
will enter and when that process needs service
• Scheduling among the queues.
Multilevel Queue
▪ With priority scheduling, have separate queues for
each priority.
▪ Schedule the process in the highest-priority queue!
Multilevel Queue
▪ Prioritization based upon process type
Multilevel Feedback Queue
▪ A process can move between the various queues.
• If the characteristics of a job change between
CPU-intensive and I/O intensive, then it may be
appropriate to switch a job from one queue to
another.
• Aging can also be incorporated, so that a job that has waited a long time in one queue can move to a higher-priority queue for execution.
Multilevel Feedback Queue
▪ Multilevel-feedback-queue scheduler defined by the
following parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine when to upgrade a
process
• Method used to determine when to demote a
process
• Method used to determine which queue a process
will enter and when that process needs service
Example of Multilevel Feedback Queue

▪ Three queues:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR time quantum 16 milliseconds
• Q2 – FCFS
Example of Multilevel Feedback Queue
▪ Scheduling
• A new process enters queue Q0, which is served in RR.
  When it gains the CPU, the process receives 8 milliseconds.
  If it does not finish in 8 milliseconds, the process is moved to queue Q1.
• At Q1 the job is again served in RR and receives 16 additional milliseconds.
  If it still does not complete, it is preempted and moved to queue Q2.
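The demotion path of a single CPU-bound job through the three queues can be traced as follows (a sketch that deliberately ignores competition from other jobs; the helper name `mlfq_slices` is illustrative):

```python
def mlfq_slices(burst, quanta=(8, 16)):
    """Which queue a single CPU-bound job occupies under the scheme above
    (Q0: RR q=8, Q1: RR q=16, Q2: FCFS), ignoring other jobs.

    Returns a list of (queue, time_used) slices."""
    slices, remaining = [], burst
    for level, q in enumerate(quanta):
        used = min(q, remaining)
        slices.append((f"Q{level}", used))
        remaining -= used
        if remaining == 0:
            return slices          # finished before being demoted further
    slices.append(("Q2", remaining))  # demoted to FCFS for the rest
    return slices

print(mlfq_slices(5))   # [('Q0', 5)] - finishes within its first quantum
print(mlfq_slices(30))  # [('Q0', 8), ('Q1', 16), ('Q2', 6)]
```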
Thread Scheduling
▪ Execution of multiple threads on a single CPU in
some order is called thread scheduling
▪ Threads come in two kinds: user-level and kernel-level.
▪ Scheduling of threads involves two levels of scheduling:
• Scheduling of user level threads (ULT) to kernel
level threads (KLT) via lightweight process (LWP)
by the application developer.
• Scheduling of kernel level threads by the system
scheduler to perform different unique OS
functions.
Thread Scheduling
▪ User-level threads are managed by a thread library.
▪ To run on a CPU, user-level threads must ultimately
be mapped to an associated kernel-level thread.
▪ This mapping is done through the lightweight process (LWP) interface.
Thread Scheduling
▪ User level thread scheduling is process-contention
scope (PCS) since scheduling competition is within
the process
• Typically done via priority set by programmer
▪ Kernel level thread scheduling onto available CPU is
system-contention scope (SCS) – competition
among all threads in system
Multiple-Processor Scheduling
▪ CPU scheduling becomes more complex when
multiple CPUs are available.
▪ Load sharing balances the load among multiple
processors.
▪ A multiprocessor system may have any one of the following architectures:
• Multicore CPUs
• Multithreaded cores
• NUMA (Non-Uniform Memory Access) systems
• Heterogeneous/homogeneous systems
Multiple-Processor Scheduling-Approaches
▪ Asymmetric multiprocessing, in which one processor is the master, controlling all activities and running all kernel code, while the other processors run only user code.
▪ Symmetric multiprocessing, SMP, where each
processor schedules its own jobs, either from a
common ready queue or from separate ready
queues for each processor.
• Virtually all modern OSes support SMP, including
XP, Win 2000, Solaris, Linux, and Mac OSX.
Symmetric multiprocessing (SMP)
Multicore Processors

▪ Recent trend to place multiple processor cores on


same physical chip
▪ Faster and consumes less power
▪ Multiple threads per core also growing
Multithreaded Multicore System
▪ Chip multithreading (CMT) assigns each core multiple hardware threads. (Intel refers to this as hyperthreading.)
Multiple-Processor Scheduling – Load Balancing

▪ Load balancing attempts to keep the workload evenly distributed across processors.
▪ Push migration – a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs.
▪ Pull migration – an idle processor pulls a waiting task from a busy processor.
NUMA and CPU Scheduling
In a NUMA architecture, the system tries to allocate memory close to the CPU on which the thread is running.
End of Chapter 3: CPU Scheduling
