Unit2 3.cpu Scheduling

Uploaded by Nikhil Patel

CPU SCHEDULING

What is the need for a CPU scheduling algorithm?

• CPU scheduling is the process of deciding which process will own the CPU while another process is suspended.
• The main function of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects one of the processes waiting in the ready queue.
Objectives of Process Scheduling Algorithm:
• Utilization of CPU at maximum level. Keep CPU as busy as
possible.
• Allocation of CPU should be fair.
• Throughput should be maximum, i.e. the number of
processes that complete their execution per unit time should
be maximized.
• Minimum turnaround time, i.e. time taken by a process to
finish execution should be the least.
• There should be a minimum waiting time and the process
should not starve in the ready queue.
• Minimum response time, i.e. the time at which a
process produces its first response should be as short as
possible.
What are the different terminologies to take care
of in any CPU Scheduling algorithm?
• Arrival Time: Time at which the process arrives in the
ready queue.
• Completion Time: Time at which process completes its
execution.
• Burst Time: Time required by a process for CPU
execution.
• Turn Around Time: Time difference between
completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time

• Waiting Time (W.T): Time difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
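The two formulas above can be sketched in a few lines of Python. The process set (P1–P3 with these arrival, burst, and completion times) is illustrative, not taken from the slides:

```python
# Computing Turn Around Time and Waiting Time from the formulas above.
# (name, arrival_time, burst_time, completion_time) -- illustrative values.
processes = [
    ("P1", 0, 4, 4),
    ("P2", 1, 3, 7),
    ("P3", 2, 1, 8),
]

results = {}
for name, at, bt, ct in processes:
    tat = ct - at    # Turn Around Time = Completion Time - Arrival Time
    wt = tat - bt    # Waiting Time = Turn Around Time - Burst Time
    results[name] = (tat, wt)
    print(f"{name}: TAT={tat} WT={wt}")
```

For P2, for example, TAT = 7 − 1 = 6 and WT = 6 − 3 = 3.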


What are the different types of CPU Scheduling Algorithms?
First Come First Serve

• FCFS is considered the simplest of all operating system scheduling algorithms.
• The first come first serve scheduling algorithm states that the process that
requests the CPU first is allocated the CPU first; it is implemented using a
FIFO queue.
Characteristics of FCFS
• FCFS is a non-preemptive CPU scheduling algorithm: once a process gets the CPU, it runs to completion.
• Tasks are always executed on a first-come, first-serve basis.
• FCFS is easy to implement and use.
• This algorithm is not very efficient in performance, and the waiting time is
quite high.
Advantages of FCFS
• Easy to implement
• First come, first serve method
Disadvantages of FCFS
• FCFS suffers from the convoy effect: one long process can hold up many short processes behind it.
• The average waiting time is much higher than in other algorithms.
• Its simplicity comes at the cost of efficiency.
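The FIFO behaviour described above can be sketched as a small simulation. The process set in the demo is illustrative, not from the slides; note how the short job P2 waits behind the longer P1, a mild form of the convoy effect:

```python
def fcfs(procs):
    """Non-preemptive FCFS. procs: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    time, out = 0, {}
    for name, at, bt in sorted(procs, key=lambda p: p[1]):  # FIFO by arrival
        time = max(time, at)   # CPU may sit idle until the process arrives
        time += bt             # run the whole burst without preemption
        out[name] = (time, time - at, time - at - bt)
    return out

# Illustrative process set: (name, arrival, burst)
demo = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])
print(demo)
```

Here P1 runs 0–5, P2 runs 5–8, and P3 runs 8–16, giving waiting times of 0, 4, and 6 ms respectively.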
Shortest Job First(SJF)

• Shortest Job First (SJF) is a scheduling policy that selects the waiting
process with the smallest execution time to execute next.
• This scheduling method may be preemptive or non-preemptive.
• It significantly reduces the average waiting time for other processes waiting
to be executed.
Characteristics of SJF
• Shortest Job First has the advantage of the minimum average waiting
time among operating system scheduling algorithms.
• Each task is associated with the length of time it needs to complete (its burst time).
• It may cause starvation if shorter processes keep arriving. This problem can
be solved using the concept of ageing.
Advantages of Shortest Job first
• As SJF reduces the average waiting time, it is better than the first
come first serve scheduling algorithm.
• SJF is generally used for long-term scheduling.
Disadvantages of SJF
• One demerit of SJF is starvation of long processes.
• It is often difficult to predict the length of the upcoming CPU burst.
Problem

Process   Arrival Time   Burst Time
P1        0 ms           5 ms
P2        1 ms           3 ms
P3        2 ms           3 ms
P4        4 ms           1 ms
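The process set above can be worked through with a small non-preemptive SJF simulation. This is a sketch; breaking ties between equal bursts by earliest arrival is an assumption of the code:

```python
def sjf_nonpreemptive(procs):
    """Non-preemptive SJF. procs: list of (name, arrival, burst).
    Returns ([(name, completion_time), ...], average waiting time)."""
    remaining = list(procs)
    time, schedule, total_wt = 0, [], 0
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                               # CPU idle: jump to next arrival
            time = min(p[1] for p in remaining)
            continue
        p = min(ready, key=lambda q: (q[2], q[1]))  # shortest burst, ties by arrival
        remaining.remove(p)
        name, at, bt = p
        time += bt
        schedule.append((name, time))               # (process, completion time)
        total_wt += time - at - bt                  # WT = CT - AT - BT
    return schedule, total_wt / len(procs)

# The process set from the table above.
sched, avg_wt = sjf_nonpreemptive([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 3), ("P4", 4, 1)])
print(sched)    # execution order: P1, P4, P2, P3
print(avg_wt)   # 3.25
```

P1 runs first because it is alone at t = 0; at t = 5 all others have arrived, so P4 (burst 1) goes next, then P2 and P3. The average waiting time is (0 + 5 + 7 + 1) / 4 = 3.25 ms.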
Round Robin Scheduling

• Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed time slot.
• It is the preemptive version of the First Come First Serve CPU scheduling
algorithm.
• The Round Robin algorithm generally focuses on the time-sharing technique.
• The period of time for which a process or job is allowed to run in a preemptive
method is called the time quantum or time slice.
• Each process or job in the ready queue is assigned the CPU for that time
quantum. If the process completes within the quantum it terminates; otherwise it
goes back to the end of the ready queue and waits for its next turn to complete
its execution.
Characteristics
• It is simple, easy to implement, and starvation-free, as all processes get a fair share
of the CPU.
• It is one of the most commonly used techniques in CPU scheduling.
• It is preemptive, as processes are assigned the CPU only for a fixed slice of time at most.
• Its disadvantage is the higher overhead of context switching.
Advantages

• There is fairness since every process gets an equal share of the CPU.
• The newly created process is added to the end of the ready queue.
• A round-robin scheduler generally employs time-sharing, giving each job a time
slot or quantum.
• While performing a round-robin scheduling, a particular time quantum is allotted
to different jobs.
• Each process gets a chance to be rescheduled after a particular quantum in this
scheduling.
Disadvantages
• Waiting time and response time are larger.
• Throughput is low.
• There is overhead from context switches.
• The Gantt chart becomes very large if the time quantum is small relative to the
schedule (for example, 1 ms for a large schedule).
• Scheduling is time-consuming for a small quantum.
Examples to show the working of the Round Robin Scheduling Algorithm

Time Quantum = 5

[The process table and Gantt chart for this RR example were shown as an image on the original slide.]

Average waiting time?
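The quantum-based rotation described above can be sketched as a simulation. The process set and quantum below are illustrative (not the slide's example, which used a quantum of 5):

```python
from collections import deque

def round_robin(procs, quantum):
    """Round Robin. procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    procs = sorted(procs, key=lambda p: p[1])
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    ready, completion = deque(), {}
    time, i = 0, 0
    while i < len(procs) or ready:
        if not ready:                       # queue empty: jump to the next arrival
            time = max(time, procs[i][1])
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0]); i += 1
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        # processes that arrived during this slice join the queue first,
        # then the preempted process goes to the back
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0]); i += 1
        if remaining[name] > 0:
            ready.append(name)
        else:
            completion[name] = time
    return {n: completion[n] - arrival[n] - burst[n] for n in burst}

# Illustrative example: quantum = 2
wt = round_robin([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)], quantum=2)
print(wt)   # {'P1': 3, 'P2': 4, 'P3': 2}
```

With quantum 2, P1 runs 0–2, P2 runs 2–4, P3 runs 4–5, then P1 and P2 finish their leftover bursts, giving waiting times of 3, 4, and 2 ms.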
Priority CPU Scheduling
• In the Shortest Job First scheduling algorithm, the priority of a process is
generally the inverse of the CPU burst time, i.e. the larger the burst time the lower
is the priority of that process.
• In priority scheduling the priority is not always the inverse of the CPU burst
time; it can be set internally or externally. Scheduling is done on the basis of
the priority of the process: the most urgent process is executed first, followed
by the ones with lower priority in order.
• Processes with same priority are executed in FCFS manner.
• The priority of process, when internally defined, can be decided based on memory
requirements, time limits, number of open files, ratio of I/O burst to CPU
burst etc.
Types of Priority Scheduling Algorithm

• Priority scheduling can be of two types:

1. Preemptive Priority Scheduling: If a new process arriving in the ready queue
has a higher priority than the currently running process, the CPU is preempted:
the current process is stopped and the incoming higher-priority process gets the
CPU for its execution.

2. Non-Preemptive Priority Scheduling: If a new process arrives with a higher
priority than the currently running process, the incoming process is put at the
head of the ready queue; it will be processed after the current process finishes
its execution.
GANTT CHART

[The Gantt chart for this example was shown as an image on the original slide.]

Average Waiting Time?
Comparison Chart

• Basic — Preemptive: resources (CPU cycles) are allocated to a process for a limited time. Non-preemptive: once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
• Interrupt — Preemptive: a process can be interrupted in between. Non-preemptive: a process cannot be interrupted until it terminates itself or its time is up.
• Starvation — Preemptive: if high-priority processes keep arriving in the ready queue, a low-priority process may starve. Non-preemptive: if a process with a long burst time is running on the CPU, later processes with shorter burst times may starve.
• Overhead — Preemptive: higher, due to scheduling the processes and frequent context switching. Non-preemptive: lower, since context switching is less frequent.
• Flexibility — Preemptive: flexible. Non-preemptive: rigid.
• Cost — Preemptive: has an associated cost. Non-preemptive: no associated cost.
• CPU Utilization — Preemptive: high. Non-preemptive: low.
• Waiting Time — Preemptive: less. Non-preemptive: high.
• Response Time — Preemptive: less. Non-preemptive: high.
• Decision Making — Preemptive: decisions are made by the scheduler, based on priority and time-slice allocation. Non-preemptive: decisions are made by the process itself; the OS just follows the process's instructions.
• Process Control — Preemptive: the OS has greater control over the scheduling of processes. Non-preemptive: the OS has less control.
• Examples — Preemptive: Round Robin, Shortest Remaining Time First. Non-preemptive: First Come First Serve, non-preemptive Shortest Job First.
• Arrival time (AT) − Arrival time is the time at which the process arrives in ready
queue.
• Burst time (BT) or CPU time of the process − Burst time is the amount of CPU
time a particular process requires to complete its execution.
• Completion time (CT) − Completion time is the time at which the process has been
terminated.
• Turn-around time (TAT) − The total time from arrival time to completion time is
known as turn-around time. TAT can be written as,
Turn-around time (TAT) = Completion time (CT) – Arrival time (AT) or,
TAT = Burst time (BT) + Waiting time (WT)
• Waiting time (WT) − Waiting time is the total time a process spends waiting
in the ready queue while other processes run on the CPU. WT is written as,
Waiting time (WT) = Turn-around time (TAT) – Burst time (BT)
FCFS
Shortest Job First (SJF) - Non-Preemptive
Pre-emptive SJF

Shortest Job First (SJF): Preemptive, Non-Preemptive Example (guru99.com)

Round Robin Scheduling Algorithm

Quantum Time 4
Preemptive Priority CPU Scheduling Algorithm
How does Preemptive Priority CPU Scheduling
Algorithm work?
• Step-1: Select the first process, whose arrival time is 0; it is the only
process executing at time t = 0.
• Step-2: Check the priority of the next available process. Three conditions arise:
• if priority(current_process) > priority(prior_process): execute the current process.
• if priority(current_process) < priority(prior_process): execute the prior process.
• if priority(current_process) = priority(prior_process): execute the process that
arrived first, i.e. the one with the earlier arrival time.
• Step-3: Repeat Step-2 until the final process arrives.
• Step-4: After the final process arrives, choose the process with the
highest priority and execute it. Repeat the same step until all processes
complete their execution.
Preemptive Priority CPU Scheduling Algorithm
Turn Around Time (T.A.T) = (Completion Time) – (Arrival Time)
Waiting Time (W.T) = (Turn Around Time) – (Burst Time)
Response Time (R.T) = (Time of first CPU allocation) – (Arrival Time)
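The steps above can be sketched as a 1-ms-step simulation. This is a sketch under two assumptions not stated on the slides: a lower priority number means higher priority, and ties fall back to arrival order; the process set is illustrative:

```python
def preemptive_priority(procs):
    """Preemptive priority scheduling in 1-ms steps.
    procs: list of (name, arrival, burst, priority); lower number = higher priority.
    Returns (completion times, response times)."""
    info = {n: (a, pr) for n, a, b, pr in procs}
    remaining = {n: b for n, a, b, pr in procs}
    time, completion, first_run = 0, {}, {}
    while remaining:
        ready = [n for n in remaining if info[n][0] <= time]
        if not ready:                       # nothing has arrived yet: idle
            time += 1
            continue
        # highest priority wins; equal priorities fall back to arrival order
        name = min(ready, key=lambda n: (info[n][1], info[n][0]))
        first_run.setdefault(name, time)    # first time this process gets the CPU
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            completion[name] = time
    # Response Time = time of first CPU allocation - arrival time
    response = {n: first_run[n] - info[n][0] for n in info}
    return completion, response

# Illustrative set: (name, arrival, burst, priority)
ct, rt = preemptive_priority([("P1", 0, 4, 2), ("P2", 1, 2, 1), ("P3", 2, 3, 3)])
print(ct)   # completion times
print(rt)   # response times
```

Here P2 preempts P1 at t = 1 because of its higher priority, finishes at t = 3, then P1 resumes and P3 runs last; P3's response time is 4 ms because it waits from t = 2 to t = 6 for its first slice.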
Multilevel Queue (MLQ) CPU Scheduling

• It may happen that processes in the ready queue can be divided into different classes
where each class has its own scheduling needs.
• For example, a common division is a foreground (interactive) process and
a background (batch) process. These two classes have different scheduling needs.
For this kind of situation, Multilevel Queue Scheduling is used.
Features of Multilevel Queue (MLQ)
Multiple queues
Multiple queues are maintained for processes with common
characteristics
Priorities assigned
Priorities are assigned to processes based on their type,
characteristics, and importance.
For example, interactive processes like user input/output may
have a higher priority than batch processes like file backups.
Pre-emption
Preemption is allowed in MLQ scheduling, which means a higher
priority process can preempt a lower priority process, and the CPU
is allocated to the higher priority process.
Scheduling algorithm
Different scheduling algorithms can be used for each queue, depending
on the requirements of the processes in that queue.
• For example, Round Robin scheduling may be used for interactive
processes, while First Come First Serve scheduling may be used for batch
processes.
Feedback mechanism
A feedback mechanism can be implemented to adjust the priority of a
process based on its behavior over time.
For example, if an interactive process has been waiting in a lower-priority
queue for a long time, its priority may be increased to ensure it is
executed in a timely manner.
Efficient allocation of CPU time
MLQ scheduling ensures that processes with higher priority levels are
executed in a timely manner, while still allowing lower priority processes
to execute when the CPU is idle.
Fairness
MLQ scheduling provides a fair allocation of CPU time to different
types of processes, based on their priority and requirements.
Customizable
MLQ scheduling can be customized to meet the specific
requirements of different types of processes.
• Ready Queue is divided into separate queues for each class of
processes.
• For example, let us take three different types of processes:
System processes, Interactive processes, and Batch processes.
Each of the three classes has its own queue.
Multilevel Feedback Queue Scheduling (MLFQ)
CPU Scheduling
• Multilevel Feedback Queue (MLFQ) CPU scheduling is like
Multilevel Queue (MLQ) scheduling, but here processes
can move between the queues.

Characteristics of Multilevel Feedback Queue Scheduling:

• In a plain multilevel queue scheduling algorithm, processes are
permanently assigned to a queue on entry to the system and are not
allowed to move between queues.
• This permanent assignment has the advantage of low scheduling
overhead, but it is inflexible; MLFQ relaxes it by letting processes
move between queues.
Features of Multilevel Feedback Queue
Scheduling (MLFQ) CPU Scheduling
• Multiple queues
• Priorities adjusted dynamically
• Time-slicing
• Feedback mechanism
• Pre-emption
Advantages of Multilevel Feedback Queue
Scheduling
• It is more flexible.
• It allows different processes to move between different queues.
• It prevents starvation by moving a process that waits too long for the lower
priority queue to the higher priority queue.
Disadvantages of Multilevel Feedback Queue
Scheduling
• Selecting the best scheduler requires some other means of choosing the
queue parameters (quanta, priorities).
• It produces more CPU overhead.
• It is the most complex scheduling algorithm.

• Multilevel Feedback Queue Scheduling (MLFQ) keeps analyzing
the behavior (execution time) of processes and changes
their priority accordingly.
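The behavior-driven movement between queues can be sketched with a toy two-level model. This is a sketch, not the full MLFQ algorithm: all jobs arrive at time 0, and the only feedback rule is demotion after a job uses its whole quantum:

```python
from collections import deque

def mlfq(jobs, quantum=2):
    """Toy two-level MLFQ, all jobs arriving at time 0.
    Queue 0 is high priority (Round Robin with a fixed quantum); a job that
    uses its whole quantum is demoted to queue 1, which runs FCFS to completion.
    jobs: list of (name, burst). Returns the run order as (name, level, time_run)."""
    q0, q1, order = deque(jobs), deque(), []
    while q0 or q1:                          # queue 0 always runs before queue 1
        if q0:
            name, burst = q0.popleft()
            run = min(quantum, burst)
            order.append((name, 0, run))
            if burst > run:
                q1.append((name, burst - run))   # demotion: behaved like a CPU-bound job
        else:
            name, burst = q1.popleft()
            order.append((name, 1, burst))       # low queue runs to completion
    return order

# "A" is a short interactive-style job; "B" is CPU-bound and gets demoted.
print(mlfq([("A", 1), ("B", 5)]))
```

Job A finishes inside its first quantum and never leaves the high-priority queue, while B exhausts its quantum and finishes its remaining 3 units in the low-priority queue.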
• System Process
The OS has its process to execute,
which is referred to as the System
Process.
• Interactive Process
It is a process that interacts with the user, for
example one that frequently waits for user input/output.
• Batch Process
Batch processing is an operating
system feature that collects programs
and data into a batch before
processing starts.
• Student Process
User (student) programs fall in this class. The
system process is always given the highest priority,
whereas the student processes are always given
the lowest.
Multiple-Processor Scheduling in Operating System

• In multiple-processor scheduling, multiple CPUs are available and hence load
sharing becomes possible.
• It is more complex than single-processor scheduling.
• It enables a computer system to perform multiple tasks simultaneously, which can
greatly improve overall system performance and efficiency.

How does multiple-processor scheduling work?

It works by dividing tasks among the processors in a computer system, which
allows tasks to be processed simultaneously and reduces the overall time needed to
complete them.
Approaches:
1. Asymmetric Multiprocessing: a single master server processor handles all scheduling decisions; the other processors execute only user code.
2. Symmetric Multiprocessing (SMP): each processor is self-scheduling.
Processor Affinity
• A process has an affinity for the processor on which it is currently running.
• When a process runs on a specific processor, there are certain effects on the cache
memory: the data most recently accessed by the process populates that processor's
cache, and as a result successive memory accesses by the process are often
satisfied from cache memory.

1. Soft Affinity: the operating system has a policy of attempting to keep a
process running on the same processor, but does not guarantee that it will do so.

2. Hard Affinity: the process can specify a subset of processors on
which it may run. Some systems such as Linux implement soft affinity but also
provide system calls like sched_setaffinity() that support hard affinity.
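On Linux, Python exposes the sched_setaffinity() call mentioned above through `os.sched_getaffinity` and `os.sched_setaffinity`. A minimal sketch of pinning the current process to one CPU (the functions are Linux-only, hence the guard; pinning may fail in restricted environments):

```python
import os

# Linux-only sketch of hard affinity. os.sched_getaffinity/os.sched_setaffinity
# do not exist on macOS or Windows, so guard with hasattr.
if hasattr(os, "sched_setaffinity"):
    before = os.sched_getaffinity(0)        # 0 means "the calling process"
    print("eligible CPUs:", sorted(before))
    target = min(before)                    # pick one CPU from the allowed set
    os.sched_setaffinity(0, {target})       # hard affinity: pin to that CPU only
    print("after pinning:", sorted(os.sched_getaffinity(0)))
    os.sched_setaffinity(0, before)         # restore the original mask
else:
    print("sched_setaffinity is not available on this platform")
```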
Load Balancing
Load balancing is the technique of keeping the workload evenly distributed
across all processors in an SMP system.
Approaches
1. Push Migration: a specific task routinely checks the load on each
processor. If it finds an imbalance, it evenly distributes the load by moving
processes from overloaded processors to idle or less busy ones.
2. Pull Migration: an idle processor pulls a waiting task from a busy
processor for its execution.
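Push migration can be sketched with a toy balancer over per-CPU run queues. This is a sketch only: it balances by queue length, ignores processor affinity, and uses made-up task names:

```python
def push_migrate(run_queues):
    """Toy push migration: repeatedly move a task from the most loaded run
    queue to the least loaded one until loads differ by at most one task.
    run_queues: {cpu_id: [task, ...]}, mutated in place. Returns the moves made."""
    moves = []
    while True:
        busiest = max(run_queues, key=lambda c: len(run_queues[c]))
        idlest = min(run_queues, key=lambda c: len(run_queues[c]))
        if len(run_queues[busiest]) - len(run_queues[idlest]) <= 1:
            break                                # balanced within one task
        task = run_queues[busiest].pop()         # note: a real balancer would
        run_queues[idlest].append(task)          # also weigh processor affinity
        moves.append((task, busiest, idlest))
    return moves

queues = {0: ["a", "b", "c", "d"], 1: []}
print(push_migrate(queues))   # tasks moved from CPU 0 to CPU 1
print(queues)                 # balanced run queues
```

Pull migration is the mirror image: instead of a periodic balancer pushing work out, an idle CPU would take a task from the busiest queue itself.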
