
CPU Scheduling

The document outlines CPU scheduling objectives, algorithms, and criteria, emphasizing the importance of CPU and I/O burst cycles. It details various scheduling types, including preemptive and non-preemptive scheduling, and describes the roles of different process schedulers in operating systems. Additionally, it explains key scheduling metrics such as turnaround time, waiting time, and response time, along with examples of common scheduling algorithms like FCFS and SJF.


CPU Scheduling

Objectives

▪ Describe various CPU scheduling algorithms


▪ Assess CPU scheduling algorithms based on scheduling criteria
▪ Explain the issues related to multiprocessor and multicore
scheduling
▪ Describe various real-time scheduling algorithms
▪ Describe the scheduling algorithms used in the Windows, Linux, and
Solaris operating systems
▪ Apply modeling and simulations to evaluate CPU scheduling
algorithms
CPU Burst:
▪ This is the period when a process is actively using the CPU to execute
instructions.
I/O Burst:
▪ This is the period when a process is waiting for I/O operations, such
as reading from or writing to a disk or network, to finish.
Basic Concepts

▪ Maximum CPU utilization obtained with


multiprogramming
▪ CPU–I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and I/O
wait
▪ CPU burst followed by I/O burst
▪ CPU burst distribution is of main concern
CPU Scheduler
▪ The CPU scheduler selects from among the processes in ready
queue, and allocates a CPU core to one of them
• Queue may be ordered in various ways
▪ CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
▪ For situations 1 and 4, there is no choice in terms of scheduling.
A new process (if one exists in the ready queue) must be selected
for execution.
▪ For situations 2 and 3, however, there is a choice.
Preemptive and Nonpreemptive Scheduling

▪ When scheduling takes place only under circumstances 1 and 4,


the scheduling scheme is nonpreemptive.
▪ Otherwise, it is preemptive.
▪ Under Nonpreemptive scheduling, once the CPU has been
allocated to a process, the process keeps the CPU until it releases
it either by terminating or by switching to the waiting state.
▪ Virtually all modern operating systems, including Windows, macOS,
Linux, and UNIX, use preemptive scheduling algorithms.
Dispatcher
▪ Dispatcher module gives control of the
CPU to the process selected by the CPU
scheduler; this involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in the
user program to restart that program
▪ Dispatch latency – time it takes for the
dispatcher to stop one process and start
another running
Process Schedulers in Operating System

▪ A process is the instance of a computer program in execution.


▪ Scheduling is important in operating systems with multiprogramming
as multiple processes might be eligible for running at a time.
▪ One of the key responsibilities of an Operating System (OS) is to
decide which programs will execute on the CPU.
▪ Process Schedulers are fundamental components of operating
systems responsible for deciding the order in which processes are
executed by the CPU. In simpler terms, they manage how the CPU
allocates its time among multiple tasks or processes that are
competing for its attention.
Types of Process Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler
▪ Long Term Scheduler loads a process from disk to main memory for
execution and brings the new process to the ‘Ready State’.
▪ It mainly moves processes from Job Queue to Ready Queue.
▪ It controls the Degree of Multi-programming, i.e., the number of
processes present in a ready state or in main memory at any point in
time.
▪ It is important that the long-term scheduler makes a careful selection of
both I/O-bound and CPU-bound processes. I/O-bound tasks spend much of
their time on input and output operations, while CPU-bound processes
spend their time computing on the CPU. The job scheduler increases
efficiency by maintaining a balance between the two.
▪ In some systems, the long-term scheduler might not even exist. For
example, in time-sharing systems like Microsoft Windows, there is
usually no long-term scheduler. Instead, every new process is directly
added to memory for the short-term scheduler to handle.
▪ Slowest among the three (that is why called long term).
2. Short-Term or CPU Scheduler

▪ CPU Scheduler is responsible for selecting one process from the


ready state for running (or assigning CPU to it).
▪ STS (Short Term Scheduler) must select a new process for the CPU
frequently to avoid starvation.
▪ The CPU scheduler uses different scheduling algorithms to balance
the allocation of CPU time.
▪ It picks a process from ready queue.
▪ Its main objective is to make the best use of CPU.
▪ It mainly calls dispatcher.
▪ Fastest among the three (that is why called Short Term).
▪ The dispatcher is responsible for loading the process selected by the
Short-term scheduler on the CPU (Ready to Running State). Context
switching is done by the dispatcher only. A dispatcher does the
following work:
▪ Saving context (process control block) of previously running process if
not finished.
▪ Switching system mode to user mode.
▪ Jumping to the proper location in the newly loaded program.
▪ Time taken by dispatcher is called dispatch latency or process context
switch time.
Categories of Scheduling

Scheduling falls into one of two categories:


▪ Non-Preemptive: In this case, the CPU cannot be taken away from a
process before the process has finished running. The CPU is switched to
another process only when the running process terminates or transitions
to the waiting state.
▪ Preemptive: In this case, the OS can switch a process from running
state to ready state. This switching happens because the CPU may
give other processes priority and substitute the currently active
process for the higher priority process.
Parameter-by-parameter comparison of preemptive and non-preemptive scheduling:

▪ Basic: Preemptive – resources (CPU cycles) are allocated to a process for a
limited time. Non-preemptive – once resources (CPU cycles) are allocated to a
process, the process holds them until it completes its burst time or switches
to the waiting state.
▪ Interrupt: Preemptive – a process can be interrupted in between.
Non-preemptive – a process cannot be interrupted until it terminates itself or
its time is up.
▪ Starvation: Preemptive – if high-priority processes frequently arrive in the
ready queue, a low-priority process may starve. Non-preemptive – if a process
with a long burst time is running, a later process with a smaller CPU burst
time may starve.
▪ Overhead: Preemptive – higher overhead, due to frequent context switching.
Non-preemptive – lower overhead, since context switching is less frequent.
▪ Flexibility: Preemptive – flexible. Non-preemptive – rigid.
▪ Cost: Preemptive – has an associated cost. Non-preemptive – no cost
associated.
▪ Response time: Preemptive – response time is lower. Non-preemptive –
response time is higher.
▪ Decision making: Preemptive – decisions are made by the scheduler, based on
priority and time-slice allocation. Non-preemptive – decisions are made by the
process itself and the OS just follows the process’s instructions.
▪ Process control: Preemptive – the OS has greater control over the scheduling
of processes. Non-preemptive – the OS has less control.
▪ Concurrency overhead: Preemptive – more, as a process might be preempted
while accessing a shared resource. Non-preemptive – less, as a process is never
preempted.
▪ Examples: Preemptive – Round Robin and Shortest Remaining Time First.
Non-preemptive – First Come First Serve and Shortest Job First.
Terminologies Used in CPU Scheduling

▪ Arrival Time: The time at which the process arrives in the ready queue.
▪ Completion Time: The time at which the process completes its
execution.
▪ Burst Time: Time required by a process for CPU execution.
▪ Turn Around Time: Time Difference between completion time and
arrival time.
▪ Turn Around Time = Completion Time – Arrival Time
▪ Waiting Time(W.T): Time Difference between turn around time and
burst time.
▪ Waiting Time = Turn Around Time – Burst Time
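The two formulas above can be checked with a short Python sketch (the process values used here are made up purely for illustration):

```python
def metrics(arrival, burst, completion):
    """Return (turnaround, waiting) for one process."""
    turnaround = completion - arrival   # TAT = Completion Time - Arrival Time
    waiting = turnaround - burst        # WT  = Turn Around Time - Burst Time
    return turnaround, waiting

# Hypothetical process: arrives at 2 ms, needs 5 ms of CPU, finishes at 10 ms
tat, wt = metrics(arrival=2, burst=5, completion=10)
print(tat, wt)  # → 8 3
```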
Things to Take Care While Designing a CPU Scheduling Algorithm
▪ Different CPU Scheduling algorithms have different structures and
the choice of a particular algorithm depends on a variety of factors.
▪ CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep
the CPU as busy as possible. Theoretically, CPU utilization can range from
0 to 100 percent, but in a real system it typically varies from 40 to 90
percent depending on the system load.
▪ Throughput: Throughput is the number of processes completed per unit of
time. It may vary depending on the length or duration of the processes.
▪ Turnaround Time: For a particular process, an important criterion is how long
it takes to execute that process. The time elapsed from the submission of a
process to its completion is known as the turnaround time. Turnaround time
includes the time spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and waiting for I/O.
▪ Waiting Time: The scheduling algorithm does not affect the time required
to complete a process once it has started executing. It only affects the
waiting time of the process, i.e. the time the process spends waiting in the
ready queue.
▪ Response Time: In an interactive system, turnaround time is not the best
criterion. A process may produce some output early and continue computing
new results while previous results are shown to the user. Another measure,
therefore, is the time from the submission of a request until the first
response is produced. This measure is called response time.
Scheduling Criteria

▪ CPU utilization – keep the CPU as busy as possible


▪ Throughput – # of processes that complete their execution per time unit
▪ Turnaround time – amount of time to execute a particular process
▪ Waiting time – amount of time a process has been waiting in the ready
queue
▪ Response time – amount of time it takes from when a request was
submitted until the first response is produced.
Response time
Response time is the time a process spends in the ready
state before it gets the CPU for the first time. For example,
consider three processes scheduled with the First Come First
Serve CPU scheduling algorithm, where P1 (arrival 0 ms) has a
burst of 8 ms and P2 (arrival 1 ms) a burst of 7 ms, followed
by P3 (arrival 2 ms).
Here, the response times of the 3 processes are:
P1: 0 ms.
P2: 7 ms, because P2 has to wait 8 ms during the execution of
P1 before it gets the CPU for the first time. Since the arrival
time of P2 is 1 ms, its response time is 8 - 1 = 7 ms.
P3: 13 ms, because P3 has to wait for the execution of P1 and
P2, i.e. 8 + 7 = 15 ms, before the CPU is allocated to it for
the first time. Since the arrival time of P3 is 2 ms, its
response time is 15 - 2 = 13 ms.
Response time = Time at which the process first gets the CPU - Arrival time
Waiting time

▪ Waiting time is the total time a process spends in the ready state
waiting for the CPU. For example, consider the arrival times of the three
processes above to be 0 ms, 0 ms, and 2 ms, using the First Come
First Serve scheduling algorithm.
Then the waiting times of the 3 processes are:
P1: 0 ms.
P2: 8 ms, because P2 has to wait for the complete execution of P1,
and the arrival time of P2 is 0 ms.
P3: 13 ms, because P3 is executed after P1 and P2, i.e. after
8 + 7 = 15 ms, and the arrival time of P3 is 2 ms. So the waiting
time of P3 is 15 - 2 = 13 ms.
Waiting time = Turnaround time - Burst time
▪ There is a difference between waiting time and response time. Response time is the time
spent between the ready state and getting the CPU for the first time. But the waiting time
is the total time taken by the process in the ready state. Let's take an example of a
round-robin scheduling algorithm. The time quantum is 2 ms.
▪ In the above example, the response time of process P2 is 2 ms, because after 2 ms
the CPU is allocated to P2; the waiting time of process P2 is 4 ms, i.e. turnaround
time - burst time (10 - 6 = 4 ms).
Turnaround time
▪ Turnaround time is the total amount of time spent by the process from coming in the ready
state for the first time to its completion.
▪ Turnaround time = Burst time + Waiting time
▪ or
▪ Turnaround time = Exit time - Arrival time
CPU Scheduling Algorithms

▪ FCFS – First Come, First Serve


▪ SJF – Shortest Job First
▪ SRTF – Shortest Remaining Time First
▪ Round Robin
▪ Priority Scheduling
FCFS – First Come First Serve CPU
Scheduling

▪ First Come, First Serve (FCFS) is one of the simplest types of CPU
scheduling algorithms. It is exactly what it sounds like: processes are
attended to in the order in which they arrive in the ready queue, much like
customers lining up at a grocery store.
▪ FCFS Scheduling is a non-preemptive algorithm, meaning once a process
starts running, it cannot be stopped until it voluntarily relinquishes the
CPU, typically when it terminates or performs I/O. This method schedules
processes in the order they arrive, without considering priority or other
factors.
How Does FCFS Work?
▪ The mechanics of FCFS are straightforward:
▪ Arrival: Processes enter the system and are placed in a queue in the
order they arrive.
▪ Execution: The CPU takes the first process from the front of the
queue, executes it until it is complete, and then removes it from the
queue.
▪ Repeat: The CPU takes the next process in the queue and repeats the
execution process.
Scenario 1: Processes with Same Arrival Time
▪ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3
Process   Arrival Time   Burst Time
P1        0              5
P2        0              3
P3        0              8

Step-by-Step Execution:
1.P1 will start first and run for 5 units of time (from 0 to 5).
2.P2 will start next and run for 3 units of time (from 5 to 8).
3.P3 will run last, executing for 8 units (from 8 to 16).
▪ Turnaround Time = Completion Time - Arrival Time
▪ Waiting Time = Turnaround Time - Burst Time

Processes   AT   BT   CT   TAT         WT
P1          0    5    5    5-0 = 5     5-5 = 0
P2          0    3    8    8-0 = 8     8-3 = 5
P3          0    8    16   16-0 = 16   16-8 = 8

Average Turn around time = 9.67


Average waiting time = 4.33
Scenario 2: Processes with Different Arrival Times
▪ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3

Burst Time Arrival Time
Process
(BT) (AT)

P1 5 ms 2 ms

P2 3 ms 0 ms

P3 4 ms 4 ms

Step-by-Step Execution:
1.P2 arrives at time 0 and runs for 3 units, so its completion time is:
Completion Time of P2=0+3=3
2. P1 arrives at time 2 but has to wait for P2 to finish. P1 starts at time 3 and runs
for 5 units. Its completion time is:
Completion Time of P1=3+5=8
3. P3 arrives at time 4 but has to wait for P1 to finish. P3 starts at time 8 and runs
for 4 units. Its completion time is:
Completion Time of P3=8+4=12
Process   CT       TAT (= CT – AT)   WT (= TAT – BT)
P2        3 ms     3 ms              0 ms
P1        8 ms     6 ms              1 ms
P3        12 ms    8 ms              4 ms

Average Turnaround time = 5.67


Average waiting time = 1.67
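The FCFS computation above can be reproduced with a minimal Python sketch (a simplified simulation, assuming processes are served strictly in arrival order and the CPU idles until the next arrival if needed):

```python
def fcfs(procs):
    """procs: list of (name, arrival, burst). Returns {name: (CT, TAT, WT)}."""
    result, clock = {}, 0
    # FCFS serves processes strictly in order of arrival time
    for name, at, bt in sorted(procs, key=lambda p: p[1]):
        clock = max(clock, at) + bt   # wait for arrival if idle, then run to completion
        result[name] = (clock, clock - at, clock - at - bt)  # CT, TAT, WT
    return result

# Scenario 2 from the text: P1(BT=5, AT=2), P2(BT=3, AT=0), P3(BT=4, AT=4)
r = fcfs([("P1", 2, 5), ("P2", 0, 3), ("P3", 4, 4)])
print(r)  # P2 completes at 3, P1 at 8, P3 at 12 — matching the table above
```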
Advantages of FCFS

▪ The simplest and basic form of CPU Scheduling algorithm


▪ Every process gets a chance to execute in the order of its arrival. This
ensures that no process is arbitrarily prioritized over another.
▪ Easy to implement, it doesn’t require complex data structures.
▪ Since processes are executed in the order they arrive, there’s no risk
of starvation.
▪ It is well suited for batch systems where the longer time periods for
each process are often acceptable.
Disadvantages of FCFS
▪ As it is a Non-preemptive CPU Scheduling Algorithm, FCFS can result
in long waiting times, especially if a long process arrives before a
shorter one. This is known as the convoy effect, where shorter
processes are forced to wait behind longer processes, leading to
inefficient execution.
▪ The average waiting time in the FCFS is much higher than in the
others
▪ Since FCFS processes tasks in the order they arrive, short jobs may
have to wait a long time if they arrive after longer tasks, which leads to
poor performance in systems with a mix of long and short tasks.
▪ Processes that are at the end of the queue, have to wait longer to
finish.
▪ It is not suitable for time-sharing operating systems where each
process should get the same amount of CPU time.
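The convoy effect described above can also be seen numerically. In this Python sketch (burst times are illustrative, not from the slides), the same three jobs give very different average waits depending on whether the long job arrives first or last:

```python
def fcfs_waits(bursts):
    """Average waiting time under FCFS for jobs that all arrive at time 0,
    served in list order."""
    waits, clock = [], 0
    for bt in bursts:
        waits.append(clock)   # each job waits for everything queued before it
        clock += bt
    return sum(waits) / len(waits)

print(fcfs_waits([24, 3, 3]))  # long job first: avg wait (0+24+27)/3 = 17.0
print(fcfs_waits([3, 3, 24]))  # short jobs first: avg wait (0+3+6)/3  = 3.0
```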
Shortest Job First (SJF) or Shortest Job
Next (SJN)
▪ Shortest Job First (SJF) or Shortest Job Next (SJN) is a scheduling
policy that selects the waiting process with the smallest execution time to
execute next. This scheduling method may or may not be preemptive. It
significantly reduces the average waiting time for the other processes
waiting to be executed.
▪ Example of Non Pre-emptive Shortest Job First CPU Scheduling
Algorithm
Process Burst Time Arrival Time
P1 6 ms 0 ms

P2 8 ms 2 ms

P3 3 ms 4 ms

Step-by-Step Execution:

Time 0-6 (P1): P1 runs for 6 ms (total time left: 0 ms)


Time 6-9 (P3): P3 runs for 3 ms (total time left: 0 ms)
Time 9-17 (P2): P2 runs for 8 ms (total time left: 0 ms)
Process   AT   BT   CT   TAT          WT
P1        0    6    6    6-0 = 6      6-6 = 0
P2        2    8    17   17-2 = 15    15-8 = 7
P3        4    3    9    9-4 = 5      5-3 = 2

Average Turn around time = (6 + 15 + 5)/3 = 26/3 ≈ 8.67 ms

Average waiting time = (0 + 7 + 2)/3 = 9/3 = 3 ms
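The non-preemptive SJF run above can be sketched in Python (a minimal simulation: at each decision point the shortest burst among the processes that have already arrived is run to completion):

```python
def sjf_nonpreemptive(procs):
    """Non-preemptive SJF. procs: list of (name, arrival, burst).
    Returns {name: completion_time}."""
    remaining = list(procs)
    done, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                          # CPU idle: jump to the next arrival
            clock = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst among the ready
        clock += job[2]                        # run it to completion
        done[job[0]] = clock
        remaining.remove(job)
    return done

# Example from the text: P1(6 ms @ 0), P2(8 ms @ 2), P3(3 ms @ 4)
print(sjf_nonpreemptive([("P1", 0, 6), ("P2", 2, 8), ("P3", 4, 3)]))
# → {'P1': 6, 'P3': 9, 'P2': 17}
```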
Advantages of SJF Scheduling

▪ SJF is better than the First come first serve(FCFS) algorithm as it


reduces the average waiting time.
▪ It is suitable for the jobs running in batches, where run times are
already known.
▪ SJF is probably optimal in terms of average Turn Around Time (TAT).
Disadvantages of SJF Scheduling

▪ SJF may cause very long turn-around times or starvation.


▪ In SJF job completion time must be known earlier.
▪ Many times it becomes complicated to predict the length of the
upcoming CPU request.
Scenario 1: Processes with Same Arrival Time

Process Burst Time Arrival Time


P1 6 ms 0 ms

P2 8 ms 0 ms

P3 5 ms 0 ms

Step-by-Step Execution:

Time 0-5 (P3): P3 runs for 5 ms (total time left: 0 ms) as it has shortest
remaining time left.
Time 5-11 (P1): P1 runs for 6 ms (total time left: 0 ms) as it has shortest
remaining time left.
Time 11-19 (P2): P2 runs for 8 ms (total time left: 0 ms) as it has
shortest remaining time left.
Process   AT   BT   CT   TAT          WT
P1        0    6    11   11-0 = 11    11-6 = 5
P2        0    8    19   19-0 = 19    19-8 = 11
P3        0    5    5    5-0 = 5      5-5 = 0

Average Turn around time = (11 + 19 + 5)/3 = 35/3 ≈ 11.67 ms

Average waiting time = (5 + 11 + 0)/3 = 16/3 ≈ 5.33 ms
Scenario 2: Processes with Different Arrival Times
▪ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3.
Process Burst Time Arrival Time
P1 6 ms 0 ms

P2 3 ms 1 ms

P3 7 ms 2 ms

Step-by-Step Execution:

Time 0-1 (P1): P1 runs for 1 ms (total time left: 5 ms) as it has shortest
remaining time left.
Time 1-4 (P2): P2 runs for 3 ms (total time left: 0 ms) as it has shortest
remaining time left among P1 and P2.
Time 4-9 (P1): P1 runs for 5 ms (total time left: 0 ms) as it has shortest
remaining time left among P1 and P3.
Time 9-16 (P3): P3 runs for 7 ms (total time left: 0 ms) as it has shortest
remaining time left.
Process   AT   BT   CT   TAT          WT
P1        0    6    9    9-0 = 9      9-6 = 3
P2        1    3    4    4-1 = 3      3-3 = 0
P3        2    7    16   16-2 = 14    14-7 = 7

•Average Turn around time = (9 + 3 + 14)/3 = 26/3 ≈ 8.67 ms

•Average waiting time = (3 + 0 + 7)/3 = 10/3 ≈ 3.33 ms
Advantages of SRTF Scheduling

▪ Minimizes Average Waiting Time: SRTF reduces the average waiting


time by prioritizing processes with the shortest remaining execution
time.
▪ Efficient for Short Processes: Shorter processes get completed
faster, improving overall system responsiveness.
▪ Ideal for Time-Critical Systems: It ensures that time-sensitive
processes are executed quickly.
Disadvantages of SRTF Scheduling

▪ Starvation of Long Processes: Longer processes may be delayed


indefinitely if shorter processes keep arriving.
▪ Difficult to Predict Burst Times: Accurate prediction of process burst
times is challenging and affects scheduling decisions.
▪ High Overhead: Frequent context switching can increase overhead
and slow down system performance.
▪ Not Suitable for Real-Time Systems: Real-time tasks may suffer
delays due to frequent preemptions.
Shortest Remaining Time First Scheduling

Preemptive version of SJN


▪ Whenever a new process arrives in the ready queue, the decision on
which process to schedule next is redone using the SJN algorithm.
▪ Is SRT more “optimal” than SJN in terms of the minimum average waiting
time for a given set of processes?
Example of Shortest-remaining-time-first

▪ Now we add the concepts of varying arrival times and preemption to


the analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
▪ Preemptive SJF Gantt Chart

▪ Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5
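The preemptive run above can be sketched as a tick-by-tick Python simulation (a simplification: the scheduler re-evaluates the shortest remaining time every millisecond rather than only on arrivals, which yields the same schedule here):

```python
def srtf(procs):
    """Preemptive SJF (SRTF), simulated in 1-ms ticks.
    procs: list of (name, arrival, burst). Returns {name: completion_time}."""
    rem = {name: bt for name, _, bt in procs}
    arrival = {name: at for name, at, _ in procs}
    done, clock = {}, 0
    while rem:
        ready = [n for n in rem if arrival[n] <= clock]
        if not ready:                          # nothing has arrived yet
            clock += 1
            continue
        n = min(ready, key=lambda x: rem[x])   # shortest remaining time wins
        rem[n] -= 1                            # run it for one tick
        clock += 1
        if rem[n] == 0:
            done[n] = clock
            del rem[n]
    return done

# Example from the text: P1(0,8), P2(1,4), P3(2,9), P4(3,5)
ct = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
at = {"P1": 0, "P2": 1, "P3": 2, "P4": 3}
bt = {"P1": 8, "P2": 4, "P3": 9, "P4": 5}
print(sum(ct[n] - at[n] - bt[n] for n in ct) / len(ct))  # → 6.5, matching the slide
```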


Round Robin Scheduling

▪ Round Robin Scheduling is a method used by operating systems to


manage the execution time of multiple processes that are competing
for CPU attention. It is called "round robin" because the system
rotates through all the processes, allocating each of them a fixed time
slice or "quantum", regardless of their priority.
▪ The primary goal of this scheduling method is to ensure that all
processes are given an equal opportunity to execute, promoting
fairness among tasks.
▪ Process Arrival: Processes enter the system and are placed in a
queue.
▪ Time Allocation: Each process is given a certain amount of CPU time,
called a quantum.
▪ Execution: The process uses the CPU for the allocated time.
▪ Rotation: If the process completes within the time, it leaves the
system. If not, it goes back to the end of the queue.
▪ Repeat: The CPU continues to cycle through the queue until all
processes are completed.
Advantages of Round Robin Scheduling
▪ Fairness: Each process gets an equal share of the CPU.
▪ Simplicity: The algorithm is straightforward and easy to implement.
▪ Responsiveness: Round Robin can handle multiple processes
without significant delays, making it ideal for time-sharing systems.

Disadvantages of Round Robin Scheduling:


▪ Overhead: Switching between processes can lead to high overhead,
especially if the quantum is too small.
▪ Underutilization: If the quantum is too large, it can cause the CPU to
feel unresponsive as it waits for a process to finish its time.
Scenario 1: Processes with Same Arrival Time
▪ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 and given Time Quantum = 2 ms

Process Burst Time Arrival Time


P1 4 ms 0 ms

P2 5 ms 0 ms

P3 3 ms 0 ms

Step-by-Step Execution:
1.Time 0-2 (P1): P1 runs for 2 ms (total time left: 2 ms).
2.Time 2-4 (P2): P2 runs for 2 ms (total time left: 3 ms).
3.Time 4-6 (P3): P3 runs for 2 ms (total time left: 1 ms).
4.Time 6-8 (P1): P1 finishes its last 2 ms.
5.Time 8-10 (P2): P2 runs for another 2 ms (total time left: 1 ms).
6.Time 10-11 (P3): P3 finishes its last 1 ms.
7.Time 11-12 (P2): P2 finishes its last 1 ms.
•Turnaround Time = Completion Time - Arrival Time
•Waiting Time = Turnaround Time - Burst Time

Processes AT BT CT TAT WT
P1 0 4 8 8-0 = 8 8-4 = 4

P2 0 5 12 12-0 = 12 12-5 = 7

P3 0 3 11 11-0 = 11 11-3 = 8

•Average Turn around time = (8 + 12 + 11)/3 = 31/3 = 10.33 ms


•Average waiting time = (4 + 7 + 8)/3 = 19/3 = 6.33 ms
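The step-by-step rotation above maps directly onto a FIFO queue. A minimal Python sketch (assuming all processes arrive at time 0, as in this scenario):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst), all arriving at time 0, in queue order.
    Returns {name: completion_time}."""
    queue = deque(procs)
    done, clock = {}, 0
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)                 # one quantum, or less if it finishes
        clock += run
        if rem > run:
            queue.append((name, rem - run))     # back to the end of the queue
        else:
            done[name] = clock
    return done

# Scenario 1 from the text: P1=4 ms, P2=5 ms, P3=3 ms, Time Quantum = 2 ms
print(round_robin([("P1", 4), ("P2", 5), ("P3", 3)], quantum=2))
# → {'P1': 8, 'P3': 11, 'P2': 12}
```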
Scenario 2: Processes with Different Arrival Times
▪ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 and given Time Quantum = 2

Process Burst Time (BT) Arrival Time (AT)

P1 5 ms 0 ms

P2 2 ms 4 ms

P3 4 ms 5 ms
▪ Step-by-Step Execution:
▪ Time 0-2 (P1 Executes):
• P1 starts execution as it arrives at 0 ms.
• Runs for 2 ms; remaining burst time = 5 - 2 = 3 ms.
• Ready Queue: [P1].
▪ Time 2-4 (P1 Executes Again):
• P1 continues execution since no other process has arrived yet.
• Runs for 2 ms; remaining burst time = 3 - 2 = 1 ms.
• P2 arrives at 4 ms.
• Ready Queue: [P2, P1].
▪ Time 4-6 (P2 Executes):
• P2 starts execution as it arrives at 4 ms.
• Runs for 2 ms; remaining burst time = 2 - 2 = 0 ms.
• P3 arrives at 5 ms.
• Ready Queue: [P1, P3].
▪ Time 6-7 (P1 Executes):
• P1 starts execution.
• Runs for 1 ms; remaining burst time = 1 - 1 = 0 ms.
• Ready Queue: [P3].
▪ Time 7-9 (P3 Executes):
• P3 starts execution.
• Remaining burst time = 4 - 2 = 2 ms.
• Ready Queue: [P3].
▪ Time 9-11 (P3 Executes Again):
• P3 resumes execution and runs for 2 ms and complete its
execution
• Remaining burst time = 2 - 2 = 0 ms.
• Ready Queue: [].
Process   CT       TAT (= CT - AT)   WT (= TAT - BT)
P1        7 ms     7 ms              2 ms
P2        6 ms     2 ms              0 ms
P3        11 ms    6 ms              1 ms

•Average Turn around time = (7 + 2 + 6)/3 = 15/3 = 5 ms

•Average waiting time = (2 + 0 + 1)/3 = 3/3 = 1 ms
▪ Consider the set of 5 processes whose arrival time and burst time are
given below-
Process Id Arrival time Burst time

P1 0 5
P2 1 3
P3 2 1
P4 3 2
P5 4 3

If the CPU scheduling policy is Round Robin with time quantum = 2 unit,
calculate the average waiting time and average turn around time.

Execution order (time quantum = 2 units)-
P1, P2, P3, P1, P4, P5, P2, P1, P5

Process Id   Exit time   Turn Around time   Waiting time
P1           13          13 – 0 = 13        13 – 5 = 8
P2           12          12 – 1 = 11        11 – 3 = 8
P3           5           5 – 2 = 3          3 – 1 = 2
P4           9           9 – 3 = 6          6 – 2 = 4
P5           14          14 – 4 = 10        10 – 3 = 7

•Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6


unit
•Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
Consider the set of 6 processes whose arrival time and burst time are given
below-

Process Id Arrival time Burst time

P1 0 4
P2 1 5
P3 2 2
P4 3 1
P5 4 6
P6 6 3

If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the
average waiting time and average turn around time.
▪ Execution order (time quantum = 2 units)-
▪ P1, P2, P3, P1, P4, P5, P2, P6, P5, P2, P6, P5
▪ Turn Around time = Exit time – Arrival time
▪ Waiting time = Turn Around time – Burst time

Process Id   Exit time   Turn Around time   Waiting time
P1           8           8 – 0 = 8          8 – 4 = 4
P2           18          18 – 1 = 17        17 – 5 = 12
P3           6           6 – 2 = 4          4 – 2 = 2
P4           9           9 – 3 = 6          6 – 1 = 5
P5           21          21 – 4 = 17        17 – 6 = 11
P6           19          19 – 6 = 13        13 – 3 = 10

•Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 ≈ 10.83 unit

•Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 ≈ 7.33 unit
Example of RR with Time Quantum = 4

Process Burst Time


P1 24
P2 3
P3 3
▪ The Gantt chart is:

▪ Typically, higher average turnaround than SJF, but better response


▪ q should be large compared to context switch time
• q usually 10 milliseconds to 100 milliseconds,
• Context switch < 10 microseconds
Priority scheduling

▪ Priority scheduling is one of the most common scheduling algorithms


used by the operating system to schedule processes based on their
priority. Each process is assigned a priority. The process with the highest
priority is to be executed first and so on.
▪ Processes with the same priority are executed on a first-come first
served basis. Priority can be decided based on memory requirements,
time requirements or any other resource requirement. Also priority can be
decided on the ratio of average I/O to average CPU burst time.
▪ Priority Scheduling can be implemented in two ways:

▪ Non-Preemptive Priority Scheduling


▪ Preemptive Priority Scheduling
Advantages-
▪ It considers the priority of the processes and allows the important
processes to run first.
▪ Priority scheduling in preemptive mode is best suited for real time
operating system.

Disadvantages-

▪ Processes with lesser priority may starve for CPU.


▪ Problem ≡ Starvation – low priority processes may never execute

▪ Solution ≡ Aging – as time progresses increase the priority of the


process
Non-Preemptive Priority Scheduling
▪ In Non-Preemptive Priority Scheduling, the CPU is not taken away
from the running process. Even if a higher-priority process arrives, the
currently running process will complete first.
▪ Ex: A high-priority process must wait until the currently running
process finishes.

Example of Non-Preemptive Priority Scheduling:


▪ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3:
▪ Note: Lower number represents higher priority.
Process Arrival Time Burst Time Priority

P1 0 4 2

P2 1 2 1

P3 2 6 3

Step-by-Step Execution:

•At Time 0: Only P1 has arrived. P1 starts execution as it is the only


available process, and it will continue executing till t = 4 because it is
a non-preemptive approach.
•At Time 4: P1 finishes execution. Both P2 and P3 have arrived. Since
P2 has the highest priority (Priority 1), it is selected next.
•At Time 6: P2 finishes execution. The only remaining process is P3,
so it starts execution.
•At Time 12: P3 finishes execution.
▪ Consider the set of 5 processes whose arrival time, burst time, and priority are
given below. (Here, a higher number represents higher priority.)
▪ Process Id Arrival time Burst time Priority

P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5

•Turn Around time = Exit time – Arrival time


•Waiting time = Turn Around time – Burst time
Process Id   Exit time   Turn Around time   Waiting time
P1           4           4 – 0 = 4          4 – 4 = 0
P2           15          15 – 1 = 14        14 – 3 = 11
P3           12          12 – 2 = 10        10 – 1 = 9
P4           9           9 – 3 = 6          6 – 5 = 1
P5           11          11 – 4 = 7         7 – 2 = 5

•Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit


•Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit
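The non-preemptive priority run above can be sketched in Python (assuming, consistent with the exit times shown, that a higher number means higher priority, with ties broken by earlier arrival):

```python
def priority_nonpreemptive(procs):
    """procs: list of (name, arrival, burst, priority); a HIGHER number means
    higher priority. Returns {name: completion_time}."""
    remaining = list(procs)
    done, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                  # CPU idle: jump to the next arrival
            clock = min(p[1] for p in remaining)
            continue
        # highest priority first; ties broken by earlier arrival (FCFS)
        job = max(ready, key=lambda p: (p[3], -p[1]))
        clock += job[2]                # run it to completion (non-preemptive)
        done[job[0]] = clock
        remaining.remove(job)
    return done

# Example from the text: (name, arrival, burst, priority)
print(priority_nonpreemptive([("P1", 0, 4, 2), ("P2", 1, 3, 3),
                              ("P3", 2, 1, 4), ("P4", 3, 5, 5),
                              ("P5", 4, 2, 5)]))
# → {'P1': 4, 'P4': 9, 'P5': 11, 'P3': 12, 'P2': 15}
```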
Preemptive Priority Scheduling
Consider the set of 5 processes whose arrival time and burst time are given below-

If the CPU scheduling policy is priority preemptive, calculate the average waiting
time and average turn around time. (Higher number represents higher priority)
Process Id Arrival time Burst time Priority

P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5

Turn Around time = Exit time – Arrival time


Waiting time = Turn Around time – Burst time
Process Id   Exit time   Turn Around time   Waiting time
P1           15          15 – 0 = 15        15 – 4 = 11
P2           12          12 – 1 = 11        11 – 3 = 8
P3           3           3 – 2 = 1          1 – 1 = 0
P4           8           8 – 3 = 5          5 – 5 = 0
P5           10          10 – 4 = 6         6 – 2 = 4

•Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit


•Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit
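The preemptive case can be simulated by advancing one time unit at a time and re-selecting the highest-priority ready process at each step. A minimal sketch (illustrative names, assuming the earlier arrival keeps the CPU on a priority tie):

```python
# Preemptive priority scheduling (higher number = higher priority).
# Process data from the table above: name -> (arrival, burst, priority).
procs = {"P1": (0, 4, 2), "P2": (1, 3, 3), "P3": (2, 1, 4),
         "P4": (3, 5, 5), "P5": (4, 2, 5)}

def priority_preemptive(procs):
    left = {name: burst for name, (_, burst, _) in procs.items()}
    exit_time, t = {}, 0
    while left:
        ready = [n for n in left if procs[n][0] <= t]
        if not ready:                  # CPU idle until the next arrival
            t += 1
            continue
        # Highest priority runs; on a tie, the earlier arrival wins.
        n = max(ready, key=lambda n: (procs[n][2], -procs[n][0]))
        left[n] -= 1                   # run for one time unit
        t += 1
        if left[n] == 0:
            exit_time[n] = t
            del left[n]
    return exit_time

exits = priority_preemptive(procs)
tat = {n: e - procs[n][0] for n, e in exits.items()}   # turnaround times
wt = {n: tat[n] - procs[n][1] for n in tat}            # waiting times
print(exits)                             # completion order: P3, P4, P5, P2, P1
print(sum(tat.values()) / 5, sum(wt.values()) / 5)     # 7.6 4.6
```

The simulation reproduces the table: P3 exits at 3, P4 at 8, P5 at 10, P2 at 12, and P1 at 15.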
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum
▪ (Figures.) A rule of thumb: 80% of CPU bursts should be shorter than the
time quantum q.
Multilevel Queue
▪ The ready queue consists of multiple queues
▪ Multilevel queue scheduler defined by the following
parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine which queue a process will
enter when that process needs service
• Scheduling among the queues
Multilevel Queue
▪ With priority scheduling, have separate queues for each priority.
▪ Schedule the process in the highest-priority queue!
Multilevel Queue

▪ Prioritization based upon process type


Example of Multilevel Feedback Queue
▪ Three queues:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR time quantum 16 milliseconds
• Q2 – FCFS
▪ Scheduling
• A new process enters queue Q0, which is served in RR
  – When it gains the CPU, the process receives 8 milliseconds
  – If it does not finish in 8 milliseconds, the process is moved to
    queue Q1
• At Q1 the job is again served in RR and receives 16 additional
  milliseconds
  – If it still does not complete, it is preempted and moved to queue Q2
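The three-queue scheme above can be sketched in a few lines. This is a simplified model (illustrative `mlfq` name) that assumes all jobs are present at time 0 and ignores the way new arrivals in Q0 would preempt work in the lower queues:

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16, float("inf"))):
    """bursts: {name: total CPU time in ms}. Returns completion order.
    Q0 and Q1 are RR with 8 ms and 16 ms quanta; Q2 is FCFS, modeled
    here as an unbounded quantum."""
    queues = [deque(bursts.items()), deque(), deque()]
    done = []
    for level, q in enumerate(queues):
        while q:
            name, left = q.popleft()
            run = min(left, quanta[level])   # run for one quantum (or less)
            left -= run
            if left == 0:
                done.append(name)
            else:                            # demote the unfinished job
                queues[level + 1].append((name, left))
    return done

print(mlfq({"A": 5, "B": 20, "C": 40}))      # ['A', 'B', 'C']
```

Here A (5 ms) finishes within its first quantum in Q0, B needs one extra RR round in Q1, and C is demoted twice and completes in the FCFS queue.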
Thread in Operating System

• A thread is a single sequence stream within a process. Threads are also


called lightweight processes as they possess some of the properties of
processes. Each thread belongs to exactly one process.
• In an operating system that supports multithreading, a process can consist of
many threads. Threads run truly in parallel only when more than one CPU is
available; on a single CPU, the threads must take turns via context switches.
• All threads belonging to the same process share – code section, data section,
and OS resources (e.g. open files and signals)
• But each thread has its own (thread control block) – thread ID, program counter,
register set, and a stack
• Any operating system process can contain threads; in other words, a single
process can have multiple threads.
Why Do We Need Thread?

▪ Threads run in concurrent manner that improves the application


performance.
▪ Each such thread has its own CPU state and stack, but they share the
address space of the process and the environment.
▪ For example, when we work on Microsoft Word or Google Docs, we notice
that while we are typing, multiple things happen together (formatting is
applied, page is changed and auto save happens).
▪ Threads can share common data so they do not need to use inter-process
communication. Like the processes, threads also have states like ready,
executing, blocked, etc.
▪ Priority can be assigned to the threads just like the process, and the highest
priority thread is scheduled first.
▪ Each thread has its own Thread Control Block (TCB). Like the process, a
context switch occurs for the thread, and register contents are saved in (TCB).
As threads share the same address space and resources, synchronization is
also required for the various activities of the thread.
Components of Threads
These are the basic components of a thread:

▪ Stack Space: Stores local variables, function calls, and return


addresses specific to the thread.
▪ Register Set: Hold temporary data and intermediate results for the
thread’s execution.
▪ Program Counter: Tracks the current instruction being executed by
the thread.
Types of Thread in Operating System
Threads are of two types. These are described below.
▪ User Level Thread
▪ Kernel Level Thread
User Level Thread

▪ User Level Thread is a type of thread that is not created using system
calls.
▪ The kernel plays no part in the management of user-level threads.
▪ User-level threads can be easily implemented by the user.
▪ To the kernel, the user-level threads of a process appear as a single
process; the kernel manages and schedules that process as one unit.
▪ Let’s look at the advantages and disadvantages of User-Level Thread.
Advantages of User-Level Threads
▪ Implementation of the User-Level Thread is easier than Kernel Level Thread.
▪ Context Switch Time is less in User Level Thread.
▪ User-Level Thread is more efficient than Kernel-Level Thread.
▪ Because of the presence of only Program Counter, Register Set, and Stack
Space, it has a simple representation.
Disadvantages of User-Level Threads
▪ The operating system is unaware of user-level threads, so kernel-level
optimizations, like load balancing across CPUs, are not utilized.
▪ If a user-level thread makes a blocking system call, the entire process (and all its
threads) is blocked, reducing efficiency.
▪ User-level thread scheduling is managed by the application, which can become
complex and may not be as optimized as kernel-level scheduling
Kernel Level Threads

▪ A kernel-level thread is a type of thread that the operating system
recognizes and manages directly. The kernel maintains its own thread
table to keep track of all threads in the system.
▪ The operating system kernel handles the creation, scheduling, and
management of these threads. Kernel-level threads have somewhat longer
context-switching times than user-level threads.
Advantages of Kernel-Level Threads
▪ Kernel-level threads can run on multiple processors or cores simultaneously,
enabling better utilization of multicore systems.
▪ The kernel is aware of all threads, allowing it to manage and schedule them
effectively across available resources.
▪ Applications that block frequently are handled well by kernel-level threads:
if one thread blocks, the kernel can schedule another thread of the process.
▪ The kernel can distribute threads across CPUs, ensuring optimal load balancing
and system performance.
Disadvantages of Kernel-Level threads
▪ Context switching between kernel-level threads is slower compared to user-level
threads because it requires mode switching between user and kernel space.
▪ Managing kernel-level threads involves frequent system calls and kernel
interactions, leading to increased CPU overhead.
▪ A large number of threads may overload the kernel scheduler, leading to potential
performance degradation in systems with many threads.
▪ Implementation of this type of thread is a little more complex than a user-level
thread.
Difference Between Process and Thread
▪ The primary difference is that threads within the same process run in
a shared memory space, while processes run in separate memory
spaces.
▪ Threads are not independent of one another like processes are, and
as a result, threads share with other threads their code section, data
section, and OS resources (like open files and signals).
▪ But, like a process, a thread has its own program counter (PC),
register set, and stack space.
What is Multi-Threading?

▪ A thread is also known as a lightweight process. The idea is to


achieve parallelism by dividing a process into multiple threads.
For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text,
another thread to process inputs, etc. More advantages of
multithreading are discussed below.
▪ Multithreading is a technique used in operating systems to
improve the performance and responsiveness of computer
systems. Multithreading allows multiple threads (i.e., lightweight
processes) to share the same resources of a single process, such
as the CPU, memory, and I/O devices.
Benefits of Thread in Operating System
▪ Responsiveness: If a process is divided into multiple threads and one thread
completes its work, its output can be returned immediately while the other
threads continue executing.
▪ Faster context switch: Context switch time between threads is lower compared to
the process context switch. Process context switching requires more overhead from
the CPU.
▪ Effective utilization of multiprocessor system: If we have multiple threads in a
single process, then we can schedule multiple threads on multiple processors. This
will make process execution faster.
▪ Resource sharing: Resources like code, data, and files can be shared among all
threads within a process. Note: Stacks and registers can’t be shared among the
threads. Each thread has its own stack and registers.
▪ Communication: Communication between multiple threads is easier, as the threads
share a common address space. while in the process we have to follow some
specific communication techniques for communication between the two processes.
▪ Enhanced throughput of the system: If a process is divided into multiple threads,
and each thread function is considered as one job, then the number of jobs
completed per unit of time is increased, thus increasing the throughput of the
system.
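The resource-sharing and communication points above can be illustrated with Python's standard `threading` module (a minimal sketch; `worker` is an illustrative name). Both threads update the same `counter` variable directly because they share the process's address space — no inter-process communication is needed — while a `Lock` provides the synchronization noted earlier:

```python
import threading

counter = 0                     # shared data in the process's address space
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # protect the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for both threads to finish

print(counter)                  # 200000 — both threads updated the same data
```

Without the lock, the two threads could interleave their read-modify-write steps and lose updates, which is exactly why synchronization is required when threads share an address space.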
Process | Thread

Processes use more resources and hence are termed heavyweight processes. | Threads share resources and hence are termed lightweight processes.
Creation and termination times of processes are slower. | Creation and termination times of threads are faster.
Processes have their own code and data/files. | Threads share code and data/files within a process.
Communication between processes is slower. | Communication between threads is faster.
Context switching in processes is slower. | Context switching in threads is faster.
Processes are independent of each other. | Threads are interdependent (they can read, write, or change another thread's data).
Eg: Opening two different browsers. | Eg: Opening two tabs in the same browser.
