
University of Management and Technology, Lahore Campus

Lab Tasks Manual


Lab Instructor: Muhammad Ahmad
Department of INFS
Email: [email protected]
NAME: Jannat Imtiaz

1. First-Come, First-Served (FCFS)

1. What is the FCFS scheduling algorithm, and how does it work?

First-Come, First-Served (FCFS) is an operating system (OS) scheduling algorithm that executes
processes strictly in the order they arrive. It is the simplest CPU scheduling algorithm.

2. What are the advantages and disadvantages of FCFS scheduling?

Advantages

 Simple: FCFS is easy to understand and implement.

 Predictable: because jobs run in submission order, FCFS behavior is easy for users to anticipate.

 No starvation: every process eventually runs, since the queue is served strictly in arrival order.

 Good for small systems: FCFS is a reasonable choice for small or batch systems with similar job lengths.

Disadvantages

 Long waiting times: processes with short burst times may have to wait behind longer processes.

 Lower device utilization: FCFS tends to favor CPU-bound processes over I/O-bound ones, which can leave I/O devices idle and reduce device utilization.

 Convoy effect: one long process at the head of the queue delays every process behind it, slowing the whole system down.

 Not ideal for time-sharing systems: FCFS is non-preemptive, so response times are poor, making it a bad fit for time-sharing systems.

 Lower throughput: FCFS may have lower throughput than scheduling algorithms that favor short jobs.

3. Given the following process details, calculate the average waiting time and average
turnaround time using FCFS:

Process  Arrival Time  Burst Time  Completion Time  Turnaround Time  Waiting Time
P1       0             4           4                4                0
P2       1             3           7                6                3
P3       2             1           8                6                5

Average Waiting Time = (0 + 3 + 5) / 3 ≈ 2.67
Average Turnaround Time = (4 + 6 + 6) / 3 ≈ 5.33
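As a quick cross-check, here is a minimal Python sketch of FCFS (the helper name and layout are illustrative, not part of the lab material) that reproduces the table above:

# Minimal FCFS sketch: processes run to completion in arrival order.
# Data taken from the table above.

def fcfs(processes):
    """processes: list of (name, arrival, burst) tuples."""
    time, results = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)          # CPU idles until the process arrives
        completion = time + burst
        results.append((name, completion, completion - arrival,
                        completion - arrival - burst))
        time = completion
    return results

procs = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]
for name, comp, tat, wait in fcfs(procs):
    print(name, comp, tat, wait)   # P1 4 4 0 / P2 7 6 3 / P3 8 6 5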

4. How does FCFS scheduling perform in terms of CPU utilization?

First-Come-First-Served (FCFS) scheduling is a simple and straightforward CPU scheduling
algorithm where processes are executed in the order of their arrival, without preemption.
However, its performance in terms of CPU utilization has both advantages and limitations.

CPU Utilization in FCFS:

 CPU Utilization refers to how effectively the CPU is being used during the execution of
processes. High CPU utilization means the CPU is actively processing jobs rather than being idle.

1. CPU Utilization in Ideal Scenarios:

 High CPU Utilization with Continuous Arrival of Processes: If processes arrive continuously
without significant gaps, the CPU will be utilized efficiently, as the CPU will spend most of its
time executing processes.

2. Problems Leading to Low CPU Utilization:

 Non-Continuous Arrival of Processes: In FCFS, if there are long gaps between the
arrival times of processes, the CPU may remain idle while waiting for the next process to
arrive. This results in periods of inactivity and thus low CPU utilization.
 Inefficiency Due to Long-Running Processes: If a process with a long burst time
arrives first, it can block the CPU for an extended period, even if short processes arrive
later. This leads to poor CPU utilization, especially when a long process is followed by
a series of short processes.

3. Overall Impact on CPU Utilization:

 FCFS scheduling doesn’t account for optimizing the time between processes, meaning that during
periods where there are idle times (because a new process is yet to arrive), the CPU will remain
idle.
 This can lead to poor CPU utilization in certain scenarios, especially if the system has a mix of
long and short processes with substantial gaps in their arrival times.

Key Takeaways:

 CPU Utilization in FCFS is highly dependent on the arrival times and burst times of the
processes.
 It can result in high CPU utilization in cases where processes arrive frequently and have similar
burst times.
 CPU utilization may be low in cases where processes have large differences in burst times or
when there are significant gaps between the arrival of processes.

5. Explain the concept of convoy effect in FCFS. How does it affect system performance?

The convoy effect is a phenomenon that can occur in the First-Come-First-Served (FCFS)
scheduling algorithm. It refers to a situation where a long-running process (often called a "giant
process") that arrives first can block or delay the execution of shorter processes that arrive later,
resulting in inefficient system performance.

Concept of Convoy Effect:

 The convoy effect occurs when a process with a long burst time (CPU execution time) is
scheduled first and then occupies the CPU for a prolonged period.
 Because FCFS does not consider process burst times, once a long process starts, it runs to
completion before any other process can be executed.
 Subsequent shorter processes have to wait for the long-running process to finish, even if they
could have completed much faster, leading to a situation where many shorter processes are
queued up behind the long process, forming a "convoy."

Example:

Imagine the following set of processes:

Process  Arrival Time  Burst Time
P1       0             10
P2       1             2
P3       2             1
P4       3             3

Using FCFS, the processes would be executed in the order of their arrival (P1 → P2 → P3 →
P4). Here's how the execution would unfold:

 P1 (10 units) starts at time 0 and runs to time 10.


 P2 (2 units) waits until P1 finishes, so it starts at time 10 and finishes at time 12.

 P3 (1 unit) waits until P2 finishes, so it starts at time 12 and finishes at time 13.
 P4 (3 units) waits until P3 finishes, so it starts at time 13 and finishes at time 16.

In this scenario, the short processes P2, P3, and P4 are delayed by P1, the long process. Even
though P2 and P3 have small burst times, they have to wait for the long-running P1 to complete,
creating a "convoy" of processes.
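The cost of the convoy can be shown numerically. The sketch below is a deliberate simplification that treats all four jobs as ready at time 0 (ignoring the small arrival gaps), so only the queue order matters:

# Convoy effect, shown on the four jobs above. Simplified: all jobs are
# treated as ready at time 0, so only the queue order matters.

def avg_wait(bursts):
    """Average FCFS waiting time for jobs all ready at time 0."""
    total_wait, elapsed = 0, 0
    for burst in bursts:
        total_wait += elapsed   # each job waits for everything ahead of it
        elapsed += burst
    return total_wait / len(bursts)

print(avg_wait([10, 2, 1, 3]))  # long job first:   (0+10+12+13)/4 = 8.75
print(avg_wait([1, 2, 3, 10]))  # short jobs first: (0+1+3+6)/4    = 2.5

With the long job at the head of the queue, the average wait is 3.5 times higher than with the short jobs first.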

Impact of Convoy Effect on System Performance:

1. Increased Waiting Time:


o The waiting time for processes that arrive after the long process can be significantly high.
For example, in the scenario above, P2, P3, and P4 all have to wait for P1 to finish. The
waiting time increases as the size of the first process grows.

2. Increased Turnaround Time:


o The turnaround time (time from arrival to completion) for processes can also increase,
particularly for short processes. They are delayed because the CPU is occupied by the
long-running process, which delays their completion time.

3. Poor Resource Utilization:


o While the long process holds the CPU, shorter (often I/O-bound) processes queued behind it
cannot run and issue I/O requests, so devices sit idle. When processes with very different burst
times are queued in FCFS order, overall utilization suffers.

4. Inefficient Resource Usage:


o Smaller processes that could overlap their work with the long process (as a preemptive
system would allow) instead sit waiting in the queue, so system resources are not used
efficiently.

Overall Effect on System Performance:

 The convoy effect leads to inefficiency in process scheduling, as it increases waiting times for
shorter processes and reduces the overall throughput of the system.
 It also exacerbates the inequality in the treatment of processes, where long processes can
dominate the CPU and cause significant delays for shorter processes.

Conclusion:

The convoy effect is a significant downside of the FCFS scheduling algorithm, especially in
systems where there is a mix of process burst times. It negatively impacts waiting times,
turnaround times, and overall system performance. Alternative scheduling algorithms like
Shortest Job Next (SJN) or Round Robin are often used to mitigate such issues by considering
process burst times and improving efficiency.


2. Shortest Job First (SJF)

1. Explain how the Shortest Job First (SJF) scheduling algorithm works.

The Shortest Job First (SJF) scheduling algorithm is a non-preemptive CPU scheduling
algorithm that selects the process with the shortest burst time (i.e., the process with the smallest
CPU execution time) for execution next. The goal of SJF is to minimize the average waiting time
and turnaround time, improving overall system efficiency.

How SJF Works:

1. Process Arrival Order:


o Processes arrive at the system at different times, and the scheduler keeps track of the
processes that are ready to execute.

2. Shortest Burst Time Selection:


o From the set of ready processes (i.e., those that have arrived and are waiting to be
executed), the process with the shortest burst time (CPU execution time) is selected to
execute next.

3. Execution:
o Once the process with the shortest burst time is selected, it runs to completion (since SJF
is typically non-preemptive).

4. Repeat:
o After the process finishes, the scheduler checks if there are other ready processes and
selects the next one with the shortest burst time from the available processes.

Key Characteristics:

 Non-preemptive: Once a process starts executing, it continues running until completion. It
cannot be interrupted by another process, even if a process with a shorter burst time arrives
while it is running; handling that case requires the preemptive variant, Shortest Remaining
Time First (SRTF).
 Shortest First: SJF minimizes the average waiting time by favoring processes that require less
CPU time.
 Ideal for Batch Systems: SJF is often suitable for batch systems where the burst times of
processes are known ahead of time.
2. Compare preemptive and non-preemptive versions of SJF.

In the non-preemptive version of SJF, once a process starts executing, it runs to completion
without being interrupted, even if a new process with a shorter burst time arrives during its
execution.

Key Characteristics:

 No Interruptions: Once a process starts running, it cannot be stopped or preempted. The process
runs to completion before the CPU switches to another process.
 Execution Order: The CPU selects the process with the shortest burst time from the ready
queue. If two or more processes have the same burst time, they are scheduled in the order of
arrival.
 No Context Switching During Execution: The process continues until it finishes without
interruption, avoiding the overhead of context switching during execution.

In the preemptive version, Shortest Remaining Time First (SRTF), the scheduler re-evaluates
whenever a new process arrives: if the newcomer's burst time is shorter than the remaining time
of the running process, the running process is preempted. SRTF yields lower average waiting
times but adds context-switching overhead.

3. Given the following processes, calculate the average waiting time and average turnaround
time using non-preemptive SJF:

Process  Arrival Time  Burst Time  Completion Time  Turnaround Time  Waiting Time
P1       0             6           6                6                0
P2       1             8           24               23               15
P3       2             7           16               14               7
P4       3             3           9                6                3

Execution order: P1 → P4 → P3 → P2 (at each completion, the shortest ready job runs next).

Average Waiting Time = (0 + 15 + 7 + 3) / 4 = 6.25
Average Turnaround Time = (6 + 23 + 14 + 6) / 4 = 12.25
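A small non-preemptive SJF simulation (hypothetical helper names; it assumes burst times are known exactly, as SJF requires) reproduces these figures:

# Non-preemptive SJF sketch: at every completion, run the shortest ready job.
# Data from the table above; assumes burst times are known in advance.

def sjf(processes):
    """processes: list of (name, arrival, burst); returns (name, completion, tat, wait)."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    time, results = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                                 # CPU idles until the next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])          # shortest burst wins
        pending.remove(job)
        name, arrival, burst = job
        time += burst
        results.append((name, time, time - arrival, time - arrival - burst))
    return results

procs = [("P1", 0, 6), ("P2", 1, 8), ("P3", 2, 7), ("P4", 3, 3)]
print(sjf(procs))   # order P1, P4, P3, P2 -> completions 6, 9, 16, 24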

4. What is the major disadvantage of SJF scheduling?

 Knowledge of Burst Time:

 SJF requires knowledge of the burst times of processes ahead of time. In many real
systems, the burst time is not known in advance and must be estimated, which can lead to
inaccuracies.

 Preemptive Version (SRTF) Complexity:

 The preemptive version of SJF, Shortest Remaining Time First (SRTF), requires more
complex handling of preemption and context switching, which increases overhead.

 Starvation occurs in SJF when a process with a longer burst time has to wait for a long period
because shorter processes keep arriving and are prioritized for execution.
 If there is a continuous stream of shorter processes, a longer process that arrives later might never
get a chance to execute because the CPU will always be allocated to shorter processes that are
waiting to execute. This can lead to the longer process being starved of CPU time indefinitely.

5. Why is it difficult to implement SJF in a real operating system?

Implementing the Shortest Job First (SJF) scheduling algorithm in a real operating system is
difficult due to several significant challenges:

1. Need for Accurate Burst Time Prediction

 SJF requires knowledge of the burst time (the amount of CPU time a process will need) for each
process in order to prioritize processes with the shortest burst time.
 Inaccurate or Unknown Burst Time: In a real system, the burst time is not known in advance.
The operating system can either rely on estimates or attempt to predict the burst time based on
historical data, but these predictions are often inaccurate. If the estimated burst times are wrong,
the scheduler may not effectively optimize the CPU allocation, leading to poor performance.

2. Estimating Burst Time

 To address this issue, systems can attempt to estimate burst times using techniques like
Exponential Averaging (using past CPU bursts to predict future ones; see the sketch below).
However, these techniques are not perfect and can still lead to inefficiencies.
 Estimation errors can cause the system to make suboptimal decisions, such as giving priority to
processes that actually end up needing more time than predicted.
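The usual exponential-averaging rule is estimate(n+1) = alpha * actual(n) + (1 - alpha) * estimate(n). A tiny sketch with made-up burst values (alpha = 0.5 and an initial guess of 10) shows how the estimate tracks the observed bursts:

# Exponential averaging of CPU bursts:
#   estimate(n+1) = alpha * actual(n) + (1 - alpha) * estimate(n)
# Burst values below are invented purely for illustration.

def next_estimate(actual_burst, prev_estimate, alpha=0.5):
    return alpha * actual_burst + (1 - alpha) * prev_estimate

estimate = 10.0                        # initial guess for the first burst
for actual in [6, 4, 6, 4, 13, 13]:   # observed bursts (hypothetical)
    estimate = next_estimate(actual, estimate)
    print(estimate)                    # 8.0, 6.0, 6.0, 5.0, 9.0, 11.0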

3. Handling of Long Processes (Starvation)

 A major drawback of SJF (especially the non-preemptive version) is starvation—where long


processes can be continually delayed because shorter processes are always being prioritized.
 In a real-world system with a mix of process lengths, if short processes keep arriving, longer
processes might never get executed. To avoid starvation, the operating system would need to
implement mechanisms like aging (gradually increasing the priority of a waiting process), but
this adds complexity.

4. Dynamic Nature of Processes

 In real operating systems, processes arrive at different times, have dynamic behavior, and may
interact with I/O operations, which further complicates the ability to determine the exact burst
time.
 Real-time changes in process behavior (e.g., a process that initially seems short may suddenly
require more CPU time due to I/O interactions) make it even harder to accurately predict and
schedule processes efficiently.

5. Preemptive SJF (SRTF) Complexity

 The preemptive version of SJF (known as Shortest Remaining Time First - SRTF) introduces
further complexity. When a new process with a shorter burst time arrives, it can preempt the
currently running process, leading to context switching overhead.
 Context switching not only adds overhead but can also hurt performance if done too frequently,
as the CPU spends more time switching between processes than actually executing them.

6. Fairness and Efficiency Trade-Off

 SJF is designed to minimize average waiting time and turnaround time but can be unfair to
longer processes, leading to inefficient use of resources over time (especially for those processes
that don't get CPU time due to starvation).
 In systems where fairness is a concern (e.g., in multi-user or multi-tasking environments),
ensuring that no process is starved becomes an important requirement, making SJF difficult to
implement without extra mechanisms like aging or priority-based scheduling.

3. Priority Scheduling

1. What is Priority Scheduling, and how does it determine which process to execute next?

Priority Scheduling is a CPU scheduling algorithm in which each process is assigned a priority.
The scheduler selects the process with the highest priority (or the lowest numerical value,
depending on the system's convention) to execute next.

Key Concepts of Priority Scheduling:

1. Priority Value:
o Each process is assigned a priority value, either explicitly (set by the user or system) or
implicitly (based on factors like the process's memory usage or how long it has been
waiting).
o The priority can be a positive integer, with either higher or lower values representing
higher priority, depending on the convention.

 Higher value = higher priority: in some systems, processes with larger priority values
are executed first.
 Lower value = higher priority: in other systems (a common convention), smaller
priority values are treated as higher priority.
2. Scheduling Process:
o The scheduler chooses the process with the highest priority (or the lowest priority
number, depending on the system) from the set of ready processes.
o If multiple processes have the same priority, a secondary scheduling criterion (like
FIFO or shortest burst time) may be used to break ties.

Types of Priority Scheduling:

1. Preemptive Priority Scheduling:


o In this version, if a process with a higher priority arrives while another process is
executing, the currently running process is preempted (paused) and the higher-priority
process is given the CPU.
o Preemption leads to more context switching but ensures that high-priority tasks are
handled quickly.

2. Non-Preemptive Priority Scheduling:


o In non-preemptive priority scheduling, once a process starts executing, it continues until
it finishes, even if a higher-priority process arrives.
o When the running process completes, the highest-priority process in the ready queue is
selected next.

How Priority Scheduling Determines Which Process to Execute Next:

 The priority value assigned to each process determines which process will be executed next:
1. The process with the highest priority (highest numerical value or lowest numerical
value, depending on the system's rules) is selected.
2. If multiple processes have the same priority, tie-breaking rules (e.g., First-Come,
First-Served or Shortest Job First) are applied.
3. If a higher-priority process arrives while a process is executing (in the preemptive
version), the current process is suspended and the higher-priority process is scheduled.
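One common way to implement this selection is a ready queue kept as a min-heap keyed on (priority, arrival): the lowest priority number is served first, and ties fall back to arrival order. A minimal sketch, assuming the lower-number-wins convention:

import heapq

# Ready queue as a min-heap keyed on (priority, arrival): the lowest
# priority number wins, and ties fall back to arrival order (FCFS).

ready = []
for name, arrival, priority in [("P1", 0, 3), ("P2", 1, 1), ("P3", 2, 2)]:
    heapq.heappush(ready, (priority, arrival, name))

while ready:
    priority, arrival, name = heapq.heappop(ready)
    print(name)   # P2, then P3, then P1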

Example:

Consider the following processes, each with an arrival time, burst time, and priority value:

Process  Arrival Time  Burst Time  Priority
P1       0             4           3
P2       1             3           1
P3       2             1           2
P4       3             2           4

Scheduling (Preemptive Priority Scheduling):

1. P1 arrives first (priority 3) and starts executing at time 0.
2. P2 arrives at time 1 with the highest priority (1), so P1 is preempted and P2 executes.
3. After P2 finishes, P3 (priority 2) is selected to run.
4. Finally, P1 (priority 3) resumes its remaining work, and P4 (priority 4) runs last.

Advantages of Priority Scheduling:

 Efficiency: Priority scheduling can provide a good way to handle real-time and important
processes first.
 Customizability: The priority system can be customized to fit different needs, such as favoring
CPU-bound tasks over I/O-bound tasks or giving preference to time-sensitive tasks.

Disadvantages of Priority Scheduling:

1. Starvation (Indefinite Blocking): Lower-priority processes may never get executed if higher-
priority processes keep arriving. This is known as starvation.
2. Lack of Fairness: Priority scheduling can lead to unfair allocation of CPU time, especially if
high-priority processes dominate the system.
3. Difficulty in Assigning Priorities: Deciding on the criteria for assigning priorities can be
complex and may not always reflect the true importance of processes.

Conclusion:

Priority Scheduling is an algorithm that selects processes based on their assigned priority
values. It can be either preemptive or non-preemptive, and it aims to ensure that important or
time-critical processes are executed first. However, it can lead to issues like starvation for
lower-priority processes, and careful management of priority assignment is needed to avoid
unfairness and ensure system efficiency.

2. Given the following processes, calculate the waiting time and turnaround time using
Priority Scheduling (preemptive or non-preemptive):

Process  Arrival Time  Burst Time  Priority
P1       0             4           2
P2       1             3           1
P3       2             5           3

To calculate the waiting time and turnaround time using Priority Scheduling, let's break down
the steps for both the preemptive and non-preemptive versions (lower priority number = higher
priority).

Preemptive Priority Scheduling:

 The process with the highest priority (lowest priority number) gets to execute first.
 If a new process with a higher priority arrives while a process is executing, the current process
will be preempted.

Step 1: Gantt Chart for Preemptive Priority Scheduling

1. At time 0, P1 starts executing (since it is the only process at that time; priority 2). It runs for 1
unit, leaving 3 units of its burst.
2. At time 1, P2 arrives with priority 1, which is higher than P1's priority. So, P1 is preempted, and
P2 starts executing.
3. At time 2, P3 arrives with priority 3, which is lower than P2's priority. So, P2 continues
executing.
4. At time 4, P2 completes. Now, P1 (which was preempted, with 3 units remaining) resumes
execution because it has a higher priority than P3.
5. At time 7, P1 completes. P3 starts executing.
6. At time 12, P3 completes.

Gantt Chart for Preemptive Priority Scheduling:

Time:     0----1----4----7----12
Process:  | P1 | P2 | P1 | P3 |

Step 2: Calculate Completion, Turnaround, and Waiting Times

Completion Times:

 P1 completes at time 7.
 P2 completes at time 4.
 P3 completes at time 12.

Turnaround Time (TAT) = Completion Time - Arrival Time

 P1: TAT = 7 - 0 = 7
 P2: TAT = 4 - 1 = 3
 P3: TAT = 12 - 2 = 10

Waiting Time (WT) = Turnaround Time (TAT) - Burst Time

 P1: WT = 7 - 4 = 3
 P2: WT = 3 - 3 = 0
 P3: WT = 10 - 5 = 5

Summary of Preemptive Priority Scheduling (Results):

Process  Completion Time  Turnaround Time (TAT)  Waiting Time (WT)
P1       7                7                      3
P2       4                3                      0
P3       12               10                     5

Average Turnaround Time (TAT) = (7 + 3 + 10) / 3 ≈ 6.67

Average Waiting Time (WT) = (3 + 0 + 5) / 3 ≈ 2.67

Non-Preemptive Priority Scheduling:

 In non-preemptive priority scheduling, once a process starts executing, it runs to completion
without interruption. The process with the highest priority (lowest numerical value) is selected
from the ready queue.

Step 1: Gantt Chart for Non-Preemptive Priority Scheduling

1. At time 0, P1 starts executing since it is the only process in the ready queue (priority 2).
2. At time 1, P2 arrives with a higher priority (priority 1), but P1 continues to execute because non-
preemptive scheduling does not allow preemption.
3. At time 2, P3 arrives with priority 3, which is lower than P1's priority, so P1 continues
executing.
4. At time 4, P1 completes, and now P2 starts executing because it has the highest priority.
5. At time 7, P2 completes, and P3 starts executing.
6. At time 12, P3 completes.

Gantt Chart for Non-Preemptive Priority Scheduling:

Time:     0----4----7----12
Process:  | P1 | P2 | P3 |

Step 2: Calculate Completion, Turnaround, and Waiting Times


Completion Times:

 P1 completes at time 4.
 P2 completes at time 7.
 P3 completes at time 12.

Turnaround Time (TAT) = Completion Time - Arrival Time

 P1: TAT = 4 - 0 = 4
 P2: TAT = 7 - 1 = 6
 P3: TAT = 12 - 2 = 10

Waiting Time (WT) = Turnaround Time (TAT) - Burst Time

 P1: WT = 4 - 4 = 0
 P2: WT = 6 - 3 = 3
 P3: WT = 10 - 5 = 5

Summary of Non-Preemptive Priority Scheduling (Results):

Process  Completion Time  Turnaround Time (TAT)  Waiting Time (WT)
P1       4                4                      0
P2       7                6                      3
P3       12               10                     5

Average Turnaround Time (TAT) = (4 + 6 + 10) / 3 ≈ 6.67

Average Waiting Time (WT) = (0 + 3 + 5) / 3 ≈ 2.67

Conclusion:

 For this process set, both versions produce an average waiting time of ≈ 2.67 and an average
turnaround time of ≈ 6.67; the preemption merely swaps waiting time between P1 and P2.
 The preemptive version still finishes the high-priority P2 much sooner (time 4 instead of 7),
which is the point of preemption for time-critical work.
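The figures above can be verified with a small simulation. The sketch below (hypothetical helper name, not part of the lab material) steps a preemptive priority scheduler one time unit at a time, with lower numbers meaning higher priority:

# Preemptive priority scheduling, stepped one time unit at a time.
# Lower priority number = higher priority. Data from the example above.

def preemptive_priority(processes):
    """processes: list of (name, arrival, burst, priority); returns completion times."""
    remaining = {name: burst for name, _, burst, _ in processes}
    completion, time = {}, 0
    while remaining:
        ready = [p for p in processes if p[0] in remaining and p[1] <= time]
        if not ready:                              # CPU idles until something arrives
            time += 1
            continue
        name = min(ready, key=lambda p: p[3])[0]   # highest priority wins
        remaining[name] -= 1                       # run for one time unit
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            completion[name] = time
    return completion

procs = [("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 5, 3)]
print(preemptive_priority(procs))   # {'P2': 4, 'P1': 7, 'P3': 12}

A non-preemptive variant would instead pick a process only when the CPU becomes free and let it run its full burst.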

3. What are the problems associated with priority scheduling (such as starvation)?

Priority Scheduling, while effective in handling processes with varying levels of importance, can
introduce several issues in a system, particularly starvation, along with other challenges. Below
are the primary problems associated with Priority Scheduling:

1. Starvation (Indefinite Blocking)

 Starvation occurs when a process with a lower priority gets continuously preempted or delayed
because higher-priority processes keep arriving. As a result, the lower-priority process may never
get a chance to execute, leading to indefinite waiting.

 This issue is particularly common in preemptive priority scheduling, where a higher-priority
process can interrupt a lower-priority one. If there is a constant influx of higher-priority
processes, the lower-priority process could be starved indefinitely.

Example:
If a system keeps receiving processes with higher priorities, a process with the lowest priority
might never execute, causing it to wait indefinitely.

2. Unfairness

 Priority Scheduling can be unfair because it favors high-priority processes over lower-priority
ones. This unfairness can lead to inequitable distribution of CPU time, especially in systems
with a large number of low-priority tasks.
 For systems that require fair allocation of resources among all processes, this unfairness can be a
significant drawback.

Example:
In a multi-user system, if users submit tasks with varying priorities, users whose processes have
lower priority may experience significantly worse response times, even if their processes are just
as important in terms of completion.

3. Lack of Consideration for Process Characteristics

 Priority Scheduling may not always account for the nature or resource requirements of a
process, only focusing on priority values. Some short tasks (with high priority) may get
preferential treatment over longer but equally important tasks, leading to inefficient system
utilization.
 This is particularly problematic if priority values are set based solely on arbitrary factors, such as
user preferences or predetermined values, rather than actual runtime characteristics of the
processes.

4. Difficulty in Setting Priorities

 The priority assignment process can be tricky, as determining the right priority levels for
processes may not always be straightforward.
 In some cases, external factors (such as system load, process type, etc.) might need to be
considered to assign appropriate priorities. However, without careful tuning, incorrect priority
assignments can result in suboptimal system performance or unfair resource allocation.

Example:
If a system assigns high priority to user-interactive processes without considering system load, it
may degrade the performance of background jobs, which could be crucial for maintaining overall
system health.

5. Complexity in Dynamic Systems

 In systems where dynamic changes in process characteristics are frequent, priority
recalibration becomes necessary to avoid starvation. Implementing such recalibration
mechanisms adds to the complexity of the system.
 Aging (a technique where the priority of waiting processes is gradually increased to prevent
starvation) can mitigate starvation, but implementing aging effectively in priority scheduling can
increase system overhead and complexity.

6. Overhead from Frequent Context Switching (in Preemptive Version)

 In preemptive priority scheduling, the process with the highest priority may frequently interrupt
the current process, leading to context switching overhead.
 While context switching allows for higher-priority processes to run, it can result in inefficiency if
done excessively, especially when the system handles many low-priority tasks.

7. No Consideration for Time-Sensitivity of Processes

 Priority scheduling may not consider real-time constraints in certain cases. For example, a
process that needs to execute within a strict time frame might be delayed if a higher-priority task
arrives. As a result, real-time tasks might experience delays even if they are critical.

Example:
A process involved in controlling a real-time device (e.g., temperature control in a factory) could
be delayed by higher-priority tasks unrelated to real-time constraints, leading to potential failures
in meeting real-time deadlines.

Mitigating Starvation and Improving Fairness:

 Aging: The most common technique to deal with starvation is aging, where the priority of a
process is gradually increased over time if it has not been executed, thus preventing indefinite
waiting.
 Time Sharing: For fair allocation, a system could implement time-sharing where, after a certain
time, the system ensures lower-priority processes are also given CPU time.
 Hybrid Scheduling: Combining priority scheduling with other algorithms, such as Round
Robin or Shortest Job First (SJF), can help achieve a better balance between efficiency and
fairness.

Conclusion:

While Priority Scheduling is useful for prioritizing important tasks, it faces significant
challenges, including starvation, unfair resource allocation, and complex priority
management. These problems can be mitigated using techniques like aging, fair queuing, and
combining scheduling algorithms, but these add to the system's complexity. Therefore, careful
consideration is needed when choosing and implementing priority scheduling in a system.

4. How can starvation be avoided in priority scheduling?

Starvation in Priority Scheduling occurs when low-priority processes are never executed
because higher-priority processes continuously preempt them. This can lead to indefinite delays
or blocked execution for certain processes, which is undesirable. To avoid starvation, several
strategies can be implemented, with the most common being aging.

1. Aging

 Aging is the most widely used technique to prevent starvation in priority scheduling.
 As time progresses, the priority of waiting processes gradually increases. This ensures that low-
priority processes eventually get a chance to execute.
 Aging can be implemented by increasing the priority of a process over time, reducing the
difference between the lower-priority process and the higher-priority ones.

Example of Aging Implementation:

 If a process has been waiting for a certain amount of time (e.g., 10ms), its priority might increase
by 1.
 After additional time (e.g., 20ms), its priority could increase further, ensuring it eventually gets
executed even if higher-priority processes keep arriving.
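A minimal sketch of one possible aging policy follows; the interval, data layout, and function name are invented for this illustration (lower numbers mean higher priority):

# Aging sketch (hypothetical policy and names): every AGING_INTERVAL ticks
# of waiting, a process's priority number drops by one (lower = higher
# priority), so long-waiting processes are gradually promoted.

AGING_INTERVAL = 10    # ticks of waiting before a priority boost (assumed)

def age(ready_queue, now):
    """ready_queue: list of dicts with 'priority' and 'waiting_since' keys."""
    for proc in ready_queue:
        waited = now - proc["waiting_since"]
        if waited and waited % AGING_INTERVAL == 0 and proc["priority"] > 0:
            proc["priority"] -= 1    # promote the long-waiting process

queue = [{"name": "P1", "priority": 5, "waiting_since": 0}]
age(queue, now=10)
print(queue[0]["priority"])   # 4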

Advantages of Aging:

 Guarantees that no process is starved.


 Simple to implement and effective at solving starvation issues.

Disadvantages:

 May introduce some overhead due to the regular checking and updating of priorities.

2. Fair Share Scheduling

 Fair Share Scheduling is a technique where each user or process group is allocated a share of
the CPU time, ensuring that each process gets an equal opportunity to run.
 This approach helps prevent any process from monopolizing the CPU, thus avoiding starvation.
 Instead of assigning priorities based solely on external criteria (such as process priority), the
system distributes CPU time according to the fair share of resources allotted to each user or
group.

Advantages:

 Ensures fairness across users or processes.


 Helps prevent starvation in multi-user systems.

Disadvantages:

 More complex than traditional priority scheduling.

3. Time Quantum or Round-Robin with Priority

 This method combines round-robin scheduling with priority scheduling to avoid starvation.
 High-priority processes are allowed to run first, but once their time slice (quantum) expires, the
scheduler gives a chance to lower-priority processes.
 This approach ensures that even lower-priority processes get to execute within a reasonable
timeframe, avoiding indefinite waiting.

Example:

 Processes are scheduled based on priority, but each process is only allowed to execute for a fixed
time slice (e.g., 50ms). After a process's time slice expires, the next process with the highest
priority is scheduled, ensuring all processes receive CPU time.

Advantages:

 Lower-priority processes get a fair share of CPU time.


 Ensures that all processes eventually get executed, even if they have lower priorities.

Disadvantages:

 May reduce the efficiency of higher-priority tasks as they may need to wait for the round-robin
scheduling to complete.

4. Priority Recalculation

 In some systems, priority recalculation or re-assignment is used, where the priority of


processes is recalculated based on factors such as waiting time, process importance, and resource
utilization.
 This dynamic adjustment can ensure that long-waiting processes are given higher priority over
time, while at the same time ensuring that the system remains responsive to high-priority tasks.

Example:

 If a process waits in the queue for too long, its priority is recalculated to increase its chances of
being executed, reducing the likelihood of starvation.

Advantages:

 Dynamically adapts to changing system conditions.


 Balances the need for high-priority and low-priority processes.

Disadvantages:

 More complex and requires careful design to prevent priority inversion and ensure fairness.

5. Multiple Queues (Multilevel Queue Scheduling)

 Multilevel Queue Scheduling divides processes into multiple priority queues based on their
priority levels.
 Within each queue, processes are scheduled using Round Robin or other scheduling algorithms.
When a process in a higher-priority queue completes, the system may check the next lower-
priority queue for any processes waiting for execution.
 This approach ensures that lower-priority queues are not ignored and processes in these queues
eventually get executed.

Advantages:

 Helps balance the needs of high- and low-priority processes.


 Fairer than simple priority scheduling with a single queue.

Disadvantages:

 More complex to manage.


 Requires maintaining multiple queues and monitoring processes across them.

6. Priority Inheritance

 Priority Inheritance is used in systems where processes interact with shared resources. In this
approach, when a high-priority process is waiting for a resource held by a low-priority process,
the low-priority process temporarily inherits the high priority to avoid starvation of the high-
priority process.
 Although primarily used to avoid priority inversion, it also prevents starvation in systems where
resources are shared.

Advantages:

 Ensures that processes holding resources are given priority when waiting processes are more
important.
 Reduces the chances of priority inversion and starvation.

Disadvantages:

 Complex to implement and maintain.


 May introduce additional delays in certain scenarios.

Conclusion:

Starvation in Priority Scheduling can be effectively avoided through mechanisms like aging,
fair share scheduling, time quantum, and priority recalculation. Each method ensures that
low-priority processes eventually receive CPU time, preventing them from being indefinitely
delayed. The choice of strategy depends on the specific requirements of the system, such as
whether fairness or efficiency is the primary goal.


5. What are the differences between preemptive and non-preemptive priority scheduling?

The key difference between preemptive and non-preemptive Priority Scheduling lies in
whether or not a running process can be interrupted by a higher-priority process. Here are the
key distinctions:

1. Preemption

 Preemptive Priority Scheduling:


o In preemptive priority scheduling, if a process with a higher priority arrives while a
process is already running, the currently running process is preempted (interrupted)
and placed back in the ready queue.
o The process with the higher priority is then executed.
o This allows for immediate scheduling of the higher-priority process, even if the
currently running process hasn’t finished its execution.

 Non-Preemptive Priority Scheduling:


o In non-preemptive priority scheduling, once a process begins execution, it runs to
completion or until it voluntarily relinquishes the CPU (e.g., through I/O operations).
o A process with a higher priority cannot preempt a running process; it must wait until the
CPU is free.
o This means that higher-priority processes may have to wait their turn if a lower-priority
process is already executing.

2. Handling of High-Priority Processes

 Preemptive Priority Scheduling:


o High-priority processes are executed immediately when they arrive, even if a lower-
priority process is running.
o If a process with a higher priority arrives while another is running, the current process is
interrupted and placed back into the ready queue, allowing the higher-priority process to
execute.
 Non-Preemptive Priority Scheduling:
o A high-priority process must wait for the CPU until the currently executing process
completes or voluntarily relinquishes control.
o A lower-priority process that is running will not be interrupted even if a higher-priority
process arrives.

3. Context Switching

 Preemptive Priority Scheduling:


o More frequent context switches occur because the CPU can switch to a higher-priority
process at any time.

o This leads to higher overhead due to the need for saving and restoring process states.
 Non-Preemptive Priority Scheduling:
o Fewer context switches occur because processes are not interrupted once they start
executing. The system only switches when a process completes or waits for I/O.
o This results in lower overhead compared to preemptive scheduling.

4. Risk of Starvation

 Preemptive Priority Scheduling:


o Starvation is still possible, especially for low-priority processes, because higher-
priority processes can continually preempt them.
o However, aging techniques (increasing the priority of waiting processes) can mitigate
this risk.
 Non-Preemptive Priority Scheduling:
o Starvation is less of an issue compared to preemptive priority scheduling because the
system does not preempt a running process. However, if a low-priority process is
blocked behind higher-priority ones, it could still face indefinite waiting.

5. Fairness

 Preemptive Priority Scheduling:


o Less fair because a high-priority process can monopolize the CPU, especially if it keeps
arriving and preempting the lower-priority processes.
o To address fairness, techniques like aging or time-slicing may be used.
 Non-Preemptive Priority Scheduling:
o More fair in the sense that once a process starts executing, it will not be interrupted. This
allows lower-priority processes a chance to complete their tasks without being
continuously preempted.

6. Efficiency

 Preemptive Priority Scheduling:


o More efficient in terms of responsiveness because high-priority tasks are given
immediate access to the CPU.
o However, it may lead to inefficient utilization of CPU due to context switches and the
frequent switching between processes.
 Non-Preemptive Priority Scheduling:
o Less efficient in terms of responsiveness because higher-priority processes have to wait
until lower-priority processes finish.
o However, less CPU time is wasted on context switching, so it can be more efficient in
systems with fewer process interruptions.

4. Round Robin Scheduling

1. Explain the Round Robin (RR) scheduling algorithm.

Round Robin (RR) is one of the simplest and most widely used preemptive scheduling
algorithms in time-sharing systems. It is designed to be fair and give each process a fair share
of the CPU.

How Round Robin Scheduling Works:

1. Process Queue:
o In Round Robin scheduling, all processes are placed in a FIFO (First In, First Out)
queue called the ready queue.
o The processes are assigned in the order they arrive, and each process is given a fixed
time slice or quantum to execute. This time slice is typically a small, predefined time
period, such as 10-100 milliseconds.

2. Execution Cycle:
o The CPU scheduler selects the first process in the ready queue and gives it the CPU for
one time quantum.
o If the process completes its execution within the time quantum, it is removed from the
queue, and the next process is selected.
o If the process does not complete within its time quantum, it is preempted, placed back
into the ready queue, and the next process in the queue is selected to execute. The
preempted process waits for its next turn to execute.

3. Rotation:
o After each process executes for its time slice, the CPU moves to the next process in the
queue, which means that processes are executed in a cyclic manner.
o This round-robin nature continues until all processes in the ready queue have completed
their execution.

4. Time Quantum:
o The length of the time quantum plays a significant role in the performance of the
Round Robin algorithm. A short quantum can lead to excessive context switching
(reducing efficiency), while a long quantum can lead to the system behaving more like
First Come First Served (FCFS), where processes with long burst times may cause
delays for others.

Example of Round Robin Scheduling:

Let’s consider an example with three processes:

Process  Arrival Time  Burst Time
P1       0             5
P2       1             3
P3       2             8

And assume a time quantum of 3 units.

Execution Timeline:

1. At time 0, Process P1 starts execution and runs for 3 units (time quantum). It completes 3 units of
work, and now 2 units are left.
o Remaining for P1: 2 units
o Current Time: 3
2. At time 3, Process P2 is selected and runs for 3 units (time quantum). It completes its work, and
it finishes execution.
o Remaining for P2: 0 units (completed)
o Current Time: 6
3. At time 6, Process P3 starts execution and runs for 3 units (time quantum).
o Remaining for P3: 5 units
o Current Time: 9
4. At time 9, Process P1 is scheduled again and runs for its remaining 2 units.
o Remaining for P1: 0 units (completed)
o Current Time: 11
5. At time 11, Process P2 is already completed, so we move to Process P3, which runs for 3 units.
o Remaining for P3: 2 units
o Current Time: 14
6. At time 14, Process P3 runs its final 2 units and completes.
o Remaining for P3: 0 units (completed)
o Current Time: 16

Gantt Chart for the Example:

Time:     0----3----6----9----11----14----16
Process:  | P1 | P2 | P3 | P1 | P3  | P3  |

2. How does the time quantum (time slice) in Round Robin scheduling affect performance?

The time quantum (also called the time slice) in Round Robin (RR) scheduling plays a crucial
role in determining how efficiently the system manages processes and responds to tasks. It is the
fixed amount of time a process is allowed to run before it is preempted and the next process is
given the CPU. The time quantum directly affects the performance of Round Robin scheduling
in several ways. Here’s a breakdown of the different aspects of performance that are influenced
by the time quantum:

1. Impact on Context Switching

 Short Time Quantum:


o More Context Switches: If the time quantum is too small (e.g., 1ms or 10ms), processes
are preempted very frequently before they have a chance to complete much work.
o This leads to excessive context switching (switching from one process to another),
which introduces overhead, reducing the overall efficiency of the system. Context
switching involves saving the state of the current process and loading the state of the
next process, which consumes CPU time and resources.
o High Overhead: If the processes spend a lot of time being swapped in and out of the
CPU without making much progress, the system can become inefficient, as CPU time is
spent on managing process states rather than performing useful computations.

 Long Time Quantum:


o Fewer Context Switches: With a larger time quantum (e.g., 100ms or more), processes
are allowed to run longer before being preempted. This reduces the number of context
switches and the associated overhead.
o More Efficient: Fewer context switches result in a more efficient use of CPU time, as
each process is allowed to complete more work in one go.

2. Response Time and Turnaround Time

 Short Time Quantum:


o Improved Responsiveness for Interactive Tasks: For interactive tasks or user-facing
applications (e.g., web browsers, text editors), a short time quantum can lead to better
response times. Since processes are frequently switched, interactive processes get
frequent access to the CPU, improving the system’s responsiveness to user input.
o Lower Turnaround Time for Small Jobs: Processes that don’t require much CPU time
will complete more quickly since they are given regular access to the CPU.

 Long Time Quantum:


o Delayed Response for Interactive Tasks: If the quantum is too long, interactive
processes might be delayed, leading to poor response times. A process may have to wait
too long before it gets its turn, especially if CPU-bound processes are hogging the CPU.
o Higher Turnaround Time for Small Jobs: Processes that require only small amounts of
CPU time might take longer to complete because they are not given frequent access to the
CPU. They have to wait for the time quantum of other processes to finish.

3. CPU Utilization

 Short Time Quantum:


o Lower CPU Utilization: With excessive context switching and smaller time slices, the
system may experience reduced CPU utilization. More time is spent on saving and
restoring process states than actually executing processes.
o This may lead to inefficient CPU usage, especially if processes do not fully utilize the
small quantum and are frequently swapped in and out.

 Long Time Quantum:


o Higher CPU Utilization: Longer time quanta mean that each process has a better chance
of utilizing the CPU for more extended periods, potentially leading to better overall CPU
utilization, especially for CPU-bound tasks.

4. Fairness

 Short Time Quantum:


o Fairer for Small and Interactive Tasks: Shorter time quanta help to achieve better
fairness between processes, especially for those with
shorter burst times. No process can monopolize the CPU for too long, and each gets a
reasonable share of time.
o More Equitable: All processes get access to the CPU at roughly the same rate, which is
important in systems where multiple users or processes need to share resources equally.

 Long Time Quantum:


o Potentially Unfair for Short Tasks: If the time quantum is too long, it might favor
longer CPU-bound processes that can complete their tasks in a single quantum, leaving
shorter tasks waiting for extended periods, potentially causing inequitable resource
allocation.

5. Throughput

 Short Time Quantum:


o Decreased Throughput for CPU-Bound Tasks: Processes that require substantial CPU
time may perform poorly with a short time quantum, as they are frequently interrupted
before they can finish their work.
o Lower System Throughput: Frequent context switching and the time spent swapping
processes can lead to lower throughput, which is the total amount of work done in a
given time period.

 Long Time Quantum:


o Improved Throughput for CPU-Bound Tasks: With a longer quantum, CPU-bound
processes have more time to complete their tasks without interruption, which can result in
higher throughput for these types of processes.
o However, the throughput for interactive tasks may decrease, as those tasks are given
less frequent CPU time.

6. Starvation and Fairness

 Short Time Quantum:


o Reduces Starvation: With frequent context switching, processes are less likely to be
starved because every process is given a relatively quick chance to run.
o Fairer System: Since each process gets a small time slice, even CPU-bound tasks that
require more time to finish will be preempted and allowed to run again in the future.

 Long Time Quantum:


o Potential for Starvation: If CPU-bound processes take longer to execute than the time
quantum, other processes, particularly interactive tasks, may be starved if the CPU is
monopolized by these longer-running tasks.
o Less Fairness: Long time quanta could cause some processes to wait too long before
their turn to execute, which might create an imbalance in system fairness.

3. Given the following processes, calculate the average waiting time and average turnaround
time using Round Robin scheduling with a time quantum of 4 units:

Process  Arrival Time  Burst Time
P1       0             5
P2       1             3
P3       2             4
P4       3             2

To calculate the average waiting time and average turnaround time using Round Robin
scheduling with a time quantum of 4 units, let's follow the steps systematically.

Step 1: Initialize and organize the process queue

The processes are executed in the order they arrive, with the time quantum being 4 units.

Step 2: Round Robin Execution

 Time quantum = 4 units


 Processes will be executed in a round-robin fashion, with each process receiving 4 units of CPU
time before being preempted (if necessary).

Execution Timeline

We will go through the time steps, with each process getting up to 4 units of CPU time at a time.

1. At time 0: P1 starts execution and runs for 4 units (time quantum).


o Remaining burst time for P1: 5 - 4 = 1 unit
o Current time: 4

2. At time 4: P2 starts execution and runs for 3 units (its burst time).
o Remaining burst time for P2: 3 - 3 = 0 units (completed)
o Current time: 7
3. At time 7: P3 starts execution and runs for 4 units (time quantum).
o Remaining burst time for P3: 4 - 4 = 0 units (completed)
o Current time: 11
4. At time 11: P4 starts execution and runs for 2 units (it has less than 4 units remaining).
o Remaining burst time for P4: 2 - 2 = 0 units (completed)
o Current time: 13
5. At time 13: P1 resumes and runs for its remaining 1 unit of burst time.
o Remaining burst time for P1: 1 - 1 = 0 units (completed)
o Current time: 14

Step 3: Completion Times

The completion times for each process are as follows:

Process  Completion Time
P1       14
P2       7
P3       11
P4       13

Step 4: Turnaround Time Calculation

The turnaround time for each process is calculated as:

Turnaround Time = Completion Time - Arrival Time

Process  Arrival Time  Completion Time  Turnaround Time
P1       0             14               14 - 0 = 14
P2       1             7                7 - 1 = 6
P3       2             11               11 - 2 = 9
P4       3             13               13 - 3 = 10

Step 5: Waiting Time Calculation

The waiting time for each process is calculated as:

Waiting Time = Turnaround Time - Burst Time

Process  Burst Time  Turnaround Time  Waiting Time
P1       5           14               14 - 5 = 9
P2       3           6                6 - 3 = 3
P3       4           9                9 - 4 = 5
P4       2           10               10 - 2 = 8

Average Turnaround Time = (14 + 6 + 9 + 10) / 4 = 9.75

Average Waiting Time = (9 + 3 + 5 + 8) / 4 = 6.25
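These figures can be double-checked with a short simulation. The sketch below (illustrative helper name, not part of the lab material) keeps the ready queue in a deque and assumes the textbook convention that processes arriving during a time slice enter the queue ahead of the preempted process:

from collections import deque

# Round Robin sketch with quantum 4; data from the table above.

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst); returns completion times."""
    processes = sorted(processes, key=lambda p: p[1])
    remaining = {name: burst for name, _, burst in processes}
    ready, done = deque(), {}
    time, i = 0, 0
    while len(done) < len(processes):
        while i < len(processes) and processes[i][1] <= time:
            ready.append(processes[i])        # admit new arrivals
            i += 1
        if not ready:                         # CPU idles until the next arrival
            time = processes[i][1]
            continue
        name, arrival, burst = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(processes) and processes[i][1] <= time:
            ready.append(processes[i])        # arrivals during this slice
            i += 1
        if remaining[name]:
            ready.append((name, arrival, burst))   # back of the queue
        else:
            done[name] = time
    return done

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 4), ("P4", 3, 2)]
print(round_robin(procs, 4))   # {'P2': 7, 'P3': 11, 'P4': 13, 'P1': 14}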

4. What are the advantages of using the Round Robin scheduling algorithm?

Here are the key advantages of using Round Robin scheduling:

Advantages:

 Fair: Ensures every process gets an equal share of CPU time, avoiding CPU monopolization.
 Simple: Easy to implement with minimal computational overhead.
 Preemptive: Provides better responsiveness and ensures no process is left waiting too long.
 Responsive: Ideal for interactive and time-sharing systems.
 Predictable: Easy to analyze and predict system behavior.
 Balanced: Works well in multi-user systems and environments with mixed workloads.

Round Robin is particularly effective when fairness is a priority, and the system needs to ensure
all tasks get their due share of CPU time. However, it is important to carefully tune the time
quantum to optimize system performance. Too short a time quantum leads to excessive context
switching, while too long a quantum may cause the system to behave like First-Come, First-
Served (FCFS) scheduling, which could lead to poor performance for short tasks.

5. Explain the concept of context switching in Round Robin scheduling. How does it impact
system performance?

Context switching is the process of saving the state of a currently running process and restoring
the state of the next process that will run. In Round Robin (RR) scheduling, context switching
happens every time the time quantum (time slice) for a process expires, or when a process is
preempted. This involves storing the current process's context (such as its CPU registers,
program counter, etc.) and loading the context of the next process in the ready queue.

Context Switching in Round Robin Scheduling

1. Time Quantum Expiry:


o In Round Robin scheduling, each process gets a fixed time quantum to execute. When
the time quantum expires, the running process is preempted
and the next process in the ready queue is given a chance to execute.
o The preemption leads to a context switch, where the state of the currently executing
process (such as CPU registers, program counter, etc.) is saved, and the state of the next
process is loaded.

2. Queue Management:
o After a context switch, the current process (if not completed) is placed at the end of the
ready queue, while the next process in the queue is scheduled to run.
o This circular queue structure continues, where each process gets executed in a round-
robin manner, receiving its turn in the queue.

3. Execution Flow:
o The process cycle continues until all processes in the system are completed. After every
context switch, the system moves from one process to another, ensuring fair time-
sharing among processes.

Impact of Context Switching on System Performance

Context switching has both positive and negative impacts on system performance. Here are the
key factors to consider:

1. Overhead

 Time and Resources: Context switching incurs a performance overhead because it requires
time to save and restore the process states. This overhead includes saving the CPU registers, the
program counter, memory management data, and loading the state of the next process.
 System Efficiency: While context switching is essential to enable multitasking, it reduces overall
system efficiency because the CPU is occupied with switching rather than performing useful
work (i.e., executing processes). The more context switches occur, the higher the overhead,
leading to reduced throughput.

2. Increased CPU Usage for Management

 Each context switch consumes CPU cycles for the management tasks involved (saving and
loading the process states). If the time quantum is set too small, the system may perform
frequent context switches, leading to high overhead, especially for processes that do not need
much CPU time.
 On the other hand, if the time quantum is too large, the system behaves more like First-Come,
First-Served (FCFS) scheduling, leading to longer waiting times for other processes and
potentially making the system less responsive.

3. Impact on Throughput

 High Context Switching: If the time quantum is too small or the system is overloaded with too
many processes, the throughput of the system can be negatively impacted. Processes are

frequently interrupted, and the time spent switching between
processes reduces the overall productive work done by the system.
 Lower Throughput for Short Jobs: Short processes or small tasks can be disrupted by frequent
context switching, leading to delays in their completion. If the time quantum is too large, long
processes might dominate, increasing waiting times for short tasks.

4. Reduced Responsiveness

 Effect on Interactive Systems: In systems with interactive processes (e.g., user interfaces),
context switching can impact responsiveness. For example, if the time quantum is too large, an
interactive process may have to wait a long time before getting the CPU again, making the system
less responsive. Conversely, if the quantum is too small, excessive context switching can slow
down the execution of interactive processes.
 Fairness and Wait Time: The balance between fairness and responsiveness is critical. While
context switching ensures fairness in CPU allocation, excessive switching can lead to
performance degradation, particularly in time-sensitive applications.

5. Cache and Memory Impacts

 Cache Misses: Context switching can lead to cache misses because when a process is switched
out and another is switched in, the CPU cache may no longer contain useful data for the new
process. The new process may need to reload data into the cache, leading to slower performance.
 Memory Access Delays: In addition to CPU register states, memory pages may also need to be
switched in or out, especially in systems with virtual memory. This can cause additional delays
if the process has large memory requirements or if memory pages are not in physical memory
(leading to page faults).

6. Starvation Risk

 While Round Robin is designed to avoid starvation by ensuring that each process gets a fair
share of the CPU, excessive context switching can still cause issues with CPU-bound processes.
For example, if short tasks frequently preempt long tasks, it can result in long waiting times for
CPU-bound processes, especially in systems with many processes and small time quanta.

Mitigating the Impact of Context Switching

To minimize the negative effects of context switching, it’s essential to optimize the time
quantum. Here are some strategies:

 Tuning Time Quantum: The time quantum should be set large enough to allow processes to
complete some useful work but not so large that the system behaves like FCFS. A moderate
quantum can strike a balance between fairness, responsiveness, and efficiency.
 Reducing Context Switching Frequency: If a system has a heavy workload, reducing the
frequency of context switching (by adjusting the time quantum or process scheduling) can
improve overall performance.

 Prioritization: In some systems, priority scheduling or multi-level queues can be used
alongside Round Robin to better manage the execution of processes and reduce unnecessary
context switching.
