
OS Unit 3

CPU Scheduling

CPU Scheduling is the process of selecting which process from the ready queue will be
allocated the CPU for execution. It is a crucial aspect of multitasking operating systems,
ensuring efficient utilization of the CPU and improving system performance.

Why is CPU Scheduling Needed?

1. Multiprogramming: To maximize CPU utilization, multiple processes are kept in memory, and scheduling decides which one gets CPU time.
2. Resource Sharing: Since multiple processes compete for CPU time, scheduling helps in fair and efficient resource distribution.
3. Minimizing Waiting Time: A good scheduling strategy reduces process waiting
time, improving overall performance.
4. Ensuring System Responsiveness: In interactive systems, scheduling helps provide
quick responses to user interactions.

When Does CPU Scheduling Occur?

CPU scheduling takes place when a process:

1. Switches from running to waiting state (e.g., due to I/O requests).
2. Switches from running to ready state (e.g., due to time quantum expiration in time-sharing systems).
3. Switches from waiting to ready state (e.g., after an I/O operation completes).
4. Terminates (so the CPU needs to assign another process).

Performance Criteria for CPU Scheduling

Scheduling criteria are the key performance measures used to evaluate CPU scheduling algorithms.
These criteria help determine the efficiency, fairness, and effectiveness of a scheduling strategy.

1. CPU Utilization

 Definition: The percentage of time the CPU is actively executing processes rather than being
idle.
 Goal: Maximize CPU utilization to ensure efficient use of system resources.

2. Throughput

 Definition: The number of processes completed per unit of time.

Formula:

 Throughput=(Number of completed processes)/Total time taken


 Goal: Higher throughput means the system is processing more tasks efficiently.
 Example: If 5 processes complete in 10 seconds, the throughput is 0.5 processes per second.

3. Turnaround Time

 Definition: The total time taken for a process to complete from submission to termination.
 Formula:

Turnaround Time=Completion Time−Arrival Time

 Goal: Minimize turnaround time to improve overall system responsiveness.

4. Waiting Time

 Definition: The total time a process spends in the ready queue waiting for CPU execution.
 Formula:

Waiting Time=Turnaround Time−CPU Burst Time

 Goal: Minimize waiting time to prevent process starvation and inefficiency.

5. Response Time

 Definition: The time taken from when a process is submitted until it starts execution for the
first time.
 Formula:

Response Time=First Execution Time−Arrival Time

 Goal: Lower response time is crucial for interactive systems to provide fast user feedback.

6. Fairness

 Definition: Ensuring that all processes get a fair share of CPU time without starvation.
 Goal: Prevent starvation by balancing short and long processes effectively.

7. Predictability

 Definition: The scheduling algorithm should provide consistent and predictable performance.
 Goal: Minimize variations in response and execution times.

8. Priority Enforcement

 Definition: Some scheduling algorithms assign priority levels to processes, ensuring that higher-priority tasks are executed first.
 Goal: Ensure critical processes execute without unnecessary delays.
Objective of CPU Scheduling

The primary objective of CPU scheduling is to optimize the execution of processes by efficiently managing CPU time. This helps in achieving maximum system performance, responsiveness, and resource utilization.

Key Objectives of CPU Scheduling

1. Maximizing CPU Utilization
o Ensure the CPU remains busy as much as possible to reduce idle time.
2. Maximizing Throughput
o Increase the number of processes completed per unit of time.
3. Minimizing Turnaround Time
o Reduce the time taken for a process to complete execution.
4. Minimizing Waiting Time
o Reduce the time a process spends waiting in the ready queue.
5. Minimizing Response Time
o Ensure processes start executing as soon as possible, improving user
experience in interactive systems.
6. Ensuring Fairness
o Provide equal opportunity for all processes to execute and avoid process
starvation.
7. Prioritizing Important Processes
o Allow high-priority tasks to execute first when needed (e.g., real-time
systems).
8. Balancing Load Among Processes
o Distribute CPU time effectively across multiple processes to prevent
bottlenecks.
9. Providing Predictability
o Ensure that similar processes have similar execution times to improve system
reliability.

Pre-emptive vs. Non-Pre-emptive Scheduling

CPU scheduling can be classified into two main types: pre-emptive scheduling and non-
pre-emptive scheduling. The key difference between them is whether a running process can
be forcibly interrupted or not.

1. Pre-emptive Scheduling

🔹 Definition: In pre-emptive scheduling, the CPU can be taken away from a running
process if a higher-priority process arrives or the time slice (quantum) expires.
🔹 Characteristics:

 The CPU can switch to another process before the current one finishes.
 It allows better resource sharing and multitasking.
 It is used in time-sharing and real-time operating systems.

🔹 Example Algorithms:
✅ Round Robin (RR)
✅ Shortest Remaining Time First (SRTF)
✅ Priority Scheduling (Pre-emptive)
✅ Multilevel Queue Scheduling

🔹 Advantages:
✔ Better responsiveness for high-priority tasks.
✔ Prevents process starvation in long-running tasks.
✔ Increases CPU utilization.

🔹 Disadvantages:
✖ Frequent context switching increases overhead.
✖ More complex to implement.

2. Non-Pre-emptive Scheduling

🔹 Definition: In non-pre-emptive scheduling, once a process starts execution, it runs until it completes or voluntarily enters the waiting state (e.g., for I/O operations).

🔹 Characteristics:

 The CPU cannot be taken away from a running process.
 Processes are executed in a sequential manner.
 Simpler to implement than pre-emptive scheduling.

🔹 Example Algorithms:
✅ First Come First Serve (FCFS)
✅ Shortest Job First (SJF)
✅ Priority Scheduling (Non-Pre-emptive)

🔹 Advantages:
✔ No overhead of context switching.
✔ Simpler and easier to implement.

🔹 Disadvantages:
✖ Can lead to process starvation (long jobs delay short ones).
✖ Less efficient for multitasking.
Key Differences Between Pre-emptive and Non-Pre-emptive Scheduling

Feature | Pre-emptive Scheduling | Non-Pre-emptive Scheduling
Process Interruption | Allowed (CPU can be taken away) | Not allowed (runs till completion)
Response Time | Faster | Slower
Complexity | More complex (requires context switching) | Simpler
CPU Utilization | High | Lower
Example Algorithms | RR, SRTF, Pre-emptive Priority | FCFS, SJF, Non-Pre-emptive Priority

Multi-Level Queue Scheduling (MLQ)

🔹 Definition: Multi-Level Queue (MLQ) Scheduling is a CPU scheduling algorithm that divides
processes into different priority-based queues and assigns a separate scheduling algorithm to each
queue.

🔹 Why is MLQ Scheduling Needed?

 Different types of processes have different scheduling needs (e.g., system processes vs. user
processes).
 Some tasks require fast response times (interactive processes), while others need CPU-
intensive execution (batch jobs).
 MLQ ensures efficient CPU utilization and fair resource allocation for different process
categories.
1. Structure of Multi-Level Queue Scheduling

 The ready queue is divided into multiple priority-based queues.
 Each queue has its own scheduling algorithm (e.g., FCFS for one queue, RR for another).
 The CPU scheduler selects processes based on queue priority.

🔹 Example of Queue Categories:

Queue Type Priority Example Scheduling Algorithm

System Processes Highest FCFS (First Come First Serve)

Interactive Processes High RR (Round Robin)

Foreground User Jobs Medium SJF (Shortest Job First)

Background/Batched Jobs Low FCFS or SJF

2. Types of Multi-Level Queue Scheduling

🔹 Fixed Priority Scheduling (Non-Preemptive MLQ)

 The CPU always selects processes from the highest-priority queue.
 Lower-priority queues only get CPU time when higher-priority queues are empty.
 Issue: Can lead to starvation for lower-priority processes.

🔹 Time-Sliced Scheduling (Preemptive MLQ)

 CPU time is divided among queues based on a time slice.
 Each queue gets a share of CPU time, preventing starvation.
 Example: Foreground gets 80% of CPU time, background gets 20%.

3. Advantages of Multi-Level Queue Scheduling

✔ Efficient CPU Utilization – Different process types get CPU time based on their needs.
✔ Better Responsiveness – Interactive and system processes get priority.
✔ Flexibility – Each queue can have a different scheduling algorithm.

4. Disadvantages of Multi-Level Queue Scheduling

✖ Starvation – Lower-priority queues may never get CPU time if priority rules are strict.
✖ Complex Implementation – Requires managing multiple scheduling algorithms.
✖ Overhead – Context switching between queues adds CPU overhead.
Types of Schedulers in Operating Systems

Schedulers are responsible for managing process execution by selecting processes at different stages
of their lifecycle. There are three main types of schedulers:

1. Long-Term Scheduler (Job Scheduler)
2. Short-Term Scheduler (CPU Scheduler)
3. Medium-Term Scheduler

1. Long-Term Scheduler (Job Scheduler)

🔹 Purpose: Controls which processes are admitted into the system for execution.
🔹 Function:

 Selects processes from the job queue and loads them into main memory.
 Ensures a balanced mix of CPU-bound and I/O-bound processes.
 Controls the degree of multiprogramming (number of processes in memory).
🔹 Execution Frequency: Infrequent (Runs only when a new process arrives).
🔹 Example: In batch processing systems, the long-term scheduler decides which jobs enter
execution.

✅ Advantage: Helps in optimizing CPU and memory utilization.
❌ Disadvantage: If too many processes are admitted, it can lead to memory overload.

2. Short-Term Scheduler (CPU Scheduler)

🔹 Purpose: Determines which process from the ready queue will execute next on the CPU.
🔹 Function:

 Selects a process for execution whenever the CPU becomes idle.
 Uses CPU scheduling algorithms (FCFS, SJF, Round Robin, etc.).
🔹 Execution Frequency: Very frequent (runs every few milliseconds).
🔹 Example: If multiple processes are ready, the CPU scheduler picks one to execute next.

✅ Advantage: Ensures efficient CPU usage by quickly assigning processes.
❌ Disadvantage: Frequent context switching increases CPU overhead.

3. Medium-Term Scheduler

🔹 Purpose: Temporarily removes processes from memory (swapping) to reduce system load.
🔹 Function:

 Suspends processes and moves them to secondary storage (swap space).
 Later reintroduces them into memory when resources are available.
 Helps control multiprogramming by reducing the number of active processes.
🔹 Execution Frequency: Occasional (Runs when system load is high).
🔹 Example: If too many processes are in memory, some are swapped out to free up space.
✅ Advantage: Prevents memory congestion and improves performance.
❌ Disadvantage: Swapping processes in and out causes disk I/O overhead.

Comparison of Schedulers

Feature | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
Purpose | Admits processes into memory | Selects process for CPU execution | Swaps processes in and out of memory
Frequency | Infrequent | Very frequent | Occasional
Speed | Slow | Fast | Medium
Controls | Degree of multiprogramming | CPU scheduling | Memory utilization

Process States in an Operating System

A process goes through multiple states during its lifecycle. These states define what the
process is doing at a given time. The main process states are:

1. New State

🔹 Definition: The process is being created but is not yet ready to execute.
🔹 What Happens?

 The OS initializes the process control block (PCB).
 The process waits for system resources to be allocated.
🔹 Example: A user starts a program, and it is loaded into memory.
2. Ready State

🔹 Definition: The process is in memory and ready to run but waiting for CPU execution.
🔹 What Happens?

 The scheduler places the process in the ready queue.
 It waits until the CPU becomes available.
🔹 Example: Multiple processes are ready, but the CPU is executing another process.

3. Running State

🔹 Definition: The process is actively executing on the CPU.
🔹 What Happens?

 The CPU scheduler selects the process from the ready queue.
 The process runs until it completes or is interrupted.
🔹 Example: A user application is currently running and performing computations.

4. Waiting (Blocked) State

🔹 Definition: The process is waiting for an I/O operation or some event to complete.
🔹 What Happens?

 The process cannot proceed until the required resource is available.
 It is moved to the waiting queue.
🔹 Example: A process is waiting for user input or a disk read/write operation.

5. Terminated (Exit) State

🔹 Definition: The process has finished execution and is removed from memory.
🔹 What Happens?

 The OS deallocates its resources.
 The process control block (PCB) is deleted.
🔹 Example: A program closes after completing execution.

6. Suspended States (Optional in Some OS)

🔹 Suspended Ready: Process is in memory but temporarily stopped (paused).
🔹 Suspended Blocked: Process is waiting but swapped out to disk (not in main memory).
Context Switching in Operating Systems

What is Context Switching?

🔹 Definition: Context switching is the process of saving and restoring the state of a CPU so that
execution can resume from the same point later. It occurs when the CPU switches from one process to
another.

🔹 Why is Context Switching Needed?

 To enable multitasking (executing multiple processes).
 To switch between user and kernel mode.
 To allow preemptive scheduling (switching between processes based on priority).

Steps in Context Switching

1. Save the Current Process State

 The OS saves the CPU registers (e.g., program counter, stack pointer) of the currently
running process in its Process Control Block (PCB).
 The process is then moved to the Ready or Waiting state.

2. Load the Next Process State

 The OS selects the next process from the ready queue using the CPU scheduler.
 It loads the saved state from the PCB of the selected process.

3. Resume Execution

 The CPU resumes execution of the new process from where it left off.
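
💡 Illustration: the save-and-restore control flow above can be mimicked in user space with the POSIX ucontext API (getcontext/makecontext/swapcontext, obsolescent but still supported by glibc). This is only a minimal sketch of the idea – a real context switch saves registers into the PCB in kernel mode – and all names in it are invented for the example.

/* Sketch: two execution contexts handing the CPU back and forth. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_task;

static void task(void) {
    printf("task: running, switching back to main\n");
    swapcontext(&ctx_task, &ctx_main);   /* save task's state, restore main's */
    printf("task: resumed exactly where it left off\n");
}

int main(void) {
    static char stack[64 * 1024];        /* private stack for the task context */

    getcontext(&ctx_task);               /* initialize the context structure */
    ctx_task.uc_stack.ss_sp = stack;
    ctx_task.uc_stack.ss_size = sizeof stack;
    ctx_task.uc_link = &ctx_main;        /* resume main when task() returns */
    makecontext(&ctx_task, task, 0);

    printf("main: switching to task\n");
    swapcontext(&ctx_main, &ctx_task);   /* save main's state, run task */
    printf("main: back in main, switching to task again\n");
    swapcontext(&ctx_main, &ctx_task);   /* task resumes after its swapcontext */
    printf("main: done\n");
    return 0;
}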
When Does Context Switching Occur?

🔹 Multitasking (Time-sharing Systems) – CPU switches between multiple processes.
🔹 Interrupt Handling – CPU switches to handle an I/O or system interrupt.
🔹 Preemptive Scheduling – A higher-priority process preempts the current process.
🔹 System Calls – A process requests the OS to perform operations like I/O.

Overhead of Context Switching

🔸 Takes CPU time – The CPU must save and restore states, which reduces efficiency.
🔸 Involves Memory Operations – Storing and restoring registers causes additional system load.

💡 Optimization Tip: Reducing unnecessary context switches improves system performance!

Need for Process Suspension in Operating Systems


What is Process Suspension?

🔹 Definition: Process suspension occurs when a process is temporarily removed from memory and
moved to secondary storage (swap space). This is done to free up resources and improve system
performance.

🔹 Where is a Suspended Process Stored?

 The process is placed in secondary storage (disk) instead of RAM.
 The Process Control Block (PCB) remains in memory so the OS can track the process state.

Why is Process Suspension Needed?

1. To Free Up Memory (Reduce Multiprogramming)

 If too many processes are in memory, the system runs out of RAM.
 Suspending some processes frees up space for higher-priority processes.

2. When a Process is Waiting for I/O

 If a process is waiting for slow I/O operations (e.g., disk, network), it does not need the
CPU.
 Suspending the process allows the OS to run other processes efficiently.

3. To Improve CPU Utilization

 If the system is overloaded, the OS removes inactive processes and gives CPU time to active
ones.
 Prevents the system from becoming unresponsive.

4. To Prevent Starvation in Priority Scheduling

 In priority-based scheduling, low-priority processes may never get CPU time.
 Suspending them prevents indefinite waiting and allows other processes to execute.

5. Swapping in Virtual Memory Systems

 When RAM is full, the OS swaps processes to disk and brings them back when needed.
 This allows running more processes than available physical memory.

Types of Suspended Processes

🔹 Suspended Ready: Process is in secondary storage but ready to execute once brought back.
🔹 Suspended Blocked: Process is waiting for an event and has been moved to secondary storage.

Process Control Block (PCB) – Components and Explanation


What is a Process Control Block (PCB)?

🔹 Definition: The Process Control Block (PCB) is a data structure in the operating system that
stores all important information about a process.

🔹 Why is PCB Needed?

 The OS uses the PCB to track process execution.
 It helps in context switching by saving and restoring process states.
 Each process in the system has a unique PCB.

Components of PCB

The PCB contains multiple fields that store different types of process information.

Component | Description
1. Process ID (PID) | A unique identifier assigned to each process.
2. Process State | Indicates whether the process is New, Ready, Running, Waiting, or Terminated.
3. Program Counter (PC) | Stores the memory address of the next instruction to be executed.
4. CPU Registers | Saves the values of CPU registers (e.g., accumulator, stack pointer) for context switching.
5. CPU Scheduling Information | Includes priority, scheduling algorithm, and process queue information.
6. Memory Management Information | Stores base and limit registers, page tables, and segment tables (for memory allocation).
7. I/O Status Information | Tracks allocated I/O devices, open files, and pending I/O requests.
8. Accounting Information | Records CPU usage, execution time, process creation time, and user ID.
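
💡 Illustration: a PCB is essentially a record holding the fields in the table above. Below is a simplified, hypothetical C struct that mirrors them; a real kernel structure (e.g., Linux's task_struct) is far larger, and every field name here is illustrative only.

#include <stdint.h>
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* 1. unique process ID                */
    enum proc_state state;            /* 2. current process state            */
    uint64_t        program_counter;  /* 3. address of the next instruction  */
    uint64_t        registers[16];    /* 4. saved general-purpose registers  */
    int             priority;         /* 5. CPU scheduling information       */
    uint64_t        page_table_base;  /* 6. memory-management information    */
    int             open_files[16];   /* 7. I/O status (open file handles)   */
    uint64_t        cpu_time_used;    /* 8. accounting information           */
};

int main(void) {
    struct pcb p = { .pid = 1, .state = NEW, .priority = 5 };
    p.state = READY;                  /* admitted into memory by the scheduler */
    printf("process %d moved to state %d\n", p.pid, p.state);
    return 0;
}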
Role of PCB in Context Switching

 When switching processes, the OS saves the current process's state in its PCB.
 The OS loads the new process's PCB and restores its state.
 This allows the OS to resume execution from where it left off.

Process Address Space in Operating Systems


What is Process Address Space?

🔹 Definition: A process address space is the range of memory addresses that a process can use
during execution.
🔹 Each process has its own separate address space, preventing interference between processes.

Types of Address Spaces

🔹 Logical Address Space (Virtual Address Space)

 Generated by the CPU during execution.
 Translated into physical addresses by the Memory Management Unit (MMU).

🔹 Physical Address Space

 Actual memory locations in RAM where the process is stored.

💡 Example: A process might use logical addresses 0x1000 to 0x2000, but the OS maps them to
physical addresses 0xA000 to 0xB000 in RAM.

Memory Layout of a Process Address Space

A process address space is typically divided into four main sections:

Memory Section | Description
1. Text (Code) Segment | Stores executable program code (instructions).
2. Data Segment | Stores global and static variables.
3. Heap Segment | Used for dynamic memory allocation (grows upward).
4. Stack Segment | Stores function calls, local variables, and return addresses (grows downward).

📌 Diagram of Process Address Space

High Memory (Largest Address)


+--------------------+
| Stack (Local Vars) |
|--------------------|
| Heap (Dynamic Mem) |
|--------------------|
| Data (Globals) |
|--------------------|
| Text (Code) |
+--------------------+
Low Memory (Smallest Address)
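
💡 Illustration: the layout can be observed from inside a process by printing one address from each region, as in the sketch below. Exact addresses and their ordering vary with the OS, the compiler, and ASLR, so treat the output as indicative only (the function-pointer cast is a common POSIX-style liberty).

#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                           /* data segment (global) */

void where_am_i(void) { }                      /* text segment (code)   */

int main(void) {
    int  local_var = 7;                        /* stack segment         */
    int *heap_var = malloc(sizeof *heap_var);  /* heap segment          */

    printf("text  (code)  : %p\n", (void *)where_am_i);
    printf("data  (global): %p\n", (void *)&global_var);
    printf("heap  (malloc): %p\n", (void *)heap_var);
    printf("stack (local) : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}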

Key Features of Process Address Space

✅ Each process gets a private address space → Prevents one process from modifying another’s
memory.
✅ Address translation (Logical → Physical) ensures memory protection.
✅ Stack and Heap can dynamically grow and shrink as needed.

Process Identification Information in Operating Systems


What is Process Identification Information?

🔹 Definition: Process Identification Information refers to the unique identifiers and details that the
OS maintains to track each process.
🔹 It is stored in the Process Control Block (PCB) and helps in process management, scheduling,
and resource allocation.

Key Process Identification Information


Field Description

1. Process ID (PID) Unique number assigned to each process.

2. Parent Process ID (PPID) ID of the process that created (forked) this process.

3. User ID (UID) & Group ID (GID) Identifies the user and group that owns the process.

4. Process Name The name of the executable program.

5. Process Status Current state: New, Ready, Running, Waiting, Terminated.

6. Priority Scheduling priority assigned to the process.

7. Execution Mode Indicates User Mode or Kernel Mode execution.


Thread in Operating Systems

What is a Thread?

🔹 Definition: A thread is the smallest unit of execution within a process.
🔹 A process can have multiple threads, each executing independently but sharing the same resources (memory, files, etc.).
🔹 Threads allow for parallel execution and improve system efficiency.

Thread vs. Process


Feature | Process | Thread
Definition | An independent program in execution | A lightweight unit of execution within a process
Memory | Has its own memory space | Shares memory with other threads in the same process
Overhead | High (requires context switching) | Low (switching between threads is faster)
Communication | Inter-process communication (IPC) is needed | Threads share data easily
Creation Speed | Slower | Faster
Example | A web browser is a process | Each tab in the browser runs as a thread

Types of Threads

1. User-Level Threads
 Managed without OS kernel intervention.
 Faster to create and switch.
 If one thread blocks, all threads in the process may block.
 Example: Java threads managed by the JVM.

2. Kernel-Level Threads

 Managed by the OS kernel.
 More powerful but slower due to context switching overhead.
 Example: Linux pthreads (POSIX threads); see the sketch below.
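
💡 Illustration: a minimal pthreads sketch of one process with two kernel-level threads sharing a global counter (protected by a mutex, since threads share memory). Compile on Linux with gcc demo.c -pthread; the names are chosen for the example.

#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                 /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock);             /* protect the shared data */
        shared_counter++;
        printf("%s incremented counter to %d\n", name, shared_counter);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);                    /* wait for both threads */
    pthread_join(t2, NULL);
    printf("final counter: %d\n", shared_counter);
    return 0;
}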

Multithreading in Operating Systems

🔹 Single-threaded Process: Has only one thread (e.g., a simple C program).
🔹 Multi-threaded Process: Has multiple threads running concurrently (e.g., a web server handling multiple requests).

Advantages of Multithreading

✅ Faster execution – Multiple threads can run simultaneously.
✅ Efficient CPU utilization – Threads use CPU time effectively.
✅ Better responsiveness – User interfaces remain smooth even if tasks run in the background.

Thread Implementation in Operating Systems


How Are Threads Implemented?

Threads can be implemented at three different levels:

1. User-Level Threads (ULTs) – Managed by user libraries.
2. Kernel-Level Threads (KLTs) – Managed by the operating system.
3. Hybrid Threads – A mix of user- and kernel-level threading.
1. User-Level Thread (ULT) Implementation

🔹 Managed by a user-space library (not the OS).
🔹 The kernel sees the process as a single entity, even if multiple threads exist.

Advantages:

✅ Fast & lightweight (no kernel intervention).
✅ Easier to create and switch between threads.
✅ Works on any OS (no need for kernel support).

Disadvantages:

❌ No true parallelism (if one thread blocks, all threads block).
❌ Not suitable for multi-core systems (since the kernel handles processes, not threads).

2. Kernel-Level Thread (KLT) Implementation

🔹 Managed by the OS kernel – each thread is treated as an independent scheduling unit.
🔹 The OS knows about all threads and can schedule them separately.
Advantages:

✅ True parallelism (threads can run on different CPU cores).
✅ One thread blocking doesn't affect others.
✅ Better multi-core performance.

Disadvantages:

❌ Slower than user-level threads (requires system calls).
❌ More overhead due to context switching.

3. Hybrid Thread Implementation


🔹 Combines user-level and kernel-level threading.
🔹 Threads are created at the user level, but mapped to kernel threads for execution.
🔹 Used in Windows, Solaris, and Linux (M:N threading model).

Advantages:

✅ Balances performance and efficiency.
✅ Faster thread switching compared to pure KLTs.
✅ True parallelism with reduced overhead.

Disadvantages:

❌ Complex to implement.
❌ Thread scheduling must be managed at two levels (user & kernel).

Difference Between User-Level and Kernel-Level Threads


Threads can be implemented at two levels: User-Level Threads (ULTs) and Kernel-Level Threads
(KLTs). The major differences between them are explained below.

Comparison Table: User-Level vs. Kernel-Level Threads


Feature | User-Level Threads (ULTs) | Kernel-Level Threads (KLTs)
Managed By | User-space thread library | Operating system (OS kernel)
OS Awareness | OS does not recognize threads | OS recognizes and schedules each thread
Context Switching Speed | Faster (no kernel mode switching) | Slower (requires kernel intervention)
Parallel Execution | ❌ No true parallelism (single-core scheduling) | ✅ True parallelism (multi-core execution)
Blocking Behavior | ❌ If one thread blocks, all threads in the process block | ✅ One thread blocking does not affect others
System Calls | ❌ Cannot make system calls directly | ✅ Can make system calls directly
Multi-Core Support | ❌ Not supported | ✅ Fully supported
Efficiency | ✅ More efficient (less overhead) | ❌ More overhead (kernel must manage threads)
Portability | ✅ Highly portable across OSs | ❌ OS-dependent implementation
Examples | Java threads, Python threads | Linux pthreads, Windows threads


Thread Cancellation in Operating Systems
What is Thread Cancellation?

🔹 Thread cancellation is the process of terminating a thread before it has finished execution.
🔹 Used when a thread is no longer needed, is consuming too many resources, or needs to be stopped
due to errors.

Types of Thread Cancellation

1. Asynchronous Cancellation (Forceful Termination) 🚀

 The target thread is immediately terminated.
 May leave resources (locks, memory) in an inconsistent state.
 Example: Killing a thread in an emergency.

✅ Pros: Immediate termination.
❌ Cons: Unsafe; may leave resources locked.

2. Deferred Cancellation (Graceful Termination) ✅

 The thread checks at safe points whether it should terminate.
 Allows proper resource cleanup.
 Example: A thread periodically checking if it should exit.

✅ Pros: Prevents resource leaks and ensures safe termination.
❌ Cons: Slower, as the thread must periodically check for cancellation.
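
💡 Illustration: deferred cancellation in pthreads. The worker below honors a pending cancel request only at an explicit cancellation point (pthread_testcancel), i.e. the "safe point" idea described above; the loop body stands in for real work. Compile on Linux with gcc cancel.c -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    /* Deferred is the default cancel type; set it explicitly for clarity. */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
    for (;;) {
        /* ... do one unit of work, release any locks ... */
        pthread_testcancel();       /* safe point: act on a pending cancel */
        usleep(1000);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    void *result;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);                       /* let the worker run briefly */
    pthread_cancel(t);              /* request, not force, termination */
    pthread_join(t, &result);
    if (result == PTHREAD_CANCELED)
        printf("worker was cancelled at a safe point\n");
    return 0;
}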

CPU Scheduling Algorithms


What is CPU Scheduling?

🔹 CPU Scheduling is the process of selecting a process from the ready queue to execute on the CPU.
🔹 It ensures efficient CPU utilization, reduces waiting time, and improves system performance.

Types of CPU Scheduling Algorithms

1. First-Come, First-Served (FCFS) 🏁

 Processes are executed in the order they arrive.
 Non-preemptive (once a process starts, it runs until completion).

✅ Pros: Simple, fair, and easy to implement.
❌ Cons: Long waiting times for large processes (Convoy Effect).

💡 Example
Process Arrival Time Burst Time Execution Order
P1 0 ms 5 ms 1st

P2 1 ms 3 ms 2nd

P3 2 ms 8 ms 3rd

2. Shortest Job Next (SJN) ⏳ (SJF - Shortest Job First)

 Executes the shortest process first (reduces waiting time).
 Can be preemptive (Shortest Remaining Time First - SRTF) or non-preemptive.

✅ Pros: Minimizes waiting time.
❌ Cons: Starvation (longer processes may never execute).

💡 Example (Non-Preemptive SJF)

Process Arrival Time Burst Time Execution Order


P1 0 ms 6 ms 2nd

P2 1 ms 4 ms 1st

P3 2 ms 8 ms 3rd

3. Round Robin (RR) 🔄

 Each process gets a fixed time slice (quantum).
 Preemptive (if a process doesn’t finish, it goes back to the queue).

✅ Pros: Fair for all processes, good for time-sharing systems.
❌ Cons: High context switching overhead.

💡 Example (Quantum = 3 ms)

Process Arrival Time Burst Time Execution Order

P1 0 ms 5 ms P1 (3 ms), P2, P3, P1 (2 ms)

P2 1 ms 3 ms P2 (3 ms)

P3 2 ms 8 ms P3 (3 ms), P3 (3 ms), P3 (2 ms)

4. Priority Scheduling 🎖️

 Each process has a priority value, and the CPU executes higher-priority processes first.
 Can be preemptive or non-preemptive.
✅ Pros: Useful for real-time systems.
❌ Cons: Starvation (low-priority processes may never run).

💡 Example (Higher priority = smaller number)

Process Priority Arrival Time Execution Order


P1 2 0 ms 2nd

P2 1 1 ms 1st

P3 3 2 ms 3rd

🛠 Solution to Starvation: Aging (increasing priority of waiting processes over time).

5. Multi-Level Queue (MLQ) 🏢

 Divides processes into multiple queues, each with a different priority (e.g., foreground vs.
background).
 Scheduling occurs within and between queues.

✅ Pros: Efficient for different types of processes.
❌ Cons: Complex to implement, may cause starvation.

💡 Example Queue System

Queue Type Scheduling Algorithm

System Processes Highest Priority

Interactive Processes Round Robin

Background Processes FCFS

6. Multi-Level Feedback Queue (MLFQ) 🔄📊

 Dynamic priority adjustment (processes move between queues based on execution behavior).
 Balances efficiency and fairness.

✅ Pros: Best for general-purpose operating systems.
❌ Cons: Difficult to configure.

💡 Example:

1. A new process starts in a high-priority queue (short time quantum).
2. If it doesn’t finish, it moves to a lower-priority queue (longer quantum).
3. I/O-bound processes stay in high-priority queues, while CPU-bound processes move down.

Comparison of Scheduling Algorithms


Algorithm Preemptive Starvation Risk Best For

FCFS ❌ No ❌ No Simple systems

SJF ❌ No / ✅ Yes (SRTF) ✅ Yes Short jobs

RR ✅ Yes ❌ No Time-sharing systems

Priority ✅ Yes / ❌ No ✅ Yes Real-time systems

MLQ ✅ Yes ✅ Yes Mixed workloads

MLFQ ✅ Yes ❌ No General OS use

First-Come, First-Served (FCFS) Scheduling


What is FCFS Scheduling?

🔹 First-Come, First-Served (FCFS) is the simplest CPU scheduling algorithm.
🔹 The process that arrives first in the ready queue is executed first.
🔹 It is a non-preemptive scheduling algorithm (once a process starts, it runs until completion).

How FCFS Works?

1. Processes are scheduled in the order of arrival.
2. No process can be interrupted once it starts execution.
3. The CPU remains busy until the running process completes.

Example of FCFS Scheduling

Given Processes:

Process Arrival Time (ms) Burst Time (ms)


P1 0 5

P2 2 3

P3 4 2

Gantt Chart Representation


📊 A Gantt chart helps visualize the execution order:

| P1 | P1 | P1 | P1 | P1 | P2 | P2 | P2 | P3 | P3 |
0 1 2 3 4 5 6 7 8 9 10

Calculations

1. Completion Time (CT)

 P1 finishes at 5 ms.
 P2 starts at 5 ms, finishes at 8 ms.
 P3 starts at 8 ms, finishes at 10 ms.

2. Turnaround Time (TAT) = CT - Arrival Time


Process Completion Time (CT) Arrival Time (AT) Turnaround Time (TAT)

P1 5 0 5-0=5

P2 8 2 8-2=6

P3 10 4 10 - 4 = 6

3. Waiting Time (WT) = TAT - Burst Time


Process Turnaround Time (TAT) Burst Time (BT) Waiting Time (WT)

P1 5 5 5-5=0

P2 6 3 6-3=3

P3 6 2 6-2=4

Average Waiting Time (AWT) Calculation

AWT=(0+3+4)/3=7/3≈2.33 ms
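
💡 Illustration: the FCFS numbers above can be reproduced with a few lines of C. The sketch assumes the process table is already sorted by arrival time, which FCFS requires.

#include <stdio.h>

int main(void) {
    int at[] = {0, 2, 4};                 /* arrival times (ms), sorted */
    int bt[] = {5, 3, 2};                 /* burst times (ms)           */
    int n = 3, time = 0;
    double total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < at[i]) time = at[i];   /* CPU idles until the process arrives */
        time += bt[i];                    /* non-preemptive: run to completion   */
        int tat = time - at[i];           /* turnaround = completion - arrival   */
        int wt  = tat - bt[i];            /* waiting = turnaround - burst        */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, time, tat, wt);
        total_wt += wt;
    }
    printf("Average waiting time = %.2f ms\n", total_wt / n);  /* 2.33 ms */
    return 0;
}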

Advantages of FCFS ✅

✔ Simple & easy to implement.
✔ Fair – each process gets the CPU in order of arrival.
✔ No starvation – every process eventually executes.

Disadvantages of FCFS ❌

❌ Convoy Effect – a long process blocks shorter processes.
❌ Poor CPU Utilization – I/O-bound processes may waste CPU time.
❌ High Average Waiting Time – some processes may wait too long.
When to Use FCFS?

✅ Batch systems (no user interaction).
✅ Non-time-sensitive applications.
✅ Simple environments with minimal process switching.

Shortest Job Next (SJN) Scheduling


What is SJN (Shortest Job Next) Scheduling?

🔹 Shortest Job Next (SJN), also known as Shortest Job First (SJF), is a scheduling algorithm that
selects the process with the shortest burst time to execute next.
🔹 It minimizes waiting time and improves system performance.
🔹 Types of SJN:

 Non-Preemptive SJN: Once a process starts, it cannot be interrupted.
 Preemptive SJN (Shortest Remaining Time First - SRTF): If a new process arrives with a shorter burst time, the CPU switches to that process.

How SJN Works?

1. The process with the shortest burst time is selected for execution.
2. In non-preemptive SJN, once a process starts, it runs until completion.
3. In preemptive SJN (SRTF), if a new process arrives with a shorter burst time, the CPU
switches to it.

Example of Non-Preemptive SJN

Given Processes:

Process Arrival Time (AT) Burst Time (BT)

P1 0 ms 7 ms

P2 2 ms 4 ms

P3 4 ms 1 ms

P4 5 ms 3 ms

Step-by-Step Execution:

1. P1 starts first (AT = 0). Scheduling is non-preemptive, so P1 runs to completion at 7 ms even though shorter jobs arrive in the meantime.
2. P2 (2 ms), P3 (4 ms), and P4 (5 ms) arrive while P1 is running and wait in the ready queue.
3. At 7 ms, P3 has the shortest burst time (1 ms), so it executes next and finishes at 8 ms.
4. The next shortest job is P4 (3 ms), which runs until 11 ms.
5. Finally, P2 (4 ms) executes and completes at 15 ms.

Gantt Chart Representation

| P1 | P3 | P4 | P2 |
0    7    8    11   15

Calculations

1. Completion Time (CT)

Process Arrival Time (AT) Burst Time (BT) Completion Time (CT)

P1 0 ms 7 ms 7 ms

P2 2 ms 4 ms 15 ms

P3 4 ms 1 ms 8 ms

P4 5 ms 3 ms 11 ms

2. Turnaround Time (TAT) = CT - AT

Process CT AT TAT (CT - AT)

P1 7 0 7 - 0 = 7

P2 15 2 15 - 2 = 13

P3 8 4 8 - 4 = 4

P4 11 5 11 - 5 = 6

3. Waiting Time (WT) = TAT - BT

Process TAT BT WT = (TAT - BT)

P1 7 7 7 - 7 = 0

P2 13 4 13 - 4 = 9

P3 4 1 4 - 1 = 3

P4 6 3 6 - 3 = 3

4. Average Waiting Time (AWT)

AWT = (0 + 9 + 3 + 3) / 4 = 15 / 4 = 3.75 ms
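
💡 Illustration: a C sketch of non-preemptive SJN for the same four processes. At every decision point it picks the arrived, unfinished process with the smallest burst time, reproducing the 3.75 ms average above.

#include <stdio.h>

int main(void) {
    int at[] = {0, 2, 4, 5};              /* arrival times (ms) */
    int bt[] = {7, 4, 1, 3};              /* burst times (ms)   */
    int done[4] = {0}, n = 4, time = 0, finished = 0;
    double total_wt = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)       /* shortest arrived, unfinished job */
            if (!done[i] && at[i] <= time && (pick < 0 || bt[i] < bt[pick]))
                pick = i;
        if (pick < 0) { time++; continue; }   /* idle: nothing has arrived */

        time += bt[pick];                 /* non-preemptive: run to completion */
        int tat = time - at[pick], wt = tat - bt[pick];
        printf("P%d: CT=%d TAT=%d WT=%d\n", pick + 1, time, tat, wt);
        total_wt += wt;
        done[pick] = 1;
        finished++;
    }
    printf("Average waiting time = %.2f ms\n", total_wt / n);  /* 3.75 ms */
    return 0;
}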

Advantages of SJN ✅

✔ Minimizes average waiting time (more efficient than FCFS).


✔ Efficient for batch processing.
✔ Reduces turnaround time compared to FCFS.
Disadvantages of SJN ❌

❌ Starvation – Long processes may never execute if short processes keep arriving.
❌ Difficult to implement in real-time – Requires accurate burst time prediction.
❌ Not suitable for time-sharing systems.

🛠 Solution to Starvation? Aging – Increase priority of long-waiting processes over time.

Preemptive SJN (Shortest Remaining Time First - SRTF)

✔ If a new process with a shorter burst time arrives, it preempts the running process.
✔ It further reduces waiting time but adds more context switching overhead.

Round Robin (RR) Scheduling 🔄


What is Round Robin Scheduling?

🔹 Round Robin (RR) is a preemptive CPU scheduling algorithm designed for time-sharing
systems.
🔹 Each process is assigned a fixed time slice (also called a time quantum).
🔹 If a process does not complete within its time quantum, it is pre-empted and moved to the back
of the ready queue.

How Round Robin Works?

1. All processes are placed in a queue.
2. The first process gets the CPU for a fixed time quantum.
3. If it completes within that time, it exits. Otherwise, it goes to the end of the queue.
4. The next process in the queue runs, and the cycle repeats until all processes finish.

Example of Round Robin Scheduling

Given Processes

Process Arrival Time (AT) Burst Time (BT)

P1 0 ms 5 ms

P2 1 ms 3 ms

P3 2 ms 8 ms

Time Quantum = 3 ms
Step-by-Step Execution

1. P1 executes for 3 ms, remaining burst time = 5 - 3 = 2 ms.
2. P2 executes for 3 ms, remaining burst time = 3 - 3 = 0 ms (P2 completes).
3. P3 executes for 3 ms, remaining burst time = 8 - 3 = 5 ms.
4. P1 executes for 2 ms, remaining burst time = 0 ms (P1 completes).
5. P3 executes for 3 ms, remaining burst time = 5 - 3 = 2 ms.
6. P3 executes for 2 ms, remaining burst time = 0 ms (P3 completes).

Gantt Chart Representation


| P1 | P2 | P3 | P1 | P3 | P3 |
0 3 6 9 11 14 16

Calculations

1. Completion Time (CT)


Process Completion Time (CT)

P1 11 ms

P2 6 ms

P3 16 ms

2. Turnaround Time (TAT) = CT - AT


Process CT AT TAT= (CT - AT)

P1 11 0 11 - 0 = 11

P2 6 1 6-1=5

P3 16 2 16 - 2 = 14

3. Waiting Time (WT) = TAT - BT


Process TAT BT WT= (TAT - BT)

P1 11 5 11 - 5 = 6

P2 5 3 5-3=2

P3 14 8 14 - 8 = 6

4. Average Waiting Time (AWT)


AWT=(6+2+6)/3=14/3≈4.67 ms
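
💡 Illustration: the Round Robin trace above, reproduced with a simple FIFO ready queue in C. It assumes (as in the trace) that new arrivals are enqueued before a preempted process is put back at the tail.

#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 2}, bt[] = {5, 3, 8}, rem[] = {5, 3, 8};
    int n = 3, quantum = 3, time = 0;
    int queue[32], head = 0, tail = 0, arrived = 1;

    queue[tail++] = 0;                           /* P1 arrives at t = 0 */
    while (head < tail) {
        int p = queue[head++];
        int run = rem[p] < quantum ? rem[p] : quantum;
        time += run;                             /* run one time slice */
        rem[p] -= run;
        while (arrived < n && at[arrived] <= time)
            queue[tail++] = arrived++;           /* enqueue new arrivals */
        if (rem[p] > 0)
            queue[tail++] = p;                   /* unfinished: back of queue */
        else
            printf("P%d: CT=%d TAT=%d WT=%d\n",
                   p + 1, time, time - at[p], time - at[p] - bt[p]);
    }
    return 0;
}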
Advantages of Round Robin ✅

✔ Fair – each process gets an equal time share.
✔ Avoids starvation (no process is left waiting indefinitely).
✔ Best for time-sharing systems.

Disadvantages of Round Robin ❌

❌ High context switching overhead – More frequent switching increases CPU overhead.
❌ Performance depends on time quantum – If the time quantum is too short, there’s excessive
switching. If too long, it behaves like FCFS.

Choosing the Right Time Quantum

✔ Too small → high context switching overhead.
✔ Too large → processes take too long to finish; behaves like FCFS.
✔ Ideal quantum → usually 10–100 ms in modern OSs.

Priority Scheduling 🚦
What is Priority Scheduling?

🔹 Priority Scheduling is a CPU scheduling algorithm where each process is assigned a priority,
and the CPU executes the highest-priority process first.
🔹 If multiple processes have the same priority, FCFS (First-Come, First-Served) is used.
🔹 Priority can be preemptive or non-preemptive.

Types of Priority Scheduling

1. Preemptive Priority Scheduling
o If a new process arrives with a higher priority, it pre-empts the running process.
2. Non-Preemptive Priority Scheduling
o The CPU does not stop the running process until it completes, even if a higher-priority process arrives.

Example of Non-Preemptive Priority Scheduling

Given Processes:

Process Arrival Time (AT) Burst Time (BT) Priority (Lower = Higher)

P1 0 ms 5 ms 2

P2 1 ms 3 ms 1

P3 2 ms 8 ms 4

P4 3 ms 6 ms 3

Lower priority number means higher priority.

Step-by-Step Execution:

1. P1 arrives first (AT = 0) and starts execution. Scheduling is non-preemptive, so P1 runs to completion at 5 ms, even though the higher-priority P2 arrives at 1 ms.
2. At 5 ms, P2 (priority 1) is the highest-priority waiting process → it runs until 8 ms.
3. The next highest-priority process is P4 (priority 3) → it runs until 14 ms.
4. Finally, P3 (priority 4) executes and completes at 22 ms.

Gantt Chart Representation

| P1 | P2 | P4 | P3 |
0    5    8    14   22

Calculations

1. Completion Time (CT)

Process Completion Time (CT)

P1 5 ms

P2 8 ms

P3 22 ms

P4 14 ms

2. Turnaround Time (TAT) = CT - AT

Process CT AT TAT (CT - AT)

P1 5 0 5 - 0 = 5

P2 8 1 8 - 1 = 7

P3 22 2 22 - 2 = 20

P4 14 3 14 - 3 = 11

3. Waiting Time (WT) = TAT - BT

Process TAT BT WT = (TAT - BT)

P1 5 5 5 - 5 = 0

P2 7 3 7 - 3 = 4

P3 20 8 20 - 8 = 12

P4 11 6 11 - 6 = 5

4. Average Waiting Time (AWT)

AWT = (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25 ms

Advantages of Priority Scheduling ✅

✔ More important tasks execute first.
✔ Efficient for time-sensitive applications.
✔ Can be preemptive or non-preemptive.

Disadvantages of Priority Scheduling ❌

❌ Starvation – Lower-priority processes may never execute.
❌ Priority Inversion – A low-priority process holding a resource can block a high-priority process.
❌ Difficult to assign priorities in real-world applications.

🛠 Solution to Starvation? Aging – Gradually increase the priority of waiting processes.
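
💡 Illustration: a toy C sketch of aging. Every tick, a waiting process's effective priority improves by the time it has waited, so low-priority processes eventually reach the front; the base priorities and aging rate here are invented for the demo (without aging, P2 and P3 would never run at all).

#include <stdio.h>

#define N 3

int main(void) {
    int base[N]   = {1, 3, 9};      /* lower number = higher priority */
    int waited[N] = {0, 0, 0};      /* ticks spent waiting            */

    for (int tick = 0; tick < 6; tick++) {
        int best = 0;
        for (int i = 1; i < N; i++)
            /* effective priority = base priority - ticks waited (aging) */
            if (base[i] - waited[i] < base[best] - waited[best])
                best = i;
        printf("tick %d: run P%d\n", tick, best + 1);  /* P2 wins at tick 3 */
        for (int i = 0; i < N; i++)
            waited[i] = (i == best) ? 0 : waited[i] + 1;
    }
    return 0;
}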

Preemptive vs. Non-Preemptive Priority Scheduling


Feature | Preemptive Priority | Non-Preemptive Priority
Interruptions | Running process can be interrupted by a higher-priority process | Once a process starts, it runs until completion
Response Time | Lower (better) | Higher
Starvation Issue | More likely | Less likely
Use Case | Real-time systems | Batch processing

Multi-Level Queue (MLQ) Scheduling 🏢🎯


What is Multi-Level Queue (MLQ) Scheduling?

🔹 Multi-Level Queue (MLQ) Scheduling divides the ready queue into multiple separate queues
based on process characteristics (e.g., priority, type, CPU burst time).
🔹 Each queue has its own scheduling algorithm (e.g., FCFS, SJF, RR).
🔹 Processes cannot move between queues (in traditional MLQ).

How Multi-Level Queue Scheduling Works?

1. Processes are classified into different queues based on priority, user type, or CPU burst
requirements.
2. Each queue has a fixed priority (higher-priority queues get CPU first).
3. CPU executes processes from the highest-priority queue first.
4. If a queue becomes empty, lower-priority queues get CPU time.
5. Scheduling within each queue follows a separate policy (e.g., RR for foreground, FCFS for
background).

Example: Multi-Level Queue Setup


Queue Type Process Type Scheduling Algorithm

Q1 System Processes Highest Priority (FCFS)

Q2 Interactive Processes Round Robin (RR)

Q3 Background Processes FCFS (Lowest Priority)

Example Execution

Given Processes

Process Arrival Time (AT) Burst Time (BT) Queue

P1 0 ms 5 ms Q2 (RR)

P2 1 ms 8 ms Q1 (FCFS)

P3 2 ms 3 ms Q3 (FCFS)

Step-by-Step Execution

1. At time 0, only P1 (Q2) has arrived, so it starts executing.
2. At 1 ms, P2 arrives in Q1 (highest priority) and preempts P1; P2 runs to completion at 9 ms (FCFS within Q1).
3. P1 then resumes in Q2 (Round Robin) and finishes at 13 ms.
4. Q3 (Background Processes) executes last → P3 runs from 13 ms to 16 ms (FCFS).

Gantt Chart Representation

| P1 | P2 | P2 | P2 | P2 | P2 | P2 | P2 | P2 | P1 | P1 | P1 | P1 | P3 | P3 | P3 |
0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   15   16
Advantages of Multi-Level Queue Scheduling ✅

✔ Efficient for different process types (e.g., system vs. user processes).
✔ Higher-priority tasks get CPU first.
✔ Custom scheduling for each queue improves overall system performance.

Disadvantages of Multi-Level Queue Scheduling ❌

❌ Starvation – Low-priority processes may never execute if high-priority queues are always full.
❌ Rigid structure – Processes cannot move between queues.
❌ Complex implementation – Requires proper classification of processes.

Solution to Starvation? 🚀

✔ Multi-Level Feedback Queue (MLFQ) – Allows processes to move between queues based on
execution history.

Multi-Level Feedback Queue (MLFQ) Scheduling 🔄🏢


What is Multi-Level Feedback Queue (MLFQ) Scheduling?

🔹 MLFQ is an advanced CPU scheduling algorithm that allows processes to move between
multiple priority queues based on their behavior.
🔹 It is designed to optimize response time for interactive processes while ensuring CPU-bound
processes also get executed.
🔹 Unlike Multi-Level Queue (MLQ) scheduling, where processes are fixed in one queue, MLFQ
adjusts process priority dynamically.

How Multi-Level Feedback Queue Works?

1. Processes start in the highest-priority queue (Q1).


2. If a process exceeds its time quantum, it moves to a lower-priority queue.
3. Short-running processes stay in high-priority queues to get quick execution.
4. Long-running processes gradually move to lower-priority queues.
5. Lower-priority queues use FCFS or RR scheduling, while higher-priority queues may use
strict priority or shorter time quanta.
6. Aging can be implemented to prevent starvation by moving processes back to higher
queues after waiting for too long.

Example: Multi-Level Feedback Queue Structure


Queue Scheduling Algorithm Time Quantum

Q1 (High) Round Robin (RR) 4 ms

Q2 (Medium) Round Robin (RR) 8 ms

Q3 (Low) First-Come, First-Served (FCFS) No limit

Example Execution

Given Processes

Process Arrival Time (AT) Burst Time (BT)

P1 0 ms 20 ms

P2 1 ms 5 ms

P3 2 ms 10 ms

Step-by-Step Execution

1. All processes start in Q1 (RR, 4 ms time quantum):
o P1 runs 0–4 ms (16 ms left) and moves down to Q2.
o P2 runs 4–8 ms (1 ms left) and moves down to Q2.
o P3 runs 8–12 ms (6 ms left) and moves down to Q2.
2. Q1 is now empty, so Q2 (RR, 8 ms time quantum) runs:
o P1 runs 12–20 ms (8 ms left) and moves down to Q3.
o P2 runs 20–21 ms and completes.
o P3 runs 21–27 ms and completes (it finished within its quantum).
3. Q3 (FCFS, no time limit):
o P1 runs 27–35 ms and completes.

Gantt Chart Representation

| P1 | P2 | P3 | P1 | P2 | P3 | P1 |
0    4    8    12   20   21   27   35

Advantages of Multi-Level Feedback Queue Scheduling ✅

✔ Highly flexible – Adjusts process priority dynamically.
✔ Minimizes response time for interactive processes.
✔ Prevents starvation – Aging can promote long-waiting processes.
✔ Efficient for both CPU-bound and I/O-bound processes.
Disadvantages of Multi-Level Feedback Queue Scheduling ❌

❌ Complex implementation – Requires careful tuning of time quanta and queue rules.
❌ Higher overhead due to frequent process movements between queues.
❌ May not guarantee fairness if queue parameters are not optimized.

Difference Between MLQ and MLFQ


Feature Multi-Level Queue (MLQ) Multi-Level Feedback Queue (MLFQ)

Queue Movement Processes stay in a fixed queue Processes can move between queues

Priority Adjustment Static Dynamic

Flexibility Low High

Starvation Handling Starvation possible Aging prevents starvation

Use Case Simple priority-based systems Modern, adaptive scheduling

Conclusion

📌 MLFQ is an advanced scheduling algorithm that optimizes CPU usage for different process
types.
📌 It balances interactive process response time with CPU-bound process execution efficiency.
📌 Used in modern operating systems to ensure fair and efficient scheduling.

Multiprocessor Scheduling
What is Multiprocessor Scheduling?

🔹 In multiprocessor systems, multiple CPUs share the workload, requiring efficient scheduling.
🔹 Unlike single-processor scheduling, where only one process runs at a time, multiprocessor
scheduling distributes processes among multiple CPUs for better performance.
🔹 The goal is to maximize CPU utilization, balance load, and reduce execution time.

Types of Multiprocessor Scheduling

1. Asymmetric Multiprocessing (AMP) 🏛️

🔹 One CPU acts as the master and handles scheduling.
🔹 Other CPUs only execute assigned tasks.
🔹 Simple, but the master CPU can become a bottleneck.
🔹 Used in older systems and real-time embedded systems.

2. Symmetric Multiprocessing (SMP) ⚖️


🔹 All CPUs share scheduling responsibilities.
🔹 The OS assigns tasks to any available CPU.
🔹 More efficient and widely used in modern OS (Windows, Linux, macOS).

Processor Assignment Methods

1. Load Sharing

🔹 Processes are placed in a common queue, and any available CPU executes them.
🔹 Advantages: Simple to implement.
🔹 Disadvantages: Might cause imbalance if some CPUs get more workload.

2. Load Balancing

🔹 The system tries to distribute processes evenly across CPUs.
🔹 Two types:

 Push Migration – A dedicated process moves tasks from overloaded CPUs to underloaded ones.
 Pull Migration – Idle CPUs pull tasks from overloaded CPUs.
🔹 Used in modern OSs to optimize performance.

3. Processor Affinity

🔹 A process is preferably scheduled on the same CPU it previously ran on.
🔹 Why? It helps with cache reuse, reducing memory access time.
🔹 Two types:

 Soft Affinity – The OS tries to keep a process on the same CPU but can migrate it if necessary.
 Hard Affinity – The OS strictly binds a process to a specific CPU (see the sketch below).
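
💡 Illustration: a Linux-specific sketch of hard affinity, pinning the calling process to CPU 0 with sched_setaffinity(2). It needs _GNU_SOURCE; other operating systems expose different affinity APIs, so this is not portable.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);                     /* start with an empty CPU mask */
    CPU_SET(0, &set);                   /* allow only CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now pinned to CPU %d\n", sched_getcpu());
    return 0;
}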

4. Gang Scheduling

🔹 Groups of related processes (threads) run together on multiple CPUs.
🔹 Used in parallel computing for better synchronization.

Challenges in Multiprocessor Scheduling

❌ Synchronization Issues – Shared data structures need proper locking.
❌ Load Imbalance – Some CPUs may be idle while others are overloaded.
❌ Overhead – Frequent migrations can reduce efficiency.
❌ Processor Affinity Handling – Performance and flexibility must be balanced.
Multiprocessor vs. Single-Processor Scheduling
Feature Single-Processor Scheduling Multiprocessor Scheduling

CPU Count 1 Multiple

Load Balancing Not needed Required

Processor Affinity Not applicable Important for cache optimization

Complexity Lower Higher

Performance Limited by a single CPU Higher due to parallel execution

Real-World Applications

✅ Modern OS: Windows, Linux, macOS use SMP with load balancing.
✅ Datacenters & Cloud Computing: Efficient task distribution in multi-core servers.
✅ Parallel Computing: Scientific computing, AI, and simulations benefit from gang scheduling.

Conclusion

📌 Multiprocessor scheduling improves system performance by distributing tasks across multiple CPUs.
📌 Techniques like load balancing, processor affinity, and gang scheduling help optimize execution.
📌 Used in modern operating systems, cloud computing, and parallel computing applications.

Deadlock System Model


What is a Deadlock?

🔹 A deadlock is a situation where a set of processes are waiting indefinitely for resources that are
held by each other.
🔹 It occurs when multiple processes hold some resources and request additional ones that are
occupied by other processes.

Conditions for Deadlock (Coffman’s Conditions)

A deadlock can occur if all four conditions hold simultaneously:

1️⃣ Mutual Exclusion – Only one process can use a resource at a time.
2️⃣ Hold and Wait – A process holding at least one resource is waiting for additional resources.
3️⃣ No Preemption – A resource cannot be forcibly taken from a process.
4️⃣ Circular Wait – A set of processes {P1, P2, ..., Pn} exists, where each process waits for a
resource held by the next process, forming a circular chain.
Deadlock System Model

🔹 The system consists of processes and resources.
🔹 Resources can be CPU cycles, memory, files, printers, etc.
🔹 The system can be represented using a Resource Allocation Graph (RAG):

 Processes → Circles (P1, P2, P3, etc.)
 Resources → Squares (R1, R2, etc.)
 Requests → Arrows from process to resource
 Allocations → Arrows from resource to process

Example of Deadlock

Given Resources & Processes:

🔹 Processes: P1, P2
🔹 Resources: R1, R2
🔹 Initial Allocation:

 P1 holds R1 and requests R2


 P2 holds R2 and requests R1

Deadlock Situation:
P1 → (Holds R1) → Requests R2
P2 → (Holds R2) → Requests R1

✅ Circular wait condition is satisfied → DEADLOCK!


Deadlock Handling Methods

1. Deadlock Avoidance
What is Deadlock Avoidance?

🔹 Deadlock avoidance ensures that the system never enters an unsafe state where a deadlock could
occur.
🔹 Unlike deadlock prevention, which removes conditions causing deadlock, deadlock avoidance
allows requests but decides whether to grant them based on system safety.
🔹 Requires additional information about processes (e.g., maximum resource needs).

Safe vs. Unsafe States

✅ Safe State – The system can allocate resources in some order without leading to a deadlock.
❌ Unsafe State – If resources are allocated in a certain way, deadlock may occur.

📌 If a system is in a safe state, it will never reach a deadlock!

Example of a Safe State

 3 processes (P1, P2, P3)
 3 resource instances (R1, R2, R3)
 Even if each process requests more resources, some sequence of execution will avoid deadlock.

Example of an Unsafe State (Deadlock Risk!)

 If all processes hold some resources and request more without the possibility of completion,
deadlock might occur.

Deadlock Avoidance Techniques

1. Resource Allocation Graph (RAG) with Claim Edges

🔹 Modification of RAG where a dashed claim edge (→) shows possible future requests.
🔹 If granting a request creates a cycle in the RAG, it is denied to avoid deadlock.

✅ Pros: Simple for small systems.
❌ Cons: Difficult for systems with dynamic requests.

2. Banker’s Algorithm (Most Common)


🔹 Named after a bank managing loans to prevent bankruptcy.
🔹 Before allocating resources, the system checks if it will remain in a safe state.

Banker’s Algorithm Steps:

1️⃣ Each process declares its maximum resource need at the start.
2️⃣ Whenever a process requests resources, the system:

 Temporarily grants the request.
 Simulates execution to check if the system remains in a safe state.
 If safe → grant the request.
 If unsafe → deny the request.
3️⃣ Ensures deadlock will never occur.

✅ Pros: Ensures safety; widely used in OSs.
❌ Cons: Requires pre-declared maximum resource needs; overhead in checking.

Deadlock Prevention
What is Deadlock Prevention?

🔹 Deadlock prevention is a strategy to ensure that a deadlock never occurs by eliminating at least
one of the four Coffman’s conditions.
🔹 Unlike deadlock avoidance, which dynamically checks system states, prevention actively
removes conditions that lead to deadlocks.

Coffman’s Conditions (Necessary for Deadlock)

A deadlock occurs only if all four conditions hold simultaneously:

Condition | Description
1. Mutual Exclusion | Only one process can hold a resource at a time.
2. Hold and Wait | A process holds resources while waiting for additional ones.
3. No Pre-emption | Resources cannot be forcibly taken from a process.
4. Circular Wait | A cycle of processes exists where each process is waiting for a resource held by another.

✅ Deadlock prevention works by eliminating at least one of these conditions.


Methods of Deadlock Prevention

1. Breaking Mutual Exclusion 🚧

🔹 Ensure that resources can be shared between processes.
✔ Example: Using spooling for printers instead of direct access.

✅ Pros: No deadlock for non-exclusive resources.
❌ Cons: Some resources (e.g., locks, printers) must be exclusive.

2. Breaking Hold and Wait

🔹 Require processes to request all required resources at once and allocate them before execution
starts.
✔ Example: Process must acquire all resources before execution begins (one-shot allocation).

✅ Pros: Eliminates waiting states.
❌ Cons: Low resource utilization, possible starvation.

3. Breaking No Pre-emption

🔹 Allow resources to be pre-empted (taken away) from a process if needed.
✔ Example: If a process requests a resource but is blocked, its current resources are released and reallocated.

✅ Pros: Prevents indefinite blocking.
❌ Cons: Complex to implement; might cause data inconsistency.

4. Breaking Circular Wait

🔹 Impose a strict resource request order where processes must request resources in a predefined
sequence.
✔ Example: Enforce an increasing order of resource numbering (R1, R2, R3, …).

✅ Pros: Simple and effective (see the lock-ordering sketch below).
❌ Cons: Restrictive; requires modification of process logic.
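
💡 Illustration: a pthreads sketch of the ordering rule. Both threads acquire R1 before R2, so no thread ever holds R2 while waiting for R1 and the circular-wait cycle cannot form; with opposite lock orders, the two threads could deadlock.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    const char *name = arg;
    pthread_mutex_lock(&r1);        /* rule: always R1 first ... */
    pthread_mutex_lock(&r2);        /* ... then R2 */
    printf("%s holds R1 and R2\n", name);
    pthread_mutex_unlock(&r2);      /* release in reverse order */
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "thread A");
    pthread_create(&b, NULL, worker, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}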

Conclusion

📌 Deadlock prevention removes at least one Coffman condition to guarantee deadlock-free execution.
📌 Best for systems where deadlocks must not happen at all (e.g., real-time systems).
📌 Trade-off: It can reduce resource efficiency but ensures system stability.

Deadlock Detection & Recovery


What is Deadlock Detection & Recovery?

🔹 Deadlock detection is a technique that allows deadlocks to occur but periodically checks the
system to identify them.
🔹 Deadlock recovery involves taking corrective actions to remove deadlocks and continue
execution.
🔹 Used in systems where deadlocks are rare, and prevention or avoidance is too expensive (e.g.,
databases, OS process scheduling).

Deadlock Detection Techniques

1. Resource Allocation Graph (RAG) Cycle Detection

 Used when each resource has only one instance.
 A cycle in the graph indicates a deadlock.
 If no cycle exists, there is no deadlock.

✅ Pros: Simple and effective.
❌ Cons: Does not work well with multiple resource instances.

2. Wait-for Graph (WFG) Method

 Derived from the RAG by removing resource nodes.
 Edges represent process dependencies.
 A cycle in the WFG → deadlock detected!

✅ Pros: Faster than RAG analysis for detecting process wait dependencies.
❌ Cons: Works only if resource requests are known.

3. Deadlock Detection Algorithm (Multiple Instances)

 Used when resources have multiple instances.
 A modified Banker’s Algorithm checks whether any process can finish.
 If all processes remain stuck → deadlock detected!

✅ Pros: Works for complex resource allocations.
❌ Cons: High computational overhead (O(n²) time complexity).
Deadlock Recovery Methods

Once a deadlock is detected, the system must recover using one of the following strategies:

1. Process Termination (Killing Processes)

a) Abort all deadlocked processes
✔ Pros: Simple and guaranteed recovery.
❌ Cons: Loss of work; costly for long-running processes.

b) Abort processes one by one until the deadlock is resolved
✔ Pros: Less drastic than killing all processes.
❌ Cons: Requires selecting the right processes; overhead in restarting.

2. Resource Pre-emption (Forcing Release)

 Pre-empt (take back) resources from some processes and reallocate them.
 Choose victim processes based on priority, time, or progress.
 Roll back processes to a safe state if needed.

✅ Pros: Avoids killing processes completely.
❌ Cons: Risk of starvation (if the same process is always pre-empted).

Comparison of Deadlock Handling Methods


Method | How It Works | Pros | Cons
Deadlock Prevention | Breaks one of Coffman’s conditions | Guarantees no deadlocks | May reduce efficiency
Deadlock Avoidance | Checks for safe/unsafe states before granting requests | Allows more flexibility | Overhead in monitoring
Deadlock Detection & Recovery | Detects and resolves deadlocks after they occur | No need to modify request behavior | Requires recovery mechanisms
Ignoring Deadlock (Ostrich Algorithm) | Does nothing, restarts system if necessary | Simple to implement | Can cause system failures
Conclusion

📌 Deadlock detection is necessary when prevention or avoidance is impractical.
📌 Recovery methods like process termination and resource preemption help resolve deadlocks.
📌 Operating systems and databases often use detection & recovery instead of prevention.

4. Ignore Deadlock (Ostrich Algorithm) 🦢

🔹 Used in real-world OS (e.g., Windows, Linux) where deadlocks are rare, and restarting a process is
easier than implementing complex handling.

Conclusion

📌 Deadlocks block process execution indefinitely unless handled properly.
📌 Systems can use prevention, avoidance, or detection techniques to handle deadlocks.
📌 Resource Allocation Graphs (RAGs) help in visualizing deadlocks.

Difference Between Deadlock and Starvation


Both deadlock and starvation involve processes waiting indefinitely, but they occur for different
reasons.

Key Differences Between Deadlock and Starvation


Feature | Deadlock 💀 | Starvation ⏳
Definition | A set of processes wait indefinitely because they are blocked by each other. | A process waits indefinitely because it has a low priority and never gets resources.
Cause | Circular waiting on resources. | Unequal resource allocation due to priority scheduling.
Resource Availability | Resources are not available (held by other deadlocked processes). | Resources are available, but never allocated to the low-priority process.
Detection | Can be detected using algorithms (e.g., Resource Allocation Graph, Banker’s Algorithm). | Harder to detect automatically; can be identified by monitoring process execution.
Recovery Methods | Process termination, pre-emption, rollback, or deadlock prevention/avoidance strategies. | Use aging (increase process priority over time) to ensure execution.
Example Scenario | Two processes (P1 and P2) hold resources (R1 and R2) and wait for each other’s resources, creating a cycle. | A low-priority process waits indefinitely in a priority queue because high-priority processes keep getting scheduled first.
Solved Numerical on Banker's Algorithm (Safe Allocation)

Problem Statement:

A system has 5 processes (P0 to P4) and 3 resource types (A, B, C). The total available resources
are:

 A = 10, B = 5, C = 7

The Maximum Need, Allocated Resources, and the Available Resources are given in the tables
below.

Step 1: Given Data


Process | Max Need (A B C) | Allocated (A B C)
P0 | 7 5 3 | 0 1 0
P1 | 3 2 2 | 2 0 0
P2 | 9 0 2 | 3 0 2
P3 | 2 2 2 | 2 1 1
P4 | 4 3 3 | 0 0 2

Step 2: Compute Need Matrix

The Need Matrix is calculated as:

Need = Max Need − Allocated

Process | Need (A B C)
P0 | 7 4 3
P1 | 1 2 2
P2 | 6 0 0
P3 | 0 1 1
P4 | 4 3 1

Step 3: Compute Available Resources


Available = Total − Σ Allocated
Available = (10, 5, 7) − (0+2+3+2+0, 1+0+0+1+0, 0+0+2+1+2) = (10, 5, 7) − (7, 2, 5) = (3, 3, 2)

Step 4: Apply Banker's Algorithm to Check for a Safe Sequence

We check whether we can satisfy the need of processes one by one while updating the available
resources.
Iteration 1

Available = (3,3,2)
Find a process whose Need ≤ Available
✔ P1 (1,2,2) ≤ (3,3,2) ✅
Allocate resources to P1, complete it, and update available resources:
New Available = (3+2, 3+0, 2+0) = (5,3,2)
✔ Safe Sequence: P1

Iteration 2

Available = (5,3,2)
✔ P3 (0,1,1) ≤ (5,3,2) ✅
Allocate resources to P3, complete it, and update available:
New Available = (5+2, 3+1, 2+1) = (7,4,3)
✔ Safe Sequence: P1 → P3

Iteration 3

Available = (7,4,3)
✔ P4 (4,3,1) ≤ (7,4,3) ✅
Allocate resources to P4, complete it, and update available:
New Available = (7+0, 4+0, 3+2) = (7,4,5)
✔ Safe Sequence: P1 → P3 → P4

Iteration 4

Available = (7,4,5)
✔ P0 (7,4,3) ≤ (7,4,5) ✅
Allocate resources to P0, complete it, and update available:
New Available = (7+0, 4+1, 5+0) = (7,5,5)
✔ Safe Sequence: P1 → P3 → P4 → P0

Iteration 5

Available = (7,5,5)
✔ P2 (6,0,0) ≤ (7,5,5) ✅
Allocate resources to P2, complete it, and update available:
New Available = (7+3, 5+0, 5+2) = (10,5,7)
✔ Safe Sequence: P1 → P3 → P4 → P0 → P2 ✅

Final Answer: Safe Sequence

✅ P1 → P3 → P4 → P0 → P2
✔ System is in a safe state!
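
💡 Illustration: the safety check coded directly from the tables above. It repeatedly looks for a process whose Need fits within Available, marks it finished, and reclaims its allocation; for this data it prints the same safe sequence P1 → P3 → P4 → P0 → P2.

#include <stdio.h>

#define P 5  /* processes P0..P4       */
#define R 3  /* resource types A, B, C */

int main(void) {
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[P][R]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    int avail[R]    = {3, 3, 2};
    int finished[P] = {0}, safe_seq[P], count = 0;

    while (count < P) {
        int progress = 0;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            int fits = 1;
            for (int r = 0; r < R; r++)
                if (need[p][r] > avail[r]) { fits = 0; break; }
            if (fits) {                          /* p can run to completion */
                for (int r = 0; r < R; r++)
                    avail[r] += alloc[p][r];     /* reclaim its resources   */
                finished[p] = 1;
                safe_seq[count++] = p;
                progress = 1;
            }
        }
        if (!progress) { printf("UNSAFE state\n"); return 1; }
    }
    printf("Safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i]);
    printf("\n");
    return 0;
}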
Conclusion

📌 The system remains deadlock-free because we found a safe sequence.
📌 Banker’s Algorithm ensures that each process gets resources only if it won’t lead to an unsafe state.
