
UNIT 2: OPERATING SYSTEM
PROCESS MANAGEMENT
WHAT IS A PROGRAM
 A program is a set of instructions written in a programming language
that tells a computer what tasks to perform and how to perform them.
 It is essentially a passive entity stored on disk (or some other permanent
storage device), containing the code necessary to accomplish a specific
job.
 Key Characteristics of a Program:
• Static: A program is static in nature, meaning it does not change its state
until it is executed.
• Storage: Programs are stored on disk or other non-volatile storage media.
• Languages: Written in programming languages such as Python, Java, C++,
etc.
• Components: Includes code (instructions), data (constants, variables), and
resources (libraries, files).
WHAT IS A PASSIVE COLLECTION
 When we refer to a program as a "passive collection of instructions," it
means that the program itself is simply a set of coded commands and data,
which by itself does not perform any actions or consume any system
resources. It is inactive and stored on disk or other permanent storage until it
is explicitly executed by the operating system.
 Characteristics of a Passive Collection:
• Inactive: A program remains dormant and does not execute any actions or
computations on its own.
• Stored: It is stored in non-volatile storage like a hard drive, SSD, or other
memory devices.
• Read Only: It does not change its state or contents until it is loaded into
memory and executed.
• No Resource Utilization: In its passive state, a program does not utilize
CPU, memory, or other system resources.
WHAT IS A PROCESS
 A process is an instance of a program in execution. It is an active entity that includes the program code and its current activity.
 Unlike a passive program, a process has a life cycle, and it actively uses the
system's resources to perform the tasks specified by the program's
instructions.
 Key Components of a Process:
1. Program Code: The set of instructions that the CPU executes.
2. Program Counter (PC): Indicates the address of the next instruction to be
executed.
3. Stack: Contains temporary data such as function parameters, return addresses,
and local variables.
4. Data Section: Contains global variables.
5. Heap: Dynamically allocated memory during process runtime.
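To make these components concrete, here is a minimal C sketch (illustrative only; the variable names are hypothetical) showing where each part of a running process's memory is used:

#include <stdio.h>
#include <stdlib.h>

int counter = 0;                  /* data section: global variable */

int add(int x, int y) {           /* code/text section holds these instructions */
    int result = x + y;           /* stack: local variable in this call's frame */
    return result;
}

int main(void) {
    int local = 5;                       /* stack: local variable of main */
    int *buf = malloc(sizeof(int));      /* heap: allocated dynamically at runtime */
    if (buf == NULL) return 1;
    *buf = add(local, counter);
    printf("Sum: %d\n", *buf);
    free(buf);                           /* release the heap memory */
    return 0;
}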
CHARACTERISTICS OR COMPONENTS OF A PROCESS
EVERY PROCESS HAS ITS OWN:
 Code/text/instruction section: the program code of the process is stored here (e.g., the compiled code of a .c file loaded into RAM)
 Data Section: memory for global and static variables is allocated here
 Stack: used for nested function calls (parameters, return addresses, local variables)
 Heap: dynamic memory allocation
 Environment: command-line arguments are stored here
 Process Control Block: a data structure used to store all information about the process
• Process State: new, ready, running, etc.
• Program Counter Value: the address of the next instruction is stored here
• General Purpose Register (GPR) Values: the values in the registers used by the process are stored here
• CPU scheduling information: the priority of the process and when it will get the CPU
• Memory Management Information: where the process is stored in memory, i.e., information about the process address space
• Accounting information: includes the amount of CPU time and the number of times the CPU was allocated to the process
• I/O status information: which I/O devices the process can use and which it has already used
• Per-process file table: contains the list of files the process can use
• PID: unique number assigned by the OS to each and every process
• PPID: parent process ID
DIFFERENCE BETWEEN PROCESS AND PROGRAM
• Definition: A program is a set of instructions written in a programming language; a process is an instance of a program in execution.
• Nature: A program is passive (a static entity); a process is active (a dynamic entity).
• Storage: A program is stored on disk or non-volatile storage; a process exists in memory (RAM) while executing.
• State: A program does not change its state; a process changes state (new, ready, running, waiting, terminated).
• Resource Utilization: A program does not use system resources; a process uses system resources (CPU, memory, I/O devices).
• Components: A program consists of code and static data; a process has code, a program counter, stack, heap, data section, and a PCB.
• Lifecycle: A program exists until deleted from storage; a process is created, executed, and terminated by the operating system.
• Execution: A program is not executing and remains dormant until invoked; a process is actively executing instructions.
• Examples: Programs are source code files like program.c or executable files like program.exe; processes are running applications like a web browser or text editor.
• Memory: A program is stored in permanent storage; a process is allocated main memory (RAM) during execution.
• Created By: A program is written and compiled by a programmer; a process is created by the operating system when the program is executed.
• Control Structure: A program needs no control structure; a process is managed by a Process Control Block (PCB).
• Interaction with OS: A program does not interact with the operating system on its own; a process interacts with the operating system for scheduling and resource management.
EXAMPLE

 Consider a text editor program stored on your computer:
• Program: The executable file (e.g., notepad.exe on Windows) stored on the hard drive is a passive collection of instructions.
• Process: When you double-click notepad.exe, the operating system loads it into memory, and it becomes an active process.
PROCESS MANAGEMENT

 Process Management is a fundamental aspect of operating system


(OS) functionality that involves creating, scheduling, and terminating
processes. It ensures that processes are executed efficiently and without
conflicts, while also managing the resources required by these
processes.
 KEY FUNCTIONS OF PROCESS MANAGEMENT
 Process creation
 Process scheduling
 Process termination
 Process state management
 Inter process communication
 Process synchronization
 Process handling
COMPONENTS OF PROCESS
MANAGEMENT

•Process Control Block (PCB): A data structure maintained by the OS for


each process, containing information such as process state, program
counter, CPU registers, memory limits, and I/O status.

•Process Scheduler: The component responsible for determining which


process runs next. It uses various scheduling algorithms to optimize CPU
utilization and ensure fair process execution.

•Scheduling Queues: These are used to manage processes in different


states. Common types include the ready queue, wait queue, and job queue.
ADVANTAGES OF PROCESS MANAGEMENT

•Efficient Resource Utilization:


•Ensures that CPU, memory, and I/O devices are used efficiently, minimizing idle time
and maximizing throughput.

•Concurrent Execution:
•Allows multiple processes to run concurrently, improving the system's responsiveness
and performance, especially in multitasking environments.

•Fair Scheduling:
•Implements scheduling algorithms to ensure fair distribution of CPU time among
processes, preventing any single process from monopolizing the CPU.

•Process Isolation:
•Isolates processes to prevent them from interfering with each other, enhancing
system stability and security.
•Inter-Process Communication (IPC):
•Provides mechanisms for processes to communicate and synchronize with each
other, enabling complex workflows and cooperation between processes.

•Deadlock Handling:
•Detects and resolves deadlocks, ensuring that the system does not get stuck in an
unproductive state.

•Scalability:
•Manages an increasing number of processes efficiently, supporting scalability in
modern multi-core and multi-processor systems.

•User Convenience:
•Simplifies the execution of multiple applications simultaneously, providing a more
convenient and productive user experience.
DISADVANTAGES OF PROCESS MANAGEMENT

•Overhead:
•Managing processes introduces overhead due to context switching, scheduling, and
maintaining process control blocks (PCBs), which can reduce overall system
performance.

•Complexity:
•The implementation of process management mechanisms like IPC, synchronization,
and deadlock handling adds complexity to the operating system.

•Resource Contention:
•Multiple processes competing for limited resources can lead to contention, causing
delays and reducing the efficiency of resource utilization.

•Deadlock and Starvation:


•Despite deadlock handling mechanisms, some processes may still experience
deadlock or starvation, where they wait indefinitely for resources.
•Security Risks:
•Improper isolation or vulnerabilities in process management can lead to security
risks such as unauthorized access to process data or inter-process communication
channels.

•Memory Consumption:
•Each process requires its own memory space for the PCB, stack, heap, and other
resources, leading to increased memory consumption.

•Complex Debugging:
•Diagnosing and debugging issues in a system with multiple concurrently running
processes can be challenging due to the interdependencies and interactions
between processes.

•Increased Latency:
•The time taken for context switches and the need to share CPU time among
processes can introduce latency, especially in real-time systems where prompt
response is critical.
PROCESS STATES
1. New
• Description: The process is being created and is kept in the job pool on the hard disk, since RAM is limited and cannot accommodate every process at once.
• Actions:
• The OS performs initial setup tasks, such as allocating memory and initializing the Process
Control Block (PCB).
• The process is not yet ready to execute.

2. Ready
• Description: It indicates the pool of executable processes. The process is ready to run
but is waiting for CPU time.
• Actions:
• The process has all necessary resources except the CPU.
• It is placed in the ready queue, awaiting scheduling by the CPU scheduler.
Transition from the new state to the ready state is done by the Long-Term Scheduler (LTS), also called the admission scheduler or job scheduler.
 3. Running
• Description: The process is currently being executed by the CPU.
• Actions:
• The process's instructions are being executed.
• It can perform computations and interact with hardware and I/O devices.
• Only one process can be in the running state on a single-core CPU, though multiple processes can run
simultaneously on multi-core systems.
• The SHORT-TERM SCHEDULER (STS) is responsible for moving a process from the ready state to the running state.
• Transition from the running state back to the ready state is only possible with preemptive scheduling.

 4. Waiting (or Blocked)


• Description: The process cannot proceed until some event occurs (e.g., I/O completion, availability of a resource), such as waiting for input from the keyboard.
• Actions:
• The process is moved to a wait queue.
• It remains inactive until the event it is waiting for occurs.
• Typical events include completion of an I/O operation or receipt of a signal.
CASE: When the amount of RAM required by the process is not available, it goes to the waiting state.
 5. Terminated (or Exit)
• Description: The process has finished execution.
• Actions:
• The OS deallocates resources used by the process, such as memory and I/O
buffers.
• The process's exit status is made available to the parent process.
• The process is removed from the scheduling queues.
ADDITIONAL STATES
 Suspended (or Swapped Out)
• Description: The process is temporarily removed from main memory and
placed on a backing store (secondary storage), awaiting further action.
• Actions:
• The OS frees up memory by swapping out the process.
• The process can be resumed later by being swapped back into memory.
• This state is often used in conjunction with the ready or waiting states (i.e.,
suspended-ready or suspended-waiting).

SITUATIONS IN WHICH A PROCESS CAN BE SWAPPED OUT

 The amount of physical memory required is not available, page-replacement decisions are made, or high-priority processes must be prioritized over low-priority ones, etc.
 Suspended-Ready
• Description: The process is in secondary storage but is ready to execute once
it is brought back into main memory.
• Actions:
• The process is waiting to be swapped back into memory.
• It will transition to the ready state when memory becomes available.
 Suspended-Waiting
• Description: The process is in secondary storage and is waiting for an event to
occur.
• Actions:
• The process remains in secondary storage until the event it is waiting for occurs.
• Once the event occurs, it transitions to the suspended-ready state.
State Transitions

 The following are typical state transitions a process might go through:


• New → Ready: After the process creation is complete.
• Ready → Running: When the CPU scheduler selects the process for execution.
• Running → Waiting: When the process needs to wait for an event (e.g., I/O).
• Running → Ready: When the process is preempted by the scheduler to allow
another process to run.
• Waiting → Ready: When the event the process was waiting for occurs.
• Running → Terminated: When the process completes its execution.
• Ready/Waiting → Suspended-Ready/Suspended-Waiting: When the process is
swapped out to secondary storage.
• Suspended-Ready/Suspended-Waiting → Ready/Waiting: When the process is
swapped back into main memory.
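These transitions can be modeled in code. Below is a minimal C sketch (a hypothetical model, not actual OS code) that walks one legal path through the states:

#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

static const char *state_name(proc_state s) {
    static const char *names[] =
        { "New", "Ready", "Running", "Waiting", "Terminated" };
    return names[s];
}

int main(void) {
    /* One legal path: New -> Ready -> Running -> Waiting -> Ready
       -> Running -> Terminated. */
    proc_state path[] = { NEW, READY, RUNNING, WAITING, READY,
                          RUNNING, TERMINATED };
    int steps = sizeof(path) / sizeof(path[0]);
    for (int i = 0; i < steps; i++)
        printf("%s%s", state_name(path[i]), i < steps - 1 ? " -> " : "\n");
    return 0;
}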
PROCESS CONTROL BLOCK (PCB)
 The Process Control Block (PCB) is a data structure used by the operating system
to store information about a process. It is crucial for managing processes as they
transition through different states, ensuring that the system can effectively track
and control process execution.
 Role/Purpose of the PCB
• Process Management: The PCB provides a way for the OS to keep track of the state
and context of each process, facilitating process scheduling, execution, and
management.
• Context Switching: During context switches (when the CPU switches from one
process to another), the PCB is used to save and restore the state of the processes,
allowing them to resume execution from the correct point.
• Resource Tracking: The PCB helps the OS manage resources allocated to each
process, including memory, CPU time, and I/O devices.
• Process Synchronization and Communication: Information necessary for process
synchronization and inter-process communication is stored in the PCB.
COMPONENTS OF PCB
 1. Process State
• Description: Indicates the current status of the process.
• States: New, Ready, Running, Waiting, Terminated.
• Purpose: Helps the OS manage the lifecycle of the process and control its transitions
between different states.
 2. Process ID (PID)
• Description: A unique identifier assigned to each process.
• Purpose: Allows the OS to track and manage each process individually.
 3. Program Counter (PC)
• Description: Contains the address of the next instruction to be executed by the process.
• Purpose: Ensures the process can resume execution from the correct point after a context
switch.
 4. CPU Registers
• Description: Includes various registers used by the process, such as general-purpose
registers, index registers, and special-purpose registers.
• Purpose: Stores the current working values and intermediate results during process
execution.
 5. Memory Management Information
• Description: Contains data related to the process's memory usage.
• Components:
• Base and Limit Registers: Define the memory boundaries for the process.
• Page Tables: Used in paging systems to map virtual addresses to physical addresses.
• Segment Tables: Used in segmentation to define the segments allocated to the
process.
• Purpose: Manages the allocation and protection of the process's memory space.
 6. Scheduling Information
• Description: Contains details used by the scheduler to manage process
execution.
• Components:
• Process Priority: Indicates the priority level of the process.
• Scheduling Queues: Pointers to the queues where the process is waiting.
• Other Parameters: Timeslice allocation, scheduling algorithms used.
• Purpose: Helps the scheduler determine the order and duration of process
execution.
 7. Accounting Information
• Description: Tracks the resource usage and performance metrics of the process.
• Components:
• CPU Usage: Amount of CPU time consumed by the process.
• Memory Usage: Amount of memory used by the process.
• Execution Time: Total time the process has been running.
• Other Metrics: I/O operations performed, number of context switches.
• Purpose: Provides data for system monitoring, billing, and optimization.
 8. I/O Status Information
• Description: Contains information about the process’s interaction with I/O devices.
• Components:
• I/O Devices Allocated: List of I/O devices assigned to the process.
• Open File Descriptors: Handles for files opened by the process.
• Pending I/O Requests: Details of ongoing or pending I/O operations.
• Purpose: Manages the process’s I/O operations and device usage.
 9. Process Privileges
• Description: Includes security attributes and access rights of the process.
• Components:
• User ID (UID): Identifier of the user who owns the process.
• Group ID (GID): Identifier of the group the process belongs to.
• Access Rights: Permissions for accessing system resources and files.
• Purpose: Ensures that the process operates within the security policies of the
system.

 10. Process Stack and Heap Information
• Description: Details about the stack and heap memory used by the process.
• Components:
• Stack Pointer (SP): Points to the top of the process's stack.
• Heap Information: Includes pointers to dynamically allocated memory.
• Purpose: Manages the process’s execution context and dynamic memory
allocation.
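Putting these fields together, a PCB can be pictured as a C structure. This is a hedged sketch with illustrative field names and sizes, not the layout of any real kernel:

#include <sys/types.h>

typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED } proc_state;

struct pcb {
    pid_t         pid;              /* unique process ID */
    pid_t         ppid;             /* parent process ID */
    proc_state    state;            /* current lifecycle state */
    unsigned long program_counter;  /* address of the next instruction */
    unsigned long registers[16];    /* saved general-purpose registers */
    unsigned long stack_pointer;    /* top of the process stack */
    int           priority;         /* scheduling information */
    unsigned long base, limit;      /* memory-management bounds */
    unsigned long cpu_time_used;    /* accounting information */
    int           open_files[16];   /* I/O status: open file descriptors */
    uid_t         uid;              /* process privileges: owner */
    struct pcb   *next;             /* link to the next PCB in a scheduling queue */
};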
TAKING AN EXAMPLE

#include <stdio.h>

int main() {
    int a = 5;
    int b = 10;
    int sum = a + b;
    printf("Sum: %d\n", sum);
    return 0;
}
PROCESS STATE

•Description: The current status of the process.


•Example: Our process can be in different states:
•New: The process is being created.
•Ready: The process is ready to run but waiting for CPU
allocation.
•Running: The process is currently executing (e.g.,
performing sum = a + b).
•Waiting: The process might be waiting for I/O operations
(e.g., waiting for printf to complete).
•Terminated: The process has finished execution (return 0).
•PROCESS ID (PID)
•Description: Unique identifier for the process.
•Example: Let's say our process is assigned PID 1234.

•PROGRAM COUNTER (PC)


•Description: Address of the next instruction to be executed.
•Example: If the next instruction is to execute int sum = a +
b;, the program counter holds the address of this instruction
in memory.
CPU REGISTERS
•Description: Registers used during process execution.
•Example:
•General-Purpose Registers (GPRs): Temporarily hold
variables and results.
•For int sum = a + b;, registers might hold values of a and
b.
•E.g., EAX might hold 5, EBX might hold 10, and the result
15 could be stored back in EAX.
•Special-Purpose Registers: Specific functions like the
stack pointer.
•Stack Pointer (SP): Points to the current stack frame.
•Program Counter (PC): Points to the next instruction.
•MEMORY MANAGEMENT INFORMATION
•Description: Information about the process's memory.
•Example:
•Base Register: Starting address of the process’s memory space.
•Limit Register: Size of the process’s memory space.
•Page Table: If the system uses paging, this maps logical addresses to
physical addresses.
•Our variables a, b, and sum are stored in specific memory locations
managed by these tables.

•SCHEDULING INFORMATION
•Description: Information used by the scheduler.
•Example:
•Priority: Let’s say our process has a priority of 5 (on a scale where higher
numbers mean higher priority).
•Scheduling Queue Pointers: Points to the queue where the process is
waiting (e.g., ready queue).
•ACCOUNTING INFORMATION
•Description: Resource usage metrics.
•Example:
•CPU Time Used: If our process used 0.02 seconds of CPU time.
•Memory Usage: The process might be using 4 KB of memory.

•I/O STATUS INFORMATION


•Description: Information about I/O operations.
•Example:
•Open File Descriptors: Our process has a file descriptor for stdout.
•I/O Devices Allocated: If our process is writing to the console, it
interacts with the terminal device.
•Pending I/O Requests: If printf is still executing, this might be
recorded.
•PROCESS PRIVILEGES
•Description: Security and access rights.
•Example:
•User ID (UID): The process runs under UID 1001.
•Group ID (GID): The process runs under GID 1001.
•Permissions: The process has read and write access to its memory
and stdout.

•PROCESS STACK AND HEAP INFORMATION


•Description: Stack and heap usage details.
•Example:
•Stack Pointer (SP): Points to the current position in the stack,
managing function calls and local variables.
•Heap Information: If the process dynamically allocated memory
using malloc, this would be tracked here.
PROCESS OPERATIONS

 1. Process Creation
• Description: The operation of creating a new process.
• Steps Involved:
• Allocate a PCB: Create a new Process Control Block to manage the new process.
• Assign Process ID: Allocate a unique identifier (PID) for the process.
• Initialize Process State: Set the initial state to "New."
• Load Program: Load the executable program into memory.
• Set Up Resources: Allocate necessary resources such as memory and I/O
devices.
• Example: A new instance of a web browser is created when the user opens
it.
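On POSIX systems, process creation commonly uses the fork() and exec() system calls; a minimal sketch (error handling kept short) looks like this:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* OS allocates a new PCB and PID */
    if (pid < 0) {
        perror("fork");                 /* creation failed */
        return 1;
    }
    if (pid == 0) {
        /* Child: replace its memory image with a new program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        _exit(1);
    }
    /* Parent: wait for the child to terminate; the OS then frees its PCB. */
    int status;
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}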
 2. Process Scheduling
• Description: The operation of determining which process should run at a given time.
• Types:
• Long-Term Scheduling: Decides which processes should be admitted to the ready queue.
• Short-Term Scheduling: Decides which process in the ready queue should be executed next by
the CPU.
• Medium-Term Scheduling: Handles swapping processes in and out of memory to manage the
mix of processes.
• Example: The operating system decides which application should run after a currently
running application is paused.

 3. Process Execution
• Description: The operation of running a process on the CPU.
• Steps Involved:
• Context Switching: Save the state of the currently running process and load the state of the
new process.
• Load Instructions: Fetch instructions from the process's memory and execute them.
• Manage Execution Flow: Handle interruptions, system calls, and context switches.
• Example: A word processor executing commands to type, format, and save a document.
 4. Process Suspension and Resumption
• Description: Temporarily stopping a process and later resuming it.
• Types:
• Suspension: Moving a process from the "Running" or "Ready" state to the "Waiting" or
"Blocked" state, typically to manage memory or I/O.
• Resumption: Moving a suspended process back to the "Ready" state when the resources
become available.
• Example: Pausing a game application and resuming it later.

 5. Process Termination
• Description: The operation of ending a process and deallocating its resources.
• Steps Involved:
• Cleanup Resources: Free memory and other resources used by the process.
• Remove PCB: Delete the Process Control Block from the system.
• Update Accounting Information: Record any relevant data about the process’s
execution.
• Example: Closing a text editor application, which releases memory and file handles
used by the application.
 6. Process Synchronization
• Description: Ensuring that processes operate in a coordinated manner to prevent
conflicts.
• Techniques:
• Locks: Mechanisms to ensure that only one process accesses a critical section at a time.
• Semaphores: Signaling mechanisms to coordinate between processes.
• Monitors: High-level synchronization constructs that encapsulate shared variables and
synchronization.
• Example: Multiple threads accessing a shared data structure, such as a queue, in a
thread-safe manner.
 7. Process Communication
• Description: Mechanisms for processes to communicate and synchronize their
actions.
• Methods:
• Inter-Process Communication (IPC): Includes techniques like message passing, pipes,
and shared memory.
• Signals: Notifications sent to a process to indicate events or conditions.
• Example: A server process communicating with a client process using sockets to
exchange data.

 Summary
1. Process Creation: Initializes a new process.
2. Process Scheduling: Manages which process runs and when.
3. Process Execution: Runs a process on the CPU.
4. Process Suspension and Resumption: Temporarily stops and later resumes a process.
5. Process Termination: Ends a process and cleans up resources.
6. Process Synchronization: Coordinates processes to avoid conflicts.
7. Process Communication: Allows processes to communicate and share data.
8. Process State Management: Manages and transitions process states.
OS SERVICES FOR PROCESS MANAGEMENT
Process Creation and Termination
 Process Creation
• Overview: The creation of a new process involves several steps to ensure that
the new process is correctly initialized and ready to execute.
• Details:
• Allocation of PCB: A new Process Control Block (PCB) is created to store the process's
state and management information.
• Assignment of Process ID: The operating system assigns a unique Process ID (PID) to
the new process for identification.
• Loading the Program: The executable code of the program is loaded into memory.
• Setting Up the Process Environment: This includes allocating memory space,
setting up the stack, heap, and other resources needed by the process.
• Initialization: The initial state of the process is set, often to "New" or "Ready."
 Process Termination
• Overview: Termination involves ending the process and
releasing all resources it was using.
• Details:
• Cleanup Resources: Free up memory, close open files, and release
any other resources.
• Removing PCB: The PCB is deleted, and the process's information is
removed from the system's process tables.
• Update System State: The operating system updates accounting
and status information, including releasing the PID.
 2. Process Scheduling
 Overview
• Purpose: Scheduling ensures that processes are given CPU time
in an efficient manner based on policies and algorithms.
• Details:
• Long-Term Scheduling: Determines which processes are admitted
to the system and placed in the ready queue.
• Short-Term Scheduling: Decides which process in the ready queue
should be executed next by the CPU. It involves context switching,
where the state of the current process is saved, and the state of the
new process is loaded.
• Medium-Term Scheduling: Manages the swapping of processes in
and out of main memory to maintain a balance between I/O and
CPU-bound processes.
 3. Process Synchronization
 Overview
• Purpose: Synchronization ensures that processes or threads
operate without conflicts when accessing shared resources.
• Details:
• Mutexes: Provide mutual exclusion to prevent multiple processes or
threads from accessing critical sections simultaneously.
• Semaphores: Signal between processes or threads to synchronize
their execution or resource access.
• Condition Variables: Allow threads to wait for certain conditions to
be met before continuing execution.
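As a concrete illustration of mutual exclusion, here is a small POSIX-threads sketch in C (compile with -pthread); two threads increment a shared counter, and the mutex keeps the critical section safe:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                          /* shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);         /* enter critical section */
        counter++;                         /* only one thread at a time */
        pthread_mutex_unlock(&lock);       /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);    /* always 200000 with the lock held */
    return 0;
}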
 4. Inter-Process Communication (IPC)
 Overview
• Purpose: IPC mechanisms enable processes to exchange data
and coordinate actions.
• Details:
• Message Passing: Processes send and receive messages to
communicate. This can be done through system calls or IPC libraries.
• Shared Memory: A region of memory is shared between processes,
allowing them to read and write data.
• Pipes: Provide a unidirectional communication channel between
processes.
• Sockets: Facilitate communication over a network, allowing
processes on different machines to communicate.
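A minimal sketch of one of these methods, a pipe carrying a message from parent to child on a POSIX system:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                     /* child: reads from the pipe */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        _exit(0);
    }
    close(fd[0]);                       /* parent: writes into the pipe */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}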
 5. Process State Management
 Overview
• Purpose: Manage and track the various states of a
process throughout its lifecycle.
• Details:
• State Transitions: Handle transitions between states such
as New, Ready, Running, Waiting, and Terminated.
• State Information: The PCB contains information about the current state of each process, allowing the OS to track it throughout its lifecycle.
 6. Process Resource Management
 Overview
• Purpose: Allocate and manage the resources required for
process execution.
• Details:
• Memory Allocation: Allocate memory for code, data, stack, and
heap segments. This includes managing virtual memory and paging.
• Resource Allocation: Manage access to I/O devices and other
system resources, ensuring that processes can perform their
operations without conflicts.
 7. Process Monitoring and Control
 Overview
• Purpose: Monitor process performance and control its execution.
• Details:
• Status Reporting: Provide tools and system calls for querying
process status, including CPU usage, memory consumption, and
process state.
• Process Control: Allow actions such as suspending, resuming, or
killing processes. This is important for process management and
system stability.
 8. Process Security and Access Control
 Overview
• Purpose: Ensure that processes operate within their security
privileges and do not violate system security policies.
• Details:
• Access Control: Enforce permissions for processes to access files,
memory, and other resources.
• Authentication and Authorization: Verify the identity of
processes and users, and enforce policies to control what processes
can and cannot do.
CONTEXT SWITCHING

 Context switching is the process of saving the state (or context) of a currently executing process or thread so that it can be resumed later, and then loading the saved state of a different process or thread to resume its execution.
 This switching happens frequently in a multitasking operating system to
ensure fair and efficient CPU time allocation among processes.
Why is Context Switching
Needed?
• Multitasking: To allow multiple processes or threads to share the CPU
and run simultaneously (or appear to run simultaneously) by switching
between them.
• Responsiveness: To improve system responsiveness by allowing
higher-priority or interactive tasks to receive CPU time promptly.
• Resource Sharing: To manage and allocate CPU time among various
processes or threads that are competing for execution.
Components Involved in Context Switching

• Process Control Block (PCB): Stores the state information of a process. Key components include:
• Program Counter (PC): Indicates the address of the next instruction to
execute.
• Registers: Includes general-purpose registers, status registers, and any
special-purpose registers.
• Stack Pointer: Points to the current position in the process’s stack.
• Memory Management Information: Includes page tables, segment
registers, etc.
• Scheduler: Determines which process or thread should run next and
initiates the context switching process.
Steps in Context Switching

1. Save State of Current Process:


1. Save Registers: Store the current values of the CPU registers into the PCB of the currently
running process.
2. Save Program Counter: Store the address of the next instruction to be executed in the PCB.
3. Save Other Information: Update any other process-specific information, such as memory
management details.
2. Select Next Process:
1. Scheduler Decision: The operating system’s scheduler selects the next process or thread to run
based on the scheduling algorithm.
3. Load State of New Process:
1. Load Registers: Retrieve the values of CPU registers from the PCB of the new process.
2. Load Program Counter: Set the program counter to the address of the next instruction to be
executed for the new process.
3. Restore Other Information: Load any other process-specific information, such as memory
management details.
4. Resume Execution:
1. Update CPU: The CPU begins executing the new process or thread from the point it was last
suspended.
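These save/load steps can be illustrated at user level with the POSIX <ucontext.h> API, which saves and restores registers and a stack pointer. This is only an analogy; real context switches happen inside the kernel in privileged mode:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];          /* stack for the second context */

static void task(void) {
    printf("task: running, switching back to main\n");
    swapcontext(&task_ctx, &main_ctx);  /* save task state, restore main */
    printf("task: resumed from its saved context\n");
}

int main(void) {
    getcontext(&task_ctx);              /* initialize the task context */
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof(task_stack);
    task_ctx.uc_link = &main_ctx;       /* where to go when task() returns */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);  /* save main state, run task */
    printf("main: back after the first switch\n");
    swapcontext(&main_ctx, &task_ctx);  /* resume task where it stopped */
    printf("main: task finished\n");
    return 0;
}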
Example Scenario

 Imagine a scenario where a word processor and a web browser are running on the same
system. Here’s a simplified sequence of context switching between these two processes:
1. Word Processor Running:
 The word processor process is currently executing. Its CPU registers, program counter, and stack
pointer are actively being used.
2. Interrupt Occurs:
 An interrupt signal (e.g., a user action or a higher-priority task) requires the CPU to switch to another
process, such as the web browser.
3. Save State:
 The operating system saves the current state of the word processor (registers, program counter, etc.)
to its PCB.
4. Scheduler Selects Web Browser:
 The scheduler selects the web browser process to run next based on scheduling policies.
5. Load State of Web Browser:
1. The operating system loads the saved state of the web browser from its PCB, including registers,
program counter, and memory management information.
6. Resume Web Browser Execution:
 The CPU starts executing the web browser process from where it was last suspended.
CPU-Bound Processes
 CPU-bound processes are processes that primarily require extensive use of the CPU for
computation. Their performance is limited by the speed and efficiency of the CPU rather
than I/O operations.
 Characteristics:
• High Computation: They spend most of their time performing calculations or processing
data.
• Low I/O Activity: They do not frequently interact with I/O devices (e.g., disk, network) or
perform minimal I/O operations.
• Examples:
• Mathematical Calculations: Complex algorithms, simulations, or mathematical models.
• Data Processing: Tasks involving extensive data manipulation or transformations.
• Rendering Graphics: Video games or graphics rendering that require intensive computation.
 Performance Considerations:
• Processor Speed: Performance improvements are often achieved by increasing CPU
speed or using multiple cores.
• Efficient Algorithms: Optimization of algorithms and computational efficiency can
reduce the CPU time required.
I/O-Bound Processes

 I/O-bound processes are processes that spend most of their time waiting for I/O
operations to complete rather than using the CPU. Their performance is limited by the
speed and efficiency of the I/O devices.
 Characteristics:
• High I/O Activity: They spend a significant amount of time performing operations such
as reading from or writing to disks, network communication, or user input.
• Low Computation: They require minimal CPU time and do not perform extensive
computations.
• Examples:
• File Transfer: Moving files between locations or copying files.
• Database Queries: Performing queries that involve disk reads and writes.
• Web Servers: Handling network requests and responses.
 Performance Considerations:
• I/O Speed: Performance improvements are often achieved by using faster I/O devices or
optimizing I/O operations.
• Concurrency: Implementing asynchronous I/O operations or multi-threading can
improve the responsiveness and performance of I/O-bound processes.
TYPES OF QUEUES
 Queue is a data structure used to manage processes, tasks, or jobs in a
specific order, primarily to handle scheduling and resource allocation
efficiently. Queues are fundamental for process management,
multitasking, and ensuring fair and orderly execution of processes in a
system.
 Purpose and Role
• Process Scheduling: Queues help manage and schedule processes by
keeping track of processes that need CPU time, are waiting for I/O operations,
or are ready to execute.
• Resource Management: Queues organize processes based on their state
(e.g., waiting, ready) and prioritize their access to system resources.
• Task Coordination: Queues ensure tasks are executed in an orderly fashion,
preventing conflicts and ensuring system efficiency.
JOB QUEUE
 In the context of operating systems, a job queue is a data structure used to manage
jobs or processes that are waiting to be loaded into the system for execution. The job
queue is a crucial component of process management, particularly in batch processing
systems, where jobs are submitted to the system in batches and processed
sequentially
 Key Characteristics
1. Purpose:
1. Job Management: The job queue holds jobs that are waiting to be admitted into the
system's main memory for execution.
2. Batch Processing: It facilitates batch processing systems by queuing jobs that are ready to
be executed as soon as resources become available.
2. Location:
1. Storage: The job queue is typically located on secondary storage (e.g., disk) as it contains
jobs that are not yet in main memory.
3. Operation:
1. Enqueue: Jobs are added to the end of the job queue when they are submitted to the system.
2. Dequeue: Jobs are removed from the front of the queue when they are selected for loading
into main memory and execution.
•Job Submission:
•Job Arrival: When a job is submitted to the system, it is placed at the end of the job queue.
This submission can be done by users, applications, or automated processes.
•Job Details: Each job in the queue may include details such as job ID, job type, priority,
required resources, and job data.

•Job Scheduling:
•Long-Term Scheduling: The job queue is used by the long-term scheduler (or admission
scheduler) to manage the admission of jobs into main memory. The long-term scheduler
decides which jobs from the job queue should be loaded into the ready queue.
•Resource Allocation: Jobs are selected based on various criteria, such as priority, resource
requirements, or system load.

•Job Execution:
•Loading: Jobs are moved from the job queue to the ready queue when resources are
available and the job is ready for execution.
•Execution: Once a job is loaded into main memory, it is placed in the ready queue and
eventually dispatched for execution by the CPU.
READY QUEUE

 In operating systems, the ready queue is a data structure that manages processes that are loaded into main memory and are ready to be executed by the CPU. Processes in the ready queue are waiting for CPU time and are eligible for execution as soon as the CPU becomes available.
 This queue is generally stored as a linked list.
 A ready queue header contains pointers to the first and final PCBs in the list.
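A minimal C sketch of such a linked-list ready queue, with a header holding pointers to the first and final PCBs (field names are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct pcb { int pid; struct pcb *next; };

struct ready_queue { struct pcb *head, *tail; };   /* queue header */

void enqueue(struct ready_queue *q, int pid) {     /* add at the rear */
    struct pcb *p = malloc(sizeof *p);
    if (p == NULL) return;
    p->pid = pid; p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

struct pcb *dequeue(struct ready_queue *q) {       /* remove from the front */
    struct pcb *p = q->head;
    if (p) { q->head = p->next; if (q->head == NULL) q->tail = NULL; }
    return p;
}

int main(void) {
    struct ready_queue q = { NULL, NULL };
    enqueue(&q, 101); enqueue(&q, 102); enqueue(&q, 103);
    struct pcb *p;
    while ((p = dequeue(&q)) != NULL) {            /* dispatch in FIFO order */
        printf("dispatching PID %d\n", p->pid);
        free(p);
    }
    return 0;
}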
Key Characteristics

1. Purpose:
1. Manage Ready Processes: The ready queue contains processes that have been admitted
to main memory and are ready to run. These processes are in a state where they are only
waiting for CPU time to execute their instructions.
2. Facilitate Scheduling: The ready queue supports the process scheduling function by
organizing processes that are ready for execution, enabling the CPU scheduler to select the
next process to run.
2. Location:
1. Memory: The ready queue is maintained in main memory. It is a part of the system's
memory management and scheduling infrastructure.
3. Operation:
1. Enqueue: Processes are added to the end of the ready queue when they are ready to run
but are waiting for CPU time.
2. Dequeue: Processes are removed from the front of the ready queue when they are selected
by the CPU scheduler to be executed.
Functionality

1. Process Scheduling:
1. Scheduling Algorithms: The ready queue is used by various scheduling
algorithms (e.g., Round Robin, Shortest Job First) to determine which
process should be given CPU time next.
2. Context Switching: The ready queue helps manage context switching,
where the state of the currently running process is saved, and a new
process from the ready queue is loaded into the CPU.
2. Process Management:
1. Process State: Processes in the ready queue are in the "ready" state,
meaning they are fully loaded into main memory and are waiting for the
CPU to execute their instructions.
2. Fairness and Efficiency: The ready queue ensures that processes are
given a fair chance to execute and that CPU time is allocated efficiently.
DEVICE QUEUE
 In operating systems, a device queue is a data structure used to
manage and organize I/O requests for specific hardware devices.
 Each I/O device (such as a disk drive, printer, or network interface)
typically has its own device queue to handle requests that are waiting
to be processed. Device queues ensure orderly and efficient handling of
I/O operations, coordinating access to hardware resources.
 When a process is allocated the CPU, it executes for a while, and
eventually quits, is interrupted or waits for a particular event, such as
completion of an I/O request.
 In the case of an I/O request, the device may be busy with the I/O request of some other process; hence the list of processes waiting for a particular I/O device is called a device queue.
Key Characteristics

1. Purpose:
1. Manage I/O Requests: Device queues keep track of processes or tasks that are
waiting for I/O operations to be completed by a specific device.
2. Optimize Device Utilization: By organizing requests, device queues help ensure
that hardware devices are used efficiently and that I/O operations are completed in a
timely manner.
2. Location:
1. Memory: Device queues are maintained in system memory, typically in the
operating system's I/O management subsystem.
3. Operation:
1. Enqueue: When a process issues an I/O request, the request is added to the end of
the corresponding device queue.
2. Dequeue: When the device is ready to handle the next request, the request is
removed from the front of the queue and processed.
Functionality

1. Request Handling:
1. Order Processing: Requests in the device queue are generally processed in the
order they are received (FIFO) or based on other scheduling policies.
2. Device Management: The operating system uses the device queue to manage
which request should be processed next, based on the device's current state and
capabilities.
2. Device Scheduling:
1. Scheduling Algorithms: The operating system may use specific algorithms to
decide which request to process next from the device queue. For example, Disk
Scheduling Algorithms like Shortest Seek Time First (SSTF) or SCAN are used for
managing disk I/O requests.
3. Resource Allocation:
1. Efficient Utilization: By managing I/O requests through device queues, the
operating system can ensure that I/O devices are utilized efficiently, minimizing idle
time and maximizing throughput.
WHAT IS PROCESS SCHEDULING
 Process scheduling refers to the method by which an operating system decides which
process, out of the many waiting to be executed, should run on the CPU next. The
main objective is to ensure efficient CPU utilization and to maximize the performance of the system.
 Key Concepts in Process Scheduling
1. Process: A program in execution, consisting of code, data, and the current state of
the program.
2. Scheduler: The component of the operating system that makes the decision of which
process runs next. There are three main types:
1. Long-term scheduler (Job scheduler): Decides which processes are admitted to the
system for processing.
2. Short-term scheduler (CPU scheduler): Selects from among the processes that are ready
to execute and allocates the CPU to one of them.
3. Medium-term scheduler: Temporarily removes processes from main memory and places
them in secondary storage (or vice versa), typically to improve process mix.
•Process State: The state of a process at any given time, typically categorized as:
•New: The process is being created.
•Ready: The process is ready to run if a CPU becomes available.
•Running: The process is currently being executed by the CPU.
•Waiting: The process is waiting for some event to occur (such as I/O completion).
•Terminated: The process has finished execution.
•Scheduling Criteria:
•CPU utilization: Keep the CPU as busy as possible.
•Throughput: Number of processes that complete their execution per time unit.
•Turnaround time: Total time taken for a process to complete from submission to
completion.
•Waiting time: Total time a process spends waiting in the ready queue.
•Response time: Time from submission of a request until the first response is
produced.
Importance of Process
Scheduling
• Efficiency: Ensures efficient utilization of CPU and other system
resources.
• Fairness: Provides a fair allocation of CPU time to all processes.
• Response Time: Reduces the response time for processes, particularly
interactive ones.
• Throughput: Maximizes the number of processes that complete their
execution in a given time frame.
TYPES OF SCHEDULERS

 There are 3 types of schedulers:
 Long Term Scheduler (LTS) (Job Scheduler)
 Short Term Scheduler (STS) (CPU Scheduler)
 Medium Term Scheduler (MTS)
LONG TERM SCHEDULER
 Function:
• The long-term scheduler decides which processes are admitted into the system
from the pool of submitted processes. It controls the degree of multiprogramming,
which is the number of processes in memory at any one time.
 Frequency:
• Invoked less frequently compared to other schedulers, typically when a process is
created or when a process terminates. Its decisions have a significant impact on
the overall system performance and resource utilization.
 Objective:
• To ensure a balanced mix of CPU-bound and I/O-bound processes. This balance
helps in maximizing CPU utilization and ensuring that the system does not
become overburdened with processes that are either too CPU-intensive or too I/O-
intensive.
 Example:
• In a batch processing system, the long-term scheduler selects jobs from a job pool
on a storage device and loads them into the ready queue in the main memory for
execution. By carefully selecting the mix of processes, it aims to improve overall
system performance and throughput.
KEY FUNCTIONS
•Job Admission:
•Decides which processes should be brought into the ready queue in the main memory from the
pool of submitted jobs.
•Controls the degree of multiprogramming (number of processes in memory) to optimize system
performance.
•Balancing Workload:
•Ensures a balanced mix of CPU-bound and I/O-bound processes to maintain efficient resource
utilization.
•Prevents the system from becoming overloaded with one type of process, ensuring that the CPU
and I/O devices are efficiently used.
•Scheduling Policies:
•Implements policies to decide which processes are admitted. Policies can be based on criteria like
priority, process type, resource requirements, and fairness.
•May implement mechanisms to ensure fairness and avoid starvation of processes.
•Resource Management:
•Manages system resources by controlling which processes enter the system, thereby preventing
resource contention and potential system overload.
•Coordinates with the medium-term scheduler to manage the process load effectively.
2. Short-term Scheduler (CPU Scheduler)
 Function:
• The short-term scheduler, or CPU scheduler, selects from among the processes that are
in the ready queue and allocates the CPU to one of them. It makes decisions frequently
to ensure that the CPU is always busy executing a process.
 Frequency:
• Invoked very frequently, possibly every few milliseconds. It needs to make quick
decisions to switch between processes, ensuring minimal CPU idle time and efficient
multitasking.
 Objective:
• To ensure that processes are given fair access to the CPU and to optimize various
performance metrics such as CPU utilization, response time, and turnaround time. The
short-term scheduler uses various algorithms to decide the order in which processes
are executed.
 Example:
• In a time-sharing system, the short-term scheduler might use the Round Robin
scheduling algorithm to allocate a fixed time slice to each process in the ready queue,
ensuring that all processes get a fair share of the CPU time.
KEY FUNCTIONS
•Process Selection:
•Selects the next process to run from the ready queue based on a specific scheduling algorithm (e.g.,
FCFS, SJF, Round Robin, Priority Scheduling).
•CPU Allocation:
•Allocates the CPU to the selected process for execution. This involves context switching if a different
process was previously running.
•Ensures that the CPU is utilized efficiently by minimizing idle time and maximizing process execution.
•Maintaining Process States:
•Keeps track of the states of all processes (new, ready, running, waiting, terminated) and transitions them
appropriately based on scheduling decisions.
•Ensures processes move smoothly between different states.
•Context Switching:
•Handles context switching efficiently, saving and restoring the state of processes when a switch occurs.
•Minimizes the overhead of context switching to ensure smooth and quick transitions between processes.
•Performance Optimization:
•Optimizes key performance metrics such as CPU utilization, response time, and turnaround time through
effective scheduling.
•Implements various scheduling algorithms to balance different performance goals and system
requirements.
3. Medium-term Scheduler
 Function:
• The medium-term scheduler handles the swapping of processes between main memory and
secondary storage. It temporarily removes processes from main memory to reduce the
degree of multiprogramming and later swaps them back into memory for continued
execution.
 Frequency:
• Invoked occasionally, based on the need to manage the load on the system. Its decisions are
less frequent than the short-term scheduler but more frequent than the long-term scheduler.
 Objective:
• To improve the process mix and system performance by managing the memory utilization
effectively. It helps in optimizing the system's responsiveness by swapping out less critical
processes during high load periods and swapping them back when resources are available.
 Example:
• In a virtual memory system, when the system runs low on physical memory, the medium-
term scheduler might decide to swap out a process that is waiting for I/O operations to
complete. This frees up memory for other processes that are ready to run, thereby
improving the overall system performance.
KEY FUNCTIONS

•Swapping:
•Temporarily removes (swaps out) processes from main memory to secondary storage (e.g., disk)
and later brings them back (swaps in) when needed.
•Helps manage memory resources by ensuring that only the necessary processes are kept in
memory.
•Load Balancing:
•Balances the process load by adjusting the number of processes in memory based on current
system conditions (e.g., memory availability, CPU load).
•Prevents memory overload and ensures there is enough memory available for active processes.
•Managing Multiprogramming:
•Adjusts the degree of multiprogramming by controlling which processes are in memory and which
are swapped out.
•Ensures a good mix of processes to optimize resource utilization and system performance.
•Handling I/O-bound and CPU-bound Processes:
•Can prioritize the swapping of I/O-bound processes during high CPU load periods and bring
them back when I/O operations are complete.
•Helps maintain a balance between CPU-bound and I/O-bound processes in memory,
optimizing overall system efficiency.
•Interaction with LTS and STS:
•Coordinates with the Long-term Scheduler to manage the overall process load and with the
Short-term Scheduler to ensure the right processes are in memory for execution.
•Ensures that the scheduling decisions of LTS and STS are effectively implemented and
optimized through memory management.
DISPATCHER

 The dispatcher is responsible for giving control of the CPU to the process
selected by the short-term scheduler. It handles the actual process of
context switching, moving the process from the ready state to the
running state.
 It is a kernel module that takes control of the CPU from the current
process and gives it to the process selected by STS.
 The time it takes for the dispatcher to stop one process and start
another running is known as dispatch latency or switching overhead
time.
PREEMPTIVE AND NON-PREEMPTIVE SCHEDULING
Preemptive Scheduling
 Definition:
• In preemptive scheduling, the operating system can interrupt a currently
running process to allocate the CPU to another process. This interruption can
occur based on a specific scheduling policy or when a higher-priority process
arrives.
 Key Characteristics:
1. Time Sharing: Allows multiple processes to share CPU time, enhancing
responsiveness in interactive systems.
2. Interruptions: A running process can be interrupted and moved back to the
ready queue if a higher-priority process needs the CPU or if its time slice
expires.
3. Dynamic: It provides a more dynamic and responsive system, as the CPU
can be quickly reassigned based on current needs and priorities.
 Advantages:
• Better Responsiveness: Improves system responsiveness, especially for
interactive and real-time applications.
• Efficient CPU Utilization: Ensures that the CPU is not idle and can handle
multiple processes efficiently.
• Priority Handling: Higher-priority processes can be serviced quickly by
preempting lower-priority processes.
 Disadvantages:
• Overhead: Involves more overhead due to context switching, as processes
are frequently interrupted.
• Complexity: More complex to implement and manage, as it requires
maintaining process states and handling interruptions.
Non-preemptive Scheduling
 Definition:
• In non-preemptive scheduling, once a process starts its execution, it
runs to completion or until it voluntarily yields the CPU (e.g., waiting for
I/O operations). The operating system cannot forcibly interrupt a running
process to allocate the CPU to another process.
 Key Characteristics:
1. Single Allocation: Once a process is given the CPU, it retains control
until it completes or blocks for I/O.
2. No Interruptions: There are no interruptions by the scheduler; the
running process controls the CPU until it finishes or voluntarily
relinquishes it.
3. Predictable: It provides a more predictable and simpler environment,
as processes run to completion once they start.
 Advantages:
• Simplicity: Easier to implement and manage, with no need for context
switching and handling interruptions.
• Low Overhead: Reduces overhead associated with context switching,
as processes are not interrupted once they start.
• Deterministic: Provides a deterministic execution environment, which
can be beneficial for certain types of applications.
 Disadvantages:
• Poor Responsiveness: Less responsive, especially in interactive
systems, as a long-running process can delay the execution of other
processes.
• Inefficient CPU Utilization: Can lead to inefficient CPU utilization if the
running process spends a lot of time waiting for I/O operations.
SCHEDULING CRITERIA
 In process management within an operating system, various scheduling criteria
are used to determine the order in which processes are executed. These criteria
help optimize different aspects of system performance, depending on the goals of
the scheduling algorithm. Here are the primary scheduling criteria:
 CPU Utilization
 Goal: Maximize CPU utilization.
 Explanation: Keep the CPU as busy as possible. It is measured as the percentage of time
the CPU is active. Higher utilization means more efficient CPU usage.
 Throughput
 Goal: Maximize throughput.
 Explanation: The number of processes that complete their execution per time unit.
Higher throughput means more processes are completed in a given time period.
 Turnaround Time
 Goal: Minimize turnaround time.
 Explanation: The total time taken from the submission of a process to its completion. It
includes the time spent waiting in the ready queue, executing on the CPU, and in I/O
operations.
 Waiting Time
 Goal: Minimize waiting time.
 Explanation: The total time a process spends in the ready queue waiting for CPU allocation.
Reducing waiting time can improve overall system performance and user satisfaction.
 Response Time
 Goal: Minimize response time.
 Explanation: The time from the submission of a request until the first response is produced. This is
particularly important in interactive systems where prompt feedback is crucial.
 Fairness
 Goal: Ensure fairness.
 Explanation: All processes should get a fair share of the CPU and should not be starved. Fairness
ensures that no process is unfairly delayed.
 Priority
 Goal: Respect process priorities.
 Explanation: Some processes may have higher priority and should be scheduled before lower-
priority ones. Priority scheduling ensures that critical processes are given preference.
 Deadlines
 Goal: Meet deadlines.
 Explanation: Some processes may have deadlines by which they must complete. Real-time
systems often use this criterion to ensure timely task completion.
WHAT IS A SCHEDULING ALGORITHM
 A scheduling algorithm is a method used by an operating system to
decide the order in which processes are given access to the CPU and
other resources. The primary goal of a scheduling algorithm is to optimize
various performance metrics such as CPU utilization, throughput,
turnaround time, waiting time, and response time. Scheduling algorithms
play a crucial role in ensuring efficient and fair resource allocation in a
multi-programming environment.
 TYPES OF SCHEDULING ALGORITHMS
 First Come First Serve (FCFS)
 Shortest Job Next (SJN) / Shortest Job First (SJF)
 Shortest Remaining Time Next (SRTN)
 Priority Scheduling
 Round Robin (RR)/ Time Slicing scheduling
 Multilevel Queue Scheduling
 Multilevel Feedback Queue Scheduling
Goals of Scheduling Algorithms
•Maximize CPU Utilization: Keep the CPU as busy as possible.
•Maximize Throughput: Increase the number of processes that complete per time unit.
•Minimize Turnaround Time: Reduce the total time taken for processes to complete.
•Minimize Waiting Time: Reduce the time processes spend in the ready queue.
•Minimize Response Time: Ensure prompt execution for interactive processes.
•Ensure Fairness: Prevent any process from being indefinitely delayed (starvation).
Key Terms in Process Scheduling
 Burst Time (BT): The total time required by a process to complete its execution
on the CPU. It is also known as execution time or running time.
 Arrival Time (AT): The time at which a process arrives in the ready queue and is
ready for execution.
 Completion Time (CT): The time at which a process completes its execution and
releases the CPU.
 Turnaround Time (TAT): The total time taken for a process to complete
execution from the time it arrives in the system until it finishes.
• Formula: TAT=Completion Time−Arrival Time
 Waiting Time (WT): The total time a process spends waiting in the ready queue
before it gets the CPU for execution.
• Formula: WT=Turnaround Time−Burst Time
 Response Time (RT): The time from the submission of a process until
the first response is produced, i.e., the time it takes for a process to
start execution after arrival.
• Formula: RT=First Response Time−Arrival Time
 Throughput: The number of processes completed per unit time.
• Formula: Throughput = Number of Processes Completed / Total Elapsed Time
 CPU Utilization: The percentage of time the CPU is actively working on
processes.
• Formula: CPU Utilization = (1 − Idle Time / Total Time) × 100%
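These formulas can be checked mechanically. Below is a minimal Python sketch (not from the slides; the process values are purely illustrative) that derives TAT, WT, and RT for each process, plus throughput and CPU utilization, from known arrival, burst, completion, and first-start times.

```python
# Metric calculator for the formulas above (illustrative values).
# Each tuple: (pid, arrival, burst, completion, first_start), all in ms.
processes = [
    ("P1", 0, 10, 10, 0),
    ("P2", 0, 5, 15, 10),
    ("P3", 0, 8, 23, 15),
]

for pid, at, bt, ct, start in processes:
    tat = ct - at        # Turnaround Time = Completion - Arrival
    wt = tat - bt        # Waiting Time = Turnaround - Burst
    rt = start - at      # Response Time = First Start - Arrival
    print(f"{pid}: TAT={tat} ms, WT={wt} ms, RT={rt} ms")

total_time = max(ct for _, _, _, ct, _ in processes)  # last completion
busy_time = sum(bt for _, _, bt, _, _ in processes)   # total CPU work
print(f"Throughput = {len(processes) / total_time:.2f} processes/ms")
print(f"CPU Utilization = {busy_time / total_time:.0%}")
```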
FIRST COME FIRST SERVE (FCFS)
 The First-Come, First-Served (FCFS) scheduling algorithm is one of the
simplest and most straightforward scheduling algorithms used in
operating systems. It schedules processes in the order they arrive in the
ready queue.
 How FCFS Works
1. Arrival Order: Processes are placed in the ready queue as they arrive.
2. Non-preemptive: Once a process starts executing, it runs to
completion without being preempted by another process.
3. FIFO Queue: The ready queue operates on a First-In-First-Out (FIFO)
basis. The process that arrives first gets the CPU first.
 Key Characteristics
• Simplicity: FCFS is easy to understand and implement.
• Non-preemptive: Processes run to completion once they start execution.
• Fairness: In terms of process arrival order, every process is treated
equally.
 Advantages
1. Easy to Implement: FCFS is simple and easy to program.
2. Fair in Terms of Arrival: All processes are treated equally based on their
arrival time.
 Disadvantages
1. Convoy Effect: Short processes can be forced to wait for a long process
to complete, leading to inefficient CPU utilization and higher average
waiting time.
2. Not Suitable for Time-Sharing Systems: FCFS does not provide the
quick response time required in interactive and real-time systems.
Performance Metrics
1. Average Waiting Time: Can be high if short processes wait for long
processes to complete.
2. Turnaround Time: The time taken for a process to complete from the
time it is submitted until it finishes execution.
3. Throughput: The number of processes completed per unit time can be
low if there are long processes.
 Calculate Avg TAT, Avg WT, Avg RT, throughput, CPU Utilization

PID   BT   CT   AT   TAT   WT
P1    10   10    0    10     0
P2     5   15    0    15    10
P3     8   23    0    23    15

GANTT CHART

| P1 | P2 | P3 |
0    10   15   23
Turnaround Time (TAT)
• Turnaround Time is the total time taken by a process from its arrival to
its completion. Since all processes arrive at time 0 (in this example), the
turnaround time for each process is equal to its completion time.
 TAT = Completion Time − Arrival Time
 For each process:
• P1: Turnaround Time = 10 - 0 = 10 ms
• P2: Turnaround Time = 15 - 0 = 15 ms
• P3: Turnaround Time = 23 - 0 = 23 ms
 Average Turnaround Time:
 Average TAT = (10 + 15 + 23) / 3 = 16 ms
Waiting Time (WT)
• Waiting Time is the total time a process spends in the ready queue
before getting the CPU.
 WT = Turnaround Time − Burst Time
 For each process:
• P1: Waiting Time = 10 - 10 = 0 ms
• P2: Waiting Time = 15 - 5 = 10 ms
• P3: Waiting Time = 23 - 8 = 15 ms
 Average Waiting Time: (0 + 10 + 15) / 3 = 8.33 ms
Response Time (RT)
• Response Time is the time from the arrival of a process to the first
time it gets the CPU. In FCFS, a process never loses the CPU once it
starts, so response time equals waiting time.
 For each process:
• P1: Response Time = 0 - 0 = 0 ms (first gets the CPU at time 0)
• P2: Response Time = 10 - 0 = 10 ms (first gets the CPU at time 10)
• P3: Response Time = 15 - 0 = 15 ms (first gets the CPU at time 15)
 Average Response Time: (0 + 10 + 15) / 3 = 8.33 ms
Throughput
• Throughput is the number of processes completed per unit time.
• Throughput = Total Number of Processes / Total Time Taken
• Here, the total time taken is the time when the last process finishes
execution (23 ms), and there are 3 processes.
• Throughput = 3 / 23 ≈ 0.13 processes per ms
CPU Utilization
• CPU Utilization is the percentage of time the CPU is actively working
(not idle).
 Since there’s no idle time in this example (the CPU is always busy
processing), CPU utilization is:
 CPU Utilization = (Busy Time / Total Time) × 100%
 CPU Utilization = (23 / 23) × 100 = 100%
Process   Arrival Time   Burst Time (ms)   CT   TAT   WT   RT
P1        0              4                  4    4     0    0
P2        2              3                  7    5     2    2
P3        4              1                  8    4     3    3

| P1 | P2 | P3 |
0    4    7    8

Avg TAT = (4 + 5 + 4) / 3 = 4.33 ms
Avg WT = (0 + 2 + 3) / 3 = 1.67 ms
Avg RT = (0 + 2 + 3) / 3 = 1.67 ms
Throughput = 3 / 8 = 0.375 processes per ms
CPU Utilization = 8 / 8 = 100%
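As a cross-check, here is a minimal non-preemptive FCFS simulation in Python (a sketch, not taken from the slides). Run on the example above, it reproduces the same CT, TAT, WT, and RT values.

```python
def fcfs(processes):
    """processes: list of (pid, arrival_time, burst_time), any order."""
    time, results = 0, []
    for pid, at, bt in sorted(processes, key=lambda p: p[1]):  # FIFO by arrival
        time = max(time, at)       # CPU idles until the next process arrives
        start = time
        time += bt                 # non-preemptive: run to completion
        ct = time
        tat = ct - at              # Turnaround Time
        wt = tat - bt              # Waiting Time
        rt = start - at            # Response Time (equals WT under FCFS)
        results.append((pid, ct, tat, wt, rt))
    return results

for row in fcfs([("P1", 0, 4), ("P2", 2, 3), ("P3", 4, 1)]):
    print(row)   # ('P1', 4, 4, 0, 0), ('P2', 7, 5, 2, 2), ('P3', 8, 4, 3, 3)
```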
Shortest Job First (SJF) Scheduling Algorithm
 The Shortest Job First (SJF) scheduling algorithm is a widely used CPU scheduling
algorithm that selects the process with the smallest burst time for execution. It is designed
to minimize the average waiting time of processes by prioritizing shorter jobs over longer
ones.
 SJF can be implemented in two ways:
1. Non-Preemptive SJF: Once a process starts executing, it cannot be interrupted until it
completes.
2. Preemptive SJF (Shortest Remaining Time First, SRTF): If a new process arrives with
a shorter burst time than the remaining time of the currently executing process, the current
process is preempted, and the new process is given the CPU.
 How SJF Works:
 Non-Preemptive SJF:
• Process Selection: The CPU is assigned to the process with the smallest burst time among
the processes in the ready queue.
• Execution: The selected process runs to completion without interruption.
• Next Process: Once the current process finishes, the process with the next shortest burst
time is selected.
 Preemptive SJF (SRTF):
• Process Arrival: As processes arrive, they are compared with the currently running
process.
• Preemption: If a new process has a shorter burst time than the remaining time of the
currently running process, the current process is preempted, and the new process is
executed.
• Resumption: The preempted process is placed back into the ready queue and will be
scheduled later when it again has the shortest remaining time.
Characteristics of SJF:
1. Optimal for Average Waiting Time:
1. SJF is optimal in minimizing the average waiting time across all processes, particularly in non-
preemptive mode. By executing shorter jobs first, it reduces the time other processes wait in the
queue.
2. Requires Knowledge of Burst Time:
1. The SJF algorithm requires knowledge of the burst times of processes in advance, which is often
difficult to obtain or estimate accurately in real-world scenarios.
3. Starvation:
1. A potential issue with SJF is starvation or indefinite blocking. If short processes keep arriving,
longer processes may never get executed, leading to starvation.
4. Preemptive SJF and Overhead:
1. In the preemptive version (SRTF), frequent context switches can occur, especially in environments
with many short processes arriving sporadically. This can lead to increased overhead.
EXAMPLE OF NON-PREEMPTIVE SJF

PID   AT   BT   CT   TAT   WT   RT
P1    0    7     7    7     0    0
P2    2    4    12   10     6    6
P3    4    1     8    4     3    3
P4    5    4    16   11     7    7

| P1 | P3 | P2 | P4 |
0    7    8    12   16
 Explanation:
• P1 arrives at time 0 and starts execution since it's the only process. It runs for 7
ms.
• P3 arrives at time 4, but P1 is still running. After P1 finishes, P3 is selected
because it has the shortest burst time (1 ms).
• P2 starts after P3 because it has the next shortest burst time (4 ms).
• P4 starts after P2 and finishes last.
 Turnaround Time:
• P1: 7 - 0 = 7 ms
• P2: 12 - 2 = 10 ms
• P3: 8 - 4 = 4 ms
• P4: 16 - 5 = 11 ms
• AVG TAT = (7 + 10 + 4 + 11) / 4 = 8 ms
 Waiting Time:
• P1: 0 ms
• P2: 8 - 2 = 6 ms
• P3: 7 - 4 = 3 ms
• P4: 12 - 5 = 7 ms
 AVG WT = (0 + 6 + 3 + 7) / 4 = 4 ms
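The selection rule just described can be expressed compactly in code. The following is a minimal non-preemptive SJF sketch in Python (an illustration, not from the slides; ties are assumed to be broken by arrival time). It reproduces the worked example above.

```python
def sjf(processes):
    """processes: list of (pid, arrival_time, burst_time)."""
    remaining = list(processes)
    time, results = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: (p[2], p[1]))  # shortest burst first
        pid, at, bt = job
        time += bt                          # non-preemptive: run to completion
        results.append((pid, time, time - at, time - at - bt))  # (pid, CT, TAT, WT)
        remaining.remove(job)
    return results

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# [('P1', 7, 7, 0), ('P3', 8, 4, 3), ('P2', 12, 10, 6), ('P4', 16, 11, 7)]
```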
EXAMPLE OF PREEMPTIVE SJF (SRTF/SRTN)

PID   AT   BT   CT   TAT   WT   RT
P1    0    7    16   16     9    0
P2    2    4     7    5     1    0
P3    4    1     5    1     0    0
P4    5    4    11    6     2    2

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16
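For comparison, here is a minimal preemptive SJF (SRTF) sketch in Python (an illustration, not from the slides). Simulating in 1 ms steps is an assumption made for simplicity; at each tick the process with the shortest remaining time runs.

```python
def srtf(processes):
    """processes: list of (pid, arrival_time, burst_time)."""
    arrival = {pid: at for pid, at, bt in processes}
    remaining = {pid: bt for pid, at, bt in processes}
    time, completion = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:                                 # CPU idle
            time += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[pid] -= 1                           # run for one 1 ms tick
        time += 1
        if remaining[pid] == 0:
            completion[pid] = time
            del remaining[pid]
    return completion

print(srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# {'P3': 5, 'P2': 7, 'P4': 11, 'P1': 16} -- matches the table above
```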
Round Robin (RR)/ Time slicing
Scheduling
 Round Robin (RR) scheduling is one of the simplest and most widely
used CPU scheduling algorithms, especially in time-sharing systems. The
main goal of RR is to ensure that all processes get an equal opportunity
to execute by assigning each a fixed time slice or quantum. This
approach is designed to prevent any single process from monopolizing
the CPU, promoting fairness, and ensuring a responsive system.
Key Concepts:
1. Time Quantum (Time Slice):
1. A fixed unit of time (e.g., 10 ms) during which a process can run on the CPU.
2. After the time quantum expires, the currently running process is preempted and
moved to the end of the ready queue.
2. Preemption:
1. If a process's burst time exceeds the time quantum, it is preempted (interrupted)
after the time quantum expires. The process is then placed at the back of the ready
queue, allowing other processes to get their share of CPU time.
2. If the process finishes before the quantum expires, it leaves the system.
3. Cyclic Order:
1. The CPU scheduler cycles through the processes in the ready queue, giving each
process a time slice in turn. This ensures that all processes get a fair share of CPU
time.
How Round Robin Works
1. Initialization:
1. Assume there are multiple processes in the ready queue, each with a specific burst
time (the total time it needs to complete its execution).
2. The scheduler picks the first process from the queue and assigns it the CPU for the
duration of the time quantum.
2. Execution:
1. If the process's burst time is less than or equal to the time quantum, the process
completes its execution within this period, and the CPU moves on to the next process
in the queue.
2. If the process’s burst time is greater than the time quantum, the process will execute
for the time quantum and then be preempted. The remaining burst time is updated,
and the process is moved to the end of the ready queue.
3. Repetition:
1. This cycle continues, with the CPU repeatedly assigning the time quantum to
processes in the ready queue until all processes are completed.
Example (TQ = 4 ms):

PID   AT   BT   CT   TAT   WT   RT
P1    0    5    16   16    11    0
P2    1    3     7    6     3    3
P3    2    8    20   18    10    5
P4    3    6    22   19    13    8

Ready queue order: P1, P2, P3, P4, P1, P3, P4

GANTT CHART

| P1 | P2 | P3 | P4 | P1 | P3 | P4 |
0    4    7    11   15   16   20   22
EXPLANATION
 Steps to Create the Gantt Chart:
1. Initial Queue Setup:
1. At time 0 ms, only P1 is in the queue.
2. At time 1 ms, P2 arrives.
3. At time 2 ms, P3 arrives.
4. At time 3 ms, P4 arrives.
2. Execution Order:
1. P1 starts executing at time 0 ms and executes for 4 ms. (Remaining BT: 1 ms)
2. At time 4 ms, P2 starts executing and runs for its entire burst time of 3 ms. (P2 finishes at time 7 ms)
3. At time 7 ms, P3 starts executing and runs for 4 ms. (Remaining BT: 4 ms)
4. At time 11 ms, P4 starts executing and runs for 4 ms. (Remaining BT: 2 ms)
5. At time 15 ms, P1 resumes execution and runs for its remaining 1 ms. (P1 finishes at time 16 ms)
6. At time 16 ms, P3 resumes execution and runs for its remaining 4 ms. (P3 finishes at time 20 ms)
7. At time 20 ms, P4 resumes execution and runs for its remaining 2 ms. (P4 finishes at time 22 ms)
 Calculation of Metrics:
 Completion Time (CT):
• P1: 16 ms, P2: 7 ms, P3: 20 ms, P4: 22 ms
 Turnaround Time (TAT):
• TAT=Completion Time−Arrival Time
• P1: 16 - 0 = 16 ms, P2: 7 - 1 = 6 ms, P3: 20 - 2 = 18 ms, P4: 22 - 3 = 19
ms
 Waiting Time (WT):
• WT=Turnaround Time−Burst Time
• P1: 16 - 5 = 11 ms, P2: 6 - 3 = 3 ms, P3: 18 - 8 = 10 ms, P4: 19 - 6 = 13
ms
 Response Time (RT):
• RT=First Start Time−Arrival Time
• P1: 0 - 0 = 0 ms, P2: 4 - 1 = 3 ms, P3: 7 - 2 = 5 ms, P4: 11 - 3 = 8 ms
 Summary of Results:
• Average TAT = (16 + 6 + 18 + 19) / 4 = 14.75 ms
• Average WT = (11 + 3 + 10 + 13) / 4 = 9.25 ms
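The walkthrough above maps directly onto a queue-based simulation. Below is a minimal Round Robin sketch in Python (an illustration, not from the slides; it assumes, as the walkthrough does, that a newly arrived process joins the ready queue before a preempted process is re-queued).

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (pid, arrival_time, burst_time), sorted by arrival."""
    remaining = {pid: bt for pid, at, bt in processes}
    pending = deque(processes)                    # not yet arrived
    queue = deque()
    time, completion = 0, {}
    while pending or queue:
        while pending and pending[0][1] <= time:  # admit arrivals
            queue.append(pending.popleft()[0])
        if not queue:                             # CPU idle until next arrival
            time = pending[0][1]
            continue
        pid = queue.popleft()
        run = min(quantum, remaining[pid])        # one time slice (or less)
        time += run
        remaining[pid] -= run
        while pending and pending[0][1] <= time:  # arrivals during the slice
            queue.append(pending.popleft()[0])
        if remaining[pid] == 0:
            completion[pid] = time
        else:
            queue.append(pid)                     # preempted: back of the queue
    return completion

print(round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)], 4))
# {'P2': 7, 'P1': 16, 'P3': 20, 'P4': 22} -- matches the table above
```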
PRIORITY SCHEDULING
 Overview:
 Priority Scheduling is a CPU scheduling algorithm that assigns a priority
to each process.
 The process with the highest priority is executed first.
 If multiple processes have the same priority, they can be scheduled
based on other criteria such as First-Come, First-Served (FCFS) or Round
Robin.
Key Concepts
1. Priority:
1. Each process is assigned a priority, which can be either a number or a category.
2. Lower numbers often indicate higher priority (e.g., priority 1 is higher than priority 5), but
this can vary depending on the system.
2. Preemptive vs. Non-Preemptive:
1. Preemptive Priority Scheduling: If a new process arrives with a higher priority than the
currently running process, the current process is preempted and the CPU is allocated to the
new process.
2. Non-Preemptive Priority Scheduling: Once the CPU is allocated to a process, it cannot
be preempted by a new process, even if the new process has a higher priority. The CPU will
only be reassigned after the current process finishes.
3. Starvation:
1. A situation where low-priority processes may never get executed because higher-priority
processes keep arriving. This is a common issue in priority scheduling.
4. Aging:
1. A technique used to gradually increase the priority of processes that have been waiting in
the queue for a long time to prevent starvation.
How Priority Scheduling Works:
1. Process Arrival:
1. Each process arrives in the ready queue with a certain burst time and a
priority.
2. Queue Management:
1. The scheduler selects the process with the highest priority (or lowest priority
number) from the ready queue.
3. Execution:
1. In Non-Preemptive Priority Scheduling, the selected process runs to
completion.
2. In Preemptive Priority Scheduling, if a process with a higher priority
arrives, it preempts the currently running process.
LOWER THE NUMBER, HIGHER THE PRIORITY

PID   AT   BT   PRIORITY   CT   TAT   WT   RT
P1    0    10   3          10   10     0    0
P2    1     4   1          14   13     9    9
P3    2     6   4          23   21    15   15
P4    3     3   2          17   14    11   11

NON-PREEMPTIVE

| P1 | P2 | P4 | P3 |
0    10   14   17   23
LOWER THE NUMBER, HIGHER THE PRIORITY

PID   AT   BT   PRIORITY   CT   TAT   WT   RT
P1    0    10   3          17   17     7    0
P2    1     4   1           5    4     0    0
P3    2     6   4          23   21    15   15
P4    3     3   2           8    5     2    2

PREEMPTIVE

| P1 | P2 | P4 | P1 | P3 |
0    1    5    8    17   23
METRICS

 AVG TAT = (17 + 4 + 21 + 5) / 4 = 11.75 ms
 AVG WT = (7 + 0 + 15 + 2) / 4 = 6 ms
 AVG RT = (0 + 0 + 15 + 2) / 4 = 4.25 ms
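Here is a minimal non-preemptive priority sketch in Python (an illustration, not from the slides; lower number = higher priority, with ties assumed broken by arrival time). It reproduces the completion order of the non-preemptive example above.

```python
def priority_np(processes):
    """processes: list of (pid, arrival_time, burst_time, priority)."""
    remaining = list(processes)
    time, results = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                  # CPU idle
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: (p[3], p[1]))   # best (lowest) priority
        pid, at, bt, prio = job
        time += bt                                     # runs to completion
        results.append((pid, time, time - at))         # (pid, CT, TAT)
        remaining.remove(job)
    return results

print(priority_np([("P1", 0, 10, 3), ("P2", 1, 4, 1),
                   ("P3", 2, 6, 4), ("P4", 3, 3, 2)]))
# [('P1', 10, 10), ('P2', 14, 13), ('P4', 17, 14), ('P3', 23, 21)]
```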
Characteristics:
1. Efficiency:
1. Priority scheduling efficiently allocates CPU resources to the most important tasks first.
2. Starvation:
1. Low-priority processes may suffer from starvation if high-priority processes keep arriving.
This is particularly problematic in non-preemptive priority scheduling.
3. Fairness:
1. The fairness of priority scheduling depends on how priorities are assigned. If not managed
carefully, it can lead to unfair situations where some processes never get executed.
4. Aging:
1. Implementing aging can help mitigate the starvation problem by gradually increasing the
priority of waiting processes over time.
5. CPU Utilization:
1. High CPU utilization can be achieved as the CPU is always busy with the highest-priority
tasks.
6. Throughput and Turnaround Time:
1. Throughput and turnaround time can be optimized for high-priority tasks, but low-priority
tasks may experience much longer turnaround times and, without aging, even starvation.
 Priority Scheduling:
• In Priority Scheduling, each process is assigned a priority, and the process with the
highest priority is executed first. If two processes have the same priority, they are
typically scheduled according to their arrival times (or another secondary criterion).
• The priority can be based on various factors, such as user-defined priority levels,
resource requirements, or other system considerations.
 Shortest Job First (SJF) Scheduling:
• SJF scheduling specifically uses the burst time (i.e., the time required by a process
to complete its execution) as the priority criterion.
• In SJF, the process with the shortest burst time is given the highest priority,
meaning it is scheduled to run next. If two processes have the same burst time, they
are usually scheduled based on their arrival time (FIFO order).
• SJF can be either preemptive (Shortest Remaining Time First, SRTF) or non-
preemptive.
 Why SJF is a Special Case of Priority Scheduling:
• In priority scheduling, the priority can be based on any criteria, like urgency,
importance, or user-assigned values.
• In SJF, the priority is specifically determined by the burst time. The shorter the burst
time, the higher the priority. Hence, SJF is a special case of priority scheduling where
the priority is dynamically assigned based on the burst time.
AGING
 Aging is a technique used in operating systems to prevent starvation and ensure that all processes
eventually get scheduled for execution. Starvation occurs when processes with lower priorities are
indefinitely delayed because higher-priority processes continually take over the CPU.
 Concept of Aging:
1. Problem of Starvation:
1. In priority-based scheduling algorithms, processes with higher priority are executed before those with lower
priority.
2. If a system has many high-priority processes, low-priority processes may never get a chance to execute, leading to
starvation.
2. Objective of Aging:
1. To ensure that low-priority processes eventually receive CPU time and are not indefinitely postponed.
3. How Aging Works:
1. Aging Mechanism: Aging gradually increases the priority of processes that have been waiting in the queue for a
long time.
2. Increment of Priority: As time progresses, the priority of a process that has not yet been executed is increased.
This makes it more likely to be scheduled in the future.
4. Implementation (a minimal sketch follows below):
1. Priority Adjustment: The priority of a process is adjusted based on its waiting time. The longer a process waits,
the higher its priority becomes.
2. Priority Boosting: Regularly, the system increases the priority of all processes that have been waiting for a
certain amount of time, ensuring that they do not remain in low-priority queues indefinitely.
ADVANTAGES OF AGING
•Prevents Starvation:
•Fairness: Aging ensures that processes with lower priorities are eventually executed, preventing them from
being indefinitely delayed or starved due to continuously arriving high-priority processes.
•Improves System Responsiveness:
•Timely Execution: By gradually increasing the priority of waiting processes, aging helps in balancing the
execution of both high-priority and low-priority processes, leading to better system responsiveness and
fairness.
•Simplifies Priority Management:
•Automatic Adjustment: Aging provides a mechanism to automatically adjust the priority of processes
based on their waiting time, simplifying the management of process priorities.
•Enhanced User Experience:
•Reduced Waiting Times: Users of low-priority processes are less likely to experience prolonged waiting
times, improving their overall experience with the system.
DISADVANTAGES OF AGING
•Increased Overhead:
•Priority Adjustment Overhead: Regularly updating the priorities of waiting processes introduces additional
computational overhead, which may impact system performance, especially in systems with a large number of processes.
•Complexity in Implementation:
•Algorithm Complexity: Implementing aging requires careful design to ensure that priority adjustments are made
correctly and efficiently. This adds complexity to the scheduling algorithm.
•Potential for Priority Inversion:
•Priority Inversion: In some cases, processes with temporarily increased priorities may interfere with the execution of
higher-priority processes, leading to priority inversion where a lower-priority process blocks the execution of a higher-
priority one.
•Unpredictable Response Times:
•Varied Waiting Times: Although aging helps in preventing starvation, the response time for processes may still be
unpredictable, especially in systems with fluctuating loads and varying process behaviors.
•Ineffective for Certain Workloads:
•Workload Specificity: Aging may not be effective in all scenarios, such as systems with highly time-sensitive tasks or
workloads that require strict priority enforcement for real-time processing.
MULTILEVEL QUEUE SCHEDULING
 Multi-Level Queue Scheduling (MLQ) is a type of CPU scheduling
algorithm that divides the ready queue into several separate queues,
each with its own scheduling algorithm and priority.
 Processes are permanently assigned to one of these queues based on
certain characteristics like priority, process type (system vs. user),
memory size, or other criteria.
 How Multi-Level Queue Scheduling Works:
1. Queue Structure:
1. The ready queue is divided into multiple sub-queues.
2. Each queue is designed for a specific type of process and has its own scheduling algorithm
(e.g., Round Robin, FCFS).
3. These queues can be prioritized, meaning that processes in a higher-priority queue are
always executed before processes in a lower-priority queue.
2. Process Assignment:
1. Processes are assigned to a queue based on specific attributes like priority level, process
type (foreground or background), memory requirements, etc.
2. Once a process is assigned to a queue, it does not move between queues. This is a key
difference from Multi-Level Feedback Queue Scheduling, where processes can move
between queues based on their behavior.
3. Scheduling Between Queues:
1. The CPU scheduling decision is first made between the queues.
2. The higher-priority queues are given preference over lower-priority queues.
3. If the highest-priority queue has processes, the CPU will schedule them according to that
queue's scheduling algorithm.
4. Lower-priority queues are only considered if higher-priority queues are empty.
Example Scenario
 Assume a system with three queues:
1. Queue 1: For system processes (highest priority) using Round Robin (RR).
2. Queue 2: For interactive/user processes using Shortest Job First (SJF).
3. Queue 3: For batch/background processes (lowest priority) using First-Come, First-Served
(FCFS).
 Process Arrival:
• P1: System process (assigned to Queue 1)
• P2: Interactive process (assigned to Queue 2)
• P3: Batch process (assigned to Queue 3)
• P4: System process (assigned to Queue 1)
• P5: Interactive process (assigned to Queue 2)
 Scheduling:
1. Queue 1 (System Processes) is checked first. If P1 and P4 are present, they will be scheduled
according to the Round Robin algorithm.
1. Let’s say P1 executes for its time slice, then P4 is scheduled next.
2. Once Queue 1 is empty, Queue 2 (Interactive Processes) is considered. P2 and P5 are scheduled
according to the Shortest Job First algorithm.
3. Finally, if both Queue 1 and Queue 2 are empty, Queue 3 (Batch Processes) is considered, where
P3 will be executed according to First-Come, First-Served (a sketch of this dispatch logic follows below).
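The scenario above can be sketched as a fixed-priority dispatcher (an illustration, not from the slides; the burst times for P2 and P5 are assumed values, and the per-queue policies are reduced to simple picks for brevity):

```python
from collections import deque

system_q = deque(["P1", "P4"])          # Queue 1: Round Robin order
interactive_q = [("P2", 6), ("P5", 3)]  # Queue 2: SJF, (pid, assumed burst)
batch_q = deque(["P3"])                 # Queue 3: FCFS

def pick_next():
    if system_q:                        # highest-priority queue drained first
        return system_q.popleft()
    if interactive_q:                   # then interactive: shortest burst wins
        job = min(interactive_q, key=lambda j: j[1])
        interactive_q.remove(job)
        return job[0]
    if batch_q:                         # batch runs only when others are empty
        return batch_q.popleft()
    return None

while (pid := pick_next()) is not None:
    print("dispatch", pid)
# dispatch order: P1, P4, P5, P2, P3
```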
 Advantages of Multi-Level Queue Scheduling:
• Specialization: Different types of processes can be handled using the most
appropriate scheduling algorithm for that type.
• Efficiency: High-priority processes, such as system processes, are given preference,
ensuring that critical tasks are completed promptly.
• Simplified Management: Processes are classified and managed separately,
simplifying the overall scheduling logic for complex systems.
 Disadvantages:
• Inflexibility: Once a process is assigned to a queue, it cannot change queues. This
can lead to inefficiencies, especially if a process's behavior changes over time.
• Starvation: Lower-priority queues may suffer from starvation if higher-priority queues
always have processes waiting. Processes in the lowest queue might never get CPU
time if higher-priority queues are continuously populated.
• Complexity in Configuration: Deciding how to divide the queues and which
scheduling algorithm to apply in each queue requires careful consideration and can be
complex to configure optimally.
EXAMPLE OF SYSTEM, INTERACTIVE AND BATCH PROCESSES

1. System Processes:
System processes are essential processes that the operating system itself uses to manage
hardware and software resources. These processes generally run in the background and have
higher priority because they ensure the smooth functioning of the system.

Examples:
•Device Drivers: Processes that manage communication between the operating system and
hardware devices like printers, disk drives, and network cards.
•Memory Management Processes: Processes that manage the allocation and deallocation of
memory to various applications.
•Process Scheduler: A process that decides which process runs at a given time.
•System Daemons: Background processes like cron (for task scheduling), syslogd (for system
logging), or inetd (for managing network services).
•Interrupt Handling: Processes that manage the handling of interrupts from hardware devices.
2. Interactive Processes:
Interactive processes are initiated by users and typically require quick responses from
the system. They are often associated with applications that involve user interaction.

Examples:
•Web Browsers: Applications like Google Chrome, Firefox, or Safari, where users
interact with web content.
•Text Editors: Programs like Notepad, Vim, or Microsoft Word, where users type and
edit text.
•Command Line Interface (CLI): Interactive shells like bash, zsh, or Windows
Command Prompt, where users enter commands and get immediate feedback.
•Graphical User Interface (GUI) Applications: Applications like file managers,
image editors, or media players where users interact with graphical elements.
•Video Games: Games where real-time user input directly affects gameplay, requiring
immediate system response.
 3. Batch Processes:
 Batch processes are non-interactive and are executed without user
intervention. These processes are often scheduled to run at specific times,
usually to perform large-scale or repetitive tasks. They typically have lower
priority compared to system and interactive processes.
 Examples:
• Data Backup: Scheduled tasks that back up data from a server or
workstation to another location, often running overnight.
• Batch Processing of Data: Tasks like processing payroll, compiling large
amounts of data, or generating reports, often performed at scheduled
intervals.
• Image/Video Rendering: Batch processing of multimedia files, such as
converting video formats or rendering images, which can be done without
user interaction.
• Automated Software Builds: Tasks that compile and build software from
source code in an automated manner, often triggered by version control
systems.
• Log File Analysis: Processes that analyze log files to generate reports or
alerts, typically run at scheduled intervals.
Multi-Level Feedback Queue Scheduling
•Multiple Queues with Different Priorities:
•MLFQ uses multiple queues, each with its own scheduling algorithm and priority level.
•Typically, higher-priority queues have shorter time slices and use algorithms like Round Robin,
while lower-priority queues might use algorithms like First-Come, First-Served (FCFS).
•Dynamic Adjustment of Priority:
•Processes start in the highest-priority queue.
•If a process uses its time slice without completing, it is moved to a lower-priority queue.
•If a process in a lower-priority queue does not complete within its time slice, it remains in the
same queue or is demoted further.
•If a process in a lower-priority queue does not use its entire time slice, it can be promoted to a
higher-priority queue.
•Feedback Mechanism:
•The feedback mechanism allows the system to adjust the priority of processes dynamically
based on their behavior and resource usage.
•This ensures that both short and long processes are handled efficiently, balancing
responsiveness and throughput.
How MLFQ Scheduling Works:
1. Initialization:
1. Processes are initially placed in the highest-priority queue (Queue 0).
2. The time quantum (time slice) for this queue is typically short to allow for quick response
to interactive processes.
2. Process Execution:
1. The CPU scheduler picks processes from the highest-priority queue first and executes
them according to the queue's scheduling algorithm.
2. If a process uses up its time slice and does not complete, it is moved to the next lower-
priority queue.
3. If a process in a lower-priority queue completes before its time slice is used up, it may be
moved back to a higher-priority queue.
3. Queue Promotion and Demotion:
1. Demotion: If a process in the highest-priority queue does not finish within its allocated
time slice, it is demoted to a lower-priority queue. This helps in handling longer processes
and ensuring that they do not monopolize the CPU.
2. Promotion: Processes that wait too long in lower-priority queues might be promoted to
higher-priority queues. This prevents starvation and ensures that processes eventually get
a chance to run.
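Putting the demotion rules together, here is a minimal MLFQ sketch in Python (an illustration, not from the slides; the queue count and quanta are assumed values, all processes are assumed to arrive at time 0, and promotion on long waits is omitted for brevity):

```python
from collections import deque

QUANTA = [2, 4, 8]   # assumed: short slices at high priority, longer below

def mlfq(processes):
    """processes: list of (pid, burst_time); all assumed to arrive at time 0."""
    queues = [deque() for _ in QUANTA]
    for pid, bt in processes:
        queues[0].append((pid, bt))     # everyone starts at the top queue
    time, completion = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        pid, rem = queues[level].popleft()
        run = min(QUANTA[level], rem)
        time += run
        rem -= run
        if rem == 0:
            completion[pid] = time
        else:                           # used its full slice: demote
            queues[min(level + 1, len(queues) - 1)].append((pid, rem))
    return completion

print(mlfq([("A", 3), ("B", 10)]))      # {'A': 5, 'B': 13}
```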
 Advantages of MLFQ:
• Flexibility: Adapts to different process behaviors by dynamically
adjusting priorities.
• Efficiency: Balances the needs of both short and long processes,
improving overall system responsiveness and throughput.
• Fairness: Reduces the likelihood of starvation through the feedback
mechanism.
 Disadvantages of MLFQ:
• Complexity: More complex to implement compared to simpler
scheduling algorithms.
• Configuration: Requires careful tuning of parameters such as the
number of queues and time quanta to achieve optimal performance.