
IEC College of Engineering & Technology, Greater Noida

Department of Computer Science and Engineering

Operating System BCS 401


Important Questions - for University Exam
Unit-1
Q. 1. Describe the classification of operating system.
The classification of Operating Systems (OS) can be done based on various criteria, such as
the number of users, the number of tasks, the mode of operation, or how resources are
managed. Below is a detailed classification:
1. Based on the Number of Users
a) Single-User Operating System
• Supports only one user at a time.
• Example: MS-DOS, Windows 95.
b) Multi-User Operating System
• Allows multiple users to access the system simultaneously.
• Resources like CPU and memory are shared.
• Example: UNIX, Linux, Windows Server.
2. Based on the Number of Tasks
a) Single-tasking Operating System
• Can execute only one task at a time.
• Example: Early versions of MS-DOS.
b) Multi-tasking Operating System
• Can perform multiple tasks simultaneously.
• Efficiently manages CPU time between processes.
• Example: Windows, Linux, macOS.
3. Based on the Mode of Interaction
a) Command-Line Interface (CLI) OS
• User interacts through commands typed in a terminal or shell.
• Requires knowledge of command syntax.
• Example: MS-DOS, UNIX.
b) Graphical User Interface (GUI) OS
• Uses graphical elements like windows, icons, and menus.
• Easier for general users.


• Example: Windows, macOS.


4. Based on Processing Capability
a) Batch Operating System
• Jobs are grouped into batches and processed without user interaction.
• Suitable for large-scale data processing.
• Example: Early IBM OS.
b) Time-Sharing Operating System
• Each user gets a time slice of CPU; ensures quick response.
• Supports multiple terminals/users.
• Example: UNIX, Multics.
c) Real-Time Operating System (RTOS)
• Provides immediate processing; meets deadlines.
• Used in embedded systems, robotics, etc.
• Example: VxWorks, RTLinux.
d) Distributed Operating System
• Manages a group of independent computers and makes them appear as a single
system.
• Example: Amoeba, Mach.
e) Network Operating System
• Provides services to computers connected in a network.
• Manages file sharing, communication, etc.
• Example: Novell NetWare, Windows Server.
5. Based on Usage and Platform
a) Desktop OS
• Designed for personal computers.
• Example: Windows 10, macOS, Ubuntu Desktop.
b) Server OS
• Optimized for server hardware and networking.
• Provides services like file sharing, database management, etc.
• Example: Windows Server, Red Hat Enterprise Linux.
c) Mobile OS


• Designed for smartphones and tablets.


• Example: Android, iOS.
d) Embedded OS
• Used in embedded systems with limited resources.
• Example: Embedded Linux, FreeRTOS.
Summary Table:

Criteria   | Types                                           | Examples
-----------|-------------------------------------------------|---------------------------
Users      | Single-user, Multi-user                         | MS-DOS, UNIX
Tasks      | Single-tasking, Multi-tasking                   | DOS, Windows
Interface  | CLI, GUI                                        | UNIX, Windows
Processing | Batch, Time-sharing, RTOS, Distributed, Network | IBM OS, UNIX, VxWorks
Platform   | Desktop, Server, Mobile, Embedded               | Windows, Android, FreeRTOS

Q. 2. Describe the differences between symmetric and asymmetric multi-processing.


1. Symmetric Multiprocessing (SMP)

Feature        | Description
---------------|--------------------------------------------------------------
Definition     | All processors share a common memory and are controlled by a single OS. Each processor performs all tasks, including OS functions and user processes.
Processor Role | All processors are equal and perform similar functions; there is no master-slave relationship.
OS Control     | A single copy of the operating system controls all processors.
Memory Sharing | Processors share the same main memory and I/O devices.
Task Handling  | Any task can be assigned to any processor.
Reliability    | More reliable and fault-tolerant; failure of one CPU does not halt the system.
Examples       | Windows, Linux, and UNIX systems using multi-core processors.


2. Asymmetric Multiprocessing (AMP)

Feature        | Description
---------------|--------------------------------------------------------------
Definition     | Each processor is assigned a specific task. One master processor controls the system, and the others are slaves.
Processor Role | One processor (master) controls the system and assigns tasks to the others (slaves).
OS Control     | Typically, only the master runs the OS; slave processors follow instructions from the master.
Memory Sharing | May or may not share memory; memory is sometimes private to each processor.
Task Handling  | Each processor has a defined role; only the master handles OS functions.
Reliability    | Less reliable; if the master processor fails, the whole system may stop.
Examples       | Early embedded systems, real-time systems, and some mobile SoCs (Systems on Chip).

Key Differences

Feature           | Symmetric Multiprocessing (SMP)               | Asymmetric Multiprocessing (AMP)
------------------|-----------------------------------------------|----------------------------------------
Control           | Shared control among all CPUs                 | Master-slave architecture
Task Distribution | Tasks are dynamically assigned                | Tasks are pre-assigned
OS Execution      | All CPUs run the OS                           | Only the master runs the OS
Fault Tolerance   | High                                          | Low (master failure = system failure)
Complexity        | More complex                                  | Simpler architecture
Efficiency        | More efficient for general-purpose computing  | Efficient for specialized tasks

Q. 3. What is an operating system? Discuss the operating system structure.


An Operating System (OS) is system software that acts as an interface between the user
and the computer hardware. It manages hardware resources, provides an environment for
application execution, and offers services such as file handling, memory management, and
process control.
Key Functions of an Operating System:
1. Process Management – Handles creation, scheduling, and termination of processes.


2. Memory Management – Allocates and manages the main memory (RAM).


3. File System Management – Organizes, stores, retrieves, and manages data on storage
devices.
4. Device Management – Controls peripheral devices through drivers.
5. Security and Access Control – Protects data and system resources from unauthorized
access.
6. User Interface – Provides CLI (Command Line Interface) or GUI (Graphical User
Interface).
Operating System Structure
The structure of an operating system refers to how its components are organized and how
they interact. There are several types of OS structures:
1. Monolithic Structure
• Description: The OS is a single large program containing all necessary services.
• Characteristics:
o All components (file system, memory, I/O, etc.) run in kernel mode.
o Communication between components via system calls.
• Pros: Fast execution due to tight integration.
• Cons: Hard to maintain and debug.
• Example: Early UNIX.
2. Layered Structure
• Description: OS is divided into a hierarchy of layers, each built on top of lower ones.
• Layer Examples:
o Hardware
o Kernel
o Memory management
o File system
o User programs
• Pros: Modular design; easier to debug.
• Cons: Lower performance due to overhead.
• Example: THE operating system.
3. Microkernel Structure


• Description: Only the most essential services (e.g., inter-process communication, basic scheduling) run in the kernel; others run in user space.
• Pros: Better security and stability.
• Cons: Slower due to user-kernel communication.
• Example: QNX, MINIX.
4. Modular Structure
• Description: A hybrid of monolithic and microkernel; uses loadable modules.
• Pros: Flexibility, performance, and easier updates.
• Example: Modern Linux distributions.
5. Virtual Machine Structure
• Description: OS creates virtual environments for users or applications, each with its
own resources.
• Used in: Cloud computing, system emulation.
• Example: VMware, VirtualBox.
6. Client-Server Model
• Description: System services are provided by servers (running in user mode) that
communicate with client programs.
• Pros: Easy maintenance and strong security.
• Example: Modern distributed systems.
Summary

Structure Type  | Main Feature                            | Example
----------------|-----------------------------------------|------------------
Monolithic      | Single large kernel                     | Early UNIX
Layered         | Built in hierarchical layers            | THE OS
Microkernel     | Minimal kernel with user-mode services  | MINIX, QNX
Modular         | Kernel with dynamically loaded modules  | Linux
Virtual Machine | OS supports multiple virtual OSs        | VMware
Client-Server   | OS services via clients and servers     | Distributed OSs

Q. 4. Explain briefly layered operating system structure with neat sketch. Also explain
protection and security.


A Layered Operating System is designed in layers, where each layer is built on top of the
lower one. Each layer provides specific functions and hides its implementation from higher
layers, promoting modularity and ease of debugging.
Key Features:
• Each layer interacts only with its adjacent layers.
• The bottom layer deals with hardware.
• The top layer provides a user interface.
• The system is designed as a hierarchy of layers (0 = lowest, N = highest).
Typical Layers in a Layered OS:

Layer Number | Layer Name                   | Description
-------------|------------------------------|-----------------------------------------------
0            | Hardware                     | Physical devices like CPU, memory, I/O.
1            | CPU Scheduling & Memory Mgmt | Handles CPU allocation and memory management.
2            | Device Management            | Manages I/O devices.
3            | System Calls Interface       | Provides APIs for user programs.
4            | User Programs & Services     | Application programs and user interface.

Diagram: Layered Operating System Structure


Here's a neat sketch representation:
+--------------------------+
| User Applications | ← Layer 4
+--------------------------+
| System Call Interface | ← Layer 3
+--------------------------+
| Device Management Layer | ← Layer 2
+--------------------------+
| CPU & Memory Management | ← Layer 1
+--------------------------+
| Hardware Layer | ← Layer 0
+--------------------------+


Protection and Security in Operating Systems


1. Protection
Protection refers to the mechanism that controls access to system resources like CPU,
memory, files, and devices among multiple users or processes.
Goals of Protection:
• Prevent unauthorized access.
• Ensure process isolation.
• Protect user data and system resources.
Examples of Protection Mechanisms:
• Access Control Lists (ACLs)
• User authentication and authorization
• File permissions (read/write/execute)
2. Security
Security is a broader concept that includes protection but also defends the system from
external threats, such as viruses, hackers, and network attacks.
Security Goals:
• Confidentiality – Prevent unauthorized disclosure.
• Integrity – Prevent unauthorized modification.
• Availability – Ensure resources are available when needed.
Common Security Measures:
• User authentication (username/password, biometrics)
• Encryption of data and communication
• Firewalls and antivirus software
• Secure boot mechanisms
Difference Between Protection and Security:

Aspect  | Protection                   | Security
--------|------------------------------|-------------------------------------------
Scope   | Internal resource management | Internal + external threats
Focus   | Access control               | Confidentiality, integrity, availability
Example | File permissions             | Virus protection, encryption


Q.5. What do you understand by system call? How is a system call made? How is a system
call handled by the system? Choose suitable examples for explanation.
What is a System Call?
A System Call is a programmatic way for user applications to interact with the
operating system. It acts as a bridge between user space and kernel space, allowing
programs to request services like file access, memory management, process control, etc.
In simple terms: A system call is like raising your hand (from user program) to ask the OS
(kernel) to do something on your behalf that you don’t have permission to do directly.
Common Services Requested via System Calls:

Category                | Examples
------------------------|------------------------------------
Process Control         | fork(), exec(), exit()
File Management         | open(), read(), write(), close()
Device Management       | ioctl(), read(), write()
Information Maintenance | getpid(), alarm(), sleep()
Communication           | pipe(), shmget(), send(), recv()

How is a System Call Made?


1. User Program Request:
• The program calls a library function (e.g., printf(), open()).
• That library internally invokes a system call.
2. Mode Switch:
• The processor switches from user mode to kernel mode using a trap instruction
(like int 0x80 in x86 or syscall).
3. Jump to OS Handler:
• The control is transferred to a predefined system call handler in the OS.
4. Service Performed by OS:
• The OS performs the requested service (e.g., reading a file).
5. Return to User Space:
• After completion, control returns to the user program with the result.
Example – File Reading in Linux
Here’s a breakdown of how read() works:


#include <unistd.h>
#include <fcntl.h>

int main() {
    int fd = open("file.txt", O_RDONLY);  // System call to open a file
    char buffer[100];
    read(fd, buffer, 100);                // System call to read from the file
    close(fd);                            // System call to close the file
    return 0;
}
Behind the Scenes:
• open() invokes system call number 2 (for example, in Linux).
• The CPU switches to kernel mode.
• OS locates the file, creates a file descriptor.
• Once reading is complete, it returns to user mode with data.
How is a System Call Handled?
Step-by-Step System Call Handling:
1. User Process invokes system call via wrapper function.
2. System Call Interface (SCI) identifies the request and passes it to the OS kernel.
3. Trap to Kernel Mode: CPU switches from user to kernel mode using a system
interrupt.
4. Kernel Handler processes the request.
5. Return Values (or error codes) are passed back to user process.
6. Return to User Mode and continue execution.
Key Points:
• System calls are safe gateways for accessing hardware and OS resources.
• They protect system integrity by restricting direct hardware access.
• System call interfaces differ by OS (Linux, Windows, macOS).

Q.6. What do you understand by dual mode operation of processors? What is the reason
behind dual mode operation of processors?


Dual Mode Operation of Processors


Dual mode operation refers to the capability of a CPU to operate in two distinct modes:
1. User Mode
2. Kernel Mode (also called Supervisor Mode or Privileged Mode)
This feature is implemented to protect the operating system and critical system resources
from being accidentally or maliciously modified by user programs.
How Dual Mode Works:
• When a user program is running, the CPU operates in user mode.
• If the program needs to perform a privileged operation (like accessing hardware,
reading/writing files, or creating processes), it makes a system call.
• The CPU switches to kernel mode to execute the requested operation.
• After completing the operation, the system returns to user mode.
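
A minimal added sketch of this round trip on Linux, using the libc syscall() wrapper (getpid is chosen only because it is side-effect free):

#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* The syscall instruction traps from user mode into kernel mode;
       the kernel handler reads the process ID, and execution then
       resumes in user mode with the result. */
    long pid = syscall(SYS_getpid);
    return pid > 0 ? 0 : 1;
}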
Mode Switching Triggers:

Trigger Type | Description
-------------|----------------------------------------------------------------------
System Calls | User program requests a service from the OS (e.g., read(), write()).
Interrupts   | Signals from hardware (like keyboard input, disk I/O completion).
Exceptions   | Errors like divide by zero or invalid memory access.

Mode Bit:
• The CPU maintains a mode bit (often a single binary flag) in the processor status
register:
o 0 → Kernel Mode
o 1 → User Mode
• This bit determines the current operating mode of the CPU.
Reason Behind Dual Mode Operation
The primary purpose of dual mode operation is protection and controlled resource access.
Main Reasons:

Reason               | Explanation
---------------------|------------------------------------------------------------------------
System Protection    | Prevents user applications from directly accessing hardware and critical OS code.
Controlled Execution | Allows only the OS to perform sensitive operations like I/O, memory management, and process control.
Stability            | Avoids system crashes by isolating faulty or malicious user code from the core OS.
Security             | Ensures secure handling of files, memory, and device access by limiting user privileges.

Analogy:
Think of user mode as a guest in a house who can only access the living room and kitchen,
but needs permission to access the owner's room (kernel mode), where all valuables are
stored.
Summary Table

Feature        | User Mode                       | Kernel Mode
---------------|---------------------------------|----------------------------------------
Privileges     | Limited                         | Full access to hardware and resources
Who Uses It    | Application programs            | Operating system core
Access to I/O  | Not allowed directly            | Allowed
Risk of Damage | Low (but limited functionality) | High (needs strict control)

Unit-2
Q. 1. Explain the principle of concurrency.
Principle of Concurrency
Concurrency is the ability of a system to execute multiple tasks (processes or threads)
seemingly at the same time. It is a fundamental principle in modern operating systems to
improve system performance, responsiveness, and resource utilization.
Definition:
Concurrency refers to the execution of multiple sequences of operations simultaneously —
not necessarily at the exact same moment, but in overlapping time periods.
Key Concepts in Concurrency:

Concept           | Explanation
------------------|----------------------------------------------------------------------
Process           | An independent program in execution.
Thread            | A lightweight process; a part of a process that can run independently.
Context Switching | Switching the CPU between different processes or threads.
Parallelism       | A special case of concurrency where tasks truly run at the same time on multiple processors.
Interleaving      | Concurrency on a single CPU achieved by rapidly switching between tasks.

Objectives of Concurrency:
1. Maximize CPU Utilization – Keep the processor busy by overlapping I/O and
computation.
2. Improve Responsiveness – Handle multiple user interactions simultaneously.
3. Enable Multiprogramming – Run multiple programs at once.
4. Support Distributed Systems – Communicate and coordinate between multiple
systems or processes.
Challenges in Concurrency:

Problem         | Description
----------------|----------------------------------------------------------------------
Race Conditions | Multiple processes access shared data simultaneously, leading to unpredictable results.
Deadlock        | Two or more processes wait indefinitely for resources locked by each other.
Starvation      | A process waits forever because other, higher-priority processes keep executing.
Synchronization | Coordinating processes to ensure correct access to shared resources.

Concurrency Control Mechanisms:


• Mutex (Mutual Exclusion)
• Semaphores
• Monitors
• Message Passing
• Locks and Condition Variables
Example:
Let’s say two users are accessing a shared bank account:
• User A wants to withdraw ₹5,000.
• User B wants to check the balance at the same time.
If there is no proper concurrency control, both may see inconsistent data, or the system may
crash.
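
A minimal added sketch of such a race using POSIX threads (the amounts are made up; compile with gcc file.c -pthread). Two threads withdraw from a shared balance without synchronization, so read-modify-write updates can be lost:

#include <pthread.h>
#include <stdio.h>

static long balance = 200000;          /* shared account balance */

static void *withdraw(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        balance -= 1;                  /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, withdraw, NULL);
    pthread_create(&b, NULL, withdraw, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* The correct result is 0; without a mutex around the update, the
       interleaved threads often leave a different (wrong) value. */
    printf("balance = %ld\n", balance);
    return 0;
}

Wrapping the update in pthread_mutex_lock()/pthread_mutex_unlock() makes the result deterministic.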
Concurrency in Operating Systems:


The OS achieves concurrency by:


• Time-sharing on a single CPU
• Multiprocessing on multiple CPUs
• Thread scheduling to efficiently manage tasks
• Process synchronization to prevent data conflicts
The principle of concurrency is essential for building efficient, responsive, and reliable
systems. It allows multiple operations to overlap in execution, making full use of system
resources while maintaining correctness and consistency through careful management.

Q.2. Explain the following terms briefly :


i. Dekker's solution
ii. Busy waiting
Dekker's Solution
Dekker's Solution is one of the earliest known algorithms for achieving mutual exclusion
in concurrent programming. It allows two processes to share a single-use resource without
conflict using only shared variables and no hardware support like semaphores or mutexes.
Key Features:
• Solves the critical section problem for two processes.
• Ensures mutual exclusion, progress, and bounded waiting.
• Uses flags to indicate interest and a turn variable to avoid deadlock.
Concept:
Each process:
• Sets its own flag to indicate interest.
• Checks if the other process is also interested.
• If both are interested, one yields based on the turn variable.
• If not, it enters the critical section.
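
A C-like sketch of this scheme, added for illustration (real hardware would also need memory barriers):

volatile int flag[2] = {0, 0};   /* flag[i] = 1: process i is interested */
volatile int turn = 0;           /* which process must yield */

void enter_region(int i) {
    int j = 1 - i;               /* the other process */
    flag[i] = 1;                 /* declare interest */
    while (flag[j]) {            /* other process interested too? */
        if (turn == j) {         /* not our turn: back off */
            flag[i] = 0;
            while (turn == j)
                ;                /* busy wait for our turn */
            flag[i] = 1;         /* re-declare interest */
        }
    }
}

void leave_region(int i) {
    turn = 1 - i;                /* give the other process priority */
    flag[i] = 0;                 /* no longer interested */
}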
Limitation:
• Only works for two processes.
• Can be complex and inefficient in modern systems.
ii. Busy Waiting


Busy waiting (also known as spinlock) is a condition where a process continuously checks
for a condition (like a lock to be released) without giving up the CPU.
Example:

while (lock == 1); // Keep checking until lock becomes 0


Problems with Busy Waiting:
• Wastes CPU cycles while waiting.
• Inefficient for long wait times.
• Can degrade overall system performance.
When It’s Used:
• In low-level programming or multiprocessor systems where wait times are very
short.
• In kernel-level spinlocks where sleeping is not an option.
Summary Table:

Term              | Description
------------------|---------------------------------------------------------------------------
Dekker's Solution | Algorithm for mutual exclusion between two processes without hardware support.
Busy Waiting      | A method of waiting by continuously checking a condition, consuming CPU.

Q.3. State the critical section problem. Illustrate the software based solution to the critical
section problem.
Critical Section Problem
The Critical Section Problem arises in concurrent programming when multiple processes
(or threads) share resources (like variables, files, or hardware) and try to access or modify
them simultaneously. If not handled properly, this can lead to inconsistent or incorrect
data.
What is a Critical Section?
A Critical Section is a part of the program where shared resources are accessed. Only one
process should execute in its critical section at a time to maintain data consistency.
Requirements for Solving the Critical Section Problem:
1. Mutual Exclusion: Only one process can be in the critical section at any given time.


2. Progress: If no process is in the critical section, one of the waiting processes should
be allowed to enter.
3. Bounded Waiting: There should be a limit on how many times other processes can
enter their critical sections before a waiting process gets a turn.
Software-Based Solution: (Peterson's Algorithm)
(For two processes, say P0 and P1)
Shared Variables:
bool flag[2]; // flag[i] = true means Pi wants to enter critical section
int turn; // Indicates whose turn it is
Algorithm for Process P0:
flag[0] = true;
turn = 1;
while (flag[1] && turn == 1); // Busy wait

// --- Critical Section ---

flag[0] = false; // Exit section


Algorithm for Process P1:
flag[1] = true;
turn = 0;
while (flag[0] && turn == 0); // Busy wait

// --- Critical Section ---

flag[1] = false; // Exit section


How It Works:
• Each process signals its intention to enter by setting flag[i] = true.
• The turn variable ensures only one process proceeds if both want to enter.
• Mutual exclusion is achieved because both cannot proceed through the while loop
at the same time.
• The solution avoids deadlock and starvation and satisfies all three requirements.


Advantages of Peterson's Solution:


• Does not require special hardware.
• Satisfies mutual exclusion, progress, and bounded waiting.
Limitations:
• Works for only two processes.
• Not suitable for modern multi-core systems due to compiler/hardware optimization
issues.

Q. 4. State the finite buffer producer consumer problem. Give solution of the problem using
semaphores.
Finite Buffer Producer-Consumer Problem
The Producer-Consumer Problem is a classic example of a synchronization problem in
operating systems and concurrent programming.
Problem Statement:
• Producer: Generates data items and places them into a bounded (finite) buffer.
• Consumer: Takes items from the buffer and processes them.
• Constraint:
o The producer must wait if the buffer is full.
o The consumer must wait if the buffer is empty.
• Goal: Ensure that producer and consumer operate concurrently without conflicts or
data corruption.
Finite Buffer Example:
A buffer of size N = 5:
Buffer: [__, __, __, __, __]

Solution Using Semaphores


We use three semaphores: a binary mutex for mutual exclusion plus two counting semaphores:

Semaphore | Purpose                        | Initial Value
----------|--------------------------------|----------------
mutex     | For mutual exclusion on buffer | 1
empty     | Counts empty slots             | N (buffer size)
full      | Counts filled slots            | 0

Producer Code (Pseudocode in C-like syntax):

do {
    // produce an item in nextProduced

    wait(empty);     // Decrease empty count
    wait(mutex);     // Enter critical section

    buffer[in] = nextProduced;
    in = (in + 1) % N;

    signal(mutex);   // Exit critical section
    signal(full);    // Increase full count
} while (true);
Consumer Code:

do {
    wait(full);      // Decrease full count
    wait(mutex);     // Enter critical section

    nextConsumed = buffer[out];
    out = (out + 1) % N;

    signal(mutex);   // Exit critical section
    signal(empty);   // Increase empty count

    // consume the item in nextConsumed
} while (true);
How It Works:
• mutex ensures only one process (producer or consumer) accesses the buffer at a time.
• empty keeps track of available slots for the producer.
• full keeps track of available items for the consumer.
• The wait() operation decrements the semaphore; if it’s zero, the process blocks.
• The signal() operation increments the semaphore; if processes are waiting, one is
unblocked.
Advantages:
• Ensures synchronization and mutual exclusion.
• Works efficiently with bounded buffers.
• Prevents race conditions, overflow, and underflow.
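
For reference, a runnable sketch of the same scheme using POSIX threads and unnamed semaphores (an added example, assuming Linux; compile with gcc file.c -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
static int buffer[N];
static int in = 0, out = 0;
static sem_t empty, full, mutex;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);            /* wait for an empty slot */
        sem_wait(&mutex);            /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);            /* exit critical section */
        sem_post(&full);             /* one more filled slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);             /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);            /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);          /* N empty slots initially */
    sem_init(&full, 0, 0);           /* no filled slots initially */
    sem_init(&mutex, 0, 1);          /* binary semaphore for the buffer */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}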

Q. 5. Discuss mutual exclusion implementation with test and set() instruction.


Mutual Exclusion Implementation Using Test-and-Set() Instruction
What is Mutual Exclusion?
Mutual exclusion ensures that only one process enters its critical section (CS) at a time when
accessing shared resources, preventing race conditions.
What is Test-and-Set() Instruction?
• A hardware atomic instruction used for synchronization.
• It tests the value of a memory location and sets it to 1 in a single, indivisible
operation.
• Because it’s atomic, no other process can interrupt it midway.
Test-and-Set() Prototype:

boolean test_and_set(boolean *target) {
    boolean old = *target;
    *target = true;
    return old;
}
• Returns the original value of target.


• Sets target to true (1).


How to Use Test-and-Set() for Mutual Exclusion
We use a shared lock variable initialized to false (0):

boolean lock = false;


Entry Section (Try to Enter Critical Section):

while (test_and_set(&lock)) {
    // Busy wait until lock becomes false
}
• If lock was already true, the process waits.
• If lock was false, it becomes true, and the process enters CS.
Exit Section (Leaving Critical Section):

lock = false;
• Release the lock for others.
Complete Code Skeleton for a Process:

do {
    // Entry section
    while (test_and_set(&lock)) {
        // busy wait
    }

    // Critical section
    // Access shared resource

    // Exit section
    lock = false;

    // Remainder section
} while (true);
Advantages:
• Simple to implement.
• Provides mutual exclusion.
• Efficient on multiprocessor systems.
Disadvantages:
• Uses busy waiting (wastes CPU cycles).
• May cause starvation if some process keeps locking.
Summary:

Feature          | Description
-----------------|-----------------------------------------------------
Atomicity        | Test-and-Set executes indivisibly to prevent races.
Mutual Exclusion | Only one process sets the lock and enters the CS.
Busy Waiting     | Processes wait actively while the lock is held.
Hardware Support | Requires special CPU instruction support.

Q. 6. Explain dining philosopher problem with its suitable solution.


Dining Philosophers Problem
Problem Statement:
The Dining Philosophers Problem is a classic synchronization problem illustrating the
challenges of resource sharing and deadlock among concurrent processes.
• Scenario:
Five philosophers sit around a circular table. Between each pair of philosophers, there
is one fork (total 5 forks).
• Each philosopher alternates between thinking and eating.
• To eat, a philosopher needs both forks on their left and right.
• Philosophers cannot communicate with each other.
• The problem is to design a protocol so that:


o No two philosophers use the same fork at the same time (mutual exclusion).
o No philosopher starves (everyone gets to eat eventually).
o Deadlock and starvation are prevented.
Why is it challenging?
• If all philosophers pick up their left fork simultaneously, no one can pick up their
right fork → deadlock.
• If some philosophers never get both forks, they starve.
Classical Solution Using Semaphores
Setup:
• Let N = 5 be the number of philosophers.
• Each fork is represented by a binary semaphore, initialized to 1.
• Each philosopher performs:

do {
    think();

    wait(fork[left]);     // pick left fork
    wait(fork[right]);    // pick right fork

    eat();

    signal(fork[left]);   // put down left fork
    signal(fork[right]);  // put down right fork
} while (true);

Deadlock Issue:
If all philosophers pick the left fork first simultaneously, they wait forever for the right fork
→ deadlock.
Deadlock-Free Solution:

Solution 1: Resource Hierarchy / Numbering


• Number forks from 1 to N.


• Each philosopher picks up the lower-numbered fork first, then the higher one.
• This breaks the circular wait condition and prevents deadlock.
For philosopher i (the code below uses the equivalent odd/even variant: even and odd philosophers acquire their forks in opposite order, which breaks the circular wait just like strict fork numbering):

if (i % 2 == 0) {
    wait(fork[i]);              // pick left fork first
    wait(fork[(i + 1) % N]);    // then right fork
} else {
    wait(fork[(i + 1) % N]);    // pick right fork first
    wait(fork[i]);              // then left fork
}

eat();

signal(fork[i]);
signal(fork[(i + 1) % N]);

Solution 2: Using a Semaphore to Limit Access


• Use a semaphore room initialized to N-1.
• Only N-1 philosophers can try to pick forks at the same time.
• This guarantees at least one philosopher can eat, avoiding deadlock.
Pseudocode:

wait(room);          // Enter room

wait(fork[left]);
wait(fork[right]);

eat();

signal(fork[left]);
signal(fork[right]);

signal(room);        // Leave room


Summary

Issue            | Solution
-----------------|-----------------------------------------------
Deadlock         | Pick forks in a fixed order / limit access
Starvation       | Fair scheduling or using monitors/semaphores
Mutual Exclusion | Binary semaphores for forks

Q.7. Discuss message passing systems. Explain how message passing can be used to solve
buffer producer consumer problem with infinite buffer.
What is Message Passing?
Message passing is a method of inter-process communication (IPC) where processes
communicate and synchronize by sending and receiving messages.
• No shared memory is required.
• Processes exchange data via messages.
• Commonly used in distributed systems or where memory sharing is not possible.
Key Features of Message Passing:

Feature         | Description
----------------|--------------------------------------------------------------
Communication   | Processes send and receive messages explicitly.
Synchronization | Can be blocking (sender/receiver waits) or non-blocking.
Modes           | Direct (process-to-process) or indirect (via mailbox/ports).
Types           | Synchronous or asynchronous messaging.

Basic Operations:
• send(destination, message) — Send a message.
• receive(source, message) — Receive a message.

Message Passing Solution to Producer-Consumer Problem (Infinite Buffer)

Problem Recap:
• Producer generates items.


• Consumer consumes items.


• The buffer is infinite (no size limitation), so no blocking for full buffer.
• Synchronization is needed so consumer waits if no item is available.
How Message Passing Solves This:
• Use two processes: Producer and Consumer.
• Producer sends messages containing produced items.
• Consumer receives messages when items are available.
• Since buffer is infinite, no need to wait for buffer space.
• Consumer blocks waiting for messages if none are available.
Pseudocode:

Producer Process:

while (true) {
    item = produce_item();
    send(consumer, item);      // Send produced item to consumer
}

Consumer Process:

while (true) {
    receive(producer, item);   // Wait and receive item from producer
    consume_item(item);
}
Explanation:
• send() places the item in a message queue (buffer).
• Since buffer is infinite, send() never blocks.
• receive() blocks if there are no messages, so consumer waits when buffer empty.
• This naturally synchronizes producer and consumer without shared memory or
explicit semaphores.
Advantages of Message Passing in this Problem:
• No shared memory needed.


• Simpler synchronization.
• Naturally handles waiting when buffer empty (consumer blocks on receive).
• Suitable for distributed systems.

Unit-3
Q.1. Discuss the performance criteria for CPU scheduling.
Performance Criteria for CPU Scheduling
When evaluating and designing CPU scheduling algorithms, the following criteria are
commonly used to measure their effectiveness:

Criteria        | Description
----------------|----------------------------------------------------------------------
CPU Utilization | Maximize CPU usage by keeping it as busy as possible (usually 40% to 90% or more).
Throughput      | Number of processes completed per unit time. Higher throughput means more work done efficiently.
Turnaround Time | Total time from submission of a process to its completion (completion time - arrival time). Lower is better.
Waiting Time    | Total time a process spends waiting in the ready queue. Minimizing waiting time improves responsiveness.
Response Time   | Time from submission of a request until the first response is produced. Important for interactive systems.
Fairness        | Ensures all processes get a fair share of the CPU without starvation.

Explanation of Key Terms:


• CPU Utilization: Measures how well the CPU is kept busy. Idle CPU means wasted
resource.
• Throughput: Indicates system’s efficiency by measuring work done per time.
• Turnaround Time: Measures the total duration a process takes; important for batch
jobs.
• Waiting Time: Important for evaluating process delays before execution.
• Response Time: Critical for user-interactive systems where users expect quick
feedback.
• Fairness: Avoids indefinite postponement or starvation of processes.
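
A small worked example (made-up numbers, FCFS order): P1 (arrival 0, burst 5), P2 (arrival 1, burst 3), and P3 (arrival 2, burst 2) complete at times 5, 8, and 10. Turnaround time = completion - arrival gives 5, 7, and 8 (average 6.67); waiting time = turnaround - burst gives 0, 4, and 6 (average 3.33).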


Summary:
A good CPU scheduling algorithm aims to:
• Maximize CPU utilization and throughput.
• Minimize turnaround time, waiting time, and response time.
• Ensure fairness among all processes.

Q.2. Draw and explain the process state transition diagram.
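A text sketch of the diagram:

           admitted                dispatch                 exit
  New --------------> Ready ------------------> Running ----------> Terminated
                        ^   <--- (preempted) ---   |
                        |                          | I/O or event wait
                        |   I/O or event done      v
                        +----------------------- Waiting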

States Explanation:
1. New
o Process is being created.
o Not yet admitted to the system.
2. Ready
o Process is loaded into main memory.
o Waiting for CPU allocation.
o Ready to run but waiting in the ready queue.
3. Running
o Process is currently executing on the CPU.
4. Waiting (Blocked)
o Process is waiting for some event to occur (like I/O completion).
o Not eligible for CPU until the event occurs.
5. Terminated (Exit)
o Process has finished execution.
o Removed from the system.
Transitions:

Transition           | Description
---------------------|-----------------------------------------------
New → Ready          | Process admitted by OS scheduler.
Ready → Running      | Scheduler selects process for execution.
Running → Waiting    | Process requests I/O or waits for an event.
Waiting → Ready      | I/O or event completed; process ready again.
Running → Ready      | Process is preempted (time slice expired).
Running → Terminated | Process finishes execution or is terminated.

Summary:
• Processes cycle through states based on resource availability and execution progress.
• The scheduler manages transitions between Ready and Running.
• Waiting occurs when processes need I/O or other resources.
• Termination happens after completion.

Q. 3. What is the process control block? Explain all its components.


What is a Process Control Block?
• The Process Control Block (PCB) is a data structure used by the Operating System
to store all the information about a process.
• It acts as the process’s identity card and is crucial for process management and
context switching.
• Each process has a unique PCB.
Purpose of PCB:
• Keeps track of process state.
• Saves process context during interrupts or preemption.
• Helps in process scheduling and management.
Components of PCB:

Component                 | Description
--------------------------|------------------------------------------------------------------------
Process State             | Current state of the process (New, Ready, Running, Waiting, Terminated).
Process ID (PID)          | Unique identifier assigned to the process.
Program Counter (PC)      | Address of the next instruction to execute for the process.
CPU Registers             | Contents of all CPU registers (general-purpose, index, stack pointers) for context switching.
CPU Scheduling Info       | Information like priority, scheduling queue pointers, and other scheduling parameters.
Memory Management Info    | Details such as base and limit registers, page tables, segment tables, or memory limits.
Accounting Info           | CPU usage time, process execution time, time limits, job or process numbers for accounting.
I/O Status Info           | List of I/O devices allocated to the process, open files, and I/O requests.
Pointer to Parent Process | Reference to the parent process's PCB.
Process Privileges        | Security and access control information.

Summary:

Component       | Role in Process Management
----------------|----------------------------------------------------------------
Process State   | Tracks the lifecycle stage of the process.
Process ID      | Uniquely identifies the process.
Program Counter | Keeps track of the next instruction to execute.
CPU Registers   | Saves process context for resuming execution.
Scheduling Info | Helps the scheduler make decisions.
Memory Info     | Manages the process's allocated memory space.
Accounting Info | Records resource usage for billing and monitoring.
I/O Status      | Manages the process's interaction with I/O devices and files.
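
As an added illustration, a PCB can be pictured as a C struct (field names here are hypothetical, not taken from any particular kernel):

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* Process ID */
    proc_state_t   state;            /* Current process state */
    unsigned long  program_counter;  /* Address of next instruction */
    unsigned long  registers[16];    /* Saved CPU registers for context switch */
    int            priority;         /* CPU scheduling information */
    unsigned long  base, limit;      /* Memory-management registers */
    unsigned long  cpu_time_used;    /* Accounting information */
    int            open_files[16];   /* I/O status: open file descriptors */
    struct pcb    *parent;           /* Pointer to parent process's PCB */
} pcb_t;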

Q. 4. Explain the functioning of multilevel feedback queue scheduling.


Multilevel Feedback Queue Scheduling (MLFQ)
Multilevel Feedback Queue Scheduling is a CPU scheduling algorithm that:
• Uses multiple queues with different priority levels.
• Allows processes to move between queues based on their behavior and CPU burst characteristics.
• Adapts dynamically to the nature of processes to improve responsiveness and throughput.
Key Features:
• Multiple ready queues, each with a different priority.
• Higher priority queues have shorter time quantum (for example, Round Robin).
• Processes start in the highest priority queue.
• If a process uses up its time slice, it is moved to a lower priority queue.
• If a process waits too long in a lower priority queue, it can be promoted to a
higher priority queue (to prevent starvation).
• This feedback mechanism helps balance between CPU-bound and I/O-bound
processes.
How it Works:
1. Process Arrival:
A new process enters the highest priority queue (Q0).
2. Scheduling and Execution:
o The CPU scheduler picks processes from the highest priority non-empty
queue.
o Uses Round Robin scheduling within each queue with a time quantum
appropriate to the queue.
3. Demotion:
o If a process does not finish within its time quantum in a queue, it is moved
down to the next lower priority queue.
o This penalizes CPU-bound processes that use long CPU bursts.
4. Promotion:
o If a process waits too long in a lower priority queue (starvation risk), it is
promoted to a higher priority queue.
Example:
• Suppose 3 queues: Q0 (highest priority), Q1, Q2 (lowest priority).
• Q0 time quantum = 8 ms, Q1 = 16 ms, Q2 = FCFS (First-Come-First-Serve).
• New processes enter Q0.
• If process uses full 8 ms in Q0, it moves to Q1.
• If it uses full 16 ms in Q1, it moves to Q2.
• If a process waits too long in Q2, it may be promoted to Q1 or Q0.
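
A tiny added sketch of the demotion and promotion rules from this example (the dispatch loop and queue data structures are omitted):

#define NUM_QUEUES 3
static const int quantum_ms[NUM_QUEUES] = {8, 16, 0};  /* 0 => FCFS */

/* Called when a process uses its full time slice in queue q without finishing. */
int demote(int q) {
    return (q < NUM_QUEUES - 1) ? q + 1 : q;   /* move down one level */
}

/* Called when a process has waited too long in queue q (aging). */
int promote(int q) {
    return (q > 0) ? q - 1 : q;                /* move up one level */
}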


Advantages:
• Differentiates between CPU-bound and I/O-bound processes.
• Good for interactive systems.
• Prevents starvation with promotion.
• Flexible and adaptive scheduling.
Disadvantages:
• Complex to implement.
• Requires careful tuning of parameters like time quantum and promotion/demotion
rules.
• Overhead of managing multiple queues.
Summary Table:

Feature               | Description
----------------------|----------------------------------------------------------------
Multiple Queues       | Several ready queues with different priority and time quantum.
Feedback              | Processes move between queues based on CPU usage.
Priority Scheduling   | Higher priority queues are checked first.
Starvation Prevention | Aging or promotion to prevent starvation.

Q.5. What is deadlock? What are the necessary conditions for deadlock?
What is Deadlock?
A deadlock is a situation in a multiprogramming environment where a set of processes are
blocked indefinitely, each waiting for a resource that another process in the set holds.
Because each process waits for a resource held by another, none can proceed — the system is
stuck.
Characteristics of Deadlock:
• Processes cannot continue because they are waiting for resources.
• None of the processes can release resources because they are waiting.
• The system halts or slows down significantly if deadlock occurs.
Necessary Conditions for Deadlock (The Coffman Conditions):

For a deadlock to occur, all four conditions must hold simultaneously:

Condition        | Explanation
-----------------|--------------------------------------------------------------------------
Mutual Exclusion | At least one resource must be held in a non-shareable mode (only one process can use it at a time).
Hold and Wait    | A process is holding at least one resource and waiting to acquire additional resources held by other processes.
No Preemption    | Resources cannot be forcibly taken away from a process; they must be released voluntarily.
Circular Wait    | There exists a set of processes {P1, P2, ..., Pn} such that P1 is waiting for a resource held by P2, P2 is waiting for P3, ..., and Pn is waiting for P1, forming a circular chain.

Summary:

Deadlock             | Processes are stuck waiting indefinitely for resources.
Necessary Conditions | 1) Mutual Exclusion, 2) Hold and Wait, 3) No Preemption, 4) Circular Wait

Q.6. Describe Banker's algorithm for safe allocation.


Banker's Algorithm for Safe Resource Allocation
• Banker's Algorithm is a deadlock avoidance algorithm proposed by Edsger Dijkstra.
• It is used to allocate resources safely to processes by ensuring the system never
enters an unsafe (potential deadlock) state.
• The system pretends to be a banker who only grants loans (resources) if they can be
safely paid back (process can finish).
Key Idea:
• Before allocating resources to a process, the algorithm checks if the system will
remain in a safe state after allocation.
• A safe state means there exists a sequence of processes such that each process can get
the resources it needs, finish, and release resources for the next process.
• If the allocation leads to an unsafe state, the request is denied, and the process waits.
Terminology & Data Structures:

Term       | Description
-----------|-----------------------------------------------------------------------
Available  | Vector indicating the number of available resources of each type.
Max        | Matrix defining the maximum demand of each process for each resource.
Allocation | Matrix showing current resource allocation to each process.
Need       | Matrix showing remaining resource needs of each process (Need = Max - Allocation).

Algorithm Steps:
1. When a process requests resources:
o Check if the request ≤ Need (process cannot ask more than max declared).
o Check if the request ≤ Available (enough resources are free).
o If both true, pretend to allocate requested resources:
▪ Available = Available - Request
▪ Allocation = Allocation + Request
▪ Need = Need - Request
2. Check system safety:
o Initialize Work = Available, Finish[i] = false for all processes.
o Find a process i such that:
▪ Finish[i] == false and Need[i] ≤ Work.
o If found, set:
▪ Work = Work + Allocation[i]
▪ Finish[i] = true
o Repeat until no such process found.
3. If all Finish[i] == true, the system is safe and allocation is allowed.
4. Otherwise, roll back the tentative allocation and make the process wait.
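
A compact sketch of the safety check in C (added for illustration; N and M are example sizes, and the request-handling wrapper around it is omitted):

#include <stdbool.h>
#include <string.h>

#define N 5   /* number of processes (illustrative) */
#define M 3   /* number of resource types (illustrative) */

bool is_safe(int available[M], int need[N][M], int allocation[N][M]) {
    int  work[M];
    bool finish[N] = {false};
    memcpy(work, available, sizeof(work));       /* Work = Available */

    int done = 0;
    while (done < N) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                       /* process i can finish... */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j]; /* ...and release resources */
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;           /* no runnable process: unsafe */
    }
    return true;                                 /* a safe sequence exists */
}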
Summary:

Step                   | Purpose
-----------------------|--------------------------------------------------
Check request validity | Request ≤ Need and Request ≤ Available
Tentative allocation   | Simulate the resource allocation
Safety check           | Verify the system can still finish all processes
Grant or deny          | Allocate if safe; otherwise, deny and wait

Benefits:
• Avoids deadlock by never entering unsafe states.
• Dynamically checks safety at each allocation.


Unit-4
Q.1. Differentiate between fixed partitioning and variable partitioning.
A comparison between Fixed Partitioning and Variable Partitioning in memory management:

Aspect               | Fixed Partitioning                                       | Variable Partitioning
---------------------|----------------------------------------------------------|----------------------------------------------------------
Definition           | Memory is divided into fixed-size partitions at system startup. | Memory is divided dynamically into partitions based on process size.
Partition Size       | Partitions have fixed, predefined sizes.                 | Partition sizes vary according to process requirements.
Number of Partitions | Fixed number of partitions.                              | Number of partitions varies as processes come and go.
Memory Utilization   | Can cause internal fragmentation due to fixed size.      | Can cause external fragmentation due to dynamic allocation.
Process Allocation   | Process must fit into a fixed partition.                 | Process fits exactly into a partition sized for it.
Flexibility          | Less flexible; not efficient for varying process sizes.  | More flexible; partitions adapt to process sizes.
Overhead             | Less overhead as partitions are fixed.                   | More overhead managing dynamic partitions.
Example Use          | Simple systems or early OS memory management.            | Modern systems with dynamic memory allocation.

Summary:
• Fixed Partitioning is simple but inefficient due to wasted space inside fixed
partitions.
• Variable Partitioning is more memory-efficient but suffers from fragmentation and
requires more complex management.

Q.2. What do you mean by paging? When do page faults occur? Describe the actions taken
by the operating system when a page fault occurs.
Paging
Paging is a memory management technique that eliminates the need for contiguous allocation
of physical memory. It:
• Divides the process's logical memory into fixed-size blocks called pages.


• Divides physical memory into blocks of the same size called frames.
• Maps pages to any available frames in physical memory.
• Allows efficient and flexible use of memory and prevents external fragmentation.
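
For example, with a 4 KB (4096-byte) page size, logical address 8196 falls on page 8196 / 4096 = 2 at offset 8196 mod 4096 = 4; if the page table maps page 2 to, say, frame 7, the physical address is 7 × 4096 + 4 = 28676.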
What is a Page Fault?
• A page fault occurs when a process tries to access a page that is not currently
loaded in physical memory (RAM).
• This means the page is either on secondary storage (like a hard disk) or has never
been loaded yet.
Actions Taken by the Operating System on a Page Fault:
1. Interrupt Generation:
o The hardware detects the page fault and generates a page fault interrupt to the
OS.
2. OS Checks Validity:
o The OS checks if the memory access is valid:
▪ If invalid (e.g., illegal memory access), the process is terminated.
▪ If valid, proceed to next steps.
3. Find a Free Frame:
o The OS searches for a free frame in physical memory.
o If no free frame is available, it selects a victim frame to evict (page
replacement).
4. Load the Page from Disk:
o The required page is read from secondary storage (disk) into the free or victim
frame.
5. Update Page Table:
o The process’s page table is updated to indicate the page is now in memory and
its frame location.
6. Restart the Process:
o The instruction that caused the page fault is restarted now that the page is in
memory.
Summary:

Concept    | Description
-----------|--------------------------------------------------------------------------
Paging     | Divides memory into fixed-size pages & frames.
Page Fault | Occurs when accessing a page not in physical memory.
OS Actions | Handle interrupt → validate → load page → update tables → restart process.

Q.3. Discuss the paged segmentation scheme of memory management and explain how
logical address is translated to physical address in such a scheme.
Paged Segmentation Scheme of Memory Management
Paged segmentation is a memory management scheme that combines the advantages of
both paging and segmentation:
• Memory is divided into segments based on logical divisions of a program (like code,
stack, data segments).
• Each segment is further divided into pages.
• Pages are mapped to physical memory frames.
• This scheme provides logical grouping of information (segmentation) and efficient
memory use & protection (paging).
Key Features:
• The logical address consists of:
o Segment number (segment selector)
o Page number within the segment
o Offset within the page
• Segmentation provides a way to divide the program into meaningful units.
• Paging breaks each segment into fixed-size pages to avoid external fragmentation.
• The combination enables flexible and efficient memory management.
Logical Address Structure:

Part           | Description
---------------|-------------------------------------------
Segment Number | Identifies which segment is accessed
Page Number    | Specifies the page within the segment
Offset         | Specifies the exact byte within the page

Address Translation Steps:


1. Segment Table Lookup:


o Use the segment number to index the Segment Table.


o Obtain base address (starting frame) and segment limit (segment size).
2. Check Segment Limit:
o Verify that the requested page number and offset are within segment limits.
3. Page Table Lookup:
o Using the segment base (which points to the page table of that segment), use
the page number to index the Page Table.
o Get the frame number corresponding to the page.
4. Calculate Physical Address:
o Physical Address = (Frame Number × Frame Size) + Offset

Summary of Logical to Physical Address Translation:

Step                | Description
--------------------|-----------------------------------------------
1. Segment Number   | Index segment table to get segment info
2. Validate         | Check if address is within segment limits
3. Page Number      | Index page table of the segment
4. Frame Number     | Get frame number for the page
5. Offset           | Add offset within page to frame address
6. Physical Address | Final physical memory address

Example:
• Suppose:
o Segment Number = 2
o Page Number = 5
o Offset = 100 (bytes)
• Segment Table entry for segment 2 points to page table at base address X.
• Page Table at base X for page 5 contains frame number 10.
• If frame size = 4 KB (4096 bytes), then physical address = (10 × 4096) + 100 = 40960
+ 100 = 41060.
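
A C sketch of this translation (added for illustration; structures and sizes mirror the example above):

#include <stdint.h>
#include <stdbool.h>

#define FRAME_SIZE 4096

typedef struct {
    uint32_t *page_table;   /* page table for this segment */
    uint32_t  limit;        /* number of pages in the segment */
} segment_entry_t;

/* Translate (segment, page, offset) to a physical address.
   Returns false on a limit violation (the OS would raise a trap). */
bool translate(const segment_entry_t *seg_table, uint32_t seg,
               uint32_t page, uint32_t offset, uint32_t *phys) {
    const segment_entry_t *s = &seg_table[seg];
    if (page >= s->limit || offset >= FRAME_SIZE)
        return false;                       /* outside the segment */
    uint32_t frame = s->page_table[page];   /* page-table lookup */
    *phys = frame * FRAME_SIZE + offset;    /* frame base + offset */
    return true;
}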

Q.4. Write the difference between paging and segmentation.


A comparison between Paging and Segmentation:

Aspect                   | Paging                                               | Segmentation
-------------------------|------------------------------------------------------|----------------------------------------------------------
Definition               | Divides memory into fixed-size blocks called pages.  | Divides memory into variable-size logical segments.
Memory Division          | Fixed-size pages (usually equal size).               | Variable-sized segments (logical units like code, data, stack).
Address Structure        | Logical address = Page Number + Page Offset.         | Logical address = Segment Number + Offset within segment.
Purpose                  | Eliminates external fragmentation; manages physical memory efficiently. | Reflects the logical structure of the program.
Fragmentation            | Can cause internal fragmentation (due to fixed-size pages). | Can cause external fragmentation (due to variable segment sizes).
Visibility to Programmer | Transparent to the programmer (memory divided into pages). | Visible to the programmer (segments represent logical units).
Protection & Sharing     | Implemented per page.                                | Implemented per segment; easier to share and protect logical units.
Mapping                  | Page table maps pages to frames.                     | Segment table maps segments to physical memory locations.
Ease of Growth           | Difficult to grow logically related data as it is fragmented across pages. | Easy to grow or shrink segments independently.
Use Case                 | Efficient memory management in general-purpose OS.   | Useful for programs with well-defined logical divisions.

Q5. Discuss page replacement algorithms with suitable examples.


Page Replacement Algorithms
When a page fault occurs and there is no free frame available in physical memory, the
operating system must replace (evict) one of the existing pages in memory to make space for
the new page. The strategy used to decide which page to replace is called a page
replacement algorithm.
Common Page Replacement Algorithms:


1. FIFO (First-In-First-Out) Algorithm


• Idea: Replace the page that has been in memory the longest (the oldest page).
• Implementation: Use a queue to track the order pages are loaded.
• Example:
o Page reference string: 7, 0, 1, 2, 0, 3, 0, 4
o Frames: 3
o Pages loaded in order and oldest replaced first.
• Pros: Simple to implement.
• Cons: May replace pages that are heavily used (Belady’s anomaly).

2. LRU (Least Recently Used) Algorithm


• Idea: Replace the page that has not been used for the longest time.
• Implementation: Track page usage over time (using timestamps or stack).
• Example:
o Same reference string, frames: 3
o Replace page that was used least recently.
• Pros: Better than FIFO; approximates optimal replacement.
• Cons: Higher overhead to track usage.
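
A compact way to track recency is to move a page to the most-recently-used end of an ordered structure on every reference. A minimal simulation sketch using Python's OrderedDict (frame count and reference string are the figures used above):

from collections import OrderedDict

def lru_faults(refs, frames):
    memory = OrderedDict()   # keys are pages, ordered oldest -> newest use
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)          # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict the least recently used page
            memory[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))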

3. Optimal Page Replacement


• Idea: Replace the page that will not be used for the longest time in the future.
• Implementation: Requires future knowledge of references (impractical but
theoretical benchmark).
• Example:
o Given the reference string, replace the page that is needed farthest in future.
• Pros: Provides the best possible performance.
• Cons: Not implementable in real systems.
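
Because the whole reference string is known in a simulation, optimal replacement is easy to compute offline. A minimal sketch that evicts the resident page whose next use lies farthest in the future (or never occurs):

def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            # evict the resident page whose next use is farthest away (or never)
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            memory.remove(max(memory, key=next_use))
            memory.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))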

4. Clock (Second Chance) Algorithm


• Idea: A practical approximation of LRU.
• Pages are arranged in a circular queue with a use bit.


• On replacement, check the use bit:


o If 0, replace the page.
o If 1, clear the bit and move on.
• Pros: Efficient and easy to implement.
• Cons: Approximate, not always optimal.
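
A minimal sketch of the clock hand sweeping a circular array of frames, assuming one use bit per frame:

def clock_faults(refs, frames):
    slots = [None] * frames      # circular buffer of resident pages
    use = [0] * frames           # one use (reference) bit per frame
    hand, faults = 0, 0
    for page in refs:
        if page in slots:
            use[slots.index(page)] = 1      # hit: set the use bit
            continue
        faults += 1
        while use[hand] == 1:               # second chance: clear the bit, move on
            use[hand] = 0
            hand = (hand + 1) % frames
        slots[hand] = page                  # use bit was 0: replace this page
        use[hand] = 1
        hand = (hand + 1) % frames
    return faults

print(clock_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))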
Summary Table
Algorithm   Replacement Criteria                      Pros                            Cons
FIFO        Oldest page replaced                      Simple                          Belady’s anomaly
LRU         Least recently used page replaced         Good approximation of optimal   Complex to implement
Optimal     Page not used for longest future time     Best performance                Requires future knowledge
Clock       Second chance based on use bit            Efficient and practical         Approximate

Example of FIFO with 3 frames:


Reference string: 7, 0, 1, 2, 0, 3, 0, 4
Reference   Pages in Frames   Page Fault?
7           7                 Yes
0           7, 0              Yes
1           7, 0, 1           Yes
2           0, 1, 2           Yes (7 replaced)
0           0, 1, 2           No
3           1, 2, 3           Yes (0 replaced)
0           2, 3, 0           Yes (1 replaced)
4           3, 0, 4           Yes (2 replaced)
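
This trace can be reproduced with a short simulation. A minimal FIFO sketch using a queue; it reports 7 page faults, matching the table:

from collections import deque

def fifo_trace(refs, frames):
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()        # evict the oldest resident page
            memory.append(page)
        print(page, list(memory))      # one line per reference, as in the table
    return faults

print("faults:", fifo_trace([7, 0, 1, 2, 0, 3, 0, 4], 3))  # -> 7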

Q6. What is thrashing? What is the cause of thrashing? How does the system detect
thrashing? What can the system do to eliminate this problem?
What is Thrashing?
• Thrashing occurs when a system spends more time swapping pages in and out of
memory than executing actual processes.


• It happens due to excessive paging activity, severely degrading system performance.


• The CPU utilization drops because it’s overwhelmed handling page faults instead of
running processes.
Cause of Thrashing:
• Occurs when the degree of multiprogramming (number of processes in memory) is
too high.
• Each process does not have enough frames allocated, causing frequent page faults.
• This leads to continuous page swapping between RAM and disk.
• Typically caused by:
o Poor memory allocation.
o High workload with many active processes.
o Large working sets that don’t fit into available memory.
How Does the System Detect Thrashing?
• High page fault rate: System observes an unusually high frequency of page faults.
• Low CPU utilization with high disk activity: CPU idle time increases while disk
usage (due to paging) spikes.
• Process progress slows down: Processes take much longer to execute or make
minimal progress.
How to Eliminate Thrashing?
1. Reduce Degree of Multiprogramming:
o Swap out or suspend some processes to reduce memory contention.
2. Increase Physical Memory:
o Adding more RAM reduces page faults by fitting more working sets in
memory.
3. Working Set Model:
o Monitor each process's working set (the set of pages actively used).
o Allocate enough frames to hold the working set, avoiding frequent page faults.
4. Page Fault Frequency (PFF) Control:
o Adjust the number of frames allocated to a process based on its page fault rate.
o Increase frames if page faults are high, decrease if low (a small sketch of this policy follows the list).
5. Locality Management:
o Schedule processes so their active pages do not compete heavily.
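
A minimal sketch of the PFF policy from point 4, assuming hypothetical upper and lower fault-rate thresholds (in faults per second) measured per process:

# Hypothetical thresholds: fault rates are faults per second, per process
UPPER, LOWER = 20.0, 5.0

def adjust_frames(allocated, fault_rate, free_frames):
    """PFF control: grow the allocation when faulting too much, shrink when idle."""
    if fault_rate > UPPER and free_frames > 0:
        return allocated + 1          # give the process one more frame
    if fault_rate < LOWER and allocated > 1:
        return allocated - 1          # reclaim a frame for other processes
    return allocated                  # within the acceptable band: no change

print(adjust_frames(allocated=8, fault_rate=35.0, free_frames=4))  # -> 9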


Summary
Topic       Explanation
Thrashing   Excessive paging, degrading performance
Cause       Too many processes, insufficient memory per process
Detection   High page fault rate, low CPU utilization
Solution    Reduce multiprogramming, increase memory, working set & PFF models

Unit-5
Q. 1. What is buffer in devices? What are the types of I/O buffering schemes?
What is a Buffer in Devices?
A buffer is a temporary storage area (usually in memory) used to hold data while it is
being transferred between two devices or between a device and an application. Buffers help
accommodate differences in speed between the producer and consumer of data, ensuring
smooth and efficient I/O operations.
For example, when reading data from a slow device like a disk, the data is first stored in a
buffer before being processed, preventing the CPU from waiting idly.
Types of I/O Buffering Schemes
There are mainly three types of buffering schemes used in operating systems:

1. Single Buffering
• Uses one buffer between the I/O device and the process.
• Data is transferred into the buffer, then the CPU processes it.
• Drawback: While the CPU processes data, the device is idle waiting for the buffer to
be free.

2. Double Buffering
• Uses two buffers.
• While the CPU processes data from one buffer, the I/O device fills the other buffer.
• This allows overlapping of I/O and processing, improving efficiency.
• Once processing on the first buffer is done, the CPU switches to the second buffer,
and the cycle continues.


3. Circular Buffering (or Multiple Buffering)


• Uses multiple buffers arranged in a circular queue.
• The I/O device and CPU operate on different buffers concurrently.
• Provides better performance when data transfer and processing rates vary.
• Useful in streaming data or real-time processing scenarios.
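
A minimal ring-buffer sketch, assuming a fixed number of equal-sized slots shared by one producer (the device) and one consumer (the CPU); a real implementation would block or use locks instead of raising errors:

class RingBuffer:
    """Fixed-size circular buffer: producer fills slots, consumer drains them."""
    def __init__(self, slots):
        self.data = [None] * slots
        self.head = 0      # next slot to read (consumer side)
        self.tail = 0      # next slot to write (producer side)
        self.count = 0

    def put(self, item):
        if self.count == len(self.data):
            raise BufferError("all buffers full: producer must wait")
        self.data[self.tail] = item
        self.tail = (self.tail + 1) % len(self.data)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("all buffers empty: consumer must wait")
        item = self.data[self.head]
        self.head = (self.head + 1) % len(self.data)
        self.count -= 1
        return item

buf = RingBuffer(4)
buf.put("block-0"); buf.put("block-1")
print(buf.get(), buf.get())   # consumed in FIFO order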
Summary Table
Buffering Type       Buffers    Key Idea                                    Advantage                     Disadvantage
Single Buffering     1          One buffer shared for I/O and CPU           Simple implementation         Device or CPU often waits
Double Buffering     2          Two buffers alternate between I/O and CPU   Overlaps I/O and processing   More memory needed
Circular Buffering   Multiple   Multiple buffers in a circular queue        Maximizes throughput          More complex management

Q.2. Discuss disk scheduling algorithms with example.


Disk Scheduling Algorithms

When multiple I/O requests are made to a disk, the disk scheduling algorithm decides the
order in which these requests are serviced to optimize performance, mainly to reduce seek
time (time to move the disk arm).
Common Disk Scheduling Algorithms:

1. FCFS (First-Come, First-Served)


• Requests are processed in the order they arrive.
• Simple but inefficient as it can cause long seek times.
Example:
Request queue: 98, 183, 37, 122, 14, 124, 65, 67
Starting head at 53
Seek order: 53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67

2. SSTF (Shortest Seek Time First)


• Selects the request closest to the current head position.


• Minimizes immediate seek time but can cause starvation.


Example:
Current head at 53, requests: 98, 183, 37, 122, 14, 124, 65, 67
Closest to 53 is 37, then 14, then 65, etc.

3. SCAN (Elevator Algorithm)


• The head moves in one direction, servicing all requests until it reaches the end, then
reverses direction.
• Provides a more uniform wait time than SSTF.
Example:
Head at 53 moving towards 0: services requests in descending order (37, 14), then reverses
and services ascending requests (65, 67, 98, 122, 124, 183).

4. C-SCAN (Circular SCAN)


• Like SCAN but after reaching one end, the head immediately returns to the other end
without servicing requests on the return.
• Provides more uniform wait times.

5. LOOK and C-LOOK


• Variations of SCAN and C-SCAN where the head only goes as far as the last request
in each direction before reversing or jumping.
Example of SSTF:
• Current head at 50
• Requests: 82, 170, 43, 140, 24, 16, 190
• Order of service:
50 → 43 → 24 → 16 → 82 → 140 → 170 → 190
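
Total head movement for such schedules is easy to verify in code. A small sketch computing the SSTF service order and the FCFS seek total for this example (head at 50):

def fcfs_movement(head, requests):
    total = 0
    for r in requests:
        total += abs(r - head)   # seek distance to the next request in arrival order
        head = r
    return total

def sstf_order(head, requests):
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))  # shortest seek first
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

reqs = [82, 170, 43, 140, 24, 16, 190]
print(sstf_order(50, reqs))     # -> [43, 24, 16, 82, 140, 170, 190], as above
print(fcfs_movement(50, reqs))  # -> 642 cylinders moved under FCFS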

Summary Table:
Algorithm   Description                                  Pros                         Cons
FCFS        Process in arrival order                     Simple                       Long average seek time
SSTF        Process nearest request                      Efficient seek times         May cause starvation
SCAN        Move head in one direction, then reverse     Fairer than SSTF             Higher overhead than SSTF
C-SCAN      Circular SCAN; jump back to the start end    Uniform wait times           May waste some head movement
LOOK        SCAN variant stopping at last request        Saves unnecessary movement   Slightly complex
C-LOOK      C-SCAN variant stopping at last request      Efficient and fair           Slightly complex

Q. 3. Discuss free space management in disk.


Free Space Management in Disk
Free space management is a crucial part of file system design. It keeps track of the free
(unused) blocks on the disk so that when new files are created or existing files grow, the
system can allocate space efficiently.
Goals of Free Space Management
• Efficiently find free disk blocks for allocation.
• Quickly update the free space information when blocks are allocated or freed.
• Minimize overhead and fragmentation.

Common Methods for Free Space Management:

1. Bit Vector (Bitmap)


• Uses a bit array where each bit represents a disk block.
• 0 means free, 1 means allocated.
• To find free blocks, scan the bitmap for 0 bits.
• Easy to implement and efficient for large disks.
• Overhead: One bit per block (e.g., a 1 MB disk with 1 KB blocks has 1,024 blocks, requiring 1,024 bits = 128 bytes).

2. Linked List of Free Blocks


• Free blocks are linked together using pointers stored in the free blocks themselves.
• The file system maintains a pointer to the first free block.
• When a block is allocated, the pointer moves to the next free block.


• Simple to implement but slow to search for multiple free blocks.

3. Grouping
• An enhancement of linked list.
• The first free block contains addresses of several free blocks.
• When these are used up, the system moves to the next free block which contains
addresses of more free blocks.
• Reduces the overhead of traversing one block at a time.

4. Counting
• Keeps track of the number of free blocks following a particular block.
• Instead of storing pointers for all free blocks, store the starting block and the count of
continuous free blocks.
• Useful for disks with many contiguous free blocks.

Summary Table
Method        Description                              Advantages                       Disadvantages
Bit Vector    Bitmap to track free/allocated blocks    Fast, simple for large disks     Requires scanning; bitmap size
Linked List   Free blocks linked together              Easy to implement                Slow to find multiple blocks
Grouping      Linked list with groups of blocks        Reduces overhead                 More complex than simple list
Counting      Stores count of contiguous free blocks   Efficient for contiguous space   Less effective with fragmentation

Example: Bit Vector


• Disk blocks: 8
• Bit vector: 0 1 0 0 1 1 0 0
• Blocks 0, 2, 3, 6, 7 are free; blocks 1, 4, 5 are allocated.
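
Allocation against this bitmap is a linear scan for 0 bits. A minimal sketch using the same convention (0 = free, 1 = allocated) and the 8-block example above:

bitmap = [0, 1, 0, 0, 1, 1, 0, 0]   # the 8-block example above

def find_free(bitmap, n):
    """Return indices of the first n free blocks, scanning for 0 bits."""
    free = [i for i, bit in enumerate(bitmap) if bit == 0]
    if len(free) < n:
        raise OSError("not enough free blocks")
    return free[:n]

blocks = find_free(bitmap, 3)       # -> [0, 2, 3]
for b in blocks:
    bitmap[b] = 1                   # mark the blocks as allocated
print(blocks, bitmap)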

Q. 4. Explain the term RAID. Explain various levels of RAID.


What is RAID?
RAID stands for Redundant Array of Independent (or Inexpensive) Disks.
It is a data storage technology that combines multiple physical disk drives into one logical
unit to improve performance, fault tolerance, and/or capacity.
RAID uses techniques like data striping, mirroring, and parity to provide redundancy and
improve speed.
Goals of RAID
• Increase reliability: By replicating or distributing data.
• Improve performance: By reading/writing data in parallel.
• Increase storage capacity: By combining multiple disks.
Common RAID Levels and Their Characteristics
RAID Level   Description                          Data Storage                      Fault Tolerance                     Performance
RAID 0       Striping without redundancy          Data split across disks           No                                  High (improved read/write)
RAID 1       Mirroring                            Exact copy on two or more disks   Yes (full redundancy)               Read improved; write slower (mirroring)
RAID 5       Striping with distributed parity     Data + parity across disks        Yes (survives 1 disk failure)       Good read; moderate write (parity overhead)
RAID 6       Striping with double parity          Data + two parity blocks          Yes (survives 2 disk failures)      Slightly slower write than RAID 5
RAID 10      RAID 1 + RAID 0 (mirrored stripes)   Mirrored sets of striped disks    High (multiple failures possible)   Very high read/write

Explanation of Each Level

RAID 0 (Striping)
• Data is split evenly across all disks (striped).
• No redundancy; if one disk fails, data is lost.
• Improves performance because multiple disks can be accessed in parallel.


RAID 1 (Mirroring)
• Data is duplicated exactly on two or more disks.
• Provides fault tolerance: if one disk fails, data is available on the other.
• Write speed can be slower; read speed can improve (reading from any mirror).

RAID 5 (Striping with Parity)


• Data and parity information are striped across all disks.
• Parity allows reconstruction of data if one disk fails.
• Efficient use of disk space compared to mirroring.
• Writes are slower due to parity calculation.
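
The distributed parity is a byte-wise XOR across the data blocks of a stripe, which is what makes single-disk reconstruction possible. A small sketch with three illustrative data blocks:

from functools import reduce

def xor_blocks(*blocks):
    """Byte-wise XOR across any number of equal-length blocks."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

d0, d1, d2 = b"\x0f\x10", b"\xf0\x02", b"\x33\x04"
parity = xor_blocks(d0, d1, d2)          # parity block written to one disk

# If the disk holding d1 fails, XOR of the survivors rebuilds it:
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
print(rebuilt.hex())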

RAID 6 (Striping with Double Parity)


• Similar to RAID 5 but with two parity blocks.
• Can tolerate failure of two disks.
• Used in systems needing high fault tolerance.

RAID 10 (1+0)
• Combines mirroring and striping.
• Data is striped across mirrored pairs.
• Offers high performance and fault tolerance but requires at least 4 disks.
Summary Table
RAID Level   Minimum Disks   Fault Tolerance          Capacity Utilization   Performance
RAID 0       2               None                     100%                   High read/write
RAID 1       2               1 disk failure           50% (mirroring)        Improved read, slower write
RAID 5       3               1 disk failure           (N-1)/N                Good read, slower write
RAID 6       4               2 disk failures          (N-2)/N                Slightly slower write
RAID 10      4               Multiple disk failures   50%                    Very high read/write

Q.5. Explain file organization and access mechanism.


1. File Organization
File organization refers to how data is stored and arranged on secondary storage (like disks)
to facilitate efficient access and management.

Common Types of File Organization:


a) Sequential File Organization
• Records stored one after another in a sequence.
• Suitable for batch processing where most operations read all records.
• Advantage: Simple and efficient for sequential access.
• Disadvantage: Inefficient for random access or searching specific records.

b) Indexed File Organization


• An index is created containing pointers to the actual data records.
• Allows both sequential and random access.
• The index is usually a smaller, faster-to-access structure.
• Suitable for databases and applications requiring fast lookup.

c) Direct (or Hashed) File Organization


• Records stored at locations computed using a hash function on a key field.
• Provides fast random access.
• Collisions may occur and need to be handled (e.g., chaining).
• Good for applications needing quick retrieval by key.

2. File Access Mechanisms


Access mechanisms define how files are accessed (read/write):

a) Sequential Access


• Data is accessed in order, one record after another.


• Example: Reading a text file from start to end.
• Simple but slow for direct access.

b) Direct (Random) Access


• Data can be read or written in any order.
• Uses an addressing scheme to locate a record directly.
• Example: Database systems or files accessed via indexes.

c) Indexed Access
• Combines sequential and direct access by using an index.
• Allows efficient searching and updating.

Summary Table
File Organization   Description                          Access Type         Advantages                        Disadvantages
Sequential          Records stored one after another     Sequential          Simple, efficient for bulk read   Slow for random access
Indexed             Index points to records              Sequential/Direct   Fast lookup, flexible             Overhead of maintaining index
Direct (Hashed)     Hash function maps key to location   Direct (Random)     Very fast access by key           Collisions and complexity

Examples illustrating file organization and access mechanisms:

1. Sequential File Organization Example


Imagine a file storing student records sequentially by their roll numbers:

Roll No   Name      Marks
101       Alice     85
102       Bob       90
103       Charlie   78

• Accessing all students from 101 to 103 is easy and efficient (sequential access).
• To find student 103, the system reads 101, then 102, then 103 — slower if the file is
large.

2. Indexed File Organization Example


Suppose the above student records have an index like:

Roll No   Address (Block No.)
101       Block 5
102       Block 8
103       Block 3

• To find student 103, the system looks up the index, finds "Block 3," and reads only
that block.
• Supports fast direct access and also allows sequential reading by scanning the index.

3. Direct (Hashed) File Organization Example


Let’s say you use a hash function to determine where to store a record based on roll number:
Hash function: h(roll_no) = roll_no % 10
• For roll no 101: h(101) = 1 → store in bucket 1
• For roll no 102: h(102) = 2 → store in bucket 2
• For roll no 103: h(103) = 3 → store in bucket 3
To access roll no 102, directly compute the hash, go to bucket 2, and fetch the record.
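
A minimal sketch of this hashed organization, using chaining within each bucket to handle collisions (the bucket count of 10 matches the hash function above; the extra record 112 is added only to show a collision):

class HashedFile:
    def __init__(self, buckets=10):
        self.buckets = [[] for _ in range(buckets)]   # each bucket holds a chain

    def insert(self, roll_no, record):
        self.buckets[roll_no % 10].append((roll_no, record))

    def lookup(self, roll_no):
        for key, record in self.buckets[roll_no % 10]:   # walk the chain
            if key == roll_no:
                return record
        return None

f = HashedFile()
f.insert(101, "Alice"); f.insert(102, "Bob"); f.insert(103, "Charlie")
f.insert(112, "Dave")              # 112 % 10 = 2: collides with 102, chained
print(f.lookup(102), f.lookup(112))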

4. Access Mechanisms Examples


• Sequential Access: Reading a text file line by line from start to end.
• Direct Access: Accessing a student record in a database by roll number instantly
using an index or hash.
• Indexed Access: Looking up a word in a dictionary by first finding the page number
from the index, then going directly to that page.


Q.6. What is a directory? Define any two ways to implement the directory.
Directory
A directory in an operating system is a special file that contains information about other files
and directories. It acts like a folder that organizes files in a hierarchical structure, helping
users and the system keep track of files by storing metadata such as file names, locations, and
attributes.

Two Ways to Implement a Directory:

1. Single-Level Directory
• All files are contained in a single directory.
• Simple structure: just one directory containing all files.
• Advantage: Easy to implement.
• Disadvantage: Not suitable for many users or files because:
o File names must be unique across the system.
o No organization—files can be hard to find.
Example:
Directory:
file1.txt
file2.doc
photo.jpg

2. Hierarchical (Tree-Structured) Directory


• Directory contains files and subdirectories.
• Organizes files in a tree structure with multiple levels.
• Allows grouping of related files and directories.
• Each user or application can have its own directory.
• Makes file management and access efficient.
Example:

Root Directory
├── Documents
│ ├── file1.txt
│ └── file2.doc
├── Pictures
│ └── photo.jpg
└── Music
└── song.mp3

Q.7. What are the different methods of allocating disk space?


Different Methods of Allocating Disk Space
Disk space allocation refers to how the operating system assigns storage space to files on the
disk. Efficient allocation affects performance, ease of access, and disk utilization.
1. Contiguous Allocation
• Each file occupies a set of contiguous blocks on the disk.
• The file is stored in one continuous area.
Advantages:
• Simple to implement.
• Fast sequential access since all blocks are together.
Disadvantages:
• Causes external fragmentation (free space scattered).
• Difficult to grow files because contiguous space may not be available.
2. Linked Allocation
• Each file is a linked list of disk blocks.
• Each block contains a pointer to the next block.
• Blocks can be scattered anywhere on the disk.
Advantages:
• No external fragmentation.
• Files can easily grow by adding blocks anywhere.


Disadvantages:
• Sequential access only (random access is slow).
• Overhead of storing pointers reduces usable space.
• If a pointer is lost, the whole file is affected.
3. Indexed Allocation
• Uses an index block that contains pointers to all file blocks.
• The index block acts like a table of contents for the file.
Advantages:
• Supports direct (random) access efficiently.
• No external fragmentation.
• File can grow by allocating more blocks and updating the index.
Disadvantages:
• Overhead of maintaining the index block.
• For large files, multiple levels of indexing may be needed (multi-level index).
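
A minimal sketch of indexed allocation, modelling the disk as a plain list of blocks: the index block stores the file's block pointers, so block i of the file is reached with one index lookup plus one read (all names here are illustrative):

disk = [None] * 32                      # toy disk of 32 blocks

def write_file(data_blocks, free_list):
    index_block = free_list.pop()       # one block reserved for the index
    pointers = []
    for content in data_blocks:
        b = free_list.pop()             # data blocks may be scattered anywhere
        disk[b] = content
        pointers.append(b)
    disk[index_block] = pointers        # the index is the file's table of contents
    return index_block

def read_block(index_block, i):
    return disk[disk[index_block][i]]   # direct access: index lookup + one read

free = list(range(32))
idx = write_file(["A", "B", "C"], free)
print(read_block(idx, 2))               # -> "C", without reading blocks 0 and 1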

Summary Table
Method       Access Type           Pros                                         Cons
Contiguous   Sequential & Direct   Fast access, simple                          External fragmentation; inflexible file size
Linked       Sequential            No external fragmentation, flexible          Slow direct access; pointer overhead
Indexed      Direct                Efficient direct access, no fragmentation    Extra space for index; complexity
