Operating System BCS 401 - Important Questions With Solutions
Q. 4. Explain briefly layered operating system structure with neat sketch. Also explain
protection and security.
A Layered Operating System is designed in layers, where each layer is built on top of the
lower one. Each layer provides specific functions and hides its implementation from higher
layers, promoting modularity and ease of debugging.
Key Features:
• Each layer interacts only with its adjacent layers.
• The bottom layer deals with hardware.
• The top layer provides a user interface.
• The system is designed as a hierarchy of layers (0 = lowest, N = highest).
Typical Layers in a Layered OS:
Layer Number | Layer Name | Description
0 | Hardware | Physical devices like CPU, memory, I/O.
1 | CPU Scheduling & Memory Mgmt | Handles CPU allocation and memory management.
2 | Device Management | Manages I/O devices.
3 | System Calls Interface | Provides APIs for user programs.
4 | User Programs & Services | Application programs and user interface.
Protection and Security:
• Protection is the mechanism for controlling the access of processes and users to system resources (CPU, memory, files, devices), for example through access rights and user/kernel modes.
• Security is the defence of the system against internal and external threats such as unauthorized access, viruses, and denial-of-service attacks, using authentication, authorization, and encryption.
Q.5. What do you understand by system call? How is a system call made? How is a system
call handled by the system? Choose suitable examples for explanation.
What is a System Call?
A System Call is a programmatic way for user applications to interact with the
operating system. It acts as a bridge between user space and kernel space, allowing
programs to request services like file access, memory management, process control, etc.
In simple terms: A system call is like raising your hand (from user program) to ask the OS
(kernel) to do something on your behalf that you don’t have permission to do directly.
Common Services Requested via System Calls:
Category | Examples
Process control | fork(), exec(), exit(), wait()
File management | open(), read(), write(), close()
Device management | read(), write(), ioctl()
Information maintenance | getpid(), alarm(), sleep()
Communication | pipe(), shmget(), send(), receive()
How is a System Call Made? Example (C program on Linux):
#include <unistd.h>
#include <fcntl.h>

int main() {
    int fd = open("file.txt", O_RDONLY);   // System call to open a file
    char buffer[100];
    read(fd, buffer, 100);                 // System call to read from the file
    close(fd);                             // System call to close the file
    return 0;
}
Behind the Scenes:
• open() invokes system call number 2 (for example, in Linux).
• The CPU switches to kernel mode.
• OS locates the file, creates a file descriptor.
• Once reading is complete, it returns to user mode with data.
How is a System Call Handled?
Step-by-Step System Call Handling:
1. User Process invokes system call via wrapper function.
2. System Call Interface (SCI) identifies the request and passes it to the OS kernel.
3. Trap to Kernel Mode: CPU switches from user to kernel mode using a system
interrupt.
4. Kernel Handler processes the request.
5. Return Values (or error codes) are passed back to user process.
6. Return to User Mode and continue execution.
Key Points:
• System calls are safe gateways for accessing hardware and OS resources.
• They protect system integrity by restricting direct hardware access.
• System call interfaces differ by OS (Linux, Windows, macOS).
Q.6. What do you understand by dual mode operation of processors? What is the reason
behind dual mode operation of processors?
What is Dual Mode Operation?
Dual mode operation means the CPU executes in two separate modes: user mode (for application code) and kernel mode (for privileged operating system code). The CPU switches from user mode to kernel mode when events such as the following occur:
Event | Description
System Calls | User program requests a service from the OS (e.g., read(), write()).
Interrupts | Signals from hardware (like keyboard input, disk I/O completion).
Mode Bit:
• The CPU maintains a mode bit (often a single binary flag) in the processor status
register:
o 0 → Kernel Mode
o 1 → User Mode
• This bit determines the current operating mode of the CPU.
Reason Behind Dual Mode Operation
The primary purpose of dual mode operation is protection and controlled resource access.
Main Reasons:
Reason | Explanation
System Protection | Prevents user applications from directly accessing hardware and critical OS code.
Controlled Execution | Allows only the OS to perform sensitive operations like I/O, memory management, and process control.
Stability | Avoids system crashes by isolating faulty or malicious user code from the core OS.
Security | Ensures secure handling of files, memory, and device access by limiting user privileges.
Analogy:
Think of user mode as a guest in a house who can only access the living room and kitchen,
but needs permission to access the owner's room (kernel mode), where all valuables are
stored.
Summary Table
Feature | User Mode | Kernel Mode
Privileges | Limited | Full access to hardware and resources
Who Uses It | Application Programs | Operating System Core
Access to I/O | Not Allowed Directly | Allowed
Risk of Damage | Low (but limited functionality) | High (needs strict control)
Unit-2
Q. 1. Explain the principle of concurrency.
Principle of Concurrency
Concurrency is the ability of a system to execute multiple tasks (processes or threads)
seemingly at the same time. It is a fundamental principle in modern operating systems to
improve system performance, responsiveness, and resource utilization.
Definition:
Concurrency refers to the execution of multiple sequences of operations simultaneously —
not necessarily at the exact same moment, but in overlapping time periods.
Key Concepts in Concurrency:
Concept | Explanation
Process | An independent program in execution.
Thread | A lightweight process; part of a process that can run independently.
Context Switching | Switching the CPU between different processes or threads.
Parallelism | A special case of concurrency where tasks truly run at the same time on multiple processors.
Objectives of Concurrency:
1. Maximize CPU Utilization – Keep the processor busy by overlapping I/O and
computation.
2. Improve Responsiveness – Handle multiple user interactions simultaneously.
3. Enable Multiprogramming – Run multiple programs at once.
4. Support Distributed Systems – Communicate and coordinate between multiple
systems or processes.
Challenges in Concurrency:
Problem | Description
Race Conditions | Multiple processes access shared data simultaneously, leading to unpredictable results.
Deadlock | Two or more processes wait indefinitely for resources locked by each other.
Starvation | A process waits forever because other higher-priority processes keep executing.
Synchronization | Coordinating processes to ensure correct access to shared resources.
Busy waiting (also known as spinlock) is a condition where a process continuously checks
for a condition (like a lock to be released) without giving up the CPU.
Example:
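A minimal busy-waiting sketch in C (the shared variable lock is a hypothetical illustration; a correct lock also needs an atomic test-and-set instruction, covered later in this unit):

volatile int lock = 1;        // 1 = lock currently held by another process/thread
while (lock == 1)
    ;                         // busy wait: keep checking instead of blocking
// proceed once lock becomes 0, then set lock = 1 to hold it (atomically, in a real lock)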
Q.3. State the critical section problem. Illustrate the software based solution to the critical
section problem.
Critical Section Problem
The Critical Section Problem arises in concurrent programming when multiple processes
(or threads) share resources (like variables, files, or hardware) and try to access or modify
them simultaneously. If not handled properly, this can lead to inconsistent or incorrect
data.
What is a Critical Section?
A Critical Section is a part of the program where shared resources are accessed. Only one
process should execute in its critical section at a time to maintain data consistency.
Requirements for Solving the Critical Section Problem:
1. Mutual Exclusion: Only one process can be in the critical section at any given time.
2. Progress: If no process is in the critical section, one of the waiting processes should
be allowed to enter.
3. Bounded Waiting: There should be a limit on how many times other processes can
enter their critical sections before a waiting process gets a turn.
Software-Based Solution: (Peterson's Algorithm)
(For two processes, say P0 and P1)
Shared Variables:
bool flag[2]; // flag[i] = true means Pi wants to enter critical section
int turn; // Indicates whose turn it is
Algorithm for Process P0:
flag[0] = true;                   // P0 wants to enter
turn = 1;                         // give P1 the turn if it also wants to enter
while (flag[1] && turn == 1);     // Busy wait
// Critical section
flag[0] = false;                  // Exit section: P0 no longer wants to enter
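The code for P1 is symmetric (a sketch following the same pattern):
flag[1] = true;                   // P1 wants to enter
turn = 0;                         // give P0 the turn if it also wants to enter
while (flag[0] && turn == 0);     // Busy wait
// Critical section
flag[1] = false;                  // Exit section
Because turn can favour only one process at a time, both processes can never be inside the critical section simultaneously.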
Q. 4. State the finite buffer producer consumer problem. Give solution of the problem using
semaphores.
Finite Buffer Producer-Consumer Problem
The Producer-Consumer Problem is a classic example of a synchronization problem in
operating systems and concurrent programming.
Problem Statement:
• Producer: Generates data items and places them into a bounded (finite) buffer.
• Consumer: Takes items from the buffer and processes them.
• Constraint:
o The producer must wait if the buffer is full.
o The consumer must wait if the buffer is empty.
• Goal: Ensure that producer and consumer operate concurrently without conflicts or
data corruption.
Finite Buffer Example:
A buffer of size N = 5:
Buffer: [__, __, __, __, __]
Shared semaphores: mutex = 1, empty = N, full = 0
Producer:
do {
    // produce an item in nextProduced
    wait(empty);                 // wait for a free slot
    wait(mutex);                 // enter critical section
    buffer[in] = nextProduced;
    in = (in + 1) % N;
    signal(mutex);               // leave critical section
    signal(full);                // one more item available
} while (true);
Consumer:
do {
    wait(full);                  // wait for an available item
    wait(mutex);                 // enter critical section
    nextConsumed = buffer[out];
    out = (out + 1) % N;
    signal(mutex);               // leave critical section
    signal(empty);               // one more free slot
} while (true);
How It Works:
• mutex ensures only one process (producer or consumer) accesses the buffer at a time.
• empty keeps track of available slots for the producer.
• full keeps track of available items for the consumer.
• The wait() operation decrements the semaphore; if it’s zero, the process blocks.
• The signal() operation increments the semaphore; if processes are waiting, one is
unblocked.
Advantages:
• Ensures synchronization and mutual exclusion.
• Works efficiently with bounded buffers.
• Prevents race conditions, overflow, and underflow.
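The same solution can be written as a runnable C program using POSIX threads and semaphores (a sketch; buffer size, item values, and iteration count are illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5                           // buffer size

int buffer[N];
int in = 0, out = 0;
sem_t empty_slots, full_slots, mutex;

void *producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);       // wait for a free slot
        sem_wait(&mutex);             // enter critical section
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);             // leave critical section
        sem_post(&full_slots);        // one more item available
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);        // wait for an item
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);       // one more free slot
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);     // N empty slots initially
    sem_init(&full_slots, 0, 0);      // no items initially
    sem_init(&mutex, 0, 1);           // binary semaphore
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Compile with -pthread; sem_wait() and sem_post() play the roles of wait() and signal() above.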
Hardware-Based Solution: Test-and-Set Instruction
do {
    // Entry section
    while (test_and_set(&lock)) {
        // busy wait
    }
    // Critical section
    // Access shared resource
    // Exit section
    lock = false;
    // Remainder section
} while (true);
Advantages:
• Simple to implement.
• Provides mutual exclusion.
• Efficient on multiprocessor systems.
Disadvantages:
• Uses busy waiting (wastes CPU cycles).
• May cause starvation if some process keeps locking.
Summary:
Feature Description
Atomicity Test-and-Set executes indivisibly to prevent race.
Mutual Exclusion Only one process sets the lock and enters CS.
Busy Waiting Processes wait actively while lock is held.
Hardware Support Requires special CPU instruction support.
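For reference, C11 exposes a test-and-set style primitive through atomic_flag, so the lock above can be sketched with standard library calls (the protected resource is left out for brevity):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;     // clear = unlocked

void enter_cs(void) {
    while (atomic_flag_test_and_set(&lock)) {
        // busy wait until the previous holder clears the flag
    }
}

void exit_cs(void) {
    atomic_flag_clear(&lock);            // release the lock
}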
Dining Philosophers Problem: Five philosophers sit around a circular table with one fork placed between each pair. Each philosopher alternates between thinking and eating, and needs both adjacent forks to eat. The goal is to design a protocol so that:
• No two philosophers use the same fork at the same time (mutual exclusion).
• No philosopher starves (everyone gets to eat eventually).
• Deadlock and starvation are prevented.
Why is it challenging?
• If all philosophers pick up their left fork simultaneously, no one can pick up their
right fork → deadlock.
• If some philosophers never get both forks, they starve.
Classical Solution Using Semaphores
Setup:
• Let N = 5 be the number of philosophers.
• Each fork is represented by a binary semaphore, initialized to 1.
• Each philosopher performs:
do {
    think();
    wait(fork[left]);      // pick up left fork
    wait(fork[right]);     // pick up right fork
    eat();
    signal(fork[left]);    // put down left fork
    signal(fork[right]);   // put down right fork
} while (true);
Deadlock Issue:
If all philosophers pick the left fork first simultaneously, they wait forever for the right fork
→ deadlock.
Deadlock-Free Solution:
• Break the circular wait by making philosophers acquire forks in different orders: even-numbered philosophers pick up the left fork first, while odd-numbered philosophers pick up the right fork first.
• An equivalent approach is to always pick up the lower-numbered fork first (resource ordering).
• Either way, the circular wait condition is broken and deadlock is prevented.
For philosopher i:
if (i % 2 == 0) {
    wait(fork[i]);                // pick up left fork first
    wait(fork[(i + 1) % N]);      // then the right fork
} else {
    wait(fork[(i + 1) % N]);      // pick up right fork first
    wait(fork[i]);                // then the left fork
}
eat();
signal(fork[i]);
signal(fork[(i + 1) % N]);
Q.7. Discuss message passing systems. Explain how message passing can be used to solve
buffer producer consumer problem with infinite buffer.
What is Message Passing?
Message passing is a method of inter-process communication (IPC) where processes
communicate and synchronize by sending and receiving messages.
• No shared memory is required.
• Processes exchange data via messages.
• Commonly used in distributed systems or where memory sharing is not possible.
Key Features of Message Passing:
Feature Description
Communication Processes send and receive messages explicitly.
Synchronization Can be blocking (sender/receiver waits) or non-blocking.
Modes Direct (process-to-process) or indirect (via mailbox/ports).
Types Synchronous or asynchronous messaging.
Basic Operations:
• send(destination, message) — Send a message.
• receive(source, message) — Receive a message.
Problem Recap:
• Producer generates items and sends them to the consumer.
• Consumer receives and processes the items.
• The buffer is unbounded (infinite), so the producer never has to wait.
Producer Process:
while (true) {
item = produce_item();
send(consumer, item); // Send produced item to consumer
}
Consumer Process:
while (true) {
receive(producer, item); // Wait and receive item from producer
consume_item(item);
}
Explanation:
• send() places the item in a message queue (buffer).
• Since buffer is infinite, send() never blocks.
• receive() blocks if there are no messages, so consumer waits when buffer empty.
• This naturally synchronizes producer and consumer without shared memory or
explicit semaphores.
Advantages of Message Passing in this Problem:
• No shared memory needed.
• Simpler synchronization.
• Naturally handles waiting when buffer empty (consumer blocks on receive).
• Suitable for distributed systems.
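A minimal runnable sketch of the same producer-consumer pattern using a UNIX pipe as the message channel (integer items are illustrative; a pipe's kernel buffer is finite, but its blocking read() behaves like the blocking receive() described above):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {                       // child process = consumer
        close(fd[1]);                        // close unused write end
        int item;
        while (read(fd[0], &item, sizeof item) > 0)   // blocking receive
            printf("consumed %d\n", item);
        close(fd[0]);
        exit(0);
    }

    close(fd[0]);                            // parent process = producer
    for (int item = 1; item <= 5; item++)
        write(fd[1], &item, sizeof item);    // send produced item
    close(fd[1]);                            // EOF lets the consumer terminate
    return 0;
}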
Unit-3
Q.1. Discuss the performance criteria for CPU scheduling.
Performance Criteria for CPU Scheduling
When evaluating and designing CPU scheduling algorithms, the following criteria are
commonly used to measure their effectiveness:
Criteria | Description
CPU Utilization | Maximize CPU usage by keeping it as busy as possible (usually target 40% to 90% or more).
Throughput | Number of processes completed per unit time. Higher throughput means more work done efficiently.
Turnaround Time | Total time taken from submission of a process to its completion (completion time - arrival time). Lower is better.
Waiting Time | Total time a process spends waiting in the ready queue. Minimizing waiting time improves responsiveness.
Response Time | Time from submission of a request until the first response is produced. Important for interactive systems.
Fairness | Ensures all processes get a fair share of the CPU without starvation.
Summary:
A good CPU scheduling algorithm aims to:
• Maximize CPU utilization and throughput.
• Minimize turnaround time, waiting time, and response time.
• Ensure fairness among all processes.
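A small worked example of these criteria, assuming three processes with illustrative CPU bursts of 5, 3, and 2 time units, all arriving at time 0 and scheduled FCFS:

#include <stdio.h>

int main(void) {
    int burst[] = { 5, 3, 2 };                 // illustrative CPU bursts for P1, P2, P3
    int n = 3, clock = 0;
    int total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        total_wait += clock;                   // waiting time = time already elapsed
        clock += burst[i];                     // process runs to completion (FCFS)
        total_turnaround += clock;             // turnaround = completion - arrival (arrival = 0)
    }
    printf("avg waiting = %.2f, avg turnaround = %.2f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    // waiting times: 0, 5, 8 -> average 4.33; turnaround times: 5, 8, 10 -> average 7.67
    return 0;
}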
Process States – Explanation:
1. New
o Process is being created.
o Not yet admitted to the system.
2. Ready
o Process is loaded into main memory.
o Waiting for CPU allocation.
o Ready to run but waiting in the ready queue.
3. Running
o Process is currently executing on the CPU.
4. Waiting (Blocked)
o Process is waiting for some event to occur (like I/O completion).
o Not eligible for CPU until the event occurs.
5. Terminated (Exit)
o Process has finished execution.
o Removed from the system.
Transitions:
Transition | Description
New → Ready | Process is admitted into the ready queue.
Ready → Running | Scheduler dispatches the process to the CPU.
Running → Waiting | Process requests I/O or waits for an event.
Waiting → Ready | The awaited event or I/O operation completes.
Running → Ready | Process is preempted (e.g., its time slice expires).
Running → Terminated | Process finishes execution or is aborted.
Summary:
• Processes cycle through states based on resource availability and execution progress.
• The scheduler manages transitions between Ready and Running.
• Waiting occurs when processes need I/O or other resources.
• Termination happens after completion.
Contents of the Process Control Block (PCB):
Field | Description
Process State | Current state of the process (new, ready, running, waiting, terminated).
Process ID (PID) | Unique identifier of the process.
Program Counter (PC) | Address of the next instruction to execute for the process.
CPU Registers | Contents of all CPU registers (general-purpose, index, stack pointers) for context switching.
CPU Scheduling Info | Information like priority, scheduling queue pointers, and other scheduling parameters.
Memory Management Info | Details such as base and limit registers, page tables, segment tables, or memory limits.
Accounting Info | CPU usage time, process execution time, time limits, job or process numbers for accounting.
I/O Status Info | List of I/O devices allocated to the process, open files, and I/O requests.
Pointer to Parent Process | Reference to the parent process's PCB.
Process Privileges | Security and access control information.
Summary:
Component Role in Process Management
Process State Tracks lifecycle stage of process.
Process ID Uniquely identifies the process.
Program Counter Keeps track of next instruction to execute.
CPU Registers Saves process context for resuming execution.
Scheduling Info Helps scheduler make decisions.
Memory Info Manages process’s allocated memory space.
Accounting Info Records resource usage for billing and monitoring.
I/O Status Manages process’s interaction with I/O devices and files.
Multilevel Feedback Queue (MLFQ) Scheduling
Advantages:
• Differentiates between CPU-bound and I/O-bound processes.
• Good for interactive systems.
• Prevents starvation with promotion.
• Flexible and adaptive scheduling.
Disadvantages:
• Complex to implement.
• Requires careful tuning of parameters like time quantum and promotion/demotion
rules.
• Overhead of managing multiple queues.
Summary Table:
Feature Description
Multiple Queues Several ready queues with different priority and time quantum.
Feedback Processes move between queues based on CPU usage.
Priority Scheduling Higher priority queues are checked first.
Starvation Prevention Aging or promotion to prevent starvation.
Q.5. What is deadlock? What are the necessary conditions for deadlock?
What is Deadlock
A deadlock is a situation in a multiprogramming environment where a set of processes are
blocked indefinitely, each waiting for a resource that another process in the set holds.
Because each process waits for a resource held by another, none can proceed — the system is
stuck.
Characteristics of Deadlock:
• Processes cannot continue because they are waiting for resources.
• None of the processes can release resources because they are waiting.
• The system halts or slows down significantly if deadlock occurs.
Necessary Conditions for Deadlock (The Coffman Conditions):
For a deadlock to occur, all four conditions must hold simultaneously:
Condition | Explanation
Mutual Exclusion | At least one resource must be held in a non-shareable mode (only one process can use it at a time).
Hold and Wait | A process is holding at least one resource and waiting to acquire additional resources held by other processes.
No Preemption | Resources cannot be forcibly taken away from a process; they must be released voluntarily.
Circular Wait | There exists a set of processes {P1, P2, ..., Pn} such that P1 is waiting for a resource held by P2, P2 is waiting for P3, ..., and Pn is waiting for P1, forming a circular chain.
Summary:
Deadlock: Processes are stuck waiting indefinitely for resources.
Necessary Conditions 1) Mutual Exclusion
2) Hold and Wait
3) No Preemption
4) Circular Wait
Banker's Algorithm (Deadlock Avoidance) – Data Structures:
Available | Vector of currently free instances of each resource type.
Max | Maximum demand of each process for each resource type.
Allocation | Resources currently allocated to each process.
Need | Matrix showing remaining resource needs of each process (Need = Max - Allocation).
Algorithm Steps:
1. When a process requests resources:
o Check if the request ≤ Need (process cannot ask more than max declared).
o Check if the request ≤ Available (enough resources are free).
o If both true, pretend to allocate requested resources:
▪ Available = Available - Request
▪ Allocation = Allocation + Request
▪ Need = Need - Request
2. Check system safety:
o Initialize Work = Available, Finish[i] = false for all processes.
o Find a process i such that:
▪ Finish[i] == false and Need[i] ≤ Work.
o If found, set:
▪ Work = Work + Allocation[i]
▪ Finish[i] = true
o Repeat until no such process found.
3. If all Finish[i] == true, the system is safe and allocation is allowed.
4. Otherwise, roll back the tentative allocation and make the process wait.
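A compact C sketch of the safety check in step 2 (the function name is_safe, the number of processes and resource types, and the matrices themselves are illustrative assumptions):

#include <stdbool.h>

#define P 4        // number of processes (illustrative)
#define R 3        // number of resource types (illustrative)

// Returns true if the system is in a safe state, given the current vectors/matrices.
bool is_safe(int available[R], int allocation[P][R], int need[P][R]) {
    int work[R];
    bool finish[P] = { false };
    for (int r = 0; r < R; r++)
        work[r] = available[r];                      // Work = Available

    int finished = 0;
    bool progress = true;
    while (finished < P && progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_finish = true;
            for (int r = 0; r < R; r++)
                if (need[i][r] > work[r]) { can_finish = false; break; }
            if (can_finish) {                        // Need[i] <= Work
                for (int r = 0; r < R; r++)
                    work[r] += allocation[i][r];     // Work = Work + Allocation[i]
                finish[i] = true;
                finished++;
                progress = true;
            }
        }
    }
    return finished == P;                            // safe if all processes can finish
}

Calling is_safe() with the tentative Available, Allocation, and Need values tells the OS whether the pending request can be granted safely.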
Summary:
Step Purpose
Check request validity Request ≤ Need and Request ≤ Available
Tentative allocation Simulate resource allocation
Safety check Verify system can still finish all processes
Grant or deny Allocate if safe; otherwise, deny and wait
Benefits:
• Avoids deadlock by never entering unsafe states.
• Dynamically checks safety at each allocation.
Unit-4
Q.1. Differentiate between fixed partitioning and variable partitioning.
Comparison between Fixed Partitioning and Variable Partitioning in memory management:
Aspect | Fixed Partitioning | Variable Partitioning
Definition | Memory is divided into fixed-size partitions at system startup. | Memory is divided dynamically into partitions based on process size.
Partition Size | Partitions have fixed, predefined sizes. | Partition sizes vary according to process requirements.
Number of Partitions | Fixed number of partitions. | Number of partitions varies as processes come and go.
Memory Utilization | Can cause internal fragmentation due to fixed size. | Can cause external fragmentation due to dynamic allocation.
Process Allocation | Process must fit into a fixed partition. | Process fits exactly into a partition sized for it.
Flexibility | Less flexible; not efficient for varying process sizes. | More flexible; partitions adapt to process sizes.
Overhead | Less overhead as partitions are fixed. | More overhead managing dynamic partitions.
Example Use | Simple systems or early OS memory management. | Modern systems with dynamic memory allocation.
Summary:
• Fixed Partitioning is simple but inefficient due to wasted space inside fixed
partitions.
• Variable Partitioning is more memory-efficient but suffers from fragmentation and
requires more complex management.
Q.2. What do you mean by paging? When do page faults occur? Describe the actions taken
by the operating system when a page fault occurs.
Paging
Paging is a memory management technique that eliminates the need for contiguous allocation
of physical memory. It:
• Divides the process's logical memory into fixed-size blocks called pages.
• Divides physical memory into blocks of the same size called frames.
• Maps pages to any available frames in physical memory.
• Allows efficient and flexible use of memory and prevents external fragmentation.
What is a Page Fault?
• A page fault occurs when a process tries to access a page that is not currently
loaded in physical memory (RAM).
• This means the page is either on secondary storage (like a hard disk) or has never
been loaded yet.
Actions Taken by the Operating System on a Page Fault:
1. Interrupt Generation:
o The hardware detects the page fault and generates a page fault interrupt to the
OS.
2. OS Checks Validity:
o The OS checks if the memory access is valid:
▪ If invalid (e.g., illegal memory access), the process is terminated.
▪ If valid, proceed to next steps.
3. Find a Free Frame:
o The OS searches for a free frame in physical memory.
o If no free frame is available, it selects a victim frame to evict (page
replacement).
4. Load the Page from Disk:
o The required page is read from secondary storage (disk) into the free or victim
frame.
5. Update Page Table:
o The process’s page table is updated to indicate the page is now in memory and
its frame location.
6. Restart the Process:
o The instruction that caused the page fault is restarted now that the page is in
memory.
Summary:
Concept | Description
Paging | Logical memory divided into fixed-size pages that are mapped onto physical frames.
Page Fault | Access to a page that is not currently in physical memory.
OS Actions | Handle interrupt → validate → load page → update tables → restart process.
Q.3. Discuss the paged segmentation scheme of memory management and explain how
logical address is translated to physical address in such a scheme.
Paged Segmentation Scheme of Memory Management
Paged segmentation is a memory management scheme that combines the advantages of
both paging and segmentation:
• Memory is divided into segments based on logical divisions of a program (like code,
stack, data segments).
• Each segment is further divided into pages.
• Pages are mapped to physical memory frames.
• This scheme provides logical grouping of information (segmentation) and efficient
memory use & protection (paging).
Key Features:
• The logical address consists of:
o Segment number (segment selector)
o Page number within the segment
o Offset within the page
• Segmentation provides a way to divide the program into meaningful units.
• Paging breaks each segment into fixed-size pages to avoid external fragmentation.
• The combination enables flexible and efficient memory management.
Logical Address Structure:
Part Description
Segment Number Identifies which segment is accessed
Page Number Specifies the page within the segment
Offset Specifies the exact byte within the page
Example:
• Suppose:
o Segment Number = 2
o Page Number = 5
o Offset = 100 (bytes)
• Segment Table entry for segment 2 points to page table at base address X.
• Page Table at base X for page 5 contains frame number 10.
• If frame size = 4 KB (4096 bytes), then physical address = (10 × 4096) + 100 = 40960
+ 100 = 41060.
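A small C sketch of this translation (the table contents mirror the example above; table sizes and names are illustrative):

#include <stdio.h>

#define PAGE_SIZE 4096

int main(void) {
    // Hypothetical tables: page_table[segment][page] directly gives the frame number,
    // standing in for the segment table -> page table -> frame lookup described above.
    int page_table[4][16] = { 0 };
    page_table[2][5] = 10;                           // segment 2, page 5 -> frame 10

    int segment = 2, page = 5, offset = 100;         // logical address components
    int frame = page_table[segment][page];
    long physical = (long)frame * PAGE_SIZE + offset;

    printf("physical address = %ld\n", physical);    // 10 * 4096 + 100 = 41060
    return 0;
}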
Q6. What is thrashing? What is the cause of thrashing? How does the system detect
thrashing? What can the system do to eliminate this problem?
What is Thrashing?
• Thrashing occurs when a system spends more time swapping pages in and out of
memory than executing actual processes.
Summary
Topic Explanation
Thrashing Excessive paging, degrading performance
Cause Too many processes, insufficient memory per process
Detection High page fault rate, low CPU utilization
Solution Reduce multiprogramming, increase memory, working set & PFF models
Unit-5
Q. 1. What is buffer in devices? What are the types of I/O buffering schemes?
What is a Buffer in Devices?
A buffer is a temporary storage area (usually in memory) used to hold data while it is
being transferred between two devices or between a device and an application. Buffers help
accommodate differences in speed between the producer and consumer of data, ensuring
smooth and efficient I/O operations.
For example, when reading data from a slow device like a disk, the data is first stored in a
buffer before being processed, preventing the CPU from waiting idly.
Types of I/O Buffering Schemes
There are mainly three types of buffering schemes used in operating systems:
1. Single Buffering
• Uses one buffer between the I/O device and the process.
• Data is transferred into the buffer, then the CPU processes it.
• Drawback: While the CPU processes data, the device is idle waiting for the buffer to
be free.
2. Double Buffering
• Uses two buffers.
• While the CPU processes data from one buffer, the I/O device fills the other buffer.
• This allows overlapping of I/O and processing, improving efficiency.
• Once processing on the first buffer is done, the CPU switches to the second buffer, and the cycle continues.
3. Circular Buffering
• Uses more than two buffers arranged in a circular queue.
• The device keeps filling buffers in order while the process consumes them, which helps absorb bursts of I/O when data arrives faster than it can be processed.
When multiple I/O requests are made to a disk, the disk scheduling algorithm decides the
order in which these requests are serviced to optimize performance, mainly to reduce seek
time (time to move the disk arm).
Common Disk Scheduling Algorithms – Summary Table:
Algorithm | Description | Pros | Cons
FCFS | Process requests in arrival order | Simple | Long average seek time
SSTF | Process nearest request first | Efficient seek times | May cause starvation
SCAN | Move head in one direction, then reverse | Fairer than SSTF | Higher overhead than SSTF
C-SCAN | Circular SCAN: after reaching the end, the head jumps back to the start | Uniform wait times | May waste some head movement
LOOK | SCAN variant stopping at the last request | Saves unnecessary movement | Slightly complex
C-LOOK | C-SCAN variant stopping at the last request | Efficient & fair | Slightly complex
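As an illustration of how such an algorithm reduces seek time, here is a small C sketch of SSTF using an assumed request queue and initial head position:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int main(void) {
    int requests[] = { 98, 183, 37, 122, 14, 124, 65, 67 };   // illustrative queue
    int n = 8, head = 53, total = 0;                          // illustrative head position
    bool done[8] = { false };

    for (int served = 0; served < n; served++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {                // pick the nearest pending request
            if (done[i]) continue;
            int dist = abs(requests[i] - head);
            if (best == -1 || dist < best_dist) { best = i; best_dist = dist; }
        }
        total += best_dist;                          // head moves to that cylinder
        head = requests[best];
        done[best] = true;
    }
    printf("total head movement (SSTF) = %d\n", total);   // 236 for this queue
    return 0;
}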
Free-Space Management Methods:
1. Bit Vector (Bitmap)
• Each disk block is represented by one bit (e.g., 1 = free, 0 = allocated).
• Simple and fast for finding free blocks, but the bitmap must be kept in memory and scanned.
2. Linked List
• All free blocks are linked together, each free block holding a pointer to the next.
• Easy to implement, but finding several free blocks requires traversing the list one block at a time.
3. Grouping
• An enhancement of linked list.
• The first free block contains addresses of several free blocks.
• When these are used up, the system moves to the next free block which contains
addresses of more free blocks.
• Reduces the overhead of traversing one block at a time.
4. Counting
• Keeps track of the number of free blocks following a particular block.
• Instead of storing pointers for all free blocks, store the starting block and the count of
continuous free blocks.
• Useful for disks with many contiguous free blocks.
Summary Table
Method Description Advantages Disadvantages
Bit Vector Bitmap to track Fast, simple for large Requires scanning,
free/allocated blocks disks bitmap size
Linked Blocks linked together Easy to implement Slow to find multiple
List blocks
Grouping Linked list with groups of Reduces overhead More complex than
blocks simple list
Counting Stores count of contiguous Efficient for Less effective with
free blocks contiguous spaces fragmentation
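A minimal C sketch of the bit-vector method from the table (block count, bit convention, and bitmap contents are illustrative assumptions):

#include <stdio.h>

#define NUM_BLOCKS 16

// 1 bit per block: 1 = free, 0 = allocated (illustrative convention).
unsigned char bitmap[NUM_BLOCKS / 8] = { 0x28, 0x01 };   // blocks 3, 5, and 8 are free

int find_free_block(void) {
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (bitmap[b / 8] & (1u << (b % 8)))      // scan the bitmap for a set bit
            return b;
    return -1;                                    // no free block
}

int main(void) {
    int b = find_free_block();
    if (b >= 0) {
        bitmap[b / 8] &= ~(1u << (b % 8));        // mark the block as allocated
        printf("allocated block %d\n", b);        // prints block 3 for this bitmap
    }
    return 0;
}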
What is RAID?
RAID stands for Redundant Array of Independent (or Inexpensive) Disks.
It is a data storage technology that combines multiple physical disk drives into one logical
unit to improve performance, fault tolerance, and/or capacity.
RAID uses techniques like data striping, mirroring, and parity to provide redundancy and
improve speed.
Goals of RAID
• Increase reliability: By replicating or distributing data.
• Improve performance: By reading/writing data in parallel.
• Increase storage capacity: By combining multiple disks.
Common RAID Levels and Their Characteristics
RAID Level | Description | Data Storage | Fault Tolerance | Performance
RAID 0 | Striping without redundancy | Data split across disks | No | High (improved read/write)
RAID 1 | Mirroring | Exact copy on two or more disks | Yes (full redundancy) | Read performance improved; write slower (due to mirroring)
RAID 5 | Striping with distributed parity | Data + parity distributed across disks | Yes (can survive 1 disk failure) | Good read, moderate write due to parity overhead
RAID 6 | Striping with double distributed parity | Data + two parity blocks | Yes (can survive 2 disk failures) | Slightly slower write than RAID 5
RAID 10 | Combination of RAID 1 and RAID 0 (striped mirrors) | Data striped across mirrored pairs | High (multiple disk failures possible) | Very high read/write performance
RAID 0 (Striping)
• Data is split evenly across all disks (striped).
• No redundancy; if one disk fails, data is lost.
• Improves performance because multiple disks can be accessed in parallel.
RAID 1 (Mirroring)
• Data is duplicated exactly on two or more disks.
• Provides fault tolerance: if one disk fails, data is available on the other.
• Write speed can be slower; read speed can improve (reading from any mirror).
RAID 10 (1+0)
• Combines mirroring and striping.
• Data is striped across mirrored pairs.
• Offers high performance and fault tolerance but requires at least 4 disks.
Summary Table
RAID Level | Minimum Disks | Fault Tolerance | Capacity Utilization | Performance
RAID 0 | 2 | None | 100% | High read/write
RAID 1 | 2 | 1 disk failure | 50% (due to mirroring) | Improved read, slower write
RAID 5 | 3 | 1 disk failure | (N-1)/N | Good read, slower write
RAID 6 | 4 | 2 disk failures | (N-2)/N | Slightly slower write
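Worked example (illustrative disk sizes): with two 2 TB disks, RAID 0 provides 4 TB of usable space while RAID 1 provides 2 TB (50% utilization). With four 2 TB disks, RAID 5 provides (4 - 1) × 2 = 6 TB and RAID 6 provides (4 - 2) × 2 = 4 TB of usable space.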
a) Sequential Access
• Records are read or written one after another, in the order they are stored.
b) Direct (Random) Access
• Records can be accessed in any order by jumping directly to a given position or key.
c) Indexed Access
• Combines sequential and direct access by using an index.
• Allows efficient searching and updating.
Summary Table
File Organization | Description | Access Type | Advantages | Disadvantages
Sequential | Records stored one after another | Sequential | Simple, efficient for bulk read | Slow for random access
Indexed | Index points to records | Sequential/Direct | Fast lookup, flexible | Overhead of maintaining index
Direct (Hashed) | Hash function maps key to location | Direct (Random) | Very fast access by key | Collisions and complexity
Example (Sequential File – student records):
101 Alice 85
102 Bob 90
103 Charlie 78
• Accessing all students from 101 to 103 is easy and efficient (sequential access).
• To find student 103, the system reads 101, then 102, then 103 — slower if the file is
large.
Example (Indexed File – index mapping student ID to disk block):
101 Block 5
102 Block 8
103 Block 3
• To find student 103, the system looks up the index, finds "Block 3," and reads only
that block.
• Supports fast direct access and also allows sequential reading by scanning the index.
Q.6. What is a directory? Define any two ways to implement the directory.
Directory
A directory in an operating system is a special file that contains information about other files
and directories. It acts like a folder that organizes files in a hierarchical structure, helping
users and the system keep track of files by storing metadata such as file names, locations, and
attributes.
1. Single-Level Directory
• All files are contained in a single directory.
• Simple structure: just one directory containing all files.
• Advantage: Easy to implement.
• Disadvantage: Not suitable for many users or files because:
o File names must be unique across the system.
o No organization—files can be hard to find.
Example:
Directory:
file1.txt
file2.doc
photo.jpg
2. Tree-Structured (Hierarchical) Directory
• Directories can contain both files and sub-directories, forming a tree.
• Advantage: Better organization; file names only need to be unique within a single directory.
• Disadvantage: Slightly more complex to implement and search.
Example:
Root Directory
├── Documents
│ ├── file1.txt
│ └── file2.doc
├── Pictures
│ └── photo.jpg
└── Music
└── song.mp3
2. Linked Allocation
• Each file is stored as a linked list of disk blocks; each block contains a pointer to the next block of the file.
Disadvantages:
• Sequential access only (random access is slow).
• Overhead of storing pointers reduces usable space.
• If a pointer is lost, the whole file is affected.
3. Indexed Allocation
• Uses an index block that contains pointers to all file blocks.
• The index block acts like a table of contents for the file.
Advantages:
• Supports direct (random) access efficiently.
• No external fragmentation.
• File can grow by allocating more blocks and updating the index.
Disadvantages:
• Overhead of maintaining the index block.
• For large files, multiple levels of indexing may be needed (multi-level index).
Summary Table
Method | Access Type | Pros | Cons
Contiguous | Sequential & Direct | Fast access, simple | External fragmentation, inflexible file size
Linked | Sequential | No external fragmentation, flexible | Slow direct access, pointer overhead
Indexed | Direct | Efficient direct access, no fragmentation | Extra space for index, complexity
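A minimal C sketch of how indexed allocation resolves a logical block number (the index block contents are hypothetical):

#include <stdio.h>

int main(void) {
    // Hypothetical index block: entry i holds the disk block number of logical block i.
    int index_block[8] = { 19, 4, 25, 30, 7, 11, 2, 16 };

    int logical_block = 3;                          // which block of the file we want
    int physical_block = index_block[logical_block];

    printf("logical block %d is stored in disk block %d\n",
           logical_block, physical_block);          // prints disk block 30
    return 0;
}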