OS Paper Solutions
Q-1
b) What is meant by System Call? A system call is a request made by a program to the
operating system to perform a specific service, such as file manipulation, process control, or
communication.
d) Define a Safe State. A safe state refers to a situation in a system where all processes can
complete their execution without causing a deadlock, ensuring that resources can be allocated
without risk.
e) Define Dispatcher. The dispatcher is the operating-system module that gives control of the
CPU to the process selected by the short-term scheduler. It performs the context switch,
switches to user mode, and jumps to the proper location in the user program to resume it.
g) What do you mean by Rollback? Rollback is the process of reverting a system to its
previous state, typically after an error or failure, to maintain consistency or recover lost data.
j) What do you mean by Deadlock? Deadlock is a situation in which two or more processes
are unable to proceed because each is waiting for the other to release resources, resulting in a
standstill.
Q-2
a) Explain Operating System Structure.
The structure of an operating system refers to the design and organization of its components,
which are responsible for managing hardware resources, executing processes, and providing
services to applications. The common types of operating system structures include:
1. Monolithic Structure:
o In this structure, the operating system is a single large program. All services
such as process management, memory management, device management, and
file system management are tightly integrated and work together in a single
address space.
o Advantage: It’s efficient, because components call each other directly within a single address space.
o Disadvantage: Maintenance can be complex, and a bug in one component can
affect the whole system.
2. Layered Structure:
o The operating system is divided into layers, each of which provides services to
the layer above it. The lowest layer is responsible for interacting with the
hardware, and higher layers provide more abstract services.
o Advantage: Easier to maintain and debug, as changes in one layer don’t
necessarily affect others.
o Disadvantage: Can be slower due to the overhead of passing data between
layers.
3. Microkernel Structure:
o This approach minimizes the core of the operating system, placing most of the
system services into user space, with a small kernel responsible for
communication between services.
o Advantage: More modular and flexible, easier to add or remove components.
o Disadvantage: Performance overhead due to frequent communication between
user space and the kernel.
4. Client-Server Structure:
o In this model, the operating system is structured around clients (user
applications) and servers (OS services). Clients request services from the
servers, which manage resources.
o Advantage: Scalable and easy to maintain.
o Disadvantage: Potential network latency and overhead due to communication
between clients and servers.
b) Describe the Dining Philosophers Problem.
The Dining Philosophers problem is a classical synchronization problem that illustrates the
challenges of allocating resources among concurrent processes. It involves a scenario with
five philosophers seated around a circular table. Each philosopher thinks and occasionally
needs to eat. To eat, a philosopher requires two utensils (forks), one on each side. The issue is
to prevent deadlock (where no philosopher can eat) and ensure that no philosopher starves (is
left indefinitely without eating).
The problem can be formalized as follows:
Locks/Mutexes: Ensuring that only one philosopher can pick up a fork at a time.
Semaphore or Monitor: Managing the resources in such a way that deadlock and
starvation are avoided.
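One classic deadlock-free approach can be sketched in a few lines. This is a minimal illustration (names and the number of meals are our own), using an asymmetric pick-up order so no circular wait can form: even-numbered philosophers take the right fork first, odd-numbered the left.

```python
import threading

N = 5                                   # five philosophers, five forks
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N                         # how many times each philosopher ate

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    # Asymmetry breaks the circular wait: even-numbered philosophers
    # pick up the right fork first, odd-numbered ones the left fork first.
    first, second = (right, left) if i % 2 == 0 else (left, right)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1           # "eating" while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                            # every philosopher eats 3 times: no deadlock, no starvation
```

Because the acquisition order is asymmetric, at least one philosopher can always make progress, so the run terminates with every philosopher fed.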
c) Explain Deadlock Recovery.
Deadlock recovery involves various strategies to resolve or prevent deadlocks once they
occur. The main approaches include:
1. Process Termination:
o Abort All Processes: The simplest method is to terminate all processes
involved in the deadlock. This method, although effective, can be expensive
and result in a loss of progress.
o Abort Processes One by One: In this case, the system identifies and
terminates the processes one by one until the deadlock is broken. The process
to terminate could be chosen based on priority, age, or other factors.
2. Resource Preemption:
o Preempt Resources from Processes: Resources held by one or more
processes involved in the deadlock can be forcibly preempted and reassigned
to other processes. The preempted processes are then rolled back to a safe
state.
o Rollback and Restart: A process can be rolled back to a previous safe state
(using checkpoints) if it holds resources that would resolve the deadlock if
released.
3. Manual Intervention:
o User Intervention: In some systems, the deadlock situation can be detected
and resolved manually by a system administrator who can intervene to release
resources or terminate processes.
Note: Each method has its own trade-offs in terms of system efficiency, time, and impact on
users.
d) What is Fragmentation? Explain types of fragmentation in detail.
Fragmentation refers to the inefficient use of memory or disk space, leading to wasted areas
that cannot be utilized effectively. There are two types of fragmentation:
1. Internal Fragmentation:
o This occurs when the memory allocated to a process is larger than what is
actually needed. As a result, the extra space within the allocated block remains
unused.
o Example: If a process requires 40KB of memory, but the system allocates a
50KB block, the remaining 10KB is wasted inside the block.
o Cause: Fixed-size memory allocation, where the allocated space is not fully
utilized.
2. External Fragmentation:
o This happens when free memory is scattered across the system in small
chunks, making it difficult to allocate large contiguous blocks of memory to
processes. Even though total free memory might be enough, there is not
enough contiguous space for new processes.
o Example: A system with 1MB free in total, but fragmented into 100KB,
200KB, and 700KB blocks. While there is enough total free memory, a
500KB process cannot be allocated.
o Cause: Variable-sized memory allocation or process termination that leaves
gaps in memory.
Solutions: Internal fragmentation is reduced by allocating in smaller or variable-sized units;
external fragmentation is addressed by compaction (shuffling memory contents to consolidate
free space) or by non-contiguous allocation schemes such as paging and segmentation.
e) List and explain system calls related to Process and Job Control.
System calls related to process and job control allow the operating system to manage
processes, their execution, and their life cycle. The key system calls include:
1. fork():
o Creates a new process by duplicating the calling process. The new process
(child) gets a copy of the parent's address space.
o Usage: Initiating the creation of a new process.
2. exec():
o Replaces the current process image with a new one. After a fork(), the child
can execute a different program using exec().
o Usage: Executing a new program within the current process.
3. wait():
o Makes the parent process wait until one of its child processes finishes
execution. It returns the exit status of the child process.
o Usage: Synchronizing the parent with child processes.
4. exit():
o Terminates the calling process, releasing all resources it held, and passes an
exit status back to the operating system.
o Usage: When a process finishes its execution or is explicitly terminated.
5. kill():
o Sends a signal to a process, which can be used to terminate the process or
trigger some other behavior depending on the signal sent.
o Usage: Terminating or controlling the behavior of a process.
6. getpid():
o Returns the Process ID (PID) of the calling process.
o Usage: Used for process management, especially for system monitoring and
communication.
7. getppid():
o Returns the Parent Process ID (PPID), i.e., the PID of the parent process.
o Usage: To track the parent-child relationship between processes.
These system calls provide the essential mechanisms to manage processes during their
lifecycle, from creation and execution to termination.
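The fork/wait/exit relationship can be sketched directly, since Python's os module exposes these POSIX calls on Unix systems. This is a minimal Unix-only illustration; the exit status 7 is arbitrary, and a real child would typically call an exec() variant instead of exiting immediately.

```python
import os

# fork() duplicates the calling process; wait() lets the parent collect
# the child's exit status (POSIX only, e.g. Linux/macOS).
pid = os.fork()
if pid == 0:
    # Child process: normally it would call an exec() variant here,
    # e.g. os.execvp("ls", ["ls"]); instead we just exit with status 7.
    os._exit(7)
else:
    # Parent process: wait for the child and decode its exit status.
    child_pid, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
    print(child_pid == pid, exit_code)   # True 7
```

The parent blocks in waitpid() until the child terminates, exactly the synchronization wait() provides.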
Q-3
a) Explain the Critical Section Problem.
Definition: A critical section is a segment of code where shared resources are accessed. The
Critical Section Problem involves ensuring that no two processes are in their critical
sections simultaneously, preventing race conditions.
Solution Requirements: To solve the Critical Section Problem, the following three
conditions must be satisfied:
1. Mutual Exclusion: If one process is executing in its critical section, no other process
can be executing in its critical section.
2. Progress: If no process is in its critical section, the next process that needs access
should be able to enter its critical section without unnecessary delay.
3. Bounded Waiting: A process must not have to wait indefinitely to enter its critical
section; there must be a bound on the number of times other processes can enter their
critical sections before the waiting process can enter.
Various synchronization mechanisms, such as mutexes, semaphores, and monitors, are used
to prevent race conditions and handle the Critical Section Problem.
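A minimal sketch of mutual exclusion with a mutex: four threads increment a shared counter inside a critical section, so no update is lost. The names here are illustrative, not from the paper.

```python
import threading

counter = 0                      # shared resource
lock = threading.Lock()          # mutex guarding the critical section

def increment(times):
    global counter
    for _ in range(times):
        with lock:               # entry section: acquire the mutex
            counter += 1         # critical section: access shared data
                                 # exit section: lock released by 'with'

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 40000 -- no lost updates
```

Without the lock, two threads could read the same value of `counter` and both write back the same incremented result, losing one update (a race condition).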
b) Explain Different Methods for Recovery from Deadlock.
Deadlock recovery involves strategies to resolve deadlocks once they have occurred in the
system. These methods include:
1. Process Termination:
o Abort All Processes: Terminating all the processes involved in the deadlock
is the simplest method but can result in significant loss of progress.
o Abort One Process at a Time: In this approach, one process at a time is
terminated. The system identifies which process to terminate based on factors
such as priority or resource consumption. The terminated process is rolled
back to a safe state.
2. Resource Preemption:
o Preempt Resources from Processes: Resources held by deadlocked
processes can be preempted (forcibly taken away) and reassigned to other
processes. This might cause some processes to roll back to a previous safe
state. The system can then retry the execution after resource allocation
changes.
o Rollback: In this case, the affected processes are rolled back to a previous
state, where they had not yet encountered the deadlock. This process is often
used in systems with checkpoints.
3. Manual Intervention:
o User Intervention: In some systems, deadlocks can be detected and resolved
manually by an administrator. This involves analyzing the system, terminating
or adjusting processes, and reallocating resources.
Each of these recovery methods comes with its own trade-offs. Process termination can cause
a loss of progress, while resource preemption can lead to system inefficiency and additional
overhead due to process rollbacks.
c) This question is a repeat of question (a); refer to the answer above for an explanation of
the Critical Section Problem.
d) Calculate Average Turnaround Time and Average Waiting Time for All Set of
Processes Using FCFS Algorithm.
The First-Come, First-Served (FCFS) scheduling algorithm executes processes in the order
they arrive. Here’s how we can calculate the Average Turnaround Time and Average
Waiting Time for the processes:
Given Data:

Process   Burst Time (BT)   Arrival Time (AT)
P1        5                 1
P2        6                 0
P3        2                 2
P4        4                 0

Under FCFS, processes are serviced strictly in order of arrival. P2 and P4 arrive at time 0
(P2 is listed first), P1 arrives at time 1, and P3 at time 2, so the execution order is
P2, P4, P1, P3.

The Completion Time (CT) is the time at which a process finishes execution.

Process   AT   BT   CT
P2        0    6    0 + 6 = 6
P4        0    4    6 + 4 = 10
P1        1    5    10 + 5 = 15
P3        2    2    15 + 2 = 17

The Turnaround Time (TAT) is the difference between the Completion Time (CT) and
Arrival Time (AT): TAT = CT - AT.

Process   CT   AT   TAT
P2        6    0    6 - 0 = 6
P4        10   0    10 - 0 = 10
P1        15   1    15 - 1 = 14
P3        17   2    17 - 2 = 15

The Waiting Time (WT) is the difference between Turnaround Time (TAT) and Burst
Time (BT): WT = TAT - BT.

Process   TAT   BT   WT
P2        6     6    6 - 6 = 0
P4        10    4    10 - 4 = 6
P1        14    5    14 - 5 = 9
P3        15    2    15 - 2 = 13

Average Turnaround Time = (6 + 10 + 14 + 15) / 4 = 45 / 4 = 11.25
Average Waiting Time = (0 + 6 + 9 + 13) / 4 = 28 / 4 = 7

So, the Average Turnaround Time is 11.25 and the Average Waiting Time is 7.
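The result can be cross-checked with a small FCFS simulator. This is a minimal sketch: it services processes strictly in arrival order (ties broken by listing order, so P2 before P4), and the tuple layout `(name, burst, arrival)` is just our own convention.

```python
def fcfs(processes):
    """processes: list of (name, burst, arrival); returns name -> (CT, TAT, WT)."""
    order = sorted(processes, key=lambda p: p[2])   # first come, first served
    time, stats = 0, {}
    for name, burst, arrival in order:
        time = max(time, arrival) + burst           # CPU may idle until arrival
        tat = time - arrival                        # turnaround = completion - arrival
        stats[name] = (time, tat, tat - burst)      # (CT, TAT, WT = TAT - BT)
    return stats

stats = fcfs([("P1", 5, 1), ("P2", 6, 0), ("P3", 2, 2), ("P4", 4, 0)])
avg_tat = sum(s[1] for s in stats.values()) / len(stats)
avg_wt = sum(s[2] for s in stats.values()) / len(stats)
print(avg_tat, avg_wt)    # 11.25 7.0
```

Python's `sorted` is stable, so processes with equal arrival times keep their listed order.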
e) Consider the page reference string: 4, 6, 7, 8, 4, 6, 9, 6, 7, 8, 4, 6, 7, 9.
The number of frames is 3. Show the page trace and calculate page faults for the following
page replacement schemes:
i) FIFO (First-In-First-Out)
With 3 frames, every reference is a page fault except the second reference to page 6 (the
eighth reference), which is a hit.
Summary: 13 page faults out of 14 references.
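The FIFO page trace is easiest to verify mechanically. A minimal simulator (the function name and structure are our own):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:              # page fault
            faults += 1
            if len(frames) == n_frames:     # frames full: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [4, 6, 7, 8, 4, 6, 9, 6, 7, 8, 4, 6, 7, 9]
print(fifo_faults(refs, 3))   # 13 page faults out of 14 references
```

The deque records insertion order, so the page evicted is always the one that entered memory first.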
Q-4
a) Explain the SSTF Disk Scheduling Algorithm.
The Shortest Seek Time First (SSTF) is a disk scheduling algorithm that selects the disk I/O
request that is closest to the current position of the disk head. This algorithm reduces the seek
time by minimizing the distance the head needs to move to satisfy the next request. The basic
idea is to choose the request that has the shortest seek time from the current head position.
Working: From the current head position, the pending request with the minimum seek
distance is selected and serviced. The head moves to that track, and the selection repeats
over the remaining requests until the queue is empty.
Example:
If the current head position is at track 25, and the request queue contains the tracks 68, 172,
4, 178, 130, 40, 118, and 136, the algorithm first selects track 40 (distance 15, the shortest
from 25) and then continues in the same manner from each new head position.
Advantages:
Efficient: It minimizes the movement of the disk arm by servicing the closest request
first.
Reduced Seek Time: In comparison to algorithms like FCFS, SSTF can significantly
reduce the average seek time.
Disadvantages:
Starvation: Requests far from the current head position may not be serviced for a
long time if there are continuous requests closer to the head, leading to starvation for
some requests.
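The example above can be worked through with a short SSTF simulation; this is a sketch in our own notation, not part of the original answer.

```python
def sstf(head, requests):
    pending, order, moved = list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))  # closest track wins
        moved += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, moved

order, moved = sstf(25, [68, 172, 4, 178, 130, 40, 118, 136])
print(order)   # [40, 68, 118, 130, 136, 172, 178, 4]
print(moved)   # 327 tracks total
```

Note how track 4 is serviced last even though it was close to the start: nearer requests kept arriving "in front" of it, which is exactly the starvation risk described above.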
b) Define the following terms:
i) Logical Address: A logical address (also called a virtual address) is the address generated
by the CPU during a program's execution. These addresses are part of the virtual memory
system and are not tied to physical memory locations. The operating system maps logical
addresses to physical addresses via a process known as address translation. The logical
address is used by a process to access memory, and the system's memory manager translates
this into a physical address in the actual memory hardware.
ii) Physical Address: A physical address refers to an actual location in the physical memory
(RAM). It is the address that corresponds to a specific cell in physical memory. When the
CPU issues a logical address, the memory management unit (MMU) converts it into the
corresponding physical address. The physical address is the address the hardware uses to
access memory.
c) Explain the Resource Allocation Graph.
A Resource Allocation Graph (RAG) is a directed graph that describes the allocation state of
a system: processes and resources are represented as nodes, and edges represent requests and
assignments.
How It Works:
1. If a process requests a resource, a request edge is drawn from the process node to the
resource node.
2. If the resource is allocated to the process, an assignment edge is drawn from the
resource node to the process node.
3. The graph is used to detect deadlocks by identifying cycles. If a cycle exists in the
graph, it indicates a potential deadlock situation where processes are waiting for
resources that are held by other processes in the cycle.
Example:
Consider two processes (P1, P2) and two resource types (R1, R2). If R1 is allocated to P1,
P1 requests R2, R2 is allocated to P2, and P2 requests R1, the graph looks like:
R1 → P1 (allocation)
P1 → R2 (request)
R2 → P2 (allocation)
P2 → R1 (request)
This forms the cycle P1 → R2 → P2 → R1 → P1, indicating a deadlock.
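Cycle detection in such a graph can be sketched with a depth-first search. The adjacency-list encoding and node names below are our own illustration of the deadlocked two-process case.

```python
def has_cycle(graph):
    """graph: node -> list of successor nodes; detects a cycle via DFS."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY                      # node is on the current DFS path
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:  # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK                     # fully explored, no cycle through n
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock cycle
rag = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(rag))                        # True

# Drop P2's request for R1 and the cycle disappears
print(has_cycle({"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": []}))  # False
```

For single-instance resources a cycle implies deadlock; with multiple instances per resource type, a cycle only indicates a *possible* deadlock.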
d) Differentiate between Preemptive and Non-Preemptive Scheduling.
Preemptive Scheduling and Non-Preemptive Scheduling are two types of CPU scheduling
algorithms that differ in how they handle process execution and context switching.
Preemptive Scheduling: The CPU can be taken away from a running process before it
finishes, typically when a higher-priority process arrives or its time quantum expires (e.g.,
Round Robin, SRTF). It improves responsiveness but adds context-switch overhead.
Non-Preemptive Scheduling: Once the CPU is allocated, a process keeps it until it terminates
or blocks for I/O (e.g., FCFS, non-preemptive SJF). It is simpler and has less overhead, but a
long-running process can delay all others.
e) Calculate the total head movement using the FCFS disk scheduling algorithm.
Given:
Steps:
Track Movement:
So, the total head movement for FCFS disk scheduling is 723 tracks.
Q-5
An interrupt is a signal from hardware or software that indicates an event needing immediate
attention; the CPU suspends its current work, saves its state, and executes an interrupt
service routine.
Types of Interrupts: hardware interrupts (raised by devices such as the keyboard or disk),
software interrupts (traps and system calls), and timer interrupts (raised by the system clock).
Importance:
Efficiency: Interrupts allow the CPU to respond promptly to external events, ensuring
that the system is responsive to various hardware devices or system events.
Multitasking: Interrupts enable multitasking by allowing the system to shift attention
from one process to another based on the priority of events.
A semaphore is an integer variable, accessed only through the atomic operations wait (P) and
signal (V), that is used for process synchronization.
Types of Semaphores: binary semaphores (values 0 and 1, acting as a lock) and counting
semaphores (any non-negative value, managing multiple instances of a resource).
Example: A counting semaphore initialized to 3 allows at most three processes to use a pool
of three identical printers at the same time.
Importance:
Semaphores are crucial for ensuring that shared resources are used in a safe,
controlled manner, avoiding conflicts and ensuring synchronization between
processes.
Fragmentation refers to the condition where the storage space is inefficiently utilized,
resulting in small unused gaps scattered throughout the storage. This happens in both memory
and disk storage systems, making it difficult for large blocks of data to be allocated.
Types of Fragmentation:
1. External Fragmentation:
o Occurs when free memory or storage is broken into small, scattered pieces.
These gaps of free space are too small to accommodate large blocks of data,
even though the total free space might be sufficient.
o Common in systems using dynamic memory allocation or disk space
allocation where fixed-sized blocks of memory or data are continuously
allocated and deallocated.
o Solution: Compaction or garbage collection is typically used to rearrange
memory blocks, consolidating free space into larger contiguous areas.
2. Internal Fragmentation:
o Occurs when allocated memory or disk blocks are larger than the data they
store, leaving unused space within each block.
o Common in systems where memory is allocated in fixed-sized blocks (e.g., in
paging systems), resulting in unused portions within each allocated page or
block.
o Solution: Better memory allocation schemes like paging with dynamic
block sizes or using more flexible memory allocation strategies.
Example:
In memory management, if a program requests 80 bytes of memory and the system allocates
100 bytes (the next available block), the unused 20 bytes are considered internal
fragmentation. However, if there are multiple scattered free spaces across memory that are
too small to fit the program’s 80 bytes, it results in external fragmentation.
Importance:
Fragmentation reduces the overall efficiency of a system by wasting space, which can lead to
slower performance due to the need for additional memory or disk space management
techniques. Efficient memory management is crucial to minimize fragmentation and enhance
system performance.
OS PAPER-2
Q-1
a) Define Process.
A process is a program in execution. It is an active entity, which includes the program code,
its current activity, and the resources allocated to it, such as memory, CPU time, and
input/output devices. A process goes through various states like new, ready, running, waiting,
and terminated during its lifecycle.
b) What is Context Switch?
A context switch is the process of saving the state of a currently running process and loading
the state of the next process to be executed. It involves saving the process's context (such as
CPU registers, program counter, etc.) and restoring the context of the new process, enabling
multitasking in an operating system.
A page frame is a fixed-size block of physical memory in which a page of virtual memory is
stored. The operating system divides physical memory into page frames of the same size, and
each page of virtual memory is mapped to a page frame in physical memory.
Rotational latency refers to the time it takes for the desired disk sector to rotate into position
under the disk's read/write head. It depends on the disk's rotation speed and is a key factor in
determining disk access time, especially in mechanical hard drives.
A critical section is a part of the code or a set of instructions that accesses shared resources
(e.g., memory, files, data) and must not be executed concurrently by more than one process.
Proper synchronization is required to prevent race conditions in the critical section.
i) Define Deadlock.
A deadlock is a situation in which a set of processes are blocked because each holds a
resource and waits for a resource held by another process in the set, so none can proceed.
j) State the functions of an Operating System.
The operating system (OS) manages hardware resources, provides an interface for user
interaction, ensures security and process synchronization, and handles tasks like memory
management, file management, and process scheduling to ensure efficient and fair resource
allocation.
Q-2
a) Justify: the Operating System acts as a Manager.
Roles of OS as a Manager:
Conclusion:
The OS acts as a bridge between user and hardware, efficiently managing tasks, resources,
and users, thus justifying its role as a manager.
b) Differentiate between the Short-Term and Medium-Term Schedulers.
Scheduling is the process by which the operating system decides the order of execution of
processes in the CPU. It improves system performance and resource utilization.
Feature        Short-Term Scheduler (CPU Scheduler)        Medium-Term Scheduler
Conclusion:
The short-term scheduler handles CPU allocation, while the medium-term scheduler
manages memory and system load by controlling process suspension and resumption.
c) Explain the Process Control Block with a diagram.
Process Control Block (PCB) is a data structure maintained by the OS for every process. It
contains all the information about a process.
Diagram of PCB:
+-----------------------+
| Process ID (PID) |
+-----------------------+
| Process State |
+-----------------------+
| Program Counter |
+-----------------------+
| CPU Registers |
+-----------------------+
| Memory Management Info|
+-----------------------+
| Accounting Info |
+-----------------------+
| I/O Status Info |
+-----------------------+
Explanation of Fields:
Conclusion:
PCB is essential for context switching and process management.
d) Differentiate between Multiprogramming and Multiprocessing.
Conclusion:
While multiprogramming maximizes CPU usage using one CPU, multiprocessing boosts
performance using multiple CPUs.
e) Draw and Explain the Process State Diagram.
      +---------+
      |   New   |
      +---------+
           |
           v
      +---------+   I/O or event completion   +-----------+
      |  Ready  | <-------------------------- |  Waiting  |
      +---------+                             +-----------+
        |     ^                                     ^
        v     | preempted                           | I/O or event wait
      +---------+                                   |
      | Running | ----------------------------------+
      +---------+
           |  exit
           v
      +------------+
      | Terminated |
      +------------+
Explanation of States:
State Transitions:
Conclusion:
This model helps in managing and tracking the lifecycle of a process efficiently.
Q-3
a) Compare Internal and External Fragmentation.
Aspect Internal Fragmentation External Fragmentation
Conclusion:
Both types reduce memory efficiency, but occur due to different allocation strategies.
b) Calculate Turnaround Time and Waiting Time for the following processes using the SJF
(Shortest Job First) scheduling algorithm. All processes arrive at time 0.

Process   Burst Time
P1        10
P2        1
P3        2
P4        1
P5        5
Gantt Chart:
| P2 | P4 | P3 | P5 | P1 |
0 1 2 4 9 19
ii) Calculate Turnaround Time (TAT) and Waiting Time (WT)

Process   BT   CT   TAT   WT
P2        1    1    1     0
P4        1    2    2     1
P3        2    4    4     2
P5        5    9    9     4
P1        10   19   19    9

Average Turnaround Time = (1 + 2 + 4 + 9 + 19) / 5 = 35 / 5 = 7
Average Waiting Time = (0 + 1 + 2 + 4 + 9) / 5 = 16 / 5 = 3.2
c) What is a Semaphore?
A semaphore is an integer variable, accessed only through two atomic operations, used to
synchronize processes.
Types of Semaphores: binary semaphores (values 0 and 1) and counting semaphores (any
non-negative integer).
Operations:
Wait (P): Decrements the semaphore value. If it's negative, the process is blocked.
Signal (V): Increments the semaphore value. If processes are blocked, one is
unblocked.
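The wait/signal discipline can be sketched with a counting semaphore initialized to 2: however many threads run, no more than two are ever inside the guarded region at once. The worker/peak bookkeeping here is our own illustration.

```python
import threading

sem = threading.Semaphore(2)     # counting semaphore: 2 resource instances
in_use, peak = 0, 0
guard = threading.Lock()         # protects the bookkeeping counters

def worker():
    global in_use, peak
    sem.acquire()                # wait (P): blocks if both instances are taken
    with guard:
        in_use += 1
        peak = max(peak, in_use) # record the highest concurrency observed
    with guard:
        in_use -= 1
    sem.release()                # signal (V): wakes one blocked waiter, if any

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 2)                 # True: never more than 2 threads inside at once
```

With an initial value of 1 this degenerates into a binary semaphore, i.e., a mutex.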
d) What is Deadlock? Explain methods for handling deadlocks.
A deadlock is a condition where a group of processes are blocked, each waiting for a
resource held by the others, creating a circular wait situation.
1. Deadlock Prevention:
o Design the system in a way that one of the necessary conditions (mutual
exclusion, hold & wait, no preemption, circular wait) never holds.
o Example: Use resource hierarchy or preempt resources.
2. Deadlock Avoidance:
o Requires information about future requests.
o Banker's Algorithm is commonly used.
o System only grants resources if it leads to a safe state.
3. Deadlock Detection and Recovery:
o Allow deadlock to occur but detect it using a resource allocation graph or
wait-for graph.
o Recover by killing processes or preempting resources.
4. Ignore the Problem (Ostrich Algorithm):
o Used in systems where deadlocks are rare and cost of prevention is high.
o Example: Most desktop operating systems.
e) Explain different directory structures.
The directory structure organizes files in a hierarchical or linear fashion to manage data
efficiently.
1. Single-Level Directory:
o All files in the same directory.
o Simple but causes name conflicts and is hard to manage.
2. Two-Level Directory:
o Each user has a separate directory.
o Solves name conflicts but no grouping of files within user directories.
3. Tree-Structured Directory:
o Hierarchical directory structure.
o Allows subdirectories and better organization.
4. Acyclic Graph Directory:
o Allows sharing of files using links.
o Avoids cycles.
5. General Graph Directory:
o Allows full flexibility of file sharing with cycles.
o Requires garbage collection to handle cycles and dangling pointers.
Conclusion:
Directory structures enhance file management by providing logical organization, security,
and access control.
Q-4
a) Explain Linked Allocation.
Linked Allocation is a method of file storage allocation where each file is a linked list of
disk blocks. The directory stores the pointer to the first block of the file. Each block
contains a pointer to the next block and the file data.
Advantages:
No external fragmentation.
Easy to grow files dynamically.
Disadvantages:
Slow direct (random) access, since blocks must be followed sequentially from the start.
Each block loses some space to the pointer.
A damaged or lost pointer makes the rest of the file unreachable.
Diagram:
Directory → [Block 1] → [Block 4] → [Block 7] → [Block 10] → NULL
b) Compare Paging and Segmentation.

Aspect            Paging                    Segmentation
Logical Address   Page number + offset      Segment number + offset
c) Calculate the total head movement using the FCFS disk scheduling algorithm.
Given: Initial head position: 125. Request queue: 84, 145, 89, 168, 93, 128, 100, 68.
FCFS Order: The requests are serviced in exactly the order they appear in the queue.
Head Movements:
|125 - 84| = 41
|84 - 145| = 61
|145 - 89| = 56
|89 - 168| = 79
|168 - 93| = 75
|93 - 128| = 35
|128 - 100| = 28
|100 - 68| = 32
41 + 61 + 56 + 79 + 75 + 35 + 28 + 32 = 407 tracks
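The arithmetic above can be checked with a few lines of FCFS seek simulation (a sketch in our own notation):

```python
def fcfs_seek(head, requests):
    total = 0
    for track in requests:       # service requests strictly in queue order
        total += abs(track - head)
        head = track
    return total

print(fcfs_seek(125, [84, 145, 89, 168, 93, 128, 100, 68]))   # 407
```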
d) Explain different file structures.
File structures define how data is organized in a file. The common file structures are:
1. Byte Sequence:
o A stream of bytes with no structure.
o Used in UNIX and Linux systems.
o Application interprets the data.
2. Record Sequence:
o File is a sequence of fixed- or variable-size records.
o Useful for databases and tables.
3. Tree Structure:
o Records are organized in a tree or hierarchical format.
o Supports fast search and categorization.
Diagram Example:
+--------------+ +------------+ +-------------+
| File Header | --> | Record 1 | --> | Record 2 | --> ...
+--------------+ +------------+ +-------------+
Each structure is suited to different applications like text files, binary files, and databases.
e) Show the page trace and calculate page faults for the following reference string.
Reference String: 9, 2, 3, 4, 2, 5, 2, 6, 4, 5, 2, 5, 4, 3, 4, 2, 3, 9, 2, 3
FIFO (First-In-First-Out):
Step-by-step Simulation:
1. 9 → Page Fault
2. 2 → Page Fault
3. 3 → Page Fault
4. 4 → Page Fault
5. 2 → Hit
6. 5 → Page Fault (Evict 9)
7. 2 → Hit
8. 6 → Page Fault (Evict 2)
9. 4 → Hit
10. 5 → Hit
11. 2 → Page Fault (Evict 3)
12. 5 → Hit
13. 4 → Hit
14. 3 → Page Fault (Evict 4)
15. 4 → Page Fault (Evict 5)
16. 2 → Hit
17. 3 → Hit
18. 9 → Page Fault (Evict 6)
19. 2 → Hit
20. 3 → Hit
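The trace implies 4 page frames (the first eviction happens only after four distinct pages are loaded); a FIFO simulation under that assumption reproduces the 10 page faults visible in the steps above.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:              # page fault
            faults += 1
            if len(frames) == n_frames:     # frames full: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [9, 2, 3, 4, 2, 5, 2, 6, 4, 5, 2, 5, 4, 3, 4, 2, 3, 9, 2, 3]
print(fifo_faults(refs, 4))   # 10 page faults, 10 hits
```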
Q-5
a) Spooling
Spooling (Simultaneous Peripheral Operations On-Line) is a technique in which data destined
for a slow device is first buffered on disk or in memory, so the CPU can continue working
while the device consumes the data at its own pace.
Example:
When multiple documents are sent to a printer, they are stored in a spool (disk file or
memory) and printed one by one.
b) The Dining Philosophers Problem
Problem Setup:
Five philosophers sit around a table with one chopstick between each.
Each philosopher must pick up two chopsticks (shared resources) to eat.
They alternate between thinking and eating.
Issues Involved: deadlock (every philosopher holds one chopstick and waits forever for the
second) and starvation (a philosopher may never get both chopsticks).
Solutions: allow at most four philosophers at the table at once, pick up chopsticks only when
both are available (inside a critical section), or use an asymmetric order in which odd-
numbered philosophers pick up the left chopstick first and even-numbered ones the right.
Features:
Example:
If a process needs 100 KB and memory has 120 KB free in pieces of 60 KB each, the process
can't be allocated memory despite enough total free space.
OS PAPER-3
Q-1
a) What are System Programs?
System programs provide an environment for program development and execution. They
include file management, editors, compilers, loaders, etc.
b) What is Multiprogramming?
Multiprogramming is a technique in which several programs are kept in main memory at the
same time, and the CPU switches to another program whenever the running one waits for
I/O, so the CPU is kept busy.
1. Program Execution
2. File System Manipulation
e) Define the Waiting State.
A process is in the waiting state when it is not ready for execution until an event occurs or a
resource becomes available.
f) What is the Purpose of CPU Scheduling?
CPU scheduling determines which process gets the CPU next, aiming to optimize CPU
utilization, throughput, and response time.
1. Increased throughput
2. Fault tolerance and reliability
3. Faster processing
j) What is Fragmentation?
Fragmentation is the wastage of memory that occurs when memory blocks are not used
efficiently. It can be internal or external.
Q-2
Contents of PCB:
Internal Fragmentation:
Occurs when fixed-size memory blocks are allocated, and the process doesn't use the entire
block.
Example: If block size is 8KB and process uses 6KB, 2KB is wasted.
External Fragmentation:
Occurs when free memory is scattered in small blocks between allocated memory blocks.
Example: Multiple small free spaces (2KB, 3KB, 4KB) can't satisfy a request of 9KB even
though 9KB is available in total.
Given:

Process   Arrival Time   Burst Time
P1        0 ms           4 ms
P2        2 ms           5 ms
P3        5 ms           6 ms
P4        6 ms           2 ms

Gantt Chart:
| P1 | P2 | P4 | P2 | P3 |
0    2    6    8    14   20
Completion Times:
P1 = 2
P2 = 14
P3 = 20
P4 = 8
Turnaround Times (TAT = CT - AT):
P1 = 2 - 0 = 2
P2 = 14 - 2 = 12
P3 = 20 - 5 = 15
P4 = 8 - 6 = 2
Waiting Times (WT = TAT - BT):
P1 = 2 - 4 = 0
P2 = 12 - 5 = 7
P3 = 15 - 6 = 9
P4 = 2 - 2 = 0
Average Waiting Time = (0 + 7 + 9 + 0) / 4 = 4 ms
Reference String:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Frames:
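The number of frames was not preserved here. Assuming the common 3-frame setup for this classic reference string (an assumption, not stated in the paper), a FIFO simulation gives 15 page faults:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:              # page fault
            faults += 1
            if len(frames) == n_frames:     # frames full: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 page faults with 3 frames
```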
Q-3
b) Banker's Algorithm
Given:

Process   Allocation (A B C)   Maximum (A B C)
P0        0 1 0                7 5 3
P1        2 0 0                3 2 2
P2        3 0 2                9 0 2
P3        2 1 1                2 2 2
P4        0 0 2                4 3 3

Available: A = 3, B = 3, C = 2

Need = Maximum - Allocation:

Process   Need (A B C)
P0        7 4 3
P1        1 2 2
P2        6 0 0
P3        0 1 1
P4        4 3 1

Applying the safety algorithm with Work = (3, 3, 2): P1's need (1 2 2) can be satisfied first,
then P3 (0 1 1), P4 (4 3 1), P0 (7 4 3), and finally P2 (6 0 0). The system is therefore in a
safe state, with safe sequence <P1, P3, P4, P0, P2>.
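The safety check itself can be sketched in a few lines; this is a minimal greedy version of the Banker's safety algorithm, not production code.

```python
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]   # process i finishes
                finished[i] = True                            # and releases its resources
                sequence.append(f"P{i}")
                progressed = True
        if not progressed:
            return False, sequence        # no process can proceed: unsafe state
    return True, sequence

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
safe, seq = is_safe([3, 3, 2], allocation, need)
print(safe, seq)    # True ['P1', 'P3', 'P4', 'P0', 'P2']
```

Other safe sequences exist (e.g., <P1, P3, P4, P2, P0>); any one of them is enough to prove the state safe.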
Deadlock Condition:
If there exists a circular wait in the graph (e.g., P1 → R1 → P2 → R2 → P1), then deadlock
may occur.
Critical Section: A part of the program where the process accesses shared resources
(variables, files, etc.).
Problem:
Multiple processes trying to enter their critical sections may lead to race conditions, data
inconsistency, or deadlocks.
Requirements:
Q-4
a) Calculate the total head movement using the FCFS disk scheduling algorithm.
Given:
Disk tracks: 0 to 99
Initial head position: 49
Previous request: 90 (direction not relevant in FCFS)
Request queue (in FIFO order): 86, 47, 91, 77, 94, 50, 02, 75, 30
Steps (head movement for each request, serviced in FIFO order):
|49 - 86| = 37
|86 - 47| = 39
|47 - 91| = 44
|91 - 77| = 14
|77 - 94| = 17
|94 - 50| = 44
|50 - 02| = 48
|02 - 75| = 73
|75 - 30| = 45
Total head movement = 37 + 39 + 44 + 14 + 17 + 44 + 48 + 73 + 45 = 361 tracks
b) Explain the role of the Operating System.
Role of OS:
The Operating System (OS) acts as an intermediary between users and hardware. It
manages hardware resources, provides a user interface, and ensures that different programs
and users operate efficiently and securely.
1. Process Management:
o Scheduling, creation, termination of processes
o Ensures synchronization and communication
2. Memory Management:
o Allocates and deallocates memory space as needed
o Maintains memory hierarchy, paging, segmentation
3. File System Management:
o Handles storage, retrieval, and naming of files
o Provides directories, permissions, file sharing
4. Device Management:
o Manages device communication via drivers
o Provides buffering, caching, spooling
5. Security and Protection:
o Prevents unauthorized access to resources
o Ensures data integrity and user authentication
6. User Interface:
o Provides command-line or graphical interface
o Helps users interact with the system easily
c) Explain Contiguous Allocation.
Contiguous Allocation:
Each file/process occupies a set of contiguous blocks in memory.
Advantages:
Disadvantages:
d) Define Deadlock.
Deadlock:
A situation where a group of processes are blocked, each waiting for a resource held by
another, such that no process can proceed.
e) Explain Segmentation.
Segmentation:
A memory management technique where each process is divided into logical segments like
code, data, stack, etc. Each segment has its own base and limit.
Key Points:
Diagram:
Logical View (Process Segments): Segment Table (for mapping):
+-----------+ +--------------------------+
| Segment 0 | Code | Segment | Base | Limit |
| Segment 1 | Data | 0 | 1000 | 400 |
| Segment 2 | Stack | 1 | 1400 | 200 |
+-----------+ | 2 | 1600 | 300 |
Q-5
a) Context Switching
Context switching is the process of storing the state of a currently running process so that the
CPU can switch to another process. This is essential in multitasking operating systems where
multiple processes share a single CPU.
Key Points:
Involves saving the process control block (PCB) of the current process.
The CPU loads the PCB of the next process to resume execution.
Causes overhead as no useful work is done during the switch.
Helps achieve concurrent execution and CPU utilization.
b) Deadlock
A deadlock occurs in a system when a group of processes are stuck in a state where each
process is waiting for a resource held by another, and none can proceed.
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
Deadlock results in system inaction and may require detection, prevention, avoidance, or
recovery techniques to handle it.
c) Semaphore
Types:
1. Counting Semaphore – Can have any non-negative integer value, used for managing
access to multiple instances of a resource.
2. Binary Semaphore – Has only two values (0 and 1), acts like a lock.
Operations:
OS PAPER-4
Q-1
a) LRU Page Replacement
LRU (Least Recently Used) is a page replacement algorithm that replaces the page that has
not been used for the longest period of time. It aims to improve the efficiency of memory
usage by keeping the most recently accessed pages in memory.
b) Context Switch
A context switch is the process of saving the state (context) of a currently running process
and loading the state of the next process to be executed. This occurs in multitasking systems
to allow efficient CPU utilization.
c) Page Frame
A page frame is a fixed-size block of physical memory that holds a page of data in a system
that uses paging. The size of the page frame matches the size of a page in virtual memory.
Seek time is the time it takes for the disk's read/write head to move to the track where data is
stored or requested. It is a crucial factor in determining the overall disk I/O performance.
f) Compaction
Compaction is a technique for reducing external fragmentation by shuffling memory
contents so that all free memory is collected into one large contiguous block.
g) Belady's Anomaly
Belady's Anomaly occurs in certain page replacement algorithms, specifically FIFO, where
increasing the number of page frames can actually lead to an increase in the number of page
faults.
i) Safe State
A safe state is a situation in which there exists at least one sequence of processes that can
complete without causing deadlock, meaning each process can eventually obtain the
resources it needs.
j) Starvation
Starvation occurs when a process is indefinitely postponed because the system always grants
resources to other processes. It is a type of resource allocation problem where a process
cannot proceed because it never gets the required resources.
Q-2
Types of OS Structures:
1. Monolithic Structure:
o All OS services run in kernel mode, without strict separation.
o It is efficient but harder to maintain and modify because changes to one part
can affect the whole system.
o Example: Linux, early UNIX systems.
2. Layered Structure:
o The OS is divided into multiple layers, each providing specific services.
o Lower layers provide fundamental services, and higher layers provide user-
level services.
o Example: THE system, some aspects of modern UNIX.
3. Microkernel Structure:
o The kernel only provides essential services like communication and basic
process management.
o Additional services like file systems and device drivers run as user-level
processes.
o Example: Minix, modern versions of macOS and Windows.
4. Hybrid System:
o A combination of monolithic and microkernel designs, where some services
run in user space, while critical ones run in kernel space.
o Example: Windows NT.
Scheduling refers to the method by which the operating system decides which process to
execute at any given time. It ensures that the CPU is efficiently utilized and that processes are
executed in a timely manner.
Long-Term Scheduler (Job Scheduler):
Function: Decides which process should be admitted to the system (from the queue
of jobs waiting to enter the ready queue).
Frequency: It runs less frequently (seconds or minutes) as compared to the short-term
scheduler.
Criteria: Based on process type, memory requirements, and resource availability.
Example: In a batch processing system, it might admit jobs based on resource
availability.
Comparison:
The long-term scheduler controls the degree of multiprogramming and runs infrequently,
while the short-term (CPU) scheduler selects the next process from the ready queue and runs
very frequently (every few milliseconds).
Round Robin (RR) is a preemptive CPU scheduling algorithm where each process is
assigned a fixed time slot (quantum) in a circular order. If a process does not complete within
its quantum, it is placed at the end of the queue, and the next process gets the CPU.
Example (Time Quantum = 4):
Process | Arrival Time | Burst Time
P1      | 0            | 8
P2      | 1            | 4
P3      | 2            | 9
P4      | 3            | 5
Execution Order (Gantt Chart):
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0 4 8 12 16 20 24 25 26
Average Turnaround Time and Waiting Time:
Completion times: P1 = 20, P2 = 8, P3 = 26, P4 = 25.
Turnaround Time (TAT = Completion - Arrival): P1 = 20, P2 = 7, P3 = 24, P4 = 22.
Waiting Time (WT = TAT - Burst): P1 = 12, P2 = 3, P3 = 15, P4 = 17.
Average TAT = (20 + 7 + 24 + 22) / 4 = 18.25; Average WT = (12 + 3 + 15 + 17) / 4 = 11.75.
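The Round Robin schedule above can be reproduced with a short simulation. This is a minimal sketch in Python; the function name and the dictionary-based interface are illustrative, not from the paper.

```python
from collections import deque

def round_robin(arrival, burst, quantum):
    """Simulate Round Robin scheduling; returns {process: completion time}."""
    remaining = dict(burst)
    order = sorted(arrival, key=arrival.get)   # processes ordered by arrival time
    ready, done = deque(), {}
    t = i = 0
    while len(done) < len(burst):
        while i < len(order) and arrival[order[i]] <= t:   # admit arrived processes
            ready.append(order[i])
            i += 1
        if not ready:                                      # CPU idle: jump ahead
            t = arrival[order[i]]
            continue
        p = ready.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        while i < len(order) and arrival[order[i]] <= t:   # arrivals during this slice
            ready.append(order[i])
            i += 1
        if remaining[p] > 0:
            ready.append(p)          # unfinished: back of the queue
        else:
            done[p] = t              # record completion time
    return done

print(round_robin({'P1': 0, 'P2': 1, 'P3': 2, 'P4': 3},
                  {'P1': 8, 'P2': 4, 'P3': 9, 'P4': 5}, 4))
# P1: 20, P2: 8, P3: 26, P4: 25 — matching the Gantt chart
```

Note that processes arriving during a time slice join the queue before the preempted process re-enters, which is the convention the Gantt chart above follows.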
Operations on Semaphores:
wait(P): Decrements the semaphore value. If the value is negative, the process waits.
signal(V): Increments the semaphore value. If there are processes waiting, one is
unblocked.
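A counting semaphore can be sketched with Python's threading module. This is an illustrative example, not part of the paper; the pool size of 2 and the worker setup are assumptions.

```python
import threading
import time

pool = threading.Semaphore(2)   # counting semaphore: 2 resource instances
active = 0                      # how many workers currently hold the resource
peak = 0                        # highest concurrency observed
lock = threading.Lock()         # protects the two counters above

def worker():
    global active, peak
    pool.acquire()              # wait(P): blocks while both instances are taken
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)            # simulate using the resource
    with lock:
        active -= 1
    pool.release()              # signal(V): wakes one waiting thread, if any

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                     # never exceeds 2, the semaphore's initial value
```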
Diagram:
| Process P1 | Process P2 | Process P3 | Free Space |
|------------|------------|------------|------------|
| 0-10 | 11-20 | 21-30 | 31-50 |
Steps:
Advantages:
Disadvantages:
The Critical Section Problem refers to the issue of managing the shared resources that
multiple processes or threads need to access concurrently. A critical section is a segment of a
program that accesses shared resources (e.g., variables, memory, files) that must not be
accessed simultaneously by more than one process or thread to avoid data inconsistency.
1. Mutual Exclusion: Only one process can execute in the critical section at any time.
2. Progress: If no process is executing in the critical section and one or more processes
wish to enter, the selection of the next process to enter cannot be postponed indefinitely.
3. Bounded Waiting: A process should not be forced to wait indefinitely to enter the
critical section.
Various synchronization mechanisms like semaphores, locks, mutexes, and monitors are
used to solve this problem.
Given the following processes with burst times and arrival times:
Process | Burst Time | Arrival Time
P1      | 3          | 3
P2      | 3          | 6
P3      | 4          | 0
P4      | 5          | 2
i) Gantt Chart:
| P3 | P1 | P4 | P2 |
0 4 7 12 15
ii) Average Turnaround Time and Average Waiting Time
P3: TAT = 4 - 0 = 4, WT = 4 - 4 = 0
P1: TAT = 7 - 3 = 4, WT = 4 - 3 = 1
P4: TAT = 12 - 2 = 10, WT = 10 - 5 = 5
P2: TAT = 15 - 6 = 9, WT = 9 - 3 = 6
Average TAT = (4 + 4 + 10 + 9) / 4 = 6.75; Average WT = (0 + 1 + 5 + 6) / 4 = 3.
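The per-process figures above can be checked with a short script (an illustrative sketch; the dictionaries simply restate the table and Gantt chart):

```python
# Completion times read off the Gantt chart; arrival and burst from the table.
completion = {'P3': 4, 'P1': 7, 'P4': 12, 'P2': 15}
arrival    = {'P1': 3, 'P2': 6, 'P3': 0, 'P4': 2}
burst      = {'P1': 3, 'P2': 3, 'P3': 4, 'P4': 5}

tat = {p: completion[p] - arrival[p] for p in completion}  # turnaround = completion - arrival
wt  = {p: tat[p] - burst[p] for p in completion}           # waiting = turnaround - burst

print(sum(tat.values()) / 4)  # 6.75
print(sum(wt.values()) / 4)   # 3.0
```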
A deadlock occurs in a multi-process system when two or more processes are unable to
proceed because each is waiting for a resource held by another. In simple terms, processes are
stuck in a state of mutual waiting, and none can proceed.
1. Mutual Exclusion: Resources are limited and can only be used by one process at a
time.
2. Hold and Wait: Processes hold at least one resource and wait for others.
3. No Preemption: Resources cannot be forcibly taken away from processes holding
them.
4. Circular Wait: A set of processes exists such that each process is waiting for a
resource held by the next process in the set.
Deadlock Prevention/Recovery:
Deadlock can be prevented by ensuring that at least one of the four conditions cannot hold,
for example by requiring processes to request all resources at once (breaking hold and wait)
or by imposing a global ordering on resource requests (breaking circular wait). Recovery
typically involves terminating one or more deadlocked processes, or preempting resources
and rolling the affected processes back.
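Detecting a circular wait amounts to finding a cycle in a wait-for graph. The sketch below is illustrative Python (the graph representation, a mapping from each process to the set of processes it waits on, is an assumption):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph: {process: set of processes it waits on}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:            # back edge: circular wait
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 on P3, P3 on P1: a circular wait, hence deadlock.
print(has_deadlock({'P1': {'P2'}, 'P2': {'P3'}, 'P3': {'P1'}}))  # True
print(has_deadlock({'P1': {'P2'}, 'P2': {'P3'}, 'P3': set()}))   # False
```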
The File System Access Methods define how a process can access data stored in a file.
These methods are crucial for reading, writing, and organizing files.
Types of Access Methods:
1. Sequential Access: Data is read or written in a sequential manner, from the beginning
to the end. This is the simplest access method. Example: Text files.
2. Direct (Random) Access: Data can be read or written at any location within the file.
The system calculates the address of the data using an index or pointer. Example:
Database files.
3. Indexed Access: An index is maintained that maps logical data addresses to physical
locations. This method allows efficient data retrieval. Example: File systems using
index blocks.
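The difference between sequential and direct access can be illustrated with file offsets. This is a Python sketch; the 8-byte fixed-record layout and file name are made-up examples.

```python
import os
import tempfile

# Build a small file of fixed-size records (hypothetical 8-byte records).
path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    for i in range(5):
        f.write(f"rec{i:05d}".encode())   # each record is exactly 8 bytes

RECORD = 8
with open(path, "rb") as f:
    first = f.read(RECORD)                # sequential access: read from the start
    f.seek(3 * RECORD)                    # direct access: jump straight to record 3
    third = f.read(RECORD)

print(first.decode(), third.decode())     # rec00000 rec00003
```

With fixed-size records, direct access needs only an offset computation; an indexed method would instead look the offset up in a separate index structure.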
Paging is a memory management scheme that eliminates the need for contiguous allocation
of physical memory. It divides physical memory into fixed-size blocks called frames, and
divides logical memory into blocks of the same size called pages.
The process is divided into pages, and physical memory is divided into frames.
Each page is mapped to a frame, and a page table maintains this mapping.
This scheme eliminates external fragmentation and allows non-contiguous
memory allocation.
Page Table:
The page table keeps track of the frame location for each page, providing a mapping from
logical pages to physical frames.
Advantages of Paging:
1. No external fragmentation, since any free frame can hold any page.
2. Allows non-contiguous memory allocation, simplifying allocation and swapping.
Disadvantages of Paging:
1. Internal fragmentation can still occur (if the last page is only partially used).
2. Extra overhead in maintaining page tables.
Diagram:
Logical Memory (Pages) -> Page Table -> Physical Memory (Frames)
Page 0 -> Frame 2
Page 1 -> Frame 4
Page 2 -> Frame 1
Page 3 -> Frame 0
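The translation the page table performs can be sketched in a few lines. The 1024-byte page size is an assumption for illustration; the page-to-frame mapping follows the diagram above.

```python
PAGE_SIZE = 1024                        # assumed page size in bytes
page_table = {0: 2, 1: 4, 2: 1, 3: 0}  # page -> frame, as in the diagram

def translate(logical_addr):
    """Map a logical address to a physical address via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(3000))  # page 2, offset 952 -> frame 1 -> 1*1024 + 952 = 1976
```

The offset is unchanged by translation; only the page number is replaced by a frame number, which is why page and frame sizes must match.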
Q-4
In SSTF (Shortest Seek Time First) disk scheduling, the disk head moves to the track that is
closest to its current position, minimizing the seek time.
Given: request queue = 82, 170, 43, 140, 24, 16, 190; initial head position = 50.
1. The head starts at 50; the closest request is 43 (distance = 7), so move to 43.
2. From 43, the closest request is 24 (distance = 19); move to 24.
3. From 24, the closest request is 16 (distance = 8); move to 16.
4. From 16, the closest request is 82 (distance = 66); move to 82.
5. From 82, the closest request is 140 (distance = 58); move to 140.
6. From 140, the closest request is 170 (distance = 30); move to 170.
7. From 170, the closest request is 190 (distance = 20); move to 190.
From 50 to 43 = 7
From 43 to 24 = 19
From 24 to 16 = 8
From 16 to 82 = 66
From 82 to 140 = 58
From 140 to 170 = 30
From 170 to 190 = 20
Total head movement = 7 + 19 + 8 + 66 + 58 + 30 + 20 = 208 tracks.
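The SSTF walk above can be checked programmatically (a minimal sketch; the function name is illustrative):

```python
def sstf(head, requests):
    """Service requests shortest-seek-first; returns (service order, total movement)."""
    pending, order, total = list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda track: abs(track - head))  # closest track wins
        total += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, total

order, total = sstf(50, [82, 170, 43, 140, 24, 16, 190])
print(order, total)  # [43, 24, 16, 82, 140, 170, 190] 208
```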
A Job Control Block (JCB) is used by the operating system to manage and control the
execution of a process or job. It contains important information about the process such as its
state, priority, CPU time, and resources.
Components of a JCB:
Diagram:
+---------------------+
| Job Control Block   |
+---------------------+
| Job ID              |
| Process State       |
| CPU Time            |
| Priority            |
| Memory Required     |
| I/O Operations      |
| Resource Allocation |
+---------------------+
i) Optimal Page Replacement Algorithm:
The Optimal page replacement algorithm replaces the page that will not be used for the
longest period in the future.
Steps:
Page Faults = 7
ii) FIFO (First-In, First-Out) Page Replacement Algorithm:
In FIFO, the page that has been in memory the longest is replaced.
Steps:
Page Faults = 7
iii) LRU (Least Recently Used) Page Replacement Algorithm:
In LRU, the page that has not been used for the longest period of time is replaced.
Steps:
Page Faults = 7
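All three policies can be compared with one fault counter. This is a sketch; the paper's reference string is not shown in this excerpt, so the sample string below is illustrative — chosen because it also reproduces Belady's anomaly for FIFO (more frames, more faults).

```python
def page_faults(ref, frames, policy):
    """Count page faults for 'fifo', 'lru', or 'opt' with the given frame count."""
    mem, faults = [], 0
    for i, p in enumerate(ref):
        if p in mem:
            if policy == "lru":            # on a hit, refresh recency
                mem.remove(p)
                mem.append(p)
            continue
        faults += 1
        if len(mem) < frames:              # free frame available
            mem.append(p)
            continue
        if policy in ("fifo", "lru"):
            mem.pop(0)                     # oldest arrival / least recently used
        else:                              # opt: evict page used farthest in future
            future = {q: ref.index(q, i + 1) if q in ref[i + 1:] else float("inf")
                      for q in mem}
            mem.remove(max(future, key=future.get))
        mem.append(p)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(page_faults(ref, 3, "fifo"))  # 9
print(page_faults(ref, 4, "fifo"))  # 10 — Belady's anomaly
print(page_faults(ref, 3, "lru"))   # 10
print(page_faults(ref, 3, "opt"))   # 7
```

For FIFO the list order is arrival order (hits never reorder it), while for LRU hits move the page to the back, so popping the front evicts the correct victim in both cases.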
Fragmentation occurs when memory is allocated in such a way that it is not fully utilized
due to the creation of small gaps or unused spaces.
Types of Fragmentation:
1. External Fragmentation: When free memory is scattered in small blocks across the
system and is too small to satisfy the memory requests. This can lead to inefficient
memory usage.
o Example: A system has 1000 units of free memory, but they are scattered
across various locations (e.g., 100, 200, 300 units), making it difficult to
allocate large processes.
2. Internal Fragmentation: When allocated memory may be larger than needed,
leaving unused memory within a partition.
o Example: Allocating 500 units of memory for a process that only needs 450
units results in 50 units of unused memory within the allocated partition.
Fragmentation reduces the efficiency of memory usage, and compaction (shifting memory
contents to remove gaps) can help mitigate external fragmentation.
Q-5
a) Shortest Seek Time First (SSTF)
Shortest Seek Time First (SSTF) is a disk scheduling algorithm that selects the disk I/O
request that requires the least movement of the disk arm from its current position. This
algorithm minimizes the seek time by prioritizing the request closest to the current position of
the disk head.
Working: The disk controller scans the request queue and chooses the request with
the shortest distance from the current disk head position. After servicing this request,
the process repeats, selecting the next closest request.
Advantages:
1. Reduces the average seek time.
2. More efficient than First-Come-First-Served (FCFS).
Disadvantages:
1. May lead to starvation, where some requests (especially those far from the
current position) may never be serviced if closer requests keep coming.
2. It is harder to implement in real-time systems where time constraints are
critical.
Example: If the disk head is at track 50 and the request queue contains [82, 170, 43, 140, 24,
16, 190], the disk will first move to track 43 (as it is the closest to 50), then to 24, and so on.
Linked allocation is a file allocation method where each file is stored as a linked list of
blocks scattered across the disk. Each block contains a pointer to the next block in the file.
This method is simple and efficient but requires additional space for pointers.
Working: Each file block contains a pointer to the next block in the sequence. The
last block of the file points to null, indicating the end of the file. This approach doesn't
require contiguous space for the file, so it helps in handling fragmented files.
Advantages:
1. Efficient use of space: Files can be stored in non-contiguous blocks, allowing
efficient use of fragmented disk space.
2. No external fragmentation: The file doesn't need a contiguous block of
space, so there's no risk of fragmentation in allocation.
Disadvantages:
1. Performance issues: Accessing a file requires following the chain of pointers,
which is slower than direct access methods (like contiguous allocation).
2. Overhead: Requires additional space for storing pointers in each block,
leading to increased storage overhead.
Example: If a file consists of three blocks, Block 1 stores a pointer to Block 2, and Block 2
stores a pointer to Block 3. Block 3 points to null, marking the end of the file.
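The pointer-chasing this describes can be sketched as follows. The dictionary stands in for the disk, and the block numbers 9, 16, and 25 are made up to echo the three-block example.

```python
# Toy disk: block number -> (data, next block); -1 marks the end of the file.
disk = {
    9:  ("Hel", 16),   # first block points to block 16
    16: ("lo ", 25),   # second block points to block 25
    25: ("OS!", -1),   # last block: next pointer is null (-1)
}

def read_file(start_block):
    """Follow the pointer chain from the first block, as linked allocation does."""
    data, block = [], start_block
    while block != -1:
        contents, nxt = disk[block]
        data.append(contents)
        block = nxt
    return "".join(data)

print(read_file(9))  # Hello OS!
```

Note that reading block n always requires visiting the n-1 blocks before it, which is the performance drawback listed above.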
c) Address Binding in Case of Memory Management
Address binding refers to the process of mapping logical addresses (generated by a program)
to physical addresses (actual locations in memory). This is an essential part of memory
management in an operating system, as it allows programs to access physical memory in a
consistent and efficient way.