Os Paper Solutions

The document discusses various concepts related to operating systems, including services like memory and process management, system calls, and process definitions. It explains the structure of operating systems, synchronization problems like the Dining Philosophers, deadlock recovery methods, fragmentation types, and critical section problems. Additionally, it covers process scheduling using the FCFS algorithm and page replacement schemes such as FIFO.

OS PAPER-1

Q-1

a) Two services provided by OS:

1. Memory Management: The operating system manages the computer's memory, allocating and deallocating memory spaces as needed for processes.
2. Process Management: The OS handles the creation, scheduling, and termination of processes, ensuring that resources are efficiently shared.

b) What is meant by System Call? A system call is a request made by a program to the
operating system to perform a specific service, such as file manipulation, process control, or
communication.

c) What is Process? A process is a program in execution, which includes the program counter, registers, variables, and the execution state.

d) Define a Safe State. A safe state refers to a situation in a system where all processes can
complete their execution without causing a deadlock, ensuring that resources can be allocated
without risk.

e) Define Dispatcher. The dispatcher is the module of the operating system that gives control of the CPU to the process selected by the short-term scheduler; it performs the context switch and transfers control to the chosen process.

f) What are Semaphores? Semaphores are synchronization primitives used to manage concurrent processes, preventing race conditions by controlling access to shared resources.

g) What do you mean by Rollback? Rollback is the process of reverting a system to its
previous state, typically after an error or failure, to maintain consistency or recover lost data.

h) What is meant by Address Binding? Address binding is the mapping of logical addresses (used by programs) to physical addresses in memory, either during compile time, load time, or run time.

i) List various operations on File.

 Create: To create a new file.
 Read: To retrieve data from a file.
 Write: To store data into a file.
 Delete: To remove a file from the system.
 Rename: To change the name of a file.

j) What do you mean by Deadlock? Deadlock is a situation in which two or more processes
are unable to proceed because each is waiting for the other to release resources, resulting in a
standstill.

Q-2
a) Explain Operating System Structure.

The structure of an operating system refers to the design and organization of its components,
which are responsible for managing hardware resources, executing processes, and providing
services to applications. The common types of operating system structures include:

1. Monolithic Structure:
o In this structure, the operating system is a single large program. All services
such as process management, memory management, device management, and
file system management are tightly integrated and work together in a single
address space.
o Advantage: It’s efficient because all components are interconnected.
o Disadvantage: Maintenance can be complex, and a bug in one component can
affect the whole system.
2. Layered Structure:
o The operating system is divided into layers, each of which provides services to
the layer above it. The lowest layer is responsible for interacting with the
hardware, and higher layers provide more abstract services.
o Advantage: Easier to maintain and debug, as changes in one layer don’t
necessarily affect others.
o Disadvantage: Can be slower due to the overhead of passing data between
layers.
3. Microkernel Structure:
o This approach minimizes the core of the operating system, placing most of the
system services into user space, with a small kernel responsible for
communication between services.
o Advantage: More modular and flexible, easier to add or remove components.
o Disadvantage: Performance overhead due to frequent communication between
user space and the kernel.
4. Client-Server Structure:
o In this model, the operating system is structured around clients (user
applications) and servers (OS services). Clients request services from the
servers, which manage resources.
o Advantage: Scalable and easy to maintain.
o Disadvantage: Potential network latency and overhead due to communication
between clients and servers.

b) Explain ‘Dining Philosopher’ Synchronization Problem.

The Dining Philosophers problem is a classical synchronization problem that illustrates the
challenges of allocating resources among concurrent processes. It involves a scenario with
five philosophers seated around a circular table. Each philosopher thinks and occasionally
needs to eat. To eat, a philosopher requires two utensils (forks), one on each side. The issue is
to prevent deadlock (where no philosopher can eat) and ensure that no philosopher starves (is
left indefinitely without eating).
The problem can be formalized as follows:

1. Shared Resources: Five forks, one between each pair of philosophers.
2. Condition: Each philosopher must pick up two forks to eat.
3. Challenges:
o Deadlock: All philosophers may pick up one fork and wait indefinitely for the
other, causing a deadlock.
o Starvation: Some philosophers may never get both forks, leading to
starvation.

Solution: The problem is typically solved by introducing synchronization mechanisms such as:

 Locks/Mutexes: Ensuring that only one philosopher can pick up a fork at a time.
 Semaphore or Monitor: Managing the resources in such a way that deadlock and
starvation are avoided.
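A minimal sketch of one such solution (the names and round count below are our own, not from the paper): each fork is modelled as a mutex, and deadlock is avoided by resource ordering, i.e., every philosopher always picks up the lower-numbered fork first.

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one mutex per fork
meals = [0] * N

def philosopher(i, rounds=10):
    # Resource ordering: always acquire the lower-numbered fork first,
    # so a circular wait (the deadlock condition) can never form.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1                  # "eating" with both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher completes all 10 meals; no deadlock
```

Because all philosophers acquire forks in one fixed global order, the circular-wait condition for deadlock cannot arise, and in this bounded run every philosopher eats.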

c) Explain different methods for recovery from a deadlock.

Deadlock recovery involves various strategies to resolve or prevent deadlocks once they
occur. The main approaches include:

1. Process Termination:
o Abort All Processes: The simplest method is to terminate all processes
involved in the deadlock. This method, although effective, can be expensive
and result in a loss of progress.
o Abort Processes One by One: In this case, the system identifies and
terminates the processes one by one until the deadlock is broken. The process
to terminate could be chosen based on priority, age, or other factors.
2. Resource Preemption:
o Preempt Resources from Processes: Resources held by one or more
processes involved in the deadlock can be forcibly preempted and reassigned
to other processes. The preempted processes are then rolled back to a safe
state.
o Rollback and Restart: A process can be rolled back to a previous safe state
(using checkpoints) if it holds resources that would resolve the deadlock if
released.
3. Manual Intervention:
o User Intervention: In some systems, the deadlock situation can be detected
and resolved manually by a system administrator who can intervene to release
resources or terminate processes.

Note: Each method has its own trade-offs in terms of system efficiency, time, and impact on
users.
d) What is Fragmentation? Explain types of fragmentation in detail.

Fragmentation refers to the inefficient use of memory or disk space, leading to wasted areas
that cannot be utilized effectively. There are two types of fragmentation:

1. Internal Fragmentation:
o This occurs when the memory allocated to a process is larger than what is
actually needed. As a result, the extra space within the allocated block remains
unused.
o Example: If a process requires 40KB of memory, but the system allocates a
50KB block, the remaining 10KB is wasted inside the block.
o Cause: Fixed-size memory allocation, where the allocated space is not fully
utilized.
2. External Fragmentation:
o This happens when free memory is scattered across the system in small
chunks, making it difficult to allocate large contiguous blocks of memory to
processes. Even though total free memory might be enough, there is not
enough contiguous space for new processes.
o Example: A system with 1MB free in total, but fragmented into 300KB, 300KB, and 400KB blocks. While there is enough total free memory, a 500KB process cannot be allocated because no single block is large enough.
o Cause: Variable-sized memory allocation or process termination that leaves
gaps in memory.

Solutions:

 Compaction: Rearranging the memory contents to create large contiguous blocks of free space.
 Paging: Dividing memory into fixed-sized pages and mapping them to non-contiguous physical memory blocks.

e) List and explain system calls related to Process and Job Control.

System calls related to process and job control allow the operating system to manage
processes, their execution, and their life cycle. The key system calls include:

1. fork():
o Creates a new process by duplicating the calling process. The new process
(child) gets a copy of the parent's address space.
o Usage: Initiating the creation of a new process.
2. exec():
o Replaces the current process image with a new one. After a fork(), the child
can execute a different program using exec().
o Usage: Executing a new program within the current process.
3. wait():
o Makes the parent process wait until one of its child processes finishes
execution. It returns the exit status of the child process.
o Usage: Synchronizing the parent with child processes.
4. exit():
o Terminates the calling process, releasing all resources it held, and passes an
exit status back to the operating system.
o Usage: When a process finishes its execution or is explicitly terminated.
5. kill():
o Sends a signal to a process, which can be used to terminate the process or
trigger some other behavior depending on the signal sent.
o Usage: Terminating or controlling the behavior of a process.
6. getpid():
o Returns the Process ID (PID) of the calling process.
o Usage: Used for process management, especially for system monitoring and
communication.
7. getppid():
o Returns the Parent Process ID (PPID), i.e., the PID of the parent process.
o Usage: To track the parent-child relationship between processes.

These system calls provide the essential mechanisms to manage processes during their
lifecycle, from creation and execution to termination.
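A short POSIX-only sketch of fork(), exit(), and wait() as exposed through Python's os module (our illustration, not part of the original answer; the exit status 7 is an arbitrary choice):

```python
import os

pid = os.fork()                            # duplicate the calling process
if pid == 0:
    # Child branch: getpid() here differs from the parent's PID,
    # and getppid() returns the parent's PID.
    os._exit(7)                            # exit() with status 7
else:
    # Parent branch: wait() blocks until the child terminates.
    child, status = os.waitpid(pid, 0)
    code = os.waitstatus_to_exitcode(status)   # decode status (Python 3.9+)
    print(child == pid, code)              # prints: True 7
```

The child replaces nothing here; in a real shell-style workflow the child would call one of the exec() family right after fork() to run a different program.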

Q-3

a) State and Explain the Critical Section Problem.

The Critical Section Problem is a fundamental issue in concurrent programming. It involves multiple processes accessing shared resources, where at least one process must modify the resource. The problem arises when two or more processes simultaneously attempt to access the same shared resource, which could result in incorrect or unpredictable outcomes.

Definition: A critical section is a segment of code where shared resources are accessed. The
Critical Section Problem involves ensuring that no two processes are in their critical
sections simultaneously, preventing race conditions.

Solution Requirements: To solve the Critical Section Problem, the following three
conditions must be satisfied:

1. Mutual Exclusion: If one process is in its critical section, no other process can be in its critical section.
2. Progress: If no process is in its critical section, the next process that needs access
should be able to enter its critical section without unnecessary delay.
3. Bounded Waiting: A process must not have to wait indefinitely to enter its critical
section; there must be a bound on the number of times other processes can enter their
critical sections before the waiting process can enter.

Various synchronization mechanisms, such as mutexes, semaphores, and monitors, are used
to prevent race conditions and handle the Critical Section Problem.
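Mutual exclusion can be illustrated with a short sketch (the names are ours): four threads increment a shared counter, and a mutex around the increment, which is the critical section, guarantees no updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:        # entry section: acquire the mutex
            counter += 1  # critical section: shared resource is modified
        # exit section: the mutex is released when the "with" block ends

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # always 40000: mutual exclusion prevents lost updates
```

Without the lock, the read-modify-write of `counter += 1` could interleave between threads and some increments would be lost, which is exactly the race condition the critical-section requirements rule out.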
b) Explain Different Methods for Recovery from Deadlock.

Deadlock recovery involves strategies to resolve deadlocks once they have occurred in the
system. These methods include:

1. Process Termination:
o Abort All Processes: Terminating all the processes involved in the deadlock
is the simplest method but can result in significant loss of progress.
o Abort One Process at a Time: In this approach, one process at a time is
terminated. The system identifies which process to terminate based on factors
such as priority or resource consumption. The terminated process is rolled
back to a safe state.
2. Resource Preemption:
o Preempt Resources from Processes: Resources held by deadlocked
processes can be preempted (forcibly taken away) and reassigned to other
processes. This might cause some processes to roll back to a previous safe
state. The system can then retry the execution after resource allocation
changes.
o Rollback: In this case, the affected processes are rolled back to a previous
state, where they had not yet encountered the deadlock. This process is often
used in systems with checkpoints.
3. Manual Intervention:
o User Intervention: In some systems, deadlocks can be detected and resolved
manually by an administrator. This involves analyzing the system, terminating
or adjusting processes, and reallocating resources.

Each of these recovery methods comes with its own trade-offs. Process termination can cause
a loss of progress, while resource preemption can lead to system inefficiency and additional
overhead due to process rollbacks.

c) State and Explain the Critical Section Problem.

This is a repeat of question a. Please refer to the answer for question a above for an
explanation of the Critical Section Problem.

d) Calculate Average Turnaround Time and Average Waiting Time for All Set of
Processes Using FCFS Algorithm.

The First-Come, First-Served (FCFS) scheduling algorithm executes processes in the order
they arrive. Here’s how we can calculate the Average Turnaround Time and Average
Waiting Time for the processes:
Given Data:

Process   Burst Time (BT)   Arrival Time (AT)
P1        5                 1
P2        6                 0
P3        2                 2
P4        4                 0

FCFS services processes in order of arrival. P2 and P4 both arrive at time 0 (the tie is broken in favour of the lower process number), P1 arrives at time 1, and P3 arrives at time 2, so the execution order is P2 → P4 → P1 → P3.

Step 1: Calculate Completion Time (CT)

The Completion Time (CT) is the time at which a process finishes execution.

Process   Arrival Time (AT)   Burst Time (BT)   Completion Time (CT)
P2        0                   6                 0 + 6 = 6
P4        0                   4                 6 + 4 = 10
P1        1                   5                 10 + 5 = 15
P3        2                   2                 15 + 2 = 17

Step 2: Calculate Turnaround Time (TAT)

The Turnaround Time (TAT) is the difference between the Completion Time (CT) and Arrival Time (AT): TAT = CT - AT.

Process   Completion Time (CT)   Arrival Time (AT)   Turnaround Time (TAT)
P1        15                     1                   15 - 1 = 14
P2        6                      0                   6 - 0 = 6
P3        17                     2                   17 - 2 = 15
P4        10                     0                   10 - 0 = 10

Step 3: Calculate Waiting Time (WT)

The Waiting Time (WT) is the difference between Turnaround Time (TAT) and Burst Time (BT): WT = TAT - BT.

Process   Turnaround Time (TAT)   Burst Time (BT)   Waiting Time (WT)
P1        14                      5                 14 - 5 = 9
P2        6                       6                 6 - 6 = 0
P3        15                      2                 15 - 2 = 13
P4        10                      4                 10 - 4 = 6

Step 4: Calculate Average Turnaround Time and Average Waiting Time

Average TAT = (14 + 6 + 15 + 10) / 4 = 45 / 4 = 11.25

Average WT = (9 + 0 + 13 + 6) / 4 = 28 / 4 = 7

So, the Average Turnaround Time is 11.25 and the Average Waiting Time is 7.
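The calculation can be cross-checked with a small simulation (the helper below is our own sketch) that services processes strictly in arrival order, as FCFS requires:

```python
def fcfs(procs):
    """procs: list of (name, burst, arrival). Returns name -> (CT, TAT, WT)."""
    order = sorted(procs, key=lambda p: (p[2], p[0]))  # arrival time, then name
    time, out = 0, {}
    for name, bt, at in order:
        time = max(time, at) + bt                      # idle until arrival if needed
        out[name] = (time, time - at, time - at - bt)  # CT, TAT, WT
    return out

procs = [("P1", 5, 1), ("P2", 6, 0), ("P3", 2, 2), ("P4", 4, 0)]
res = fcfs(procs)
avg_tat = sum(v[1] for v in res.values()) / len(res)
avg_wt = sum(v[2] for v in res.values()) / len(res)
print(res)
print(avg_tat, avg_wt)  # average TAT 11.25, average WT 7.0
```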

e) Consider the Following Page Reference String:

4, 6, 7, 8, 4, 6, 9, 6, 7, 8, 4, 6, 7, 9.

The Number of Frames is 3. Show Page Trace and Calculate Page Faults for the Following
Page Replacement Schemes:

i) FIFO (First-In-First-Out)

Page Trace for FIFO:

 Initially, no pages are loaded.
 First page (4) is loaded → [4]
 Second page (6) is loaded → [4, 6]
 Third page (7) is loaded → [4, 6, 7]
 Page 8 replaces page 4 → [6, 7, 8]
 Page 4 replaces page 6 → [7, 8, 4]
 Page 6 replaces page 7 → [8, 4, 6]
 Page 9 replaces page 8 → [4, 6, 9]
 Page 6 already in memory → no page fault → [4, 6, 9]
 Page 7 replaces page 4 → [6, 9, 7]
 Page 8 replaces page 6 → [9, 7, 8]
 Page 4 replaces page 9 → [7, 8, 4]
 Page 6 replaces page 7 → [8, 4, 6]
 Page 7 replaces page 8 → [4, 6, 7]
 Page 9 replaces page 4 → [6, 7, 9]

Page Faults (FIFO): 13 page faults (every reference except the second reference to page 6 is a fault).

ii) LRU (Least Recently Used)

Page Trace for LRU:

 Initially, no pages are loaded.
 First page (4) is loaded → [4]
 Second page (6) is loaded → [4, 6]
 Third page (7) is loaded → [4, 6, 7]
 Page 8 replaces page 4 → [6, 7, 8]
 Page 4 replaces page 6 → [7, 8, 4]
 Page 6 replaces page 7 → [8, 4, 6]
 Page 9 replaces page 8 → [4, 6, 9]
 Page 6 already in memory → no page fault → [4, 6, 9]
 Page 7 replaces page 4 (least recently used) → [6, 9, 7]
 Page 8 replaces page 9 (least recently used, since 6 was just referenced) → [6, 7, 8]
 Page 4 replaces page 6 → [7, 8, 4]
 Page 6 replaces page 7 → [8, 4, 6]
 Page 7 replaces page 8 → [4, 6, 7]
 Page 9 replaces page 4 → [6, 7, 9]

Page Faults (LRU): 13 page faults.

Summary:

 FIFO: 13 page faults
 LRU: 13 page faults

For this particular reference string, FIFO and LRU make slightly different replacement choices (at the second reference to page 8) but produce the same number of page faults.
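Both traces can be verified mechanically with small simulators (the function names below are our own):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())  # evict the oldest-loaded page
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0            # insertion order = recency of use
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                # hit: mark most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)       # evict the least recently used
            mem[p] = True
    return faults

refs = [4, 6, 7, 8, 4, 6, 9, 6, 7, 8, 4, 6, 7, 9]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 13 13
```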

Q-4

a) What is meant by Shortest Seek Time First (SSTF)? Explain in Detail.

The Shortest Seek Time First (SSTF) is a disk scheduling algorithm that selects the disk I/O
request that is closest to the current position of the disk head. This algorithm reduces the seek
time by minimizing the distance the head needs to move to satisfy the next request. The basic
idea is to choose the request that has the shortest seek time from the current head position.
Working:

1. The disk head starts at a given position.
2. The scheduler identifies the request that is closest to the current head position in terms of track distance.
3. It then services that request, and the head moves to the position of that request.
4. After servicing, the process repeats, with the head selecting the next request that is closest to its current position.

Example:

If the current head position is at track 25 and the request queue contains tracks 68, 172, 4, 178, 130, 40, 118, and 136, the algorithm first services track 40, since |40 - 25| = 15 is the shortest seek, and then continues in the same manner from the new head position.

Advantages:

 Efficient: It minimizes the movement of the disk arm by servicing the closest request
first.
 Reduced Seek Time: In comparison to algorithms like FCFS, SSTF can significantly
reduce the average seek time.

Disadvantages:

 Starvation: Requests far from the current head position may not be serviced for a
long time if there are continuous requests closer to the head, leading to starvation for
some requests.
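The example above can be traced with a short sketch (the helper below is our own, not a standard API): repeatedly service the pending request nearest to the current head position.

```python
def sstf(head, requests):
    """Return the service order and total head movement under SSTF."""
    pending, order, moved = list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))  # shortest seek first
        moved += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, moved

order, moved = sstf(25, [68, 172, 4, 178, 130, 40, 118, 136])
print(order, moved)  # starts with 40; track 4 is serviced last (starvation risk)
```

Note how the far-away request at track 4 is postponed to the very end, which is exactly the starvation behaviour described above.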

b) Define the terms:

i) Logical Address: A logical address (also called a virtual address) is the address generated
by the CPU during a program's execution. These addresses are part of the virtual memory
system and are not tied to physical memory locations. The operating system maps logical
addresses to physical addresses via a process known as address translation. The logical
address is used by a process to access memory, and the system's memory manager translates
this into a physical address in the actual memory hardware.

ii) Physical Address: A physical address refers to an actual location in the physical memory
(RAM). It is the address that corresponds to a specific cell in physical memory. When the
CPU issues a logical address, the memory management unit (MMU) converts it into the
corresponding physical address. The physical address is the address the hardware uses to
access memory.

c) Explain Resource Allocation Graph in Detail.


The Resource Allocation Graph (RAG) is a directed graph used to represent the allocation
of resources to processes and the requests for resources in an operating system. It is used to
detect deadlock situations in resource management.

Components of the Graph:

1. Processes: Represented as nodes (P1, P2, P3, etc.).
2. Resources: Represented as nodes (R1, R2, R3, etc.), where each resource type may have multiple instances (e.g., R1 may have three instances, R2 may have two).
3. Edges: There are two types of edges in the graph:
o Request edge (P → R): Represents a process's request for a resource.
o Assignment edge (R → P): Represents a resource that is currently allocated to
a process.

How It Works:

1. If a process requests a resource, a request edge is drawn from the process node to the
resource node.
2. If the resource is allocated to the process, an assignment edge is drawn from the
resource node to the process node.
3. The graph is used to detect deadlocks by identifying cycles. If a cycle exists in the
graph, it indicates a potential deadlock situation where processes are waiting for
resources that are held by other processes in the cycle.

Example:

Consider two processes (P1, P2) and two single-instance resource types (R1, R2). Suppose R1 is allocated to P1, R2 is allocated to P2, P1 requests R2, and P2 requests R1. The graph would look like:

 R1 → P1 (allocation)
 P1 → R2 (request)
 R2 → P2 (allocation)
 P2 → R1 (request)

This forms the cycle P1 → R2 → P2 → R1 → P1, so the two processes are deadlocked.

The RAG helps in detecting deadlocks by identifying cycles in the graph.
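Cycle detection on a RAG can be sketched with a depth-first search (the adjacency-list encoding below is our own choice for illustration):

```python
def has_cycle(graph):
    """graph: node -> list of successor nodes. True if any cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color[m] == GRAY:                 # back edge: cycle found
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# R1 held by P1, R2 held by P2, P1 requests R2, P2 requests R1: a cycle.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))  # True -> potential deadlock

# Drop P2's request for R1 and the cycle (and the deadlock) disappears.
print(has_cycle({"P1": ["R2"], "R2": ["P2"], "P2": [], "R1": ["P1"]}))  # False
```

For single-instance resources a cycle is both necessary and sufficient for deadlock; with multiple instances per resource a cycle only indicates a possible deadlock.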

d) What Are the Differences Between Preemptive and Non-Preemptive Scheduling?

Preemptive Scheduling and Non-Preemptive Scheduling are two types of CPU scheduling
algorithms that differ in how they handle process execution and context switching.

Preemptive Scheduling:

 Definition: In preemptive scheduling, a process can be interrupted and moved to the ready queue by the operating system, even if it has not completed its execution. The OS may choose another process for execution based on priority or time-sharing.
 Example: Round Robin (RR) and Shortest Remaining Time First (SRTF) are
preemptive scheduling algorithms.
 Advantages:
o Fairness: Ensures that processes receive a fair share of CPU time.
o Quick response: Helps in time-sharing environments and ensures quick
response times for interactive systems.
 Disadvantages:
o Context Switching Overhead: Frequent context switching may lead to
overhead, reducing system performance.
o Starvation: Low-priority processes might suffer starvation if high-priority
processes keep preempting them.

Non-Preemptive Scheduling:

 Definition: In non-preemptive scheduling, once a process starts executing, it runs to completion (or until it voluntarily gives up the CPU, as in the case of I/O operations). The CPU is not taken away from a process unless the process finishes its execution or voluntarily yields control.
 Example: First Come First Serve (FCFS) and Shortest Job First (SJF) are non-
preemptive scheduling algorithms.
 Advantages:
o Lower Overhead: No context switching during process execution, making it
less expensive in terms of system resources.
o Predictability: It’s easier to predict the completion time of processes.
 Disadvantages:
o Poor Responsiveness: High-priority tasks may have to wait for long periods,
reducing responsiveness in interactive systems.
o Potential for Starvation: Long processes could prevent short processes from
getting CPU time.

e) FCFS Disk Scheduling Algorithm:

Given:

 Track Range: 0-199 tracks.
 Request Queue: 68, 172, 4, 178, 130, 40, 118, 136.
 Initial Head Position: 25.

Steps:

1. FCFS services the requests in the order they arrive.
2. Initial position: Head starts at track 25.

Track Movement:

 Move from 25 to 68 (abs distance = |68 - 25| = 43).
 Move from 68 to 172 (abs distance = |172 - 68| = 104).
 Move from 172 to 4 (abs distance = |4 - 172| = 168).
 Move from 4 to 178 (abs distance = |178 - 4| = 174).
 Move from 178 to 130 (abs distance = |130 - 178| = 48).
 Move from 130 to 40 (abs distance = |40 - 130| = 90).
 Move from 40 to 118 (abs distance = |118 - 40| = 78).
 Move from 118 to 136 (abs distance = |136 - 118| = 18).

Total Head Movement Calculation:

Total Head Movement = 43 + 104 + 168 + 174 + 48 + 90 + 78 + 18 = 723

So, the total head movement for FCFS disk scheduling is 723 tracks.
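The total above can be checked with a few lines of arithmetic:

```python
queue = [68, 172, 4, 178, 130, 40, 118, 136]
head = 25
total = 0
for track in queue:              # FCFS: service strictly in arrival order
    total += abs(track - head)   # seek distance for this request
    head = track
print(total)  # 723
```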

Q-5

a) Write a Note on Interrupts.

An interrupt is a mechanism used by the operating system and hardware to handle asynchronous events. It temporarily halts the execution of the current process, saving its state, and transfers control to a special function called the interrupt handler or interrupt service routine (ISR). Once the interrupt is serviced, the system returns to the previously executing process.

Types of Interrupts:

1. Hardware Interrupts: Generated by hardware devices such as keyboards, mice, or timers. For example, pressing a key on the keyboard triggers a hardware interrupt.
2. Software Interrupts: Triggered by software, often used for system calls. For
instance, when a program requests a service from the operating system (e.g., file
operations), it causes a software interrupt.
3. External Interrupts: Generated by external hardware, such as peripheral devices.
4. Internal Interrupts: Generated by the CPU when errors like division by zero or
memory access violations occur.

Importance:

 Efficiency: Interrupts allow the CPU to respond promptly to external events, ensuring
that the system is responsive to various hardware devices or system events.
 Multitasking: Interrupts enable multitasking by allowing the system to shift attention
from one process to another based on the priority of events.

b) Explain Semaphores and Its Types in Detail.


A semaphore is a synchronization primitive used to control access to a shared resource by
multiple processes in a concurrent system. Semaphores are widely used in operating systems
for process synchronization and mutual exclusion to avoid race conditions.

Types of Semaphores:

1. Binary Semaphore (Mutex):
o A binary semaphore can only have two values: 0 or 1.
o It is used to implement mutual exclusion, where only one process can access a resource at a time.
o Operations:
 P (Proberen or wait): Decreases the semaphore value. If it is 0, the
process is blocked until the semaphore becomes 1.
 V (Verhogen or signal): Increases the semaphore value. If any process
is blocked, it is unblocked.
2. Counting Semaphore:
o A counting semaphore can have a wider range of values and is used to control
access to a resource pool with multiple instances.
o It is typically used to manage the availability of resources like buffers,
printers, etc.
o Operations:
 P (wait): Decreases the value of the semaphore. If the value is greater
than 0, the process continues; if it is 0, the process waits.
 V (signal): Increases the value of the semaphore, which may unblock
waiting processes.

Example:

In a producer-consumer scenario, a counting semaphore can be used to track the number of empty slots in a buffer and another to track the number of full slots.

Importance:

 Semaphores are crucial for ensuring that shared resources are used in a safe,
controlled manner, avoiding conflicts and ensuring synchronization between
processes.
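The producer-consumer example above can be sketched with Python's counting semaphores (the buffer size and item count are our own choices for illustration):

```python
import threading
from collections import deque

BUF = 5
buffer = deque()
empty = threading.Semaphore(BUF)   # counts empty slots in the buffer
full = threading.Semaphore(0)      # counts filled slots
mutex = threading.Lock()           # protects the buffer structure itself
consumed = []

def producer(n):
    for i in range(n):
        empty.acquire()            # P(empty): wait for a free slot
        with mutex:
            buffer.append(i)
        full.release()             # V(full): signal one filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()             # P(full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()            # V(empty): free the slot

t1 = threading.Thread(target=producer, args=(20,))
t2 = threading.Thread(target=consumer, args=(20,))
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed == list(range(20)))  # True: no item lost, order preserved
```

The counting semaphores block the producer when the buffer is full and the consumer when it is empty, while the mutex keeps the buffer operations themselves mutually exclusive.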

c) Write a Short Note on Fragmentation.

Fragmentation refers to the condition where the storage space is inefficiently utilized,
resulting in small unused gaps scattered throughout the storage. This happens in both memory
and disk storage systems, making it difficult for large blocks of data to be allocated.

Types of Fragmentation:

1. External Fragmentation:
o Occurs when free memory or storage is broken into small, scattered pieces.
These gaps of free space are too small to accommodate large blocks of data,
even though the total free space might be sufficient.
o Common in systems using dynamic memory allocation or disk space
allocation where fixed-sized blocks of memory or data are continuously
allocated and deallocated.
o Solution: Compaction or garbage collection is typically used to rearrange
memory blocks, consolidating free space into larger contiguous areas.
2. Internal Fragmentation:
o Occurs when allocated memory or disk blocks are larger than the data they
store, leaving unused space within each block.
o Common in systems where memory is allocated in fixed-sized blocks (e.g., in
paging systems), resulting in unused portions within each allocated page or
block.
o Solution: Better memory allocation schemes like paging with dynamic
block sizes or using more flexible memory allocation strategies.

Example:

In memory management, if a program requests 80 bytes of memory and the system allocates
100 bytes (the next available block), the unused 20 bytes are considered internal
fragmentation. However, if there are multiple scattered free spaces across memory that are
too small to fit the program’s 80 bytes, it results in external fragmentation.

Importance:

Fragmentation reduces the overall efficiency of a system by wasting space, which can lead to
slower performance due to the need for additional memory or disk space management
techniques. Efficient memory management is crucial to minimize fragmentation and enhance
system performance.

OS PAPER-2

Q-1

a) Define Process.

A process is a program in execution. It is an active entity, which includes the program code,
its current activity, and the resources allocated to it, such as memory, CPU time, and
input/output devices. A process goes through various states like new, ready, running, waiting,
and terminated during its lifecycle.
b) What is Context Switch?

A context switch is the process of saving the state of a currently running process and loading
the state of the next process to be executed. It involves saving the process's context (such as
CPU registers, program counter, etc.) and restoring the context of the new process, enabling
multitasking in an operating system.

c) What is a Page Frame?

A page frame is a fixed-size block of physical memory in which a page of virtual memory is
stored. The operating system divides physical memory into page frames of the same size, and
each page of virtual memory is mapped to a page frame in physical memory.

d) List Various Operations on Files.

The common operations performed on files include:

1. Create: Creating a new file.
2. Read: Accessing the content of a file.
3. Write: Modifying the content of a file.
4. Delete: Removing a file from the storage.
5. Append: Adding new data to the end of an existing file.

e) What is Meant by Rotational Latency in Disk Scheduling?

Rotational latency refers to the time it takes for the desired disk sector to rotate into position
under the disk's read/write head. It depends on the disk's rotation speed and is a key factor in
determining disk access time, especially in mechanical hard drives.

f) Define Critical Section.

A critical section is a part of the code or a set of instructions that accesses shared resources
(e.g., memory, files, data) and must not be executed concurrently by more than one process.
Proper synchronization is required to prevent race conditions in the critical section.

g) State Belady’s Anomaly.

Belady's Anomaly refers to the counterintuitive phenomenon where increasing the number of page frames in a system can lead to an increase in the number of page faults. It primarily occurs in certain page replacement algorithms, such as FIFO.
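The anomaly can be demonstrated on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 under FIFO (the simulator below is our own sketch): 3 frames give 9 faults, while 4 frames give 10.

```python
from collections import deque

def fifo_faults(refs, frames):
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())  # evict the oldest-loaded page
            mem.add(p)
            order.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10: more frames, more faults
```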

h) List Any 4 Characteristics of Operating System.

1. Multitasking: The ability to run multiple processes concurrently.
2. Resource Management: Managing hardware resources such as CPU, memory, and I/O devices.
3. Security: Protecting data and resources from unauthorized access.
4. User Interface: Providing an interface for user interaction, such as a command-line
or graphical interface.

i) Define Deadlock.

A deadlock is a situation in a multi-processing environment where two or more processes are


unable to proceed because each is waiting for the other to release a resource. This leads to a
state where no process can continue execution.

j) What is the Role of Operating System?

The operating system (OS) manages hardware resources, provides an interface for user
interaction, ensures security and process synchronization, and handles tasks like memory
management, file management, and process scheduling to ensure efficient and fair resource
allocation.

Q-2

a) ‘Operating system is like a manager of the computer


system’. Explain.

An Operating System (OS) functions similarly to a manager in a business organization.


Just as a manager coordinates the activities of employees, the OS coordinates all the
operations and resources of the computer system.

Roles of OS as a Manager:

1. Process Manager – Manages process creation, execution, and termination.


2. Memory Manager – Allocates and deallocates memory space as required by
processes.
3. File Manager – Handles creation, deletion, reading, and writing of files.
4. Device Manager – Controls and communicates with I/O devices.
5. Security Manager – Ensures secure access to resources.

Conclusion:
The OS acts as a bridge between user and hardware, efficiently managing tasks, resources,
and users, thus justifying its role as a manager.

b) What is scheduling? Compare short term scheduler with


medium term scheduler.

Scheduling is the process by which the operating system decides the order of execution of
processes in the CPU. It improves system performance and resource utilization.

Feature   | Short-Term Scheduler (CPU Scheduler)                | Medium-Term Scheduler
Function  | Selects a process from the ready queue to run next. | Suspends or resumes processes to control the degree of multiprogramming.
Frequency | Very frequent (milliseconds).                       | Less frequent.
Speed     | Fast; must make quick decisions.                    | Not time-critical.
Role      | Decides which process gets the CPU.                 | Swaps processes in and out of memory (swapping).

Conclusion:
The short-term scheduler handles CPU allocation, while the medium-term scheduler
manages memory and system load by controlling process suspension and resumption.

c) Draw and Explain Process Control Block (PCB).

Process Control Block (PCB) is a data structure maintained by the OS for every process. It
contains all the information about a process.

Diagram of PCB:

+-----------------------+
| Process ID (PID) |
+-----------------------+
| Process State |
+-----------------------+
| Program Counter |
+-----------------------+
| CPU Registers |
+-----------------------+
| Memory Management Info|
+-----------------------+
| Accounting Info |
+-----------------------+
| I/O Status Info |
+-----------------------+

Explanation of Fields:

1. Process ID (PID): Unique identifier for each process.


2. Process State: Ready, running, waiting, etc.
3. Program Counter: Address of the next instruction to execute.
4. CPU Registers: Stores temporary data like accumulators, index registers, etc.
5. Memory Info: Includes base and limit registers, page tables.
6. Accounting Info: Tracks CPU usage, time limits, etc.
7. I/O Info: Lists of I/O devices allocated to the process.

Conclusion:
PCB is essential for context switching and process management.

d) Compare Multiprogramming with Multiprocessing System.


Feature     | Multiprogramming                                                      | Multiprocessing
Definition  | Running multiple programs on a single CPU by managing them in memory. | Running multiple processes simultaneously using two or more CPUs.
CPU Count   | Single CPU.                                                           | Two or more CPUs.
Execution   | Only one process executes at a time; others wait.                     | True parallel execution of processes.
Complexity  | Less complex and cost-effective.                                      | More complex and expensive.
Performance | Increases CPU utilization.                                            | Increases system performance and reliability.

Conclusion:
While multiprogramming maximizes CPU usage using one CPU, multiprocessing boosts
performance using multiple CPUs.
e) Draw and Explain the Process State Diagram.

Process State Diagram:

 +-----+      +-------+      +---------+      +------------+
 | New |----->| Ready |----->| Running |----->| Terminated |
 +-----+      +-------+      +---------+      +------------+
                 ^  ^            |  |
                 |  +------------+  |   (Running -> Ready on preemption)
                 |                  v   (Running -> Waiting on I/O wait)
                 |            +---------+
                 +------------| Waiting |
                              +---------+

Explanation of States:

1. New: Process is being created.


2. Ready: Process is ready to run and waiting for CPU allocation.
3. Running: Process is currently being executed.
4. Waiting (Blocked): Process is waiting for I/O or other events.
5. Terminated: Process has finished execution.

State Transitions:

 New → Ready: Process admitted by OS.


 Ready → Running: Short-term scheduler allocates CPU.
 Running → Waiting: I/O or event wait.
 Running → Ready: Time slice over (preemption).
 Waiting → Ready: I/O or event completed.
 Running → Terminate: Execution completed.

Conclusion:
This model helps in managing and tracking the lifecycle of a process efficiently.

Q-3
a) Compare Internal and External Fragmentation.
Aspect     | Internal Fragmentation                                                    | External Fragmentation
Definition | Wasted memory within an allocated block.                                  | Wasted memory outside allocated blocks.
Cause      | Fixed-sized memory allocation.                                            | Variable-sized memory allocation and deallocation.
Example    | Allocating a 10 KB block for a 7 KB process wastes 3 KB inside the block. | Free memory scattered in small chunks; a large process can't fit.
Solution   | Use paging or smaller allocation units.                                   | Use compaction or paging/segmentation.

Conclusion:
Both types reduce memory efficiency, but occur due to different allocation strategies.

b) Consider the following set of processes with the length of the


CPU burst time in ms.
Process | Burst Time
P1      | 10
P2      | 1
P3      | 2
P4      | 1
P5      | 5

All processes arrive at time 0.

i) Gantt Chart using SJF (Shortest Job First):

Order of Execution (based on burst time):


P2 → P4 → P3 → P5 → P1

Gantt Chart:

| P2 | P4 | P3 |    P5    |        P1        |
0    1    2    4          9                 19
ii) Calculate Turnaround Time (TAT) and Waiting Time (WT)

Turnaround Time = Completion Time - Arrival Time


Waiting Time = Turnaround Time - Burst Time

Process | Burst Time | Completion Time | TAT | WT
P2      | 1          | 1               | 1   | 0
P4      | 1          | 2               | 2   | 1
P3      | 2          | 4               | 4   | 2
P5      | 5          | 9               | 9   | 4
P1      | 10         | 19              | 19  | 9

Average Turnaround Time = (1 + 2 + 4 + 9 + 19) / 5 = 7.0 ms


Average Waiting Time = (0 + 1 + 2 + 4 + 9) / 5 = 3.2 ms
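The hand calculation above can be checked with a short simulation. This sketch assumes, as stated, that all processes arrive at t = 0:

```python
def sjf(bursts):
    """Non-preemptive SJF; bursts maps name -> burst time.
    Returns (average turnaround time, average waiting time)."""
    order = sorted(bursts, key=lambda p: bursts[p])  # shortest job first;
    # the stable sort keeps P2 ahead of P4 on their burst-time tie
    time, tat, wt = 0, {}, {}
    for p in order:
        wt[p] = time          # time spent waiting before getting the CPU
        time += bursts[p]
        tat[p] = time         # completion == turnaround since arrival = 0
    n = len(bursts)
    return sum(tat.values()) / n, sum(wt.values()) / n

avg_tat, avg_wt = sjf({"P1": 10, "P2": 1, "P3": 2, "P4": 1, "P5": 5})
print(avg_tat, avg_wt)  # 7.0 3.2
```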

c) Explain Semaphores and Its Types.

A Semaphore is a synchronization tool used to manage access to shared resources in


concurrent programming, preventing race conditions.

Types of Semaphores:

1. Binary Semaphore (Mutex):


o Values: 0 or 1.
o Used for mutual exclusion.
o Only one process can enter the critical section.
2. Counting Semaphore:
o Values: Any integer.
o Used when multiple instances of a resource are available.
o Keeps track of available units of a resource.

Operations:

 Wait (P): Decrements the semaphore value. If it's negative, the process is blocked.
 Signal (V): Increments the semaphore value. If processes are blocked, one is
unblocked.

Use Case Example: Producer-Consumer problem, Reader-Writer problem.
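A counting semaphore in practice can be sketched with Python's threading module. The worker code here is hypothetical, not tied to a specific problem above; the semaphore caps concurrent access to a resource pool at 3 units:

```python
import threading, time

pool = threading.Semaphore(3)   # counting semaphore: 3 resource units
in_use, peak = 0, 0
lock = threading.Lock()

def worker():
    global in_use, peak
    with pool:                  # wait (P) on entry, signal (V) on exit
        with lock:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.05)        # simulate holding the resource
        with lock:
            in_use -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # peak concurrent holders; never exceeds 3
```

Ten threads contend for the pool, but the semaphore guarantees at most three are inside the guarded region at any moment.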


d) What is Deadlock? Explain Various Deadlock Handling
Techniques.

A deadlock is a condition where a group of processes are blocked, each waiting for a
resource held by the others, creating a circular wait situation.

Deadlock Handling Techniques:

1. Deadlock Prevention:
o Design the system in a way that one of the necessary conditions (mutual
exclusion, hold & wait, no preemption, circular wait) never holds.
o Example: Use resource hierarchy or preempt resources.
2. Deadlock Avoidance:
o Requires information about future requests.
o Banker's Algorithm is commonly used.
o System only grants resources if it leads to a safe state.
3. Deadlock Detection and Recovery:
o Allow deadlock to occur but detect it using a resource allocation graph or
wait-for graph.
o Recover by killing processes or preempting resources.
4. Ignore the Problem (Ostrich Algorithm):
o Used in systems where deadlocks are rare and cost of prevention is high.
o Example: Most desktop operating systems.

e) What Are the Different Types of Directory Structure? Explain.

The directory structure organizes files in a hierarchical or linear fashion to manage data
efficiently.

Types of Directory Structures:

1. Single-Level Directory:
o All files in the same directory.
o Simple but causes name conflicts and is hard to manage.
2. Two-Level Directory:
o Each user has a separate directory.
o Solves name conflicts but no grouping of files within user directories.
3. Tree-Structured Directory:
o Hierarchical directory structure.
o Allows subdirectories and better organization.
4. Acyclic Graph Directory:
o Allows sharing of files using links.
o Avoids cycles.
5. General Graph Directory:
o Allows full flexibility of file sharing with cycles.
o Requires garbage collection to handle cycles and dangling pointers.
Conclusion:
Directory structures enhance file management by providing logical organization, security,
and access control.

Q-4

a) Explain Linked Allocation in Files.

Linked Allocation is a method of file storage allocation where each file is a linked list of
disk blocks. The directory stores the pointer to the first block of the file. Each block
contains a pointer to the next block and the file data.

Advantages:

 No external fragmentation.
 Easy to grow files dynamically.

Disadvantages:

 Sequential access only, random access is inefficient.


 Extra space required for pointers.
 A single corrupted pointer can lose the remaining file.

Diagram:
Directory → [Block 1] → [Block 4] → [Block 7] → [Block 10] → NULL
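The chain above can be modeled with a toy in-memory "disk"; the block numbers and contents here are hypothetical, chosen only to mirror the diagram:

```python
# Each disk block stores (data, pointer-to-next); the directory keeps
# only the first block number of each file.
disk = {1: ("abc", 4), 4: ("def", 7), 7: ("ghi", 10), 10: ("jkl", None)}
directory = {"file.txt": 1}     # file name -> first block number

def read_file(name):
    """Follow the chain of pointers; note access is sequential only."""
    block, data = directory[name], []
    while block is not None:
        chunk, block = disk[block]
        data.append(chunk)
    return "".join(data)

print(read_file("file.txt"))  # abcdefghijkl
```

The walk also makes the main drawback visible: to reach block 10, every earlier block must be read first, so random access is inefficient.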

b) Compare Paging and Segmentation.


Feature         | Paging                                | Segmentation
Memory Division | Divides memory into fixed-size pages. | Divides memory into variable-sized segments.
Logical Address | Page number + Offset                  | Segment number + Offset
Fragmentation   | Can cause internal fragmentation.     | Can cause external fragmentation.
Access Type     | Mainly supports uniform access.       | Supports logical data separation.
Use Case        | Used in most OSes for simplicity.     | Useful in programming with modules/functions.
c) Apply FCFS Disk Scheduling and Calculate Total Head
Movement.

Given:

 Request queue: 84, 145, 89, 168, 93, 128, 100, 68


 Initial head position: 125

FCFS Order:

125 → 84 → 145 → 89 → 168 → 93 → 128 → 100 → 68

Head Movements:

 |125 - 84| = 41
 |84 - 145| = 61
 |145 - 89| = 56
 |89 - 168| = 79
 |168 - 93| = 75
 |93 - 128| = 35
 |128 - 100| = 28
 |100 - 68| = 32

Total Head Movement =

41 + 61 + 56 + 79 + 75 + 35 + 28 + 32 = 407 tracks
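The arithmetic above can be reproduced with a few lines of Python:

```python
def fcfs_head_movement(start, requests):
    """Total head movement when requests are serviced in arrival order."""
    total, head = 0, start
    for track in requests:
        total += abs(head - track)  # FCFS: no reordering of the queue
        head = track
    return total

print(fcfs_head_movement(125, [84, 145, 89, 168, 93, 128, 100, 68]))  # 407
```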

d) Explain File Structure with the Help of a Diagram.

File structures define how data is organized in a file. The common file structures are:

1. Byte Sequence:
o A stream of bytes with no structure.
o Used in UNIX and Linux systems.
o Application interprets the data.
2. Record Sequence:
o File is a sequence of fixed- or variable-size records.
o Useful for databases and tables.
3. Tree Structure:
o Records are organized in a tree or hierarchical format.
o Supports fast search and categorization.

Diagram Example:
+--------------+ +------------+ +-------------+
| File Header | --> | Record 1 | --> | Record 2 | --> ...
+--------------+ +------------+ +-------------+
Each structure is suited to different applications like text files, binary files, and databases.

e) Calculate Page Faults using FIFO – Page Frames = 4

Reference String: 9, 2, 3, 4, 2, 5, 2, 6, 4, 5, 2, 5, 4, 3, 4, 2, 3, 9, 2, 3

FIFO (First-In-First-Out):

We will simulate the FIFO queue and count page faults.

Step-by-step Simulation:

1. 9 → Page Fault
2. 2 → Page Fault
3. 3 → Page Fault
4. 4 → Page Fault
5. 2 → Hit
6. 5 → Page Fault (Evict 9)
7. 2 → Hit
8. 6 → Page Fault (Evict 2)
9. 4 → Hit
10. 5 → Hit
11. 2 → Page Fault (Evict 3)
12. 5 → Hit
13. 4 → Hit
14. 3 → Page Fault (Evict 4)
15. 4 → Page Fault (Evict 5)
16. 2 → Hit
17. 3 → Hit
18. 9 → Page Fault (Evict 6)
19. 2 → Hit
20. 3 → Hit

Total Page Faults = 10
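The step-by-step trace can be confirmed with a short FIFO simulation (frames = 4, same reference string):

```python
def count_fifo_faults(refs, frames):
    queue, faults = [], 0          # queue order doubles as the frame set
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)       # evict the page loaded earliest
            queue.append(page)
    return faults

refs = [9, 2, 3, 4, 2, 5, 2, 6, 4, 5, 2, 5, 4, 3, 4, 2, 3, 9, 2, 3]
print(count_fifo_faults(refs, 4))  # 10
```

Note that a hit (e.g., the reference to 2 at step 7) does not refresh a page's position in the FIFO queue, which is why 2 is still evicted at step 8.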

Q-5

a) Spooling

Spooling (Simultaneous Peripheral Operations On-Line) is a process in which data is


temporarily held to be used and executed by a device, program, or system. It is commonly
used for managing input/output (I/O) operations, especially in printers and disk access.
Key Points:

 It allows multiple jobs to be queued for execution.


 Spooling uses buffers to store data until the device is ready.
 It increases efficiency by overlapping I/O and CPU operations.

Example:

When multiple documents are sent to a printer, they are stored in a spool (disk file or
memory) and printed one by one.

b) Dining Philosopher’s Problem

The Dining Philosopher’s Problem is a classic synchronization and concurrency problem


introduced by Edsger Dijkstra. It illustrates the challenge of avoiding deadlock and
ensuring resource sharing among concurrent processes.

Problem Setup:

 Five philosophers sit around a table with one chopstick between each.
 Each philosopher must pick up two chopsticks (shared resources) to eat.
 They alternate between thinking and eating.

Issues Involved:

 Deadlock: If all pick up one chopstick and wait.


 Starvation: Some may never get both chopsticks.

Solutions:

 Use semaphores or mutexes to ensure mutual exclusion.


 Allow only four philosophers at a time.
 Pick both chopsticks simultaneously.

c) Contiguous Memory Allocation

Contiguous Memory Allocation is a memory management technique where each process is


allocated a single contiguous block of memory in the main memory.

Features:

 Simple and easy to implement.


 Supports base and limit register to protect memory boundaries.
Disadvantages:

 Leads to external fragmentation.


 Difficult to allocate memory dynamically as processes enter and leave.

Example:

If a process needs 100 KB and memory has 120 KB free in pieces of 60 KB each, the process
can't be allocated memory despite enough total free space.

OS PAPER-3

Q-1

a) Define System Program.

System programs provide an environment for program development and execution. They
include file management, editors, compilers, loaders, etc.

b) What is Multiprogramming?

Multiprogramming is the technique of running multiple programs simultaneously on a


single CPU to maximize CPU utilization.

c) What is the Role of Valid and Invalid Bits in Demand Paging?

The valid bit indicates whether a page is in memory.

 Valid = 1: Page is in memory.


 Invalid = 0: Page is not in memory (causes page fault if accessed).

d) List Any Two Services of the Operating System.

1. Program Execution
2. File System Manipulation

e) Define the Waiting System.

The waiting state describes a process that cannot continue executing until an event
occurs or a resource becomes available; it is not eligible for the CPU until then.
f) What is the Purpose of CPU Scheduling?

CPU scheduling determines which process gets the CPU next, aiming to optimize CPU
utilization, throughput, and response time.

g) Difference Between Process Creation and Process


Termination.

 Process Creation: Involves initializing a new process, allocating memory and


resources.
 Process Termination: Involves deallocating resources and removing the process
from the system.

h) What are the Benefits of a Multiprocessor System?

1. Increased throughput
2. Fault tolerance and reliability
3. Faster processing

i) Why Do We Need an Operating System?

An operating system manages hardware and software resources, providing an interface


between users and hardware to run applications efficiently.

j) What is Fragmentation?

Fragmentation is the wastage of memory that occurs when memory blocks are not used
efficiently. It can be internal or external.

Q-2

a) Difference Between Batch Operating System and Real-Time


Operating System
Feature          | Batch OS                                           | Real-Time OS
Definition       | Executes batches of jobs without user interaction. | Responds immediately to input; used in time-critical systems.
User Interaction | No direct interaction during execution.            | Immediate and direct response to inputs.
Response Time    | Can be slow due to queued jobs.                    | Fast and deterministic.
Use Case         | Payroll, bank statement processing.                | Industrial control, medical systems.

b) What is a Process? Explain the Structure of the Process


Control Block (PCB)

Process: A process is a program in execution, consisting of program code, current activity,


and allocated resources.

Process Control Block (PCB): It is a data structure maintained by the OS to manage


information about a process.

Contents of PCB:

 Process ID: Unique identifier


 Process State: Ready, running, waiting, etc.
 Program Counter: Address of next instruction
 CPU Registers: Contents of all CPU registers
 Memory Management Info: Base, limit, page table
 I/O Status Info: Devices allocated to process
 Accounting Info: CPU used, job ID, etc.

c) What is Fragmentation? Explain Internal and External


Fragmentation with Examples

Fragmentation is the wastage of memory space due to inefficient allocation.

Internal Fragmentation:

Occurs when fixed-size memory blocks are allocated, and the process doesn't use the entire
block.
Example: If block size is 8KB and process uses 6KB, 2KB is wasted.
External Fragmentation:

Occurs when free memory is scattered in small blocks between allocated memory blocks.
Example: Multiple small free spaces (2KB, 3KB, 4KB) can't satisfy a request of 9KB even
though 9KB is available in total.

d) Preemptive SJF (Shortest Remaining Time First) Scheduling

Given:

Process | Arrival Time | Burst Time
P1      | 0 ms         | 4 ms
P2      | 2 ms         | 5 ms
P3      | 5 ms         | 6 ms
P4      | 6 ms         | 2 ms

Gantt Chart:

|    P1    |  P2  |  P4  |   P2   |        P3        |
0          4      6      8        11                 17

(At t = 2, P1's remaining time (2 ms) is less than P2's burst (5 ms), so P1 is not
preempted. At t = 6, P4's burst (2 ms) is less than P2's remaining time (3 ms), so
P2 is preempted.)

Completion Times:

 P1 = 4
 P2 = 11
 P3 = 17
 P4 = 8

Turnaround Time (TAT) = Completion Time - Arrival Time

 P1 = 4 - 0 = 4
 P2 = 11 - 2 = 9
 P3 = 17 - 5 = 12
 P4 = 8 - 6 = 2

Average TAT = (4 + 9 + 12 + 2) / 4 = 6.75 ms

Waiting Time = TAT - Burst Time

 P1 = 4 - 4 = 0
 P2 = 9 - 5 = 4
 P3 = 12 - 6 = 6
 P4 = 2 - 2 = 0

Average Waiting Time = (0 + 4 + 6 + 0) / 4 = 2.5 ms
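The schedule can be checked with a tick-by-tick SRTF simulation. This is a minimal sketch: each iteration advances time by 1 ms and runs the ready process with the shortest remaining time:

```python
def srtf(procs):
    """procs: list of (name, arrival, burst); returns completion times."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    completion, time = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:
            time += 1              # CPU idle until next arrival
            continue
        p = min(ready, key=lambda x: remaining[x])  # shortest remaining time
        remaining[p] -= 1
        time += 1
        if remaining[p] == 0:
            del remaining[p]
            completion[p] = time
    return completion

done = srtf([("P1", 0, 4), ("P2", 2, 5), ("P3", 5, 6), ("P4", 6, 2)])
print(done)  # {'P1': 4, 'P4': 8, 'P2': 11, 'P3': 17}
```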

e) FIFO Page Replacement (3 Frames)

Reference String:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Frames:

Initially empty → FIFO replaces the oldest page when full.

Page Faults occur at these references (in order):

7, 0, 1, 2, 3, 0, 4, 2, 3, 0, 1, 2, 7, 0, 1

Total Page Faults = 15

Q-3

a) Difference Between Multilevel Queue Scheduling and


Multilevel Feedback Queue Scheduling
Feature           | Multilevel Queue Scheduling                       | Multilevel Feedback Queue Scheduling
Queue Movement    | Processes remain in one fixed queue.              | Processes can move between queues.
Flexibility       | Inflexible; a process is assigned permanently.    | Flexible; allows priority adjustment over time.
Scheduling Policy | Each queue may use a different scheduling policy. | Feedback based on process behavior (CPU-bound / I/O-bound).
Example Usage     | Foreground vs. background jobs.                   | Interactive vs. batch processes.

b) Banker's Algorithm

Given:

Process | Allocation (A B C) | Maximum (A B C)
P0      | 0 1 0              | 7 5 3
P1      | 2 0 0              | 3 2 2
P2      | 3 0 2              | 9 0 2
P3      | 2 1 1              | 2 2 2
P4      | 0 0 2              | 4 3 3

Available: A = 3, B = 3, C = 2

a. Need Matrix = Max - Allocation


Process | Need (A B C)
P0      | 7 4 3
P1      | 1 2 2
P2      | 6 0 0
P3      | 0 1 1
P4      | 4 3 1

b. Safe State Check (Using Banker's Algorithm):

Work = Available = (3,3,2)


Finish = [False, False, False, False, False]

1. P1 → Need (1,2,2) ≤ Work → Work = (5,3,2) → Finish P1 = True


2. P3 → Need (0,1,1) ≤ Work → Work = (7,4,3) → Finish P3 = True
3. P0 → Need (7,4,3) ≤ Work → Work = (7,5,3) → Finish P0 = True
4. P2 → Need (6,0,0) ≤ Work → Work = (10,5,5) → Finish P2 = True
5. P4 → Need (4,3,1) ≤ Work → Work = (10,5,7) → Finish P4 = True

All processes can finish → System is in a Safe State.


Safe Sequence: P1 → P3 → P0 → P2 → P4
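The safety check can be sketched in Python using the matrices above. Note that a system can have more than one safe sequence; which one this code finds depends on the order in which processes are scanned:

```python
def is_safe(available, allocation, need):
    """Banker's Algorithm safety check; returns (safe?, sequence found)."""
    work = list(available)
    finish = [False] * len(allocation)
    sequence, progressed = [], True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            # run process i if its remaining need fits in the work vector
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(f"P{i}")
                progressed = True
    return all(finish), sequence

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
safe, seq = is_safe([3, 3, 2], allocation, need)
print(safe, seq)  # True ['P1', 'P3', 'P4', 'P0', 'P2'] -- another valid safe sequence
```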

c) Resource Allocation Graph (RAG) to Describe Deadlock

 RAG is a directed graph with:


o Processes (circles)
o Resources (squares)
o Edges:
 Request: Process → Resource
 Allocation: Resource → Process

Deadlock Condition:

If there exists a circular wait in the graph (e.g., P1 → R1 → P2 → R2 → P1), then deadlock
may occur.

Diagram (Textual Example):


P1 → R1 → P2 → R2 → P1 (cycle)

A cycle in RAG with single instance resources implies a deadlock.

d) Difference Between Physical Address and Virtual Address


Feature      | Virtual Address                          | Physical Address
Generated by | CPU during execution                     | Memory Management Unit (MMU)
Used in      | Logical view of memory (user programs)   | Actual memory location (RAM)
Translation  | Mapped to physical via page tables       | Final address used to access memory
Flexibility  | Allows abstraction and memory protection | No abstraction; direct access

e) Critical Section Problem & Its Requirements

Critical Section: A part of the program where the process accesses shared resources
(variables, files, etc.).

Problem:

Multiple processes trying to enter their critical sections may lead to race conditions, data
inconsistency, or deadlocks.

Requirements:

1. Mutual Exclusion: Only one process in critical section at a time.


2. Progress: If no process is in the critical section, others should be allowed to enter
without unnecessary delay.
3. Bounded Waiting: A process must get access to its critical section within a finite
time (no starvation).

Q-4

a) FCFS Disk Scheduling – Total Head Movement

Given:

 Disk tracks: 0 to 99
 Initial head position: 49
 Previous request: 90 (direction not relevant in FCFS)
 Request queue (in FIFO order): 86, 47, 91, 77, 94, 50, 02, 75, 30

Calculate total head movement:

Steps:

1. From 49 → 86 → |49 - 86| = 37


2. 86 → 47 → 39
3. 47 → 91 → 44
4. 91 → 77 → 14
5. 77 → 94 → 17
6. 94 → 50 → 44
7. 50 → 2 → 48
8. 2 → 75 → 73
9. 75 → 30 → 45

Total Head Movement =


= 37 + 39 + 44 + 14 + 17 + 44 + 48 + 73 + 45
= 361 tracks

b) Role and Services of Operating System

Role of OS:
The Operating System (OS) acts as an intermediary between users and hardware. It
manages hardware resources, provides a user interface, and ensures that different programs
and users operate efficiently and securely.

Services provided by OS:

1. Process Management:
o Scheduling, creation, termination of processes
o Ensures synchronization and communication
2. Memory Management:
o Allocates and deallocates memory space as needed
o Maintains memory hierarchy, paging, segmentation
3. File System Management:
o Handles storage, retrieval, and naming of files
o Provides directories, permissions, file sharing
4. Device Management:
o Manages device communication via drivers
o Provides buffering, caching, spooling
5. Security and Protection:
o Prevents unauthorized access to resources
o Ensures data integrity and user authentication
6. User Interface:
o Provides command-line or graphical interface
o Helps users interact with the system easily

c) Contiguous Allocation: Advantages and Disadvantages

Contiguous Allocation:
Each file/process occupies a set of contiguous blocks in memory.

Advantages:

 Fast Access: Supports direct addressing using base + offset


 Simplicity: Easy to implement and manage
 Minimal Overhead: No need for pointer structures

Disadvantages:

 Fragmentation: Leads to both internal and external fragmentation


 Fixed Size: Difficult to resize a process/file after allocation
 Memory Wastage: Larger memory chunks may remain unused
 Compaction Overhead: Compaction may be needed to manage fragmentation

d) Deadlock and Its Conditions

Deadlock:
A situation where a group of processes are blocked, each waiting for a resource held by
another, such that no process can proceed.

Four Necessary Conditions for Deadlock:

1. Mutual Exclusion: Only one process can use a resource at a time.


2. Hold and Wait: A process holds at least one resource and waits for others.
3. No Preemption: Resources cannot be forcibly taken; must be released voluntarily.
4. Circular Wait: A set of processes waiting in a circular chain (P1 waits for P2, P2
waits for P3… and Pn waits for P1).

All four conditions must hold simultaneously for a deadlock to occur.

e) Segmentation – With Diagram

Segmentation:
A memory management technique where each process is divided into logical segments like
code, data, stack, etc. Each segment has its own base and limit.

Key Points:

 Logical address = (segment number, offset)


 Segment table stores base (start address) and limit (length)
 OS maps logical address to physical using segment table

Diagram:
Logical View (Process Segments):        Segment Table (for mapping):

+-------------------+                   +---------+------+-------+
| Segment 0 | Code  |                   | Segment | Base | Limit |
| Segment 1 | Data  |                   |    0    | 1000 |  400  |
| Segment 2 | Stack |                   |    1    | 1400 |  200  |
+-------------------+                   |    2    | 1600 |  300  |
                                        +---------+------+-------+

If a logical address is (1, 50),


→ Physical address = 1400 (base of segment 1) + 50 = 1450

Segmentation helps in modularity, protection, and logical program division.
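The translation step can be sketched directly from the segment table above, including the limit check that gives segmentation its protection property:

```python
segment_table = {0: (1000, 400), 1: (1400, 200), 2: (1600, 300)}  # base, limit

def translate(segment, offset):
    """Map a logical (segment, offset) address to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:                 # protection: offset must lie in segment
        raise ValueError("segmentation fault: offset out of range")
    return base + offset

print(translate(1, 50))  # 1450
```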

Q-5

a) Context Switching

Context switching is the process of storing the state of a currently running process so that the
CPU can switch to another process. This is essential in multitasking operating systems where
multiple processes share a single CPU.

Key Points:

 Involves saving the process control block (PCB) of the current process.
 The CPU loads the PCB of the next process to resume execution.
 Causes overhead as no useful work is done during the switch.
 Helps achieve concurrent execution and CPU utilization.
b) Deadlock

A deadlock occurs in a system when a group of processes are stuck in a state where each
process is waiting for a resource held by another, and none can proceed.

Four Necessary Conditions:

1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait

Deadlock results in system inaction and may require detection, prevention, avoidance, or
recovery techniques to handle it.

c) Semaphore

A semaphore is a synchronization tool used to manage access to shared resources in a


concurrent system like a multi-process or multi-threaded environment.

Types:

1. Counting Semaphore – Can have any non-negative integer value, used for managing
access to multiple instances of a resource.
2. Binary Semaphore – Has only two values (0 and 1), acts like a lock.

Operations:

 wait(P) – Decrements the value. If it becomes negative, the process is blocked.


 signal(V) – Increments the value. If other processes are waiting, one is unblocked.

Semaphores help avoid race conditions and manage critical sections.

OS PAPER-4

Q-1

a) Least Recently Used (LRU) in Memory Management

LRU (Least Recently Used) is a page replacement algorithm that replaces the page that has
not been used for the longest period of time. It aims to improve the efficiency of memory
usage by keeping the most recently accessed pages in memory.
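A minimal LRU sketch, using an OrderedDict as the recency queue (the reference string here is illustrative, not from a question):

```python
from collections import OrderedDict

def lru_page_faults(refs, frames):
    """Count page faults under LRU; most recently used sits at the right."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = True
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 8
```

Unlike FIFO, a hit moves the page to the back of the eviction order, which is exactly what protects recently used pages.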
b) Context Switch

A context switch is the process of saving the state (context) of a currently running process
and loading the state of the next process to be executed. This occurs in multitasking systems
to allow efficient CPU utilization.

c) Page Frame

A page frame is a fixed-size block of physical memory that holds a page of data in a system
that uses paging. The size of the page frame matches the size of a page in virtual memory.

d) Various Properties of a File

1. Name: The unique identifier for a file.


2. Type: The format or kind of file (e.g., text, executable).
3. Size: The amount of space occupied by the file.
4. Location: The physical or logical address of the file on storage.

e) Seek Time in Disk Scheduling

Seek time is the time it takes for the disk's read/write head to move to the track where data is
stored or requested. It is a crucial factor in determining the overall disk I/O performance.

f) Compaction

Compaction is the process of rearranging the memory contents to eliminate fragmentation.


It involves moving processes in memory to create contiguous blocks of free space.

g) Belady's Anomaly

Belady's Anomaly occurs in certain page replacement algorithms, specifically FIFO, where
increasing the number of page frames can actually lead to an increase in the number of page
faults.

h) Four Characteristics of Operating System

1. Multitasking: The ability to run multiple processes concurrently.


2. Memory Management: Efficient allocation and deallocation of memory to processes.
3. File Management: Organization, storage, and retrieval of files.
4. Security and Protection: Safeguarding resources from unauthorized access.

i) Safe State

A safe state is a situation in which there exists at least one sequence of processes that can
complete without causing deadlock, meaning each process can eventually obtain the
resources it needs.

j) Starvation

Starvation occurs when a process is indefinitely postponed because the system always grants
resources to other processes. It is a type of resource allocation problem where a process
cannot proceed because it never gets the required resources.

Q-2

a) Operating System Structure

An Operating System (OS) is structured in different layers to manage hardware resources


and provide services to applications. The structure can vary depending on the OS design, but
commonly used structures include monolithic, layered, and microkernel.

Types of OS Structures:

1. Monolithic Structure:
o All OS services run in kernel mode, without strict separation.
o It is efficient but harder to maintain and modify because changes to one part
can affect the whole system.
o Example: Linux, early UNIX systems.
2. Layered Structure:
o The OS is divided into multiple layers, each providing specific services.
o Lower layers provide fundamental services, and higher layers provide user-
level services.
o Example: THE system, some aspects of modern UNIX.
3. Microkernel Structure:
o The kernel only provides essential services like communication and basic
process management.
o Additional services like file systems and device drivers run as user-level
processes.
o Example: Minix, modern versions of macOS and Windows.
4. Hybrid System:
o A combination of monolithic and microkernel designs, where some services
run in user space, while critical ones run in kernel space.
o Example: Windows NT.

b) What is Scheduling? Compare Short-term Scheduler with


Long-term Scheduler.

Scheduling refers to the method by which the operating system decides which process to
execute at any given time. It ensures that the CPU is efficiently utilized and that processes are
executed in a timely manner.

Short-term Scheduler (CPU Scheduler):

 Function: Decides which process will execute next on the CPU.


 Frequency: It operates frequently (milliseconds) and makes quick decisions.
 Criteria: Based on process priority or CPU burst time.
 Example: Round Robin, Shortest Job First.

Long-term Scheduler (Job Scheduler):

 Function: Decides which process should be admitted to the system (from the queue
of jobs waiting to enter the ready queue).
 Frequency: It runs less frequently (seconds or minutes) as compared to the short-term
scheduler.
 Criteria: Based on process type, memory requirements, and resource availability.
 Example: In a batch processing system, it might admit jobs based on resource
availability.

Comparison:

Aspect      | Short-term Scheduler                          | Long-term Scheduler
Purpose     | Determines which process to execute next.     | Decides which process enters the ready queue.
Frequency   | High frequency (milliseconds).                | Low frequency (seconds or minutes).
Interaction | Works closely with CPU scheduling algorithms. | Works with system-level resource management.
Impact      | Affects CPU utilization.                      | Affects process mix and system load.
c) Round Robin Scheduling with Example

Round Robin (RR) is a preemptive CPU scheduling algorithm where each process is
assigned a fixed time slot (quantum) in a circular order. If a process does not complete within
its quantum, it is placed at the end of the queue, and the next process gets the CPU.

Example:
Process | Arrival Time | Burst Time | Time Quantum
P1      | 0            | 8          | 4
P2      | 1            | 4          | 4
P3      | 2            | 9          | 4
P4      | 3            | 5          | 4

Execution Order:

1. P1 runs from time 0 to 4 (remaining burst time = 4).


2. P2 runs from time 4 to 8 (remaining burst time = 0, completes).
3. P3 runs from time 8 to 12 (remaining burst time = 5).
4. P4 runs from time 12 to 16 (remaining burst time = 1).
5. P1 runs again from time 16 to 20 (remaining burst time = 0, completes).
6. P3 runs from time 20 to 24 (remaining burst time = 1).
7. P4 runs from time 24 to 25 (completes).
8. P3 runs from time 25 to 26 (completes).

Gantt Chart:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0 4 8 12 16 20 24 25 26
Average Turnaround Time and Waiting Time:

 Turnaround Time (TAT) = Completion Time - Arrival Time
 Waiting Time (WT) = Turnaround Time - Burst Time

 P1: TAT = 20 - 0 = 20, WT = 20 - 8 = 12
 P2: TAT = 8 - 1 = 7, WT = 7 - 4 = 3
 P3: TAT = 26 - 2 = 24, WT = 24 - 9 = 15
 P4: TAT = 25 - 3 = 22, WT = 22 - 5 = 17

Average Turnaround Time = (20 + 7 + 24 + 22) / 4 = 18.25
Average Waiting Time = (12 + 3 + 15 + 17) / 4 = 11.75
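The Round Robin trace can be reproduced mechanically. The sketch below is illustrative (not part of the standard answer); it takes the (name, arrival, burst) tuples and quantum from the table above, admitting new arrivals to the ready queue before a preempted process rejoins it:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst); returns name -> (completion, TAT, WT)."""
    procs = sorted(procs, key=lambda p: p[1])
    arrival = {name: at for name, at, bt in procs}
    burst = {name: bt for name, at, bt in procs}
    remaining = dict(burst)
    ready, results, time, i = deque(), {}, 0, 0
    while i < len(procs) or ready:
        if not ready:                              # CPU idle until next arrival
            time = max(time, procs[i][1])
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0]); i += 1
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        # processes that arrived during this time slice enter the queue first
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0]); i += 1
        if remaining[name]:
            ready.append(name)                     # back to the end of the queue
        else:
            tat = time - arrival[name]
            results[name] = (time, tat, tat - burst[name])
    return results

res = round_robin([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)], 4)
print(res["P2"])  # (8, 7, 3): completes at 8, turnaround 7, waiting 3
```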

d) What are Semaphores? Explain the Types of Semaphores.

Semaphores are synchronization primitives used to manage access to shared resources in
concurrent programming. They help avoid race conditions and ensure that processes
coordinate correctly.
Types of Semaphores:

1. Binary Semaphore (Mutex):

o Takes only two values (0 and 1).
o Used as a lock to ensure mutual exclusion, allowing only one process to
access a critical section at a time.
o Operation: wait() decrements the value to 0 and signal() increments it
back to 1.
2. Counting Semaphore:
o Takes non-negative integer values.
o Used to manage access to a pool of resources. It allows multiple processes to
access resources simultaneously, as long as there are available resources.
o Operation: wait() decreases the value and signal() increases it.

Operations on Semaphores:

 wait(P): Decrements the semaphore value. If the value is negative, the process waits.
 signal(V): Increments the semaphore value. If there are processes waiting, one is
unblocked.
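The wait() and signal() operations can be illustrated with Python's threading module. This is a minimal sketch, not part of the syllabus answer; the pool size of 3 and the worker count are assumed example values:

```python
import threading

# A counting semaphore guarding a pool of 3 identical resources;
# a binary semaphore (mutex) is the special case initialized to 1.
pool = threading.Semaphore(3)
state = {"in_use": 0, "peak": 0}
state_lock = threading.Lock()

def worker():
    pool.acquire()                    # wait(P): blocks once all 3 slots are taken
    with state_lock:
        state["in_use"] += 1
        state["peak"] = max(state["peak"], state["in_use"])
    with state_lock:
        state["in_use"] -= 1
    pool.release()                    # signal(V): frees a slot, waking a waiter

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state["peak"] <= 3, state["in_use"] == 0)  # True True
```

The semaphore guarantees that no more than three workers are ever inside the guarded region at once, which is exactly what the counting-semaphore definition above promises.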

e) Contiguous Memory Allocation

Contiguous Memory Allocation is a memory management scheme where each process is
allocated a single contiguous block of memory. The process occupies a continuous portion of
memory, making it easier to access.

Diagram:
| Process P1 | Process P2 | Process P3 | Free Space |
|------------|------------|------------|------------|
| 0-10 | 11-20 | 21-30 | 31-50 |
Steps:

1. Each process is allocated a continuous block in memory.

2. Memory is managed by keeping track of which parts are occupied and free.
3. The OS maintains pointers or table entries for each process's starting address and
size.

Advantages:

 Simple to implement: No complex management needed.

 Fast Access: Direct addressing using the base address and offset.

Disadvantages:

 Fragmentation: Leads to both internal and external fragmentation.

 Memory Wastage: Difficulty in resizing processes after allocation.
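A common way to place a process in contiguous memory is first-fit allocation. The sketch below is illustrative (the hole list uses the free region 31-50 from the diagram above; the `first_fit` helper is a hypothetical name, not a standard API):

```python
def first_fit(free_blocks, request):
    """free_blocks: list of (start, size) holes; request: units needed.
    Carves the request out of the first hole large enough and returns the
    base address, or None if no hole fits (external fragmentation)."""
    for i, (start, size) in enumerate(free_blocks):
        if size >= request:
            if size == request:
                free_blocks.pop(i)                     # hole consumed entirely
            else:
                free_blocks[i] = (start + request, size - request)
            return start
    return None

holes = [(31, 20)]            # the free region 31-50 from the diagram
print(first_fit(holes, 10))   # 31; the hole shrinks to (41, 10)
print(first_fit(holes, 15))   # None: 10 free units remain but 15 are needed
```

The second call fails even though free memory exists, which is the external fragmentation problem listed under the disadvantages.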
Q-3

a) Critical Section Problem

The Critical Section Problem refers to the issue of managing the shared resources that
multiple processes or threads need to access concurrently. A critical section is a segment of a
program that accesses shared resources (e.g., variables, memory, files) that must not be
accessed simultaneously by more than one process or thread to avoid data inconsistency.

Conditions for Solving the Critical Section Problem:

1. Mutual Exclusion: Only one process can execute in the critical section at any time.
2. Progress: If no process is executing in the critical section and one or more processes
wish to enter, only processes not in their remainder section may take part in deciding
which will enter next, and this decision cannot be postponed indefinitely.
3. Bounded Waiting: There is a bound on the number of times other processes may enter
the critical section after a process has requested entry and before that request is granted.

Various synchronization mechanisms like semaphores, locks, mutexes, and monitors are
used to solve this problem.
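As a minimal sketch of one such mechanism, a mutex lock around the critical section guarantees mutual exclusion for a shared counter (the thread and iteration counts are assumed example values):

```python
import threading

counter = 0
mutex = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with mutex:          # entry section: acquire the lock
            counter += 1     # critical section: update the shared variable
        # exit section: the lock is released when the `with` block ends

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 every run; without the lock, updates could be lost
```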

b) Non-Preemptive Shortest Job First (SJF) Scheduling

i) Gantt Chart using Non-Preemptive Shortest Job First (SJF)

Given the following processes with burst times and arrival times:

Process   Burst Time   Arrival Time
P1        3            3
P2        3            6
P3        4            0
P4        5            2

Steps for SJF (Non-Preemptive):

1. At time 0, only P3 has arrived, so it runs first (burst time 4), finishing at time 4.
2. At time 4, P1 (burst 3) and P4 (burst 5) are waiting; P1 has the shorter burst, so it
runs next, finishing at time 7.
3. At time 7, P2 (burst 3, arrived at 6) and P4 (burst 5) are waiting; P2 has the shorter
burst, so it runs next, finishing at time 10.
4. Finally, P4 executes from time 10 to 15.

Gantt Chart:

| P3 | P1 | P2 | P4 |
0    4    7    10   15
ii) Average Turnaround Time and Average Waiting Time

Turnaround Time (TAT) = Completion Time - Arrival Time
Waiting Time (WT) = Turnaround Time - Burst Time

 P3: TAT = 4 - 0 = 4, WT = 4 - 4 = 0
 P1: TAT = 7 - 3 = 4, WT = 4 - 3 = 1
 P2: TAT = 10 - 6 = 4, WT = 4 - 3 = 1
 P4: TAT = 15 - 2 = 13, WT = 13 - 5 = 8

Average Turnaround Time = (4 + 4 + 4 + 13) / 4 = 6.25

Average Waiting Time = (0 + 1 + 1 + 8) / 4 = 2.5
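The non-preemptive SJF schedule can be checked with a short simulator. This is an illustrative sketch; at each completion it picks the shortest-burst job among those that have already arrived, using the (name, arrival, burst) tuples from the table above:

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst); returns name -> (completion, TAT, WT)."""
    pending = sorted(procs, key=lambda p: p[1])
    time, results = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                            # CPU idle until the next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])     # shortest burst among ready jobs
        name, at, bt = job
        time += bt
        results[name] = (time, time - at, time - at - bt)
        pending.remove(job)
    return results

res = sjf_nonpreemptive([("P1", 3, 3), ("P2", 6, 3), ("P3", 0, 4), ("P4", 2, 5)])
avg_tat = sum(v[1] for v in res.values()) / len(res)
avg_wt = sum(v[2] for v in res.values()) / len(res)
print(avg_tat, avg_wt)  # 6.25 2.5
```

Note that at time 7 the simulator picks P2 (burst 3) over P4 (burst 5), since P2 has already arrived at time 6 and is the shorter job.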

c) What is a Deadlock? How Can Deadlock Be Avoided?

A deadlock occurs in a multi-process system when two or more processes are unable to
proceed because each is waiting for a resource held by another. In simple terms, processes are
stuck in a state of mutual waiting, and none can proceed.

Conditions for Deadlock:

1. Mutual Exclusion: Resources are limited and can only be used by one process at a
time.
2. Hold and Wait: Processes hold at least one resource and wait for others.
3. No Preemption: Resources cannot be forcibly taken away from processes holding
them.
4. Circular Wait: A set of processes exists such that each process is waiting for a
resource held by the next process in the set.

Deadlock Prevention/Avoidance/Recovery:

 Prevention: Eliminate one of the four necessary conditions.
o Avoid Hold and Wait: Require processes to request all resources at once.
o Avoid Circular Wait: Impose a strict ordering on resource allocation.
 Avoidance: Grant a resource request only if the resulting state is safe, i.e., a state
from which every process can still run to completion. The Banker's algorithm is the
classic avoidance technique.
 Detection and Recovery: Detect when deadlock has occurred and recover from it by
aborting or preempting processes.
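The safety check at the heart of the Banker's algorithm can be sketched as follows. The 5-process, 3-resource data below is the classic textbook instance, given here only as an assumed example:

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if some order lets every process finish."""
    work = available[:]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion, then releases what it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# assumed example: 5 processes, 3 resource types (A, B, C)
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
print(is_safe([3, 3, 2], allocation, need))  # True: e.g. order P1, P3, P4, P0, P2
```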

d) File System Access Methods

The File System Access Methods define how a process can access data stored in a file.
These methods are crucial for reading, writing, and organizing files.
Types of Access Methods:

1. Sequential Access: Data is read or written in a sequential manner, from the beginning
to the end. This is the simplest access method. Example: Text files.
2. Direct (Random) Access: Data can be read or written at any location within the file.
The system calculates the address of the data using an index or pointer. Example:
Database files.
3. Indexed Access: An index is maintained that maps logical data addresses to physical
locations. This method allows efficient data retrieval. Example: File systems using
index blocks.

e) Paging in Memory Management

Paging is a memory management scheme that eliminates the need for contiguous allocation
of physical memory. It divides physical memory into fixed-size blocks called frames, and
divides logical memory into blocks of the same size called pages.

How Paging Works:

 The process is divided into pages, and physical memory is divided into frames.
 Each page is mapped to a frame, and a page table maintains this mapping.
 This scheme eliminates external fragmentation and allows non-contiguous
memory allocation.

Page Table:

The page table keeps track of the frame location for each page, providing a mapping from
logical pages to physical frames.

Advantages of Paging:

1. Eliminates external fragmentation.

2. Allows efficient use of memory.
3. Supports virtual memory, enabling larger applications than physical memory.

Disadvantages of Paging:

1. Internal fragmentation can still occur (if the last page is only partially used).
2. Extra overhead in maintaining page tables.

Diagram:

Logical Memory (Pages)    Page Table    Physical Memory (Frames)
Page 0             ->     Frame 2
Page 1             ->     Frame 4
Page 2             ->     Frame 1
Page 3             ->     Frame 0
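The page-table lookup above amounts to splitting a logical address into a page number and offset, then swapping in the frame number. A minimal sketch, using the mapping from the diagram and an assumed page size of 1024 bytes:

```python
PAGE_SIZE = 1024  # assumed page size in bytes

# page table from the diagram: page number -> frame number
page_table = {0: 2, 1: 4, 2: 1, 3: 0}

def translate(logical_addr):
    """Logical address -> physical address via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise ValueError("page fault: page %d is not resident" % page)
    return page_table[page] * PAGE_SIZE + offset

print(translate(1030))  # page 1, offset 6 -> frame 4 -> 4*1024 + 6 = 4102
```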
Q-4

a) Shortest Seek Time First (SSTF) Disk Scheduling Algorithm

In SSTF (Shortest Seek Time First) disk scheduling, the disk head moves to the track that is
closest to its current position, minimizing the seek time.

Given:

 Total tracks = 200

 Request queue = [82, 170, 43, 140, 24, 16, 190]
 Initial head position = 50

Steps for SSTF:

1. The initial head position is 50. The closest request is 43 (distance = 7).
2. Move to track 43, the closest request.
3. The closest request to 43 is 24 (distance = 19).
4. Move to track 24.
5. The closest request to 24 is 16 (distance = 8).
6. Move to track 16.
7. The closest request to 16 is 82 (distance = 66).
8. Move to track 82.
9. The closest request to 82 is 140 (distance = 58).
10. Move to track 140.
11. The closest request to 140 is 170 (distance = 30).
12. Move to track 170.
13. The closest request to 170 is 190 (distance = 20).
14. Move to track 190.

Total Head Movement Calculation:

 From 50 to 43 = 7
 From 43 to 24 = 19
 From 24 to 16 = 8
 From 16 to 82 = 66
 From 82 to 140 = 58
 From 140 to 170 = 30
 From 170 to 190 = 20

Total Head Movement = 7 + 19 + 8 + 66 + 58 + 30 + 20 = 208
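The SSTF service order and total head movement can be verified with a short sketch (illustrative only; it greedily picks the pending track closest to the current head position):

```python
def sstf(head, requests):
    """Return (service order, total head movement) under SSTF."""
    pending, order, total = list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))  # closest track
        total += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, total

order, total = sstf(50, [82, 170, 43, 140, 24, 16, 190])
print(order)  # [43, 24, 16, 82, 140, 170, 190]
print(total)  # 208
```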

b) Job Control Block (JCB)

A Job Control Block (JCB) is used by the operating system to manage and control the
execution of a process or job. It contains important information about the process such as its
state, priority, CPU time, and resources.
Components of a JCB:

1. Job Identification: Job ID, job name.

2. Process State: Indicates the current state of the job (e.g., running, waiting,
completed).
3. CPU Time: The amount of time the job has used the CPU.
4. Priority: The priority level of the job.
5. Memory Requirements: The amount of memory needed by the job.
6. Input/Output Operations: Information about I/O operations required by the job.
7. Resource Allocation: Details of any resources allocated (e.g., devices).

Diagram:

+---------------------+
| Job Control Block   |
+---------------------+
| Job ID              |
| Process State       |
| CPU Time            |
| Priority            |
| Memory Required     |
| I/O Operations      |
| Resource Allocation |
+---------------------+

c) Characteristics and Necessary Conditions for Deadlock

Characteristics of Deadlock:

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode,
meaning only one process can use it at a time.
2. Hold and Wait: A process holding at least one resource is waiting to acquire
additional resources held by other processes.
3. No Preemption: Resources cannot be forcibly taken from processes holding them;
they must be released voluntarily.
4. Circular Wait: A set of processes exists such that each process is waiting for a
resource held by another process in the set.

Necessary Conditions for Deadlock:

1. Mutual Exclusion: Resources must be allocated to one process at a time.

2. Hold and Wait: Processes hold some resources while waiting for others.
3. No Preemption: Resources cannot be preempted; they can only be released
voluntarily.
4. Circular Wait: There is a circular chain of processes where each is waiting for a
resource held by the next process in the chain.
d) Page Faults Calculation for Different Page Replacement
Algorithms
Given:

 Page reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2

 Number of frames = 3
 Initially, all frames are empty.

i) Optimal Page Replacement Algorithm:

The Optimal page replacement algorithm replaces the page that will not be used for the
longest period in the future.

Steps:

1. Reference 4 → Fault (Load 4) [4]
2. Reference 7 → Fault (Load 7) [4, 7]
3. Reference 6 → Fault (Load 6) [4, 7, 6]
4. Reference 1 → Fault (4 is never referenced again, so replace 4) [1, 7, 6]
5. Reference 7 → No Fault (7 is already in memory) [1, 7, 6]
6. Reference 6 → No Fault (6 is already in memory) [1, 7, 6]
7. Reference 1 → No Fault (1 is already in memory) [1, 7, 6]
8. Reference 2 → Fault (1 and 6 are never referenced again, while 7 is needed at
step 9, so replace 6) [1, 7, 2]
9. Reference 7 → No Fault (7 is already in memory) [1, 7, 2]
10. Reference 2 → No Fault (2 is already in memory) [1, 7, 2]

Page Faults = 5

ii) FIFO (First In First Out) Page Replacement Algorithm:

In FIFO, the page that has been in memory the longest is replaced.

Steps:

1. Reference 4 → Fault (Load 4) [4]

2. Reference 7 → Fault (Load 7) [4, 7]
3. Reference 6 → Fault (Load 6) [4, 7, 6]
4. Reference 1 → Fault (Replace 4) [1, 7, 6]
5. Reference 7 → No Fault (7 is already in memory) [1, 7, 6]
6. Reference 6 → No Fault (6 is already in memory) [1, 7, 6]
7. Reference 1 → No Fault (1 is already in memory) [1, 7, 6]
8. Reference 2 → Fault (Replace 7) [1, 2, 6]
9. Reference 7 → Fault (Replace 6) [1, 2, 7]
10. Reference 2 → No Fault (2 is already in memory) [1, 2, 7]

Page Faults = 7
iii) LRU (Least Recently Used) Page Replacement Algorithm:

In LRU, the page that has not been used for the longest period of time is replaced.

Steps:

1. Reference 4 → Fault (Load 4) [4]

2. Reference 7 → Fault (Load 7) [4, 7]
3. Reference 6 → Fault (Load 6) [4, 7, 6]
4. Reference 1 → Fault (Replace 4) [1, 7, 6]
5. Reference 7 → No Fault (7 is already in memory) [1, 7, 6]
6. Reference 6 → No Fault (6 is already in memory) [1, 7, 6]
7. Reference 1 → No Fault (1 is already in memory) [1, 7, 6]
8. Reference 2 → Fault (Replace 7) [1, 2, 6]
9. Reference 7 → Fault (Replace 6) [1, 2, 7]
10. Reference 2 → No Fault (2 is already in memory) [1, 2, 7]

Page Faults = 7

e) Memory Management Through Fragmentation

Fragmentation occurs when memory is allocated in such a way that it is not fully utilized
due to the creation of small gaps or unused spaces.

Types of Fragmentation:

1. External Fragmentation: When free memory is scattered in small blocks across the
system and is too small to satisfy the memory requests. This can lead to inefficient
memory usage.
o Example: A system has 1000 units of free memory, but they are scattered
across various locations (e.g., 100, 200, 300 units), making it difficult to
allocate large processes.
2. Internal Fragmentation: When allocated memory may be larger than needed,
leaving unused memory within a partition.
o Example: Allocating 500 units of memory for a process that only needs 450
units results in 50 units of unused memory within the allocated partition.

Diagram:

| Allocated | Free | Allocated | Free | Allocated |
| 500 units | 200  | 300 units | 50   | 100 units |

Fragmentation reduces the efficiency of memory usage, and compaction (shifting memory
contents to remove gaps) can help mitigate external fragmentation.

Q-5
a) Shortest Seek Time First (SSTF)

Shortest Seek Time First (SSTF) is a disk scheduling algorithm that selects the disk I/O
request that requires the least movement of the disk arm from its current position. This
algorithm minimizes the seek time by prioritizing the request closest to the current position of
the disk head.

 Working: The disk controller scans the request queue and chooses the request with
the shortest distance from the current disk head position. After servicing this request,
the process repeats, selecting the next closest request.
 Advantages:
1. Reduces the average seek time.
2. More efficient than First-Come-First-Served (FCFS).
 Disadvantages:
1. May lead to starvation, where some requests (especially those far from the
current position) may never be serviced if closer requests keep coming.
2. It is harder to implement in real-time systems where time constraints are
critical.

Example: If the disk head is at track 50 and the request queue contains [82, 170, 43, 140, 24,
16, 190], the disk will first move to track 43 (as it is the closest to 50), then to 24, and so on.

b) Linked Allocation for File System

Linked allocation is a file allocation method where each file is stored as a linked list of
blocks scattered across the disk. Each block contains a pointer to the next block in the file.
This method is simple and efficient but requires additional space for pointers.

 Working: Each file block contains a pointer to the next block in the sequence. The
last block of the file points to null, indicating the end of the file. This approach doesn't
require contiguous space for the file, so it helps in handling fragmented files.
 Advantages:
1. Efficient use of space: Files can be stored in non-contiguous blocks, allowing
efficient use of fragmented disk space.
2. No external fragmentation: The file doesn't need a contiguous block of
space, so there's no risk of fragmentation in allocation.
 Disadvantages:
1. Performance issues: Accessing a file requires following the chain of pointers,
which is slower than direct access methods (like contiguous allocation).
2. Overhead: Requires additional space for storing pointers in each block,
leading to increased storage overhead.

Example: If a file consists of three blocks, Block 1 stores a pointer to Block 2, and Block 2
stores a pointer to Block 3. Block 3 points to null, marking the end of the file.
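The pointer chain from the example can be sketched directly (illustrative only; the `Block` class and block contents are assumed for demonstration):

```python
class Block:
    """One disk block: some data plus a pointer to the next block of the file."""
    def __init__(self, data):
        self.data = data
        self.next = None          # None (null) marks the end of the file

def read_file(first):
    """Sequentially follow the pointer chain from the first block."""
    out, blk = [], first
    while blk is not None:
        out.append(blk.data)
        blk = blk.next
    return out

# a file stored in three non-contiguous blocks
b1, b2, b3 = Block("AAA"), Block("BBB"), Block("CCC")
b1.next, b2.next = b2, b3
print(read_file(b1))  # ['AAA', 'BBB', 'CCC']
```

Reading the file requires walking the whole chain, which illustrates why linked allocation is poor for direct (random) access.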
c) Address Binding in Case of Memory Management

Address binding refers to the process of mapping logical addresses (generated by a program)
to physical addresses (actual locations in memory). This is an essential part of memory
management in an operating system, as it allows programs to access physical memory in a
consistent and efficient way.

 Types of Address Binding:

1. Compile-time Binding: The mapping of logical addresses to physical
addresses is done at compile time. The program is linked to a fixed memory
address, and any changes to the address space require recompiling the
program.
 Advantages: Simple and fast.
 Disadvantages: Inflexible, as the program must always be loaded into
the same memory location.
2. Load-time Binding: The program's logical addresses are mapped to physical
addresses when the program is loaded into memory. This is more flexible than
compile-time binding.
 Advantages: Allows programs to be loaded at any available memory
location.
 Disadvantages: Slightly slower than compile-time binding.
3. Execution-time Binding: The binding occurs when the program is actually
running. The program can be moved during execution, and the address
mapping is handled dynamically by the operating system.
 Advantages: Provides maximum flexibility and allows memory to be
used efficiently.
 Disadvantages: More complex and slower than other methods.
 Example: When a program is compiled, the compiler generates a logical address. If
the program is loaded into memory at a different location than expected, the operating
system uses address binding to translate the logical address into the correct physical
address.
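Execution-time binding is commonly done with a relocation (base) register and a limit register in the MMU. A minimal sketch; the BASE and LIMIT values are assumed for illustration:

```python
BASE = 14000   # assumed: where the loader placed the program in physical memory
LIMIT = 3000   # assumed: size of the process's logical address space

def to_physical(logical):
    """Execution-time binding: check the limit, then add the base register."""
    if not 0 <= logical < LIMIT:
        raise MemoryError("logical address outside the process's address space")
    return BASE + logical

print(to_physical(346))  # 14000 + 346 = 14346
```

Because the translation happens on every access, the OS can move the process and simply reload BASE, which is the flexibility described above.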
