CSM FOS Ses-2

Uploaded by Charan

Page and disk scheduling algorithms are used in operating systems to efficiently manage memory and disk access. Page replacement algorithms determine which memory pages to remove to make space for new pages. Disk scheduling algorithms decide the order of requests to minimize disk seek time. Thrashing occurs when a process spends more time paging than executing due to insufficient memory, degrading system performance. Deadlocks are resolved through prevention methods that restrict resource allocation or detection and recovery techniques like terminating processes in deadlock cycles.

Fundamentals of Operating Systems Sessional – 2 Questions

1. Explain page replacement algorithms with example problems.


Page replacement algorithms are techniques used in operating systems to manage memory in a virtual
memory environment. These algorithms decide which pages to evict (remove) from memory when a
new page needs to be loaded but there is no free space available. The objective is to minimize the
number of page faults (instances when a requested page is not in memory) and maximize system
performance. Let's explore some common page replacement algorithms and solve example problems
to understand how they work.
i. FIFO (First-In-First-Out): This algorithm replaces the oldest page in memory, following a queue-
like structure.
Example: Let's consider a scenario where the memory has three frames and the reference string is: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. Calculate the number of page faults using the FIFO algorithm. Frame status after each reference:
1 → fault: [1]
2 → fault: [1, 2]
3 → fault: [1, 2, 3]
4 → fault: [2, 3, 4] (evict 1, the oldest)
1 → fault: [3, 4, 1] (evict 2)
2 → fault: [4, 1, 2] (evict 3)
5 → fault: [1, 2, 5] (evict 4)
1 → hit: [1, 2, 5]
2 → hit: [1, 2, 5]
3 → fault: [2, 5, 3] (evict 1)
4 → fault: [5, 3, 4] (evict 2)
5 → hit: [5, 3, 4]
Total page faults: 9
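The FIFO trace can be reproduced with a short simulation (a minimal sketch, not part of the original notes; the frame count and reference string match the example above):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory = set()    # pages currently resident
    queue = deque()   # arrival order of resident pages
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:         # no free frame: evict oldest
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # → 9
```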
ii. Optimal Page Replacement: This algorithm replaces the page that will not be used for the longest
time in the future.
Example: Using the same reference string and memory configuration as above, calculate the number of page faults using the optimal page replacement algorithm. Frame status after each reference:
1 → fault: [1]
2 → fault: [1, 2]
3 → fault: [1, 2, 3]
4 → fault: [1, 2, 4] (evict 3, used farthest in the future)
1 → hit: [1, 2, 4]
2 → hit: [1, 2, 4]
5 → fault: [1, 2, 5] (evict 4)
1 → hit: [1, 2, 5]
2 → hit: [1, 2, 5]
3 → fault: [3, 2, 5] (evict 1, never used again)
4 → fault: [3, 4, 5] (evict 2, never used again)
5 → hit: [3, 4, 5]
Total page faults: 7
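The optimal policy needs to look ahead in the reference string; a minimal sketch (not from the original notes) of Belady's optimal replacement:

```python
def optimal_faults(refs, frames):
    """Count page faults under optimal (Belady) replacement."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            # Evict the resident page whose next use is farthest away
            # (or that is never used again).
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            victim = max(memory, key=next_use)
            memory[memory.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))  # → 7
```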
iii. LRU (Least Recently Used): This algorithm replaces the page that has not been used for the
longest time.
Example: Using the same reference string and memory configuration, calculate the number of page faults using the LRU algorithm. Frame status after each reference:
1 → fault: [1]
2 → fault: [1, 2]
3 → fault: [1, 2, 3]
4 → fault: [2, 3, 4] (evict 1, least recently used)
1 → fault: [3, 4, 1] (evict 2)
2 → fault: [4, 1, 2] (evict 3)
5 → fault: [1, 2, 5] (evict 4)
1 → hit: [1, 2, 5]
2 → hit: [1, 2, 5]
3 → fault: [1, 2, 3] (evict 5)
4 → fault: [2, 3, 4] (evict 1)
5 → fault: [3, 4, 5] (evict 2)
Total page faults: 12 references, 10 faults
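LRU can be simulated by keeping resident pages ordered by recency; a minimal sketch (not from the original notes):

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory = []  # ordered least-recently-used first
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)      # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the least recently used page
        memory.append(page)          # most recently used goes at the end
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # → 10
```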
These examples showcase how different page replacement algorithms make decisions based on the
order of page accesses to minimize page faults. Real-world scenarios often involve larger reference
strings and more frames, making the choice of algorithm crucial for optimizing system performance.
2. Explain disk scheduling algorithms with examples.
Disk scheduling algorithms are used to determine the order in which read and write requests to a disk
are serviced. These algorithms play a crucial role in optimizing disk access and minimizing the seek
time, which is the time taken for the disk arm to move to the desired track.
i. FCFS (First-Come, First-Served): In the FCFS algorithm, disk requests are serviced in the order
they arrive in the queue. Let's consider disk requests at track positions: 98, 183, 37, 122, 14, 124, 65,
67. Disk Head Initial Position: 53

Request  Seek Time
98       |98 − 53| = 45
183      |183 − 98| = 85
37       |183 − 37| = 146
122      |122 − 37| = 85
14       |122 − 14| = 108
124      |124 − 14| = 110
65       |124 − 65| = 59
67       |67 − 65| = 2

Total Seek Time: 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640
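FCFS seek time is just the sum of absolute head movements between consecutive requests; a quick sketch (not part of the original notes):

```python
def fcfs_seek(requests, head):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)  # move from current position to request
        head = track
    return total

requests = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_seek(requests, 53))  # → 640
```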


ii. SCAN (Elevator) Algorithm: The SCAN algorithm (also known as the elevator algorithm) services requests in one direction until it reaches the end of the disk, then reverses and services requests in the opposite direction. Using the same disk requests and initial position as in the FCFS example, and assuming a 200-cylinder disk (tracks 0 to 199). Disk Head Initial Position: 53 Direction: Left to Right (toward higher tracks)

Request      Head Movement
65           12
67           2
98           31
122          24
124          2
183          59
199 (end)    16
37           162
14           23

Total Seek Time: 12 + 2 + 31 + 24 + 2 + 59 + 16 + 162 + 23 = 331
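A minimal SCAN simulation (the 200-cylinder disk, i.e. `max_track=199`, is an assumption, not stated in the original notes):

```python
def scan_seek(requests, head, max_track=199):
    """Total head movement for SCAN moving toward higher tracks first.
    The head sweeps to max_track, then reverses if requests remain."""
    right = sorted(t for t in requests if t >= head)
    left = sorted((t for t in requests if t < head), reverse=True)
    total, pos = 0, head
    for track in right:            # sweep right, servicing requests
        total += track - pos
        pos = track
    if left:                       # continue to the disk edge, then reverse
        total += max_track - pos
        pos = max_track
        for track in left:
            total += pos - track
            pos = track
    return total

requests = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_seek(requests, 53))  # → 331
```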


iii. C-SCAN (Circular SCAN) Algorithm: The C-SCAN algorithm services requests in one direction only; instead of reversing at the end of the disk, it jumps back to the beginning and continues in the same direction, so after the jump the low-numbered requests are serviced in ascending order (14 before 37). Using the same disk requests, initial position, and 200-cylinder disk as in the SCAN example. Disk Head Initial Position: 53 Direction: Left to Right

Request       Head Movement
65            12
67            2
98            31
122           24
124           2
183           59
199 (end)     16
0 (jump)      199
14            14
37            23

Total Seek Time: 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 23 = 382
(If the wrap-around seek from 199 back to 0 is not counted, the total is 183.)
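A C-SCAN sketch under the same assumptions (200-cylinder disk; the wrap-around seek is counted here, though some texts omit it):

```python
def cscan_seek(requests, head, max_track=199):
    """Total head movement for C-SCAN moving toward higher tracks,
    counting the wrap-around seek from the last to the first cylinder."""
    right = sorted(t for t in requests if t >= head)
    left = sorted(t for t in requests if t < head)
    total, pos = 0, head
    for track in right:
        total += track - pos
        pos = track
    if left:
        total += max_track - pos   # finish the sweep to the disk edge
        total += max_track         # jump back to cylinder 0
        pos = 0
        for track in left:         # service the low tracks in ascending order
            total += track - pos
            pos = track
    return total

requests = [98, 183, 37, 122, 14, 124, 65, 67]
print(cscan_seek(requests, 53))  # → 382
```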


iv. LOOK Algorithm: The LOOK algorithm is similar to SCAN but reverses direction when there are no more requests ahead in the current direction, rather than travelling all the way to the end of the disk. Using the same disk requests and initial position as in the previous examples. Disk Head Initial Position: 53 Direction: Left to Right

Request  Head Movement
65       12
67       2
98       31
122      24
124      2
183      59
37       146
14       23

Total Seek Time: 12 + 2 + 31 + 24 + 2 + 59 + 146 + 23 = 299
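LOOK needs no disk-size parameter, since the head turns around at the last pending request; a minimal sketch (not from the original notes):

```python
def look_seek(requests, head):
    """Total head movement for LOOK moving toward higher tracks first:
    the head reverses at the last request, not at the disk edge."""
    right = sorted(t for t in requests if t >= head)
    left = sorted((t for t in requests if t < head), reverse=True)
    total, pos = 0, head
    for track in right + left:     # sweep right, then sweep back left
        total += abs(track - pos)
        pos = track
    return total

requests = [98, 183, 37, 122, 14, 124, 65, 67]
print(look_seek(requests, 53))  # → 299
```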


These examples demonstrate how different disk scheduling algorithms determine the order in which
disk requests are serviced, impacting the total seek time and overall efficiency of the disk access. The
choice of algorithm depends on the specific characteristics of the workload and system requirements.
3. Explain thrashing.
If a process cannot maintain its minimum required number of frames, it must be swapped out, freeing up frames for other processes; this swapping is the job of the medium-term CPU scheduler. But what about a process that can keep its minimum, but cannot keep all of the frames that it is currently using on a regular basis? In this case it is forced to page out pages that it will need again in the very near future, leading to large numbers of page faults. A process that is spending more time paging than executing is said to be thrashing.
Cause of Thrashing:
 Early process scheduling schemes would control the level of multiprogramming allowed based
on CPU utilization, adding in more processes when CPU utilization was low.
 The problem is that when memory filled up and processes started spending lots of time waiting
for their pages to page in, CPU utilization would drop, causing the scheduler to add in even
more processes and exacerbating the problem! Eventually the system would grind to a halt.
 Local page replacement policies can prevent one thrashing process from taking pages away from
other processes, but the thrashing process still tends to clog up the I/O queue, thereby slowing
down any other process that needs to do even a little bit of paging (or any other I/O, for that matter).
4. Explain deadlock handling.
i. Deadlock Prevention or Avoidance: Deadlock prevention and avoidance strategies are designed to
stop the system from entering a deadlock state by controlling the way resources are allocated to
processes. These strategies focus on eliminating one or more of the four conditions necessary for a
deadlock (mutual exclusion, hold and wait, no preemption, and circular wait).
a) Mutual Exclusion: This condition can be attacked only for sharable resources (such as
read-only files), which several processes can safely access at once. Many resources, such as
printers, are intrinsically non-sharable, so in practice mutual exclusion usually cannot be
eliminated entirely.
b) Hold and Wait: Processes are required to request and obtain all necessary resources upfront
before execution begins. This eliminates the possibility of a process holding resources while
waiting for others, which helps prevent circular wait.
c) No Preemption: To break this condition, allocated resources are allowed to be preempted: if a
process holding some resources requests another resource that cannot be granted immediately,
it must release the resources it currently holds and request them again later, together with
the new resource.
d) Circular Wait Avoidance: Resources are assigned a unique numerical value, and processes are
required to request resources in ascending order. This way, circular waits are prevented as
processes cannot lock resources in a circular fashion.
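The resource-ordering rule in (d) can be sketched in a few lines; the resource names and their ranks below are purely illustrative, not from the original notes:

```python
import threading

# Hypothetical resources, each assigned a fixed rank.
locks = {"printer": (1, threading.Lock()),
         "scanner": (2, threading.Lock()),
         "tape":    (3, threading.Lock())}

def acquire_in_order(names):
    """Acquire the named resources in ascending rank. No process ever
    holds a higher-ranked resource while waiting for a lower-ranked one,
    so a circular wait cannot form."""
    ordered = sorted(names, key=lambda n: locks[n][0])
    for name in ordered:
        locks[name][1].acquire()
    return ordered

def release(names):
    for name in names:
        locks[name][1].release()

held = acquire_in_order(["tape", "printer"])
print(held)  # → ['printer', 'tape'] (requested order does not matter)
release(held)
```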
ii. Deadlock Detection and Recovery: This approach acknowledges that deadlocks might occur, and
instead of preventing them, it focuses on identifying and resolving them once they happen.
a) Resource Allocation Graph (RAG): In a resource-allocation graph, processes and resource
types are represented as nodes, and edges indicate requests and assignments. With
single-instance resource types, a cycle in the graph implies a deadlock, and processes in the
cycle can be terminated or have their resources preempted. (With multiple instances per
resource type, a cycle is necessary but not sufficient for deadlock, and a separate detection
algorithm is used.)
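Detecting a cycle in the graph is a standard depth-first search; the adjacency-list representation below is an assumption for illustration:

```python
def has_cycle(graph):
    """DFS cycle detection on a resource-allocation graph. graph maps
    each node (process or resource) to the nodes its outgoing edges
    point at; with single-instance resources a cycle means deadlock."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True            # back edge: cycle found
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: deadlock.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))  # → True
```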
b) Wait-Die and Wound-Wait Schemes: These timestamp-based schemes are often used in database
systems to handle deadlocks between transactions. In the wait-die scheme, an older process may
wait for a resource held by a younger process, but a younger process requesting a resource
held by an older one is aborted (it "dies"). The wound-wait scheme reverses this: an older
process preempts ("wounds") a younger process that holds the resource it needs, while a
younger process is allowed to wait for an older one.
c) Process Termination: When a deadlock is detected, one or more processes involved in the
deadlock can be terminated. This releases the resources held by these processes, allowing other
processes to continue. The choice of which process to terminate might be based on factors like
priority or execution time.
iii. Ignoring the Problem:
Sometimes, the rare occurrence of deadlocks might not warrant the significant overhead and
complexity of prevention or detection strategies. In such cases, the system might opt to simply let the
deadlocks happen and then restart or reboot the system when they occur. This approach can be viable
if the impact of deadlocks is minimal and restarting the system doesn't cause substantial disruption.
5. Explain virtual memory.
Virtual memory is a technique that allows the execution of processes that are not completely in
memory. One major advantage of this scheme is that programs can be larger than physical memory.
Further, virtual memory abstracts main memory into an extremely large, uniform array of storage,
separating logical memory as viewed by the user from physical memory. This technique frees
programmers from the concerns of memory-storage limitations.
Virtual memory also allows processes to share files easily and to implement shared memory. In
addition, it provides an efficient mechanism for process creation. Earlier memory-management
schemes avoid memory fragmentation by breaking a process's memory requirements down into
smaller pieces (pages) and storing the pages non-contiguously in memory.
However, the entire process still had to be stored in memory somewhere.
The requirement that instructions must be in physical memory to be executed seems both necessary
and reasonable; but it is also unfortunate, since it limits the size of a program to the size of physical
memory. In practice, most real processes do not need all their pages, or at least not all at once, for
several reasons:
a) Error handling code is not needed unless that specific error occurs, some of which are quite rare.
b) Arrays are often over-sized for worst-case scenarios, and only a small fraction of the arrays are
actually used in practice.
c) Certain features of certain programs are rarely used.

Virtual memory involves the separation of logical memory as perceived by users from physical
memory. This separation allows an extremely large virtual memory to be provided for programmers
when only a smaller physical memory is available. Virtual memory makes programming much easier,
because the programmer need not worry about the amount of physical memory available.


