OS Lab Programs 508
3. Priority Scheduling:
Each process is assigned a priority, and the CPU is allocated to the process with the highest priority.
Priority can be predefined based on process characteristics or dynamic, changing during execution
based on various factors. This algorithm can be either preemptive or non-preemptive.
Algorithms:
1) First Come First Serve:
Program:
def fcfs_scheduling(processes, arrival_time, burst_time):
    n = len(processes)
    waiting_time = [0] * n
    completion_time = [0] * n
    turnaround_time = [0] * n
    current_time = 0
    for i in range(n):
        # CPU idles until the process arrives, then runs it to completion
        current_time = max(current_time, arrival_time[i]) + burst_time[i]
        completion_time[i] = current_time
        turnaround_time[i] = completion_time[i] - arrival_time[i]
        waiting_time[i] = turnaround_time[i] - burst_time[i]
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")
def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []
    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        arrival_time.append(at)
        burst_time.append(bt)
    return processes, arrival_time, burst_time
Sample Output:
Process   Arrival Time   Burst Time   Completion Time   Waiting Time   Turnaround Time
p0        0              5            5                 0              5
p1        1              3            8                 4              7
p2        2              2            10                6              8
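The values in the table above can be reproduced with a short check. This is a sketch for verification only; `fcfs_times` is a helper name introduced here, not part of the lab program.

```python
def fcfs_times(arrival, burst):
    """Compute (completion, waiting, turnaround) lists under FCFS."""
    t, ct, wt, tat = 0, [], [], []
    for a, b in zip(arrival, burst):
        t = max(t, a) + b          # CPU may idle until the process arrives
        ct.append(t)
        tat.append(t - a)
        wt.append(t - a - b)
    return ct, wt, tat

# Processes p0..p2 from the table above
ct, wt, tat = fcfs_times([0, 1, 2], [5, 3, 2])
print(ct, wt, tat)  # [5, 8, 10] [0, 4, 6] [5, 7, 8]
```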
2) Shortest Job First:
Program:
def sjf_scheduling(processes, arrival_time, burst_time):
    n = len(processes)
    waiting_time = [0] * n
    completion_time = [0] * n
    turnaround_time = [0] * n
    remaining_time = burst_time.copy()
    current_time = 0
    completed_processes = 0
    while completed_processes < n:
        # Among arrived, unfinished processes, pick the shortest burst
        min_index = -1
        for i in range(n):
            if arrival_time[i] <= current_time and remaining_time[i] > 0:
                if min_index == -1 or burst_time[i] < burst_time[min_index]:
                    min_index = i
        if min_index == -1:
            current_time += 1
            continue
        current_time += burst_time[min_index]
        remaining_time[min_index] = 0
        completion_time[min_index] = current_time
        turnaround_time[min_index] = completion_time[min_index] - arrival_time[min_index]
        waiting_time[min_index] = turnaround_time[min_index] - burst_time[min_index]
        completed_processes += 1
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")
def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []
    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        arrival_time.append(at)
        burst_time.append(bt)
    return processes, arrival_time, burst_time
Sample Output:
Process   Arrival Time   Burst Time   Completion Time   Waiting Time   Turnaround Time
p0        0              8            8                 0              8
p1        1              4            12                7              11
p2        2              9            26                15             24
p3        3              5            17                9              14
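The non-preemptive SJF schedule in the table can be verified by direct simulation. This is an illustrative sketch; `sjf_times` is a helper name introduced here, not part of the lab program.

```python
def sjf_times(arrival, burst):
    """Non-preemptive SJF: returns (completion, waiting, turnaround) lists."""
    n = len(arrival)
    ct, done, t = [0] * n, [False] * n, 0
    while not all(done):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        if not ready:
            t += 1  # CPU idles until the next arrival
            continue
        j = min(ready, key=lambda i: burst[i])  # shortest burst among arrived
        t += burst[j]
        ct[j] = t
        done[j] = True
    tat = [c - a for c, a in zip(ct, arrival)]
    wt = [x - b for x, b in zip(tat, burst)]
    return ct, wt, tat

ct, wt, tat = sjf_times([0, 1, 2, 3], [8, 4, 9, 5])
print(ct, wt, tat)  # [8, 12, 26, 17] [0, 7, 15, 9] [8, 11, 24, 14]
```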
3) Priority Scheduling:
Program:
def priority_scheduling(processes, arrival_time, burst_time, priority):
    n = len(processes)
    waiting_time = [0] * n
    turnaround_time = [0] * n
    completion_time = [0] * n
    done = [False] * n
    current_time = 0
    completed_processes = 0
    while completed_processes < n:
        ready = [i for i in range(n) if not done[i] and arrival_time[i] <= current_time]
        if not ready:
            current_time += 1  # CPU idles until a process arrives
            continue
        current_process = min(ready, key=lambda i: priority[i])  # lower number = higher priority
        burst = burst_time[current_process]
        # Update times
        completion_time[current_process] = current_time + burst
        turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
        waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
        current_time += burst
        done[current_process] = True
        completed_processes += 1
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")
def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []
    priority = []
    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        pr = int(input(f"Enter the priority of {processes[i]} (lower number means higher priority): "))
        arrival_time.append(at)
        burst_time.append(bt)
        priority.append(pr)
    return processes, arrival_time, burst_time, priority
Sample Output:
Process   Arrival Time   Burst Time   Priority   Completion Time   Waiting Time   Turnaround Time
p0        0              8            3          8                 0              8
p1        1              4            1          12                7              11
p2        2              9            4          26                15             24
p3        3              5            2          17                9              14
Executed Output:
Enter the number of processes: 4
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the priority of p0 (lower number means higher priority): 3
Enter the arrival time of p1: 1
Process   Arrival Time   Burst Time   Priority   Completion Time   Waiting Time   Turnaround Time
p0        0              8            3          8                 0              8
p1        1              4            1          12                7              11
p2        2              9            4          26                15             24
p3        3              5            2          17                9              14
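The non-preemptive priority schedule can be cross-checked the same way. This is a verification sketch; `priority_times` is a helper name introduced here, not part of the lab program.

```python
def priority_times(arrival, burst, priority):
    """Non-preemptive priority scheduling (lower number = higher priority)."""
    n = len(arrival)
    ct, done, t = [0] * n, [False] * n, 0
    while not all(done):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        if not ready:
            t += 1  # idle until the next arrival
            continue
        j = min(ready, key=lambda i: priority[i])
        t += burst[j]
        ct[j] = t
        done[j] = True
    tat = [c - a for c, a in zip(ct, arrival)]
    wt = [x - b for x, b in zip(tat, burst)]
    return ct, wt, tat

# Processes and priorities from the tables above
ct, wt, tat = priority_times([0, 1, 2, 3], [8, 4, 9, 5], [3, 1, 4, 2])
print(ct, wt, tat)  # [8, 12, 26, 17] [0, 7, 15, 9] [8, 11, 24, 14]
```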
4) Round Robin:
Program:
def round_robin(processes, arrival_time, burst_time, quantum):
    n = len(processes)
    remaining_time = burst_time.copy()
    waiting_time = [0] * n
    turnaround_time = [0] * n
    completion_time = [0] * n
    current_time = 0
    queue = []
    process_indices = list(range(n))
    completed = 0

    def admit():
        # Move processes that have arrived into the ready queue
        for i in process_indices[:]:
            if arrival_time[i] <= current_time:
                queue.append(i)
                process_indices.remove(i)

    admit()
    while completed < n:
        if not queue:
            # If no processes are in the queue, advance time
            current_time += 1
            admit()
            continue
        current_process = queue.pop(0)
        run = min(quantum, remaining_time[current_process])
        current_time += run
        remaining_time[current_process] -= run
        admit()  # arrivals during this slice join ahead of the preempted process
        if remaining_time[current_process] == 0:
            completed += 1
            completion_time[current_process] = current_time
            turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
            waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
        else:
            # If process is not completed, put it back in the queue
            queue.append(current_process)
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")
def get_inputs():
n = int(input("Enter the number of processes: "))
processes = [f"p{i}" for i in range(n)]
arrival_time = []
burst_time = []
for i in range(n):
at = int(input(f"Enter the arrival time of {processes[i]}: "))
bt = int(input(f"Enter the burst time of {processes[i]}: "))
arrival_time.append(at)
burst_time.append(bt)
quantum = int(input("Enter the time quantum for Round Robin scheduling: "))
return processes, arrival_time, burst_time, quantum
Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 4
Enter the arrival time of p1: 1
Enter the burst time of p1: 5
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
Enter the time quantum for Round Robin scheduling: 3
Gantt chart:
| p0 3 | p1 6 | p2 8 | p0 9 | p1 11 |
Process   Arrival Time   Burst Time   Completion Time   Waiting Time   Turnaround Time
p0        0              4            9                 5              9
p1        1              5            11                5              10
p2        2              2            8                 4              6
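Since the total burst time here is 4 + 5 + 2 = 11 and the CPU never idles, the schedule must end at t = 11; this can be confirmed by direct simulation. The sketch below assumes the usual convention that arrivals during a time slice enter the ready queue ahead of the preempted process; `rr_completion` is a helper name introduced here.

```python
from collections import deque

def rr_completion(arrival, burst, quantum):
    """Round Robin completion times for each process."""
    n = len(arrival)
    rem = list(burst)
    ct = [0] * n
    pending = sorted(range(n), key=lambda i: arrival[i])  # not yet queued
    q = deque()
    t = 0

    def admit(now):
        while pending and arrival[pending[0]] <= now:
            q.append(pending.pop(0))

    admit(t)
    while q or pending:
        if not q:
            t = max(t, arrival[pending[0]])  # CPU idles until the next arrival
            admit(t)
            continue
        i = q.popleft()
        run = min(quantum, rem[i])
        t += run
        rem[i] -= run
        admit(t)  # arrivals during the slice join first
        if rem[i] == 0:
            ct[i] = t
        else:
            q.append(i)  # preempted process goes to the back
    return ct

print(rr_completion([0, 1, 2], [4, 5, 2], 3))  # [9, 11, 8]
```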
Shortest Remaining Time First (Preemptive SJF):
Program:
def srtf_scheduling(processes, arrival_time, burst_time):
    n = len(processes)
    remaining_time = burst_time.copy()
    waiting_time = [0] * n
    turnaround_time = [0] * n
    completion_time = [0] * n
    current_time = 0
    completed_processes = 0
    while completed_processes < n:
        # Ready queue: arrived processes with work remaining
        process_queue = [i for i in range(n)
                         if arrival_time[i] <= current_time and remaining_time[i] > 0]
        if not process_queue:
            current_time += 1
            continue
        cur = min(process_queue, key=lambda i: remaining_time[i])
        remaining_time[cur] -= 1
        current_time += 1
        if remaining_time[cur] == 0:
            completed_processes += 1
            completion_time[cur] = current_time
            turnaround_time[cur] = completion_time[cur] - arrival_time[cur]
            waiting_time[cur] = turnaround_time[cur] - burst_time[cur]
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []
    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        arrival_time.append(at)
        burst_time.append(bt)
    return processes, arrival_time, burst_time
Sample Output:
Process   Arrival Time   Burst Time   Completion Time   Waiting Time   Turnaround Time
p0        0              8            17                9              17
p1        1              4            5                 0              4
p2        2              9            26                15             24
p3        3              5            10                2              7
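For this input the preemptive-SJF schedule can be verified by a unit-time simulation: p0 runs 0-1, p1 preempts and runs 1-5, then p3 runs 5-10, p0 resumes 10-17, and p2 runs 17-26. `srtf_times` is a helper name introduced here for checking, not part of the lab program.

```python
def srtf_times(arrival, burst):
    """Preemptive SJF (SRTF), simulated one time unit at a time."""
    n = len(arrival)
    rem = list(burst)
    ct = [0] * n
    t, done = 0, 0
    while done < n:
        ready = [i for i in range(n) if arrival[i] <= t and rem[i] > 0]
        if not ready:
            t += 1  # CPU idles until the next arrival
            continue
        j = min(ready, key=lambda i: rem[i])  # shortest remaining time
        rem[j] -= 1
        t += 1
        if rem[j] == 0:
            ct[j] = t
            done += 1
    return ct

print(srtf_times([0, 1, 2, 3], [8, 4, 9, 5]))  # [17, 5, 26, 10]
```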
Aim:
To write a program for the implementation of semaphores
Description:
Semaphores are a synchronization mechanism used in concurrent programming to control access to
shared resources and coordinate the execution of multiple processes or threads. They were introduced
by Edsger W. Dijkstra in 1965 and have since become a fundamental concept in operating systems and
parallel computing.
A semaphore is a simple integer variable, often referred to as a "counting semaphore." It can take
on non-negative integer values and supports two main operations:
1. **Wait (P) Operation**: When a process (or thread) wants to access a shared resource, it must first
perform a "wait" operation on the semaphore. If the semaphore value is greater than zero, the
process decrements the semaphore value and continues its execution, indicating that the resource is
available. If the semaphore value is zero, the process is blocked (put to sleep) until the semaphore
value becomes positive again.
2. **Signal (V) Operation**: After a process finishes using a shared resource, it performs a "signal"
operation on the semaphore. This operation increments the semaphore value, indicating that the
resource is now available for other processes or threads to use. If there were blocked processes waiting
for the semaphore to become positive (due to previous "wait" operations), one of them will be
awakened and granted access to the resource.
Semaphores help prevent race conditions and ensure that critical sections of code (regions of code that
access shared resources) are mutually exclusive, meaning only one process or thread can access the
shared resource at a time.
However, it's essential to use semaphores carefully to avoid potential issues like deadlocks (circular
waiting) or race conditions. More advanced synchronization mechanisms, such as mutexes and
condition variables, are often used in modern programming languages and libraries to manage
concurrency more effectively.
Program:
import time
import threading
# Semaphore functions
def semaphore_wait(semaphore):
while semaphore[0] <= 0:
pass
semaphore[0] -= 1
return semaphore
def semaphore_signal(semaphore):
semaphore[0] += 1
return semaphore
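The wait and signal operations above can be exercised directly. The functions are redefined below so the block is self-contained; note that the busy-wait check-and-decrement is not atomic, so this version is illustrative only, and real programs should prefer `threading.Semaphore`.

```python
def semaphore_wait(semaphore):
    while semaphore[0] <= 0:
        pass  # busy-wait until a permit is available
    semaphore[0] -= 1
    return semaphore

def semaphore_signal(semaphore):
    semaphore[0] += 1
    return semaphore

sem = [2]              # counting semaphore with 2 permits
semaphore_wait(sem)    # acquire the first permit
semaphore_wait(sem)    # acquire the second permit
print(sem[0])          # 0 -> any further waiter would spin here
semaphore_signal(sem)  # release one permit
print(sem[0])          # 1
```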
In shared memory, a region of memory is mapped into the address space of multiple processes or
threads. These processes/threads can then read from or write to the shared memory region just like
accessing regular memory. This shared memory area acts as a shared buffer, facilitating the exchange of
data between different processes without the need for copying data between them.
However, shared memory also requires careful synchronization to avoid data inconsistencies and race
conditions. Developers need to use synchronization primitives like semaphores, mutexes, or atomic
operations to ensure that multiple processes or threads access the shared memory in a coordinated
and controlled manner. Proper synchronization helps maintain data integrity and prevent conflicts
when multiple entities attempt to modify the shared data concurrently.
To use shared memory, operating systems provide APIs for creating and managing shared memory
regions. Developers need to be cautious while using shared memory to prevent data corruption and
ensure proper synchronization to avoid race conditions and data inconsistencies.
Program:
import multiprocessing

def worker(shared_data):
    """Each worker increments the shared counter once."""
    with shared_data.get_lock():  # synchronize access to the shared value
        shared_data.value += 1
        print(f"Worker process: {shared_data.value}")

def main():
    # Create a shared integer with initial value 0
    shared_data = multiprocessing.Value('i', 0)
    processes = [multiprocessing.Process(target=worker, args=(shared_data,))
                 for _ in range(10)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    print(f"Final shared data value: {shared_data.value}")

if __name__ == "__main__":
    main()
Sample Output:
Worker process: 1
Worker process: 2
Worker process: 3
Worker process: 4
Worker process: 5
Worker process: 6
Worker process: 7
Worker process: 8
Worker process: 9
Worker process: 10
Final shared data value: 10
Executed Output:
Worker process: 1
Worker process: 2
Worker process: 3
Worker process: 4
Worker process: 5
Worker process: 6
Worker process: 7
Worker process: 8
Worker process: 9
Worker process: 10
Final shared data value: 10
Result:
Thus, a Python program for the implementation of shared memory was executed successfully.
Implementation of IPC:
Aim:
To write a program for the implementation of Inter Process Communication (IPC)
Description:
Inter-Process Communication (IPC) enables data exchange and coordination among concurrent
processes on a computer. Shared memory allows direct access to a common memory area for high-
speed data exchange, while message passing sends messages through channels such as pipes,
sockets, or queues. Synchronization tools like semaphores and mutexes prevent conflicts and ensure
data integrity. IPC is used in parallel computing, client-server systems, and real-time applications;
it requires careful design and implementation to manage complexity and maintain security.
Program:
import multiprocessing
import time
def producer(queue):
"""Function to be run by the producer process."""
for i in range(5):
item = f"item-{i}"
print(f"Producer putting {item} into queue")
queue.put(item) # Put item into the queue
time.sleep(1) # Simulate work
def consumer(queue):
"""Function to be run by the consumer process."""
while True:
item = queue.get() # Get item from the queue
if item is None: # Sentinel value to end the consumer process
break
print(f"Consumer got {item} from queue")
time.sleep(2) # Simulate work
def main():
    # Create a queue for IPC
    queue = multiprocessing.Queue()
    producer_process = multiprocessing.Process(target=producer, args=(queue,))
    consumer_process = multiprocessing.Process(target=consumer, args=(queue,))
    producer_process.start()
    consumer_process.start()
    producer_process.join()
    queue.put(None)  # Sentinel value tells the consumer to stop
    consumer_process.join()
    print("Both producer and consumer processes have completed.")

if __name__ == "__main__":
    main()
Sample Output:
Producer putting item-0 into queue
Producer putting item-1 into queue
Producer putting item-2 into queue
Producer putting item-3 into queue
Producer putting item-4 into queue
Consumer got item-0 from queue
Consumer got item-1 from queue
Consumer got item-2 from queue
Consumer got item-3 from queue
Consumer got item-4 from queue
Both producer and consumer processes have completed.
Executed Output:
Producer putting item-0 into queue
Producer putting item-1 into queue
Producer putting item-2 into queue
Producer putting item-3 into queue
Producer putting item-4 into queue
Consumer got item-0 from queue
Consumer got item-1 from queue
Consumer got item-2 from queue
Consumer got item-3 from queue
Consumer got item-4 from queue
Both producer and consumer processes have completed.
Result:
Thus, a Python program for the implementation of IPC was executed successfully.
1. **Resource Allocation Model**: The algorithm operates under the assumption that the system has
a fixed number of resources of different types (e.g., memory, CPU, printers) and multiple processes
compete for these resources.
2. **Maximum Claim**: Each process declares its maximum possible resource requirements
(maximum claim) before it begins its execution. This information is known to the system.
3. **Available Resources**: The system maintains a record of the available resources for each
resource type at any given time.
4. **Safety and Request Matrices**: The algorithm uses two matrices: the safety matrix and the
request matrix. The safety matrix is used to assess the system's ability to allocate resources safely to
processes without causing deadlock. The request matrix stores the resource requests made by each
process during its execution.
5. **Safe State**: A system state is considered safe if there exists a sequence of resource allocations
where each process can complete its execution and release resources, allowing other processes to
complete without getting stuck in a deadlock.
6. **Dynamic Resource Requests**: The Banker's algorithm allows processes to make multiple
resource requests during their execution. It checks each request's safety before granting it.
The Banker's algorithm is a preventive measure against deadlocks, ensuring that resource allocations are
done in a manner that avoids deadlock scenarios. It provides a way for processes to request resources
safely, considering the system's current resource availability and avoiding situations that could lead to
deadlock.
Program:
import numpy as np

def is_safe(available, allocation, need):
    """Safety algorithm: look for an order in which every process can finish."""
    n = len(allocation)
    work = available.copy()
    finish = [False] * n
    safe_sequence = []
    for _ in range(n):
        for i in range(n):
            if not finish[i] and all(need[i] <= work):
                work += allocation[i]
                finish[i] = True
                safe_sequence.append(i)
                break
        else:
            # No remaining process can proceed: the state is unsafe
            return False, []
    return True, safe_sequence

def release_resources(i, release, available, allocation, need):
    available += release
    allocation[i] -= release
    need[i] += release
    print(f"Process {i} released resources. New state:")
    print(f"Available: {available}")
    print(f"Allocation: {allocation}")
    print(f"Need: {need}")
Available: [2 2]
Allocation: [[0 1]
[2 0]
[3 0]]
Need: [[7 4]
[1 2]
[6 0]]
System is in a safe state after Process 0. Safe sequence: [0]
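The Banker's algorithm grants a request only if the request does not exceed the process's declared need or the available resources, and the state after tentatively granting it is still safe. The sketch below uses the classic textbook state as assumed data (not this lab's input), and `request_resources` is a helper name introduced here.

```python
def is_safe(available, allocation, need):
    """Safety algorithm on plain Python lists."""
    n = len(allocation)
    work = available[:]
    finish = [False] * n
    for _ in range(n):
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return False
    return True

def request_resources(i, request, available, allocation, need):
    """Grant process i's request only if the resulting state is safe."""
    if any(r > nd for r, nd in zip(request, need[i])):
        return False  # exceeds declared maximum need
    if any(r > av for r, av in zip(request, available)):
        return False  # exceeds what is currently free
    # Pretend to grant the request, then run the safety algorithm
    avail2 = [av - r for av, r in zip(available, request)]
    alloc2 = [row[:] for row in allocation]
    need2 = [row[:] for row in need]
    alloc2[i] = [a + r for a, r in zip(alloc2[i], request)]
    need2[i] = [nd - r for nd, r in zip(need2[i], request)]
    return is_safe(avail2, alloc2, need2)

# Classic 5-process, 3-resource example (assumed data)
available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_claim = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(max_claim, allocation)]

print(request_resources(1, [1, 0, 2], available, allocation, need))  # True
print(request_resources(4, [3, 3, 0], available, allocation, need))  # False (unsafe)
```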
import numpy as np

def detect_deadlock(allocated, max_claim, available):
    need = max_claim - allocated
    n = len(allocated)
    work = available.copy()
    finish = [False] * n
    while True:
        progress_made = False
        for i in range(n):
            if not finish[i] and all(need[i] <= work):
                # If process i can finish, update work and finish arrays
                work += allocated[i]
                finish[i] = True
                progress_made = True
                print(f"Process {i} can finish; updated work: {work}")
        if not progress_made:
            break
    deadlocked = [i for i in range(n) if not finish[i]]
    if deadlocked:
        print(f"Deadlock detected among processes: {deadlocked}")
    else:
        print("No deadlock detected.")

def main():
    n = int(input("Enter the number of processes: "))
    m = int(input("Enter the number of resources: "))
    # Input format assumed: one space-separated row per process
    allocated = np.array([list(map(int, input().split())) for _ in range(n)])
    max_claim = np.array([list(map(int, input().split())) for _ in range(n)])
    available = np.array(list(map(int, input().split())))
    print("\nDetecting deadlock...\n")
    detect_deadlock(allocated, max_claim, available)

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of processes: 3
Enter the number of resources: 2
Detecting deadlock...
Executed Output:
Enter the number of processes: 3
Enter the number of resources: 2
Detecting deadlock...
Result:
Thus, a Python program for the implementation of the deadlock detection algorithm was executed
successfully.
1. **Thread Creation and Management**: Threads are created and managed by the operating system
or the programming language's runtime environment. They share the same memory space within a
process, making communication and data sharing between threads easier.
2. **Race Conditions**: Race conditions occur when multiple threads access shared resources in an
uncontrolled manner, leading to unpredictable and potentially incorrect behavior. Synchronization
mechanisms help prevent race conditions by allowing only one thread to access the shared resource at a
time.
3. **Deadlocks**: Deadlocks can occur when two or more threads are each waiting for a resource that
is held by another thread, resulting in a circular dependency. Properly designed synchronization can
help avoid deadlocks by ensuring that threads release resources in a coordinated manner.
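The race-condition point above can be demonstrated with a lock-protected shared counter: because every increment happens inside the lock, no updates are lost. This is a minimal self-contained sketch using Python's `threading` module.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(times):
    """Increment the shared counter under a lock to avoid lost updates."""
    global counter
    for _ in range(times):
        with lock:  # only one thread mutates the counter at a time
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -> no lost updates
```

Without the lock, the unsynchronized read-modify-write could interleave across threads and the final count could fall short of 40000.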
Program:
import threading

counter = 0
counter_lock = threading.Lock()

def increment(thread_id, increments):
    """Increment the shared counter under a lock to avoid races."""
    global counter
    for _ in range(increments):
        with counter_lock:
            old = counter
            counter += 1
            print(f"Thread {thread_id} is incrementing counter from {old} to {counter}")

def main():
    num_threads = int(input("Enter the number of threads: "))
    increments_per_thread = int(input("Enter the number of increments per thread: "))
    threads = []
    for i in range(num_threads):
        t = threading.Thread(target=increment, args=(i, increments_per_thread))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of threads: 3
Enter the number of increments per thread: 5
Thread 0 is incrementing counter from 0 to 1
Thread 1 is incrementing counter from 1 to 2
Thread 2 is incrementing counter from 2 to 3
Thread 0 is incrementing counter from 3 to 4
Thread 1 is incrementing counter from 4 to 5
Thread 2 is incrementing counter from 5 to 6
Thread 0 is incrementing counter from 6 to 7
Thread 1 is incrementing counter from 7 to 8
Thread 2 is incrementing counter from 8 to 9
Thread 0 is incrementing counter from 9 to 10
Thread 1 is incrementing counter from 10 to 11
Thread 2 is incrementing counter from 11 to 12
Thread 0 is incrementing counter from 12 to 13
Thread 1 is incrementing counter from 13 to 14
Thread 2 is incrementing counter from 14 to 15
Thread 0 is incrementing counter from 15 to 16
Thread 1 is incrementing counter from 16 to 17
Thread 2 is incrementing counter from 17 to 18
Write a program for implementation of memory allocation methods for fixed partitions:
a) First fit
b) Best fit
c) Worst fit
Aim:
To write a program for implementation of memory allocation methods for fixed partitions:
a) First fit b) Best fit c) Worst fit
Description:
Memory allocation methods for fixed partitions are used in operating systems to manage memory in a
system with a fixed number of memory partitions of different sizes. These methods determine how
processes are allocated to these fixed partitions based on their size requirements.
1. **First Fit**:
- In the first-fit memory allocation method, when a new process arrives and needs memory, the
system searches the memory partitions sequentially from the beginning.
- The first partition that is large enough to accommodate the process is allocated to it.
- This method is relatively simple and efficient in terms of time complexity since it stops searching
once a suitable partition is found.
- However, it may lead to fragmentation, where small blocks of unused memory are scattered
across the memory, making it challenging to allocate larger processes in the future.
2. **Best Fit**:
- In the best-fit memory allocation method, when a new process arrives, the system searches
for the smallest partition that can hold the process size.
- Among all the partitions that are large enough to accommodate the process, the one with the
smallest size is chosen for allocation.
- This method helps in reducing fragmentation, as it tries to fit processes into the smallest possible
available partitions.
- However, it may be slightly slower than the first-fit method, as it needs to search the entire
list of partitions to find the best fit.
3. **Worst Fit**:
- In the worst-fit memory allocation method, when a new process arrives, the system searches
for the largest partition available that can hold the process size.
- The largest partition among all the partitions that can accommodate the process is allocated to it.
- This method may lead to more fragmentation compared to first-fit and best-fit, as larger
partitions are used for smaller processes, leaving behind smaller unused spaces.
- It is less commonly used than first-fit and best-fit because it does not efficiently utilize memory space.
In summary, memory allocation methods for fixed partitions, such as first-fit, best-fit, and worst-fit,
determine how processes are assigned to available memory partitions based on their size requirements.
Each method has its advantages and disadvantages, affecting fragmentation and memory utilization in
the system. The choice of the allocation method depends on the specific requirements and
characteristics of the system.
Program:
def first_fit(partitions, processes):
    allocation = [-1] * len(processes)
    for i, process in enumerate(processes):
        for j, partition in enumerate(partitions):
            if partition >= process:
                allocation[i] = j
                partitions[j] -= process  # shrink the partition's free space
                break
    return allocation

def print_allocation(allocation, processes, method):
    print(f"\n{method} allocation:")
    for i, a in enumerate(allocation):
        status = f"Partition {a}" if a != -1 else "Not allocated"
        print(f"Process {i} (size {processes[i]}): {status}")

def main():
    num_partitions = int(input("Enter the number of partitions: "))
    partitions = [int(input(f"Enter size of partition {i}: ")) for i in range(num_partitions)]
    num_processes = int(input("Enter the number of processes: "))
    processes = [int(input(f"Enter size of process {i}: ")) for i in range(num_processes)]
    # First Fit
    partitions_copy = partitions.copy()
    allocation_first_fit = first_fit(partitions_copy, processes)
    print_allocation(allocation_first_fit, processes, "First Fit")

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of partitions: 3
Enter size of partition 0: 100
Enter size of partition 1: 500
Enter size of partition 2: 200
Enter the number of processes: 4
Enter size of process 0: 212
Enter size of process 1: 417
Enter size of process 2: 112
Enter size of process 3: 426
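Best fit and worst fit differ from first fit only in which candidate partition they choose. The sketches below follow the descriptions above; `best_fit` and `worst_fit` mirror the `first_fit` interface, and the printed results use the sample partition and process sizes.

```python
def best_fit(partitions, processes):
    """Allocate each process to the smallest partition that fits it."""
    allocation = [-1] * len(processes)
    for i, size in enumerate(processes):
        candidates = [j for j, p in enumerate(partitions) if p >= size]
        if candidates:
            j = min(candidates, key=lambda j: partitions[j])  # tightest fit
            allocation[i] = j
            partitions[j] -= size
    return allocation

def worst_fit(partitions, processes):
    """Allocate each process to the largest partition that fits it."""
    allocation = [-1] * len(processes)
    for i, size in enumerate(processes):
        candidates = [j for j, p in enumerate(partitions) if p >= size]
        if candidates:
            j = max(candidates, key=lambda j: partitions[j])  # loosest fit
            allocation[i] = j
            partitions[j] -= size
    return allocation

# Partitions and processes from the sample input above
print(best_fit([100, 500, 200], [212, 417, 112, 426]))   # [1, -1, 2, -1]
print(worst_fit([100, 500, 200], [212, 417, 112, 426]))  # [1, -1, 1, -1]
```

With only three partitions (100, 500, 200), processes 417 and 426 cannot be placed under either policy once partition 1 has been shrunk by process 212.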
In the paging technique, the physical memory is divided into fixed-size blocks called "frames," and the
logical memory used by processes is divided into fixed-size blocks called "pages." These pages are usually
of the same size as the frames. The size of a page is typically a power of 2, such as 4 KB or 8 KB.
When a process needs to access a specific memory address, the virtual address generated by the
CPU is divided into two parts: a "page number" and an "offset." The page number is used as an
index to access a page table, which is a data structure maintained by the operating system to keep
track of the mapping between virtual pages and physical frames.
The page table provides the corresponding physical frame number for the given page number. The
offset is used to determine the exact location within the physical frame where the data is stored.
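The split described above is simple integer arithmetic. The snippet below assumes 4 KB pages and a hypothetical page table mapping page 2 to frame 5, purely for illustration.

```python
PAGE_SIZE = 4096  # assume 4 KB pages (a power of 2)

def split_address(virtual_address):
    """Split a virtual address into (page number, offset)."""
    return virtual_address // PAGE_SIZE, virtual_address % PAGE_SIZE

# Hypothetical page table: page 2 is loaded in frame 5
page_table = {2: 5}
page, offset = split_address(10000)
physical = page_table[page] * PAGE_SIZE + offset
print(page, offset, physical)  # 2 1808 22288
```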
If a page is not currently present in physical memory (i.e., it's not loaded into a frame), the CPU
generates a page fault. The operating system then retrieves the required page from the secondary
storage (e.g., hard disk) and loads it into an available frame in physical memory. The page table is
updated to reflect the new mapping between the virtual page and the physical frame.
1. **Simplified Address Translation:** With paging, the CPU only needs to perform a single
level of translation, from virtual addresses to physical addresses, making the address translation
process more efficient.
3. **Protection and Isolation:** Each page is assigned specific access permissions, allowing the
operating system to enforce memory protection and isolate processes from one another.
4. **Memory Sharing:** Multiple processes can share the same physical page if they need access to the
same data, such as code libraries or read-only data.
Paging, however, requires the overhead of maintaining the page table, and page faults can lead to
performance penalties due to the need to fetch data from secondary storage. To mitigate these
issues, modern processors often incorporate hardware support, such as Translation Lookaside
Buffers (TLBs), to speed up address translation and reduce the impact of page faults.
Program:
import random

# Constants
PAGE_SIZE = 4    # Size of a page in bytes
MEMORY_SIZE = 16 # Total physical memory in bytes
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE  # Number of frames

# Initialize memory
memory = [-1] * NUM_PAGES  # -1 indicates an empty frame

def page_number(address):
    """Returns the page number for a given address."""
    return address // PAGE_SIZE

def frame_number(page):
    """Returns the frame number where a page is stored."""
    return memory.index(page) if page in memory else -1

def access_memory(address, page_table):
    page = page_number(address)
    if frame_number(page) == -1:
        # Page fault: load the page into the first free frame
        frame = memory.index(-1)
        memory[frame] = page
        page_table[page] = frame
        print(f"Page fault at address {address} (Page {page}); loaded into frame {frame}.")
    else:
        print(f"Address {address} -> Page {page}, Frame {page_table[page]}, Offset {address % PAGE_SIZE}")

def main():
    page_table = {}  # Dictionary to store the page table
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory(address, page_table)

if __name__ == "__main__":
    main()
Sample Output:
Accessing memory addresses: [6, 1, 3, 7, 14, 5, 12, 11, 2, 9]
Executed Output:
Accessing memory addresses: [6, 1, 3, 7, 14, 5, 12, 11, 2, 9]
1. FIFO (First-In-First-Out):
FIFO is one of the simplest page replacement algorithms. It works on the principle that the first
page brought into memory will be the first one replaced; that is, the page that has been in memory
the longest is evicted. The algorithm uses a queue data structure to keep track of the order in which
pages were brought into memory. However, FIFO may suffer from Belady's Anomaly, where
increasing the number of frames can lead to more page faults.
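Belady's Anomaly can be shown concretely with the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: FIFO produces 9 faults with 3 frames but 10 faults with 4 frames. The fault counter below is a small self-contained sketch.

```python
def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), [], 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.pop(0))  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -> more frames, more faults (Belady's Anomaly)
```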
Each page replacement algorithm has its advantages and disadvantages. The best choice of algorithm
depends on the specific use case, workload characteristics, and available hardware resources. Real-world
implementations often involve a trade-off between algorithm complexity and performance.
Algorithms:
FIFO page replacement algorithms:
Program:
from collections import deque
import random

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE
# Initialize memory
memory = [-1] * NUM_PAGES
page_queue = deque()  # To keep track of pages for FIFO

def page_number(address):
    return address // PAGE_SIZE

def frame_number(page):
    return memory.index(page) if page in memory else -1

def access_memory_fifo(address, page_table):
    page = page_number(address)
    if page in page_table:
        print(f"Page {page} found in frame {page_table[page]}.")
        return
    print(f"Page fault at address {address} (Page {page})")
    if -1 in memory:
        frame = memory.index(-1)
    else:
        victim = page_queue.popleft()  # evict the oldest page
        frame = page_table.pop(victim)
        print(f"Page {victim} replaced with page {page} in frame {frame}.")
    memory[frame] = page
    page_table[page] = frame
    page_queue.append(page)
    print(f"Page {page} loaded into frame {frame}.")

def main_fifo():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_fifo(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_fifo()
Sample Output:
Accessing memory addresses: [9, 5, 8, 13, 3, 14, 2, 6, 11, 7]
LFU:
Program:
from collections import defaultdict

n = int(input("Enter the number of pages: "))
capacity = int(input("Enter the number of page frames: "))
pages = [0] * n
print("Enter the pages: ")
for i in range(n):
    pages[i] = int(input())
pf = 0
v = []
mp = defaultdict(int)
for i in range(n):
    if pages[i] not in v:
        if len(v) == capacity:
            mp[v[0]] = mp[v[0]] - 1
            v.pop(0)  # evict the least frequently used page
        v.append(pages[i])
        mp[pages[i]] += 1
        pf = pf + 1
    else:
        mp[pages[i]] += 1
        v.remove(pages[i])
        v.append(pages[i])
    # Keep v ordered by frequency (least frequently used at the front)
    k = len(v) - 2
    while k >= 0 and mp[v[k]] > mp[v[k + 1]]:
        v[k], v[k + 1] = v[k + 1], v[k]
        k -= 1
print("The number of page faults for LFU algorithm is:", pf)
Sample Output:
Enter the number of pages: 7
Enter the number of page frames: 3
Enter the pages:
7
0
1
2
0
3
0
The number of page faults for LFU algorithm is: 5
Executed Output:
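The interactive LFU program above can be recast as a function for checking; on the sample input (pages 7, 0, 1, 2, 0, 3, 0 with 3 frames) it reproduces the 5 page faults shown. `lfu_faults` is a helper name introduced here.

```python
from collections import defaultdict

def lfu_faults(pages, capacity):
    """Count page faults under LFU, keeping frames ordered by frequency."""
    v, freq, pf = [], defaultdict(int), 0
    for p in pages:
        if p not in v:
            if len(v) == capacity:
                freq[v[0]] -= 1
                v.pop(0)  # evict the least frequently used page
            pf += 1
        else:
            v.remove(p)
        v.append(p)
        freq[p] += 1
        # Bubble the page toward the front if its neighbors are used more
        k = len(v) - 2
        while k >= 0 and freq[v[k]] > freq[v[k + 1]]:
            v[k], v[k + 1] = v[k + 1], v[k]
            k -= 1
    return pf

print(lfu_faults([7, 0, 1, 2, 0, 3, 0], 3))  # 5
```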
LRU:
Program:
from collections import OrderedDict
import random

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE
# Initialize memory
memory = [-1] * NUM_PAGES
page_order = OrderedDict()  # To keep track of pages for LRU

def access_memory_lru(address, page_table):
    page = address // PAGE_SIZE
    if page in page_table:
        page_order.move_to_end(page)  # mark as most recently used
        print(f"Page {page} found in frame {page_table[page]}.")
        return
    print(f"Page fault at address {address} (Page {page})")
    if -1 in memory:
        frame = memory.index(-1)
    else:
        victim, _ = page_order.popitem(last=False)  # evict least recently used
        frame = page_table.pop(victim)
        print(f"Page {victim} replaced with page {page} in frame {frame}.")
    memory[frame] = page
    page_table[page] = frame
    page_order[page] = True
    print(f"Page {page} loaded into frame {frame}.")

def main_lru():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_lru(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_lru()
Sample Output:
Executed Output:
Optimal:
Program:
import random

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE
# Initialize memory
memory = [-1] * NUM_PAGES

def access_memory_optimal(address, page_table, addresses):
    page = address // PAGE_SIZE
    if page in page_table:
        print(f"Page {page} found in frame {page_table[page]}.")
        return
    print(f"Page fault at address {address} (Page {page})")
    if -1 in memory:
        frame = memory.index(-1)
    else:
        # Evict the page whose next use is farthest in the future
        future = addresses[addresses.index(address) + 1:]
        def next_use(p):
            for k, a in enumerate(future):
                if a // PAGE_SIZE == p:
                    return k
            return float('inf')
        victim = max(page_table, key=next_use)
        frame = page_table.pop(victim)
        print(f"Page {victim} replaced with page {page} in frame {frame}.")
    memory[frame] = page
    page_table[page] = frame
    print(f"Page {page} loaded into frame {frame}.")

def main_optimal():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_optimal(address, page_table, addresses)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_optimal()
Sample Output:
Accessing memory addresses: [7, 12, 3, 0, 11, 8, 7, 1, 14, 6]
Page fault at address 7 (Page 1)
Page 1 loaded into frame 0.
Page fault at address 12 (Page 3)
Page 3 loaded into frame 1.
Page fault at address 3 (Page 0)
Page 0 loaded into frame 2.
Page fault at address 0 (Page 2)
Page 2 loaded into frame 3.
Page fault at address 11 (Page 2)
Page 2 replaced with page 11 in frame 0.
Page 11 loaded into frame 0.
Page fault at address 8 (Page 2)
Page 2 replaced with page 8 in frame 1.
Page 8 loaded into frame 1.
Page fault at address 7 (Page 0)
Page 0 replaced with page 7 in frame 2.
Page 7 loaded into frame 2.
Page fault at address 1 (Page 1)
Page 1 replaced with page 1 in frame 3.
Page 1 loaded into frame 3.
Page fault at address 14 (Page 3)
Page 3 replaced with page 14 in frame 0.
Page 14 loaded into frame 0.
Page fault at address 6 (Page 2)
Page 2 replaced with page 6 in frame 1.
Page 6 loaded into frame 1.
MFU:
Program:
import random
from collections import defaultdict

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE
# Initialize memory
memory = [-1] * NUM_PAGES
page_frequency = defaultdict(int)  # Dictionary to keep track of page usage frequency

def access_memory_mfu(address, page_table):
    page = address // PAGE_SIZE
    if page in page_table:
        page_frequency[page] += 1
        print(f"Page {page} found in frame {page_table[page]}.")
        return
    print(f"Page fault at address {address} (Page {page})")
    if -1 in memory:
        frame_to_replace = memory.index(-1)
    else:
        # Evict the most frequently used page
        victim = max(page_table, key=lambda p: page_frequency[p])
        frame_to_replace = page_table.pop(victim)
        print(f"Page {victim} replaced with page {page} in frame {frame_to_replace}.")
    memory[frame_to_replace] = page
    page_table[page] = frame_to_replace
    page_frequency[page] += 1
    print(f"Page {page} loaded into frame {page_table[page]}.")
def main_mfu():
page_table = {}
addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
print(f"Accessing memory addresses: {addresses}")
for address in addresses:
access_memory_mfu(address, page_table)
print("\nFinal state of memory frames:")
for i, frame in enumerate(memory):
print(f"Frame {i}: Page {frame}")
if __name__ == "__main__":
main_mfu()
Sample Output:
Accessing memory addresses: [6, 12, 2, 0, 8, 6, 3, 12, 7, 1]
Page fault at address 6 (Page 1)
Page 1 loaded into frame 0.
Page fault at address 12 (Page 3)
Page 3 loaded into frame 1.
Page fault at address 2 (Page 0)
Page 0 loaded into frame 2.
Page fault at address 0 (Page 2)
Page 2 loaded into frame 3.
Page fault at address 8 (Page 2)
Page 2 replaced with page 8 in frame 3.
Page 8 loaded into frame 3.
Page fault at address 6 (Page 1)
Page 1 found in frame 0.
Page fault at address 3 (Page 0)
Page 0 replaced with page 3 in frame 1.
Page 3 loaded into frame 1.
Page fault at address 7 (Page 1)
Page 1 replaced with page 7 in frame 2.
Page 7 loaded into frame 2.
Page fault at address 1 (Page 0)
Page 0 replaced with page 1 in frame 3.
Page 1 loaded into frame 3.
Result:
Thus, Python programs to implement page replacement algorithms were executed successfully.