Unit - 3 Os-1
Dr. M. SINTHUJA
MSRIT
Several issues arise during this process: address binding, relocation, address
translation, dynamic loading and linking, allocation and deallocation of memory, etc.
Memory Management Strategies/Background
Address binding is like book binding: when instructions and data are to be kept in
main memory, assigning the main-memory addresses at which each instruction is kept
is called address binding.
Compile time: if the memory location is known in advance, absolute code can be generated.
Load time: if the location is not known at compile time, relocatable code is generated,
so the location can change at load time.
Execution time: the location of instructions and data becomes known only at run time.
Logical Vs Physical Address Space
Example (Aadhaar): the Aadhaar number is hidden in the latest ID and a Virtual ID (VID)
is visible instead. Instead of giving the Aadhaar number to buy a SIM, you can use the
VID, but the Aadhaar system itself stores only the Aadhaar number.
If you give the VID, who converts the VID to the Aadhaar number? Memory management
plays the analogous role.
Example: the VID is the logical address, which is generated by the CPU.
The Aadhaar ID is the physical address, i.e., the address actually present in RAM.
The conversion is done by the memory-management unit (MMU): it takes the logical
address from the CPU and converts it to the physical address.
During execution a process never knows the physical address; it generates and knows
only logical addresses.
RAM knows only physical addresses.
The set of logical addresses is the logical address space (LAS) and the set of
physical addresses is the physical address space (PAS).
This is called dynamic relocation using a relocation register.
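A minimal sketch (not from the slides) of how the MMU performs this translation; the relocation value 14000 and the limit 3000 are assumed example values:

```python
# Minimal sketch: dynamic relocation using a relocation (base) register
# and a limit register, as the MMU would do.

RELOCATION_REGISTER = 14000   # assumed base of the process in RAM
LIMIT_REGISTER = 3000         # assumed size of the logical address space

def translate(logical_address: int) -> int:
    """Convert a CPU-generated logical address to a physical address."""
    if logical_address < 0 or logical_address >= LIMIT_REGISTER:
        # Protection violation: addressing outside the process's space traps to the OS.
        raise RuntimeError("trap: logical address outside the process's space")
    return logical_address + RELOCATION_REGISTER

print(translate(346))   # -> 14346 (physical address)
```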
Key Points of Dynamic Relocation
SWAPPING
Swapping is used to implement multiprogramming.
Scenario 1: when P1 completes its execution, the OS transfers it from main memory (MM)
to secondary memory (SM), because the size of MM is small, so that the next process can
enter MM.
Scenario 2: if P1 is waiting for an I/O operation, it does not need the CPU during that
time, so the OS transfers P1 from main memory to secondary memory.
Scenario 3: if P1 tries to access the OS area or another process's area, a protection
violation occurs and a trap (interrupt) is raised. P1 is terminated because it tried to
access another process, and the terminated process is transferred from MM to SM.
Memory Allocation
For example, 5 MB allocated contiguously is contiguous memory allocation.
Contiguous Memory Allocation
1. Internal Fragmentation
2. External Fragmentation
Internal Fragmentation
When a process is allocated a memory block and the process is smaller than the block
it was given, free space is left inside that block. This free space within the block
goes unused, which causes internal fragmentation.
To solve this problem, we go for the paging concept (a small worked example follows).
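A small worked sketch of internal fragmentation; the 4 KB block and 3 KB process are assumed numbers, not from the slides:

```python
# Minimal sketch: internal fragmentation when a fixed-size block is
# handed out and the process does not fill it completely.

block_size_kb = 4        # assumed size of the allocated memory block
process_size_kb = 3      # assumed size of the process placed in the block

internal_fragmentation_kb = block_size_kb - process_size_kb
print(f"Unused space inside the block: {internal_fragmentation_kb} KB")   # 1 KB wasted
```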
SEGMENTATION
A process is divided into segments: the chunks that a program is divided into, which
are not necessarily all of the same size, are called segments.
A logical address in segmentation is a pair <segment-number, offset>.
PAGING
Page size and frame size should be equal.
Since paging does not require contiguous space in MM, it is called non-contiguous
memory allocation.
Advantage: no external fragmentation.
Normally, compaction is needed to remove external fragmentation; since paging has no
external fragmentation, no compaction is required.
For each page number, the page table gives the corresponding frame number.
The OS maintains a free-frame list. In this example frames 14, 13, 18, 20 and 15 are
free and the blue-coloured frames are occupied. A new process with pages 0, 1, 2, 3 now
enters, so its pages are placed in the free frames 13, 14, 18 and 20; only frame 15
remains free and stays on the free-frame list.
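A minimal sketch of the page-table lookup described above, reusing the frame numbers 13, 14, 18, 20 from the example; the 1024-byte page size is an assumption:

```python
# Minimal sketch: translate a logical address through the page table.

PAGE_SIZE = 1024                              # assumed page/frame size in bytes
page_table = {0: 13, 1: 14, 2: 18, 3: 20}     # page number -> frame number

def translate(logical_address: int) -> int:
    """Split the logical address into (page, offset) and map page -> frame."""
    page_number, offset = divmod(logical_address, PAGE_SIZE)
    frame_number = page_table[page_number]        # page-table lookup
    return frame_number * PAGE_SIZE + offset      # physical address

print(translate(2 * PAGE_SIZE + 100))   # page 2, offset 100 -> inside frame 18
```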
In associative memory, all entries can be searched simultaneously.
The translation look-aside buffer (TLB) is an associative memory.
In associative memory, information is stored in the form of tags; each tag is a pair
whose first component is the key (page number) and second component is the value
(frame number).
If the page number is in the TLB (a hit), the frame number is obtained directly, so
only one memory access of 100 ns is needed to read or write the data.
If the page number is not in the TLB (a miss), two memory accesses are needed, one for
the page table and one for the data, each 100 ns, so 200 ns in total.
The hit ratio is given as 0.8; converted to a percentage, that is 80%, which means 80%
of the time the page-table entry is found in the TLB.
Understand the concept: with probability p the entry is in the TLB (a hit) and the
frame is fetched directly; on a miss, the page table in memory must be consulted first
and only then is the frame accessed.
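A minimal sketch of the effective access time (EAT) using these numbers; the TLB lookup time itself is ignored, as in the simplification above:

```python
# Minimal sketch: effective access time with an 80% TLB hit ratio.

hit_ratio = 0.8          # 80% of references find the entry in the TLB
hit_time_ns = 100        # one memory access on a TLB hit
miss_time_ns = 200       # two memory accesses on a TLB miss

eat_ns = hit_ratio * hit_time_ns + (1 - hit_ratio) * miss_time_ns
print(f"Effective access time = {eat_ns} ns")   # 0.8*100 + 0.2*200 = 120 ns
```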
Memory Protection
Hierarchical or Multilevel Paging
A single-level page table is not optimised for large address spaces, so the page table
itself is paged.
In multilevel paging the page number is partitioned, for example into 12 bits and 10 bits.
If two levels are not enough, we can go to three levels (a minimal sketch of the split
follows).
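A minimal sketch of the 12 & 10 split; the 32-bit logical address with a 10-bit offset is an assumed layout, not stated on the slides:

```python
# Minimal sketch: split a 32-bit logical address for two-level paging into
# an outer page number (12 bits), an inner page number (10 bits) and an
# offset (10 bits), matching the 12 & 10 partition above.

OUTER_BITS, INNER_BITS, OFFSET_BITS = 12, 10, 10

def split(logical_address: int):
    offset = logical_address & ((1 << OFFSET_BITS) - 1)
    inner = (logical_address >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    outer = logical_address >> (OFFSET_BITS + INNER_BITS)
    return outer, inner, offset

print(split(0x12345678))   # -> (outer index, inner index, offset)
```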
Virtual Memory Management
Figure: logical (virtual) address space mapped to the physical address space.
A single instruction may access multiple pages, which can sometimes result in a page fault.
While fetching and decoding instructions there is a possibility of multiple page faults;
locality of reference is discussed later.
Demand paging requires hardware support: a page table with a valid/invalid (V/I) bit,
secondary memory, and the ability to restart the instruction.
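A minimal sketch (with hypothetical page-table entries) of how the valid/invalid bit drives demand paging:

```python
# Minimal sketch: the valid/invalid (V/I) bit decides whether a reference
# can proceed or must raise a page fault.

page_table = {
    0: {"frame": 13, "valid": True},     # page already in main memory
    1: {"frame": None, "valid": False},  # page still in secondary memory
}

def access(page_number: int) -> int:
    entry = page_table[page_number]
    if not entry["valid"]:
        # Page fault: the OS brings the page in from secondary memory,
        # marks the entry valid, and the instruction is restarted.
        raise RuntimeError(f"page fault on page {page_number}")
    return entry["frame"]

print(access(0))   # -> 13 (valid entry, no fault)
```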
Copy-on-Write
Page Replacement
Overhead here means swapping pages in and out between MM and secondary memory;
page-replacement algorithms are used to reduce this overhead.
STEPS IN PAGE REPLACEMENT :
❖ Find the location of the desired page on the disk.
❖ Find a free frame; if there is no free frame, use a page-replacement algorithm to
select a victim frame (and write the victim to the disk if it has been modified).
❖ Read the desired page into the free frame and update the page and frame tables.
❖ Restart the process.
FIFO ALGORITHM
Worked trace with 3 frames (F1, F2, F3); the pages demanded are 7, 0, 1, 2, 0, 3, 0, ...:
❖ The CPU demands page 7. It is not in memory, so this is a page fault: the CPU
transfers control to the OS, the OS fetches page 7 from the disk and places it in MM.
(page faults: 1)
❖ The CPU demands page 0. It is not in memory, so it is a page fault. (page faults: 2)
❖ The CPU demands page 1. It is not in memory, so it is a page fault. (page faults: 3)
❖ The CPU demands page 2. It is not in memory, so it is a page fault; FIFO replaces the
oldest page, 7. (page faults: 4)
❖ The CPU demands page 0. It is already in MM, so it is a hit; no change.
❖ The CPU demands page 3. It is not in memory, so it is a page fault; the oldest page,
0, is replaced.
❖ The CPU demands page 0. It is no longer in MM, so it is a page fault; page 1 is
replaced with 0 (FIFO).
❖ The trace continues in the same way: a reference to a page already in MM is a hit,
any other reference is a page fault, and the cumulative fault count is tracked
alongside the frame contents.
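A minimal FIFO sketch (not the slide's exact table); the helper name fifo_faults and the reference-string prefix are for illustration only:

```python
# Minimal sketch: count FIFO page faults for a reference string with 3 frames.
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames = deque()          # oldest resident page sits at the left
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                      # hit: nothing changes
        faults += 1                       # miss: page fault
        if len(frames) == num_frames:
            frames.popleft()              # evict the oldest resident page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]     # prefix of the demanded pages above
print(fifo_faults(refs, 3))               # page-fault count for this prefix
```

The queue mirrors FIFO directly: pages enter at the right and the oldest one leaves from the left.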
LRU ALGORITHM
LRU can be implemented in two ways:
1. Counter (each page carries a counter recording when it was last used, so the details
of its use are tracked)
2. Stack (the most recently used page is kept on top of the stack and the least
recently used page at the bottom); a doubly linked list is used to implement the stack.
Practice questions
In the LRU algorithm, the page that has not been used for the longest period is replaced when a new page is needed. We track the
usage of the pages and replace the least recently used one when a page fault occurs.
Reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6 with 4 frames.
Steps:
1. 1: Fault → [1]
2. 2: Fault → [1, 2]
3. 3: Fault → [1, 2, 3]
4. 4: Fault → [1, 2, 3, 4]
5. 2: Hit → [1, 2, 3, 4] (no change)
6. 1: Hit → [1, 2, 3, 4] (no change)
7. 5: Fault → [1, 2, 4, 5] (3, the least recently used page, is replaced)
8. 6: Fault → [1, 2, 5, 6] (4 is replaced)
9. 2: Hit → [1, 2, 5, 6] (no change)
10. 1: Hit → [1, 2, 5, 6] (no change)
11. 2: Hit → [1, 2, 5, 6] (no change)
12. 3: Fault → [1, 2, 6, 3] (5 is replaced)
13. 7: Fault → [1, 2, 3, 7] (6 is replaced)
14. 6: Fault → [2, 3, 7, 6] (1 is replaced)
15. 3: Hit → [2, 3, 7, 6] (no change)
16. 2: Hit → [2, 3, 7, 6] (no change)
17. 1: Fault → [2, 3, 6, 1] (7 is replaced)
18. 2: Hit → [2, 3, 6, 1] (no change)
19. 3: Hit → [2, 3, 6, 1] (no change)
20. 6: Hit → [2, 3, 6, 1] (no change)
● Page faults: 10
● Page hits: 10
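A minimal LRU sketch to cross-check the practice answer; the helper name lru_simulate is for illustration, and an OrderedDict keeps pages ordered from least to most recently used:

```python
# Minimal sketch: count LRU page faults for the practice reference string.
from collections import OrderedDict

def lru_simulate(reference_string, num_frames):
    frames = OrderedDict()      # key = page, order = recency (oldest first)
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
            continue
        faults += 1                         # miss: page fault
        if len(frames) == num_frames:
            frames.popitem(last=False)      # evict the least recently used page
        frames[page] = True
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
print(lru_simulate(refs, 4))    # -> 10 page faults (and 20 - 10 = 10 hits)
```

The OrderedDict plays the role of the stack implementation described above: the most recently used page moves to the end, the least recently used sits at the front.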
Optimal Page Replacement Algorithm:
The Optimal page replacement algorithm replaces the page that will not be used for the longest time in the future. This is a
theoretical algorithm and is typically used to analyze the performance of page replacement strategies.
Reference string and number of frames are the same as above (4 frames).
Steps:
1. 1: Fault → [1]
2. 2: Fault → [1, 2]
3. 3: Fault → [1, 2, 3]
4. 4: Fault → [1, 2, 3, 4]
5. 2: Hit → [1, 2, 3, 4] (no change)
6. 1: Hit → [1, 2, 3, 4] (no change)
7. 5: Fault → [1, 2, 3, 5] (4 is replaced; it is never used again)
8. 6: Fault → [1, 2, 3, 6] (5 is replaced; it is never used again)
9. 2: Hit → [1, 2, 3, 6] (no change)
10. 1: Hit → [1, 2, 3, 6] (no change)
11. 2: Hit → [1, 2, 3, 6] (no change)
12. 3: Hit → [1, 2, 3, 6] (no change)
13. 7: Fault → [2, 3, 6, 7] (1 is replaced; its next use is farthest in the future)
14. 6: Hit → [2, 3, 6, 7] (no change)
15. 3: Hit → [2, 3, 6, 7] (no change)
16. 2: Hit → [2, 3, 6, 7] (no change)
17. 1: Fault → [2, 3, 6, 1] (7 is replaced; it is never used again)
18. 2: Hit → [2, 3, 6, 1] (no change)
19. 3: Hit → [2, 3, 6, 1] (no change)
20. 6: Hit → [2, 3, 6, 1] (no change)
● Page faults: 8
● Page hits: 12
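A minimal sketch of the Optimal (Belady) policy on the same reference string; the helper name optimal_simulate is for illustration. On a fault it evicts the resident page whose next use lies farthest in the future (or never occurs):

```python
# Minimal sketch: count page faults under the Optimal replacement policy.

def optimal_simulate(reference_string, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                                    # hit
        faults += 1                                     # page fault
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]

        def next_use(p):
            # Distance to next use; pages never used again get infinite distance.
            return future.index(p) if p in future else float("inf")

        frames.remove(max(frames, key=next_use))        # evict farthest-used page
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
print(optimal_simulate(refs, 4))    # -> 8 page faults
```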
LRU Approximation Page Replacement
Example table: each page shown with its reference bit and modify bit.
Based on these two values for each page, the algorithm decides which page is to be
replaced by the new one.
(0, 0) → (reference bit 0 & modify bit 0): best page to replace
(0, 1) → not quite as good (the page must be written out before replacement)
(1, 0) → may be used again soon
(1, 1) → may be used again soon and needs to be written to the disk
The best is (0, 0).
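A minimal sketch (with a hypothetical set of pages) of picking the victim by its (reference bit, modify bit) class:

```python
# Minimal sketch: choose the replacement victim by (reference bit, modify bit)
# class, preferring class (0, 0).

pages = {                      # page number -> (reference bit, modify bit)
    4: (0, 1),
    2: (1, 0),
    7: (1, 1),
    6: (0, 0),
}

def choose_victim(pages):
    # Tuple comparison orders the classes (0,0) < (0,1) < (1,0) < (1,1),
    # so the minimum is the best page to replace.
    return min(pages, key=lambda p: pages[p])

print(choose_victim(pages))    # -> 6, the page in class (0, 0)
```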
COUNTING BASED PAGE REPLACEMENT
Page Buffering Algorithm
● As an add-on to any previous algorithm.
● A pool of free frames is maintained.
● When a page fault occurs, the desired page is read into a
free frame from the pool. The victim frame is later
swapped out if necessary and put into the free frames pool.
Advantage / disadvantage
● Plus - the process is put back into the ready queue faster.
● Minus - fewer pages are in use overall.
● VAX/VMS version - basic FIFO replacement with a free
frame pool. A victim is put into the pool but the original
virtual address is kept. When a page fault occurs, we first
look in the pool. If we find the page there - no need for
disk operation.
Allocation of Frames
How are frames allocated? What are the minimum and maximum numbers of frames to be
allocated?
Equal allocation may not be sufficient for a large process while wasting frames on a
smaller one such as P1. To overcome this, we move to proportional allocation.
Proportional Allocation
Based on the size of each process, the available frames are allocated in proportion,
which reduces the wastage of frames (a minimal sketch follows).
Disadvantage:
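A minimal sketch of proportional allocation, where process i receives a_i = (s_i / S) × m frames; the process sizes 10 and 127 and the 62 free frames are assumed example values:

```python
# Minimal sketch: allocate free frames in proportion to process size.

def proportional_allocation(process_sizes, total_frames):
    total_size = sum(process_sizes.values())
    # a_i = (s_i / S) * m, truncated to a whole number of frames.
    return {pid: (size * total_frames) // total_size
            for pid, size in process_sizes.items()}

# Assumed example: P1 needs 10 units, P2 needs 127 units, 62 frames are free.
sizes = {"P1": 10, "P2": 127}
print(proportional_allocation(sizes, 62))   # -> {'P1': 4, 'P2': 57}
```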