Co2 - Special Class

The document provides an overview of memory management techniques including virtual memory, swapping, segmentation, and paging. It explains key concepts such as demand paging, thrashing, and the use of translation lookaside buffers (TLB) for efficient address translation. Additionally, it discusses free space management methods and page replacement algorithms, highlighting their advantages and disadvantages.


Memory Management: Virtual Memory, Swapping, Paging, Segmentation, Free Space Management, Translation Lookaside Buffer, Demand Paging, Thrashing, Page Replacement Algorithms

Dr. VIJAYA CHANDRA JADALA


What is Memory Virtualization?
• Virtual memory is a technique for executing program instructions that may not fit entirely in the system memory.
• The OS virtualizes its physical memory, providing the illusion of a private memory space to each process.
• It appears as though each process uses the whole memory.
SWAPPING
• Swapping is the process of temporarily removing inactive programs from the main memory of a computer system.
• Swapping is a mechanism in which a process can be temporarily swapped (moved) out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.
Segmentation
• A Segment can be defined as a logical grouping of Instructions such as a
subroutine, array or a data area.
• Every Program (job) is a collection of these segments.
• Each job is divided into several segments of different sizes, one for each
module that contains pieces that perform related functions
• Each Segment is actually a different logical address space of the Program
• Segmentation is a memory management scheme which supports the
programmer’s view of memory. Programmers never think of their programs
as a linear array of words. Rather, they think of their programs as a
collection of logically related entities such as subroutines or procedures,
functions, global or local data areas, stacks etc.,
A process is divided into Segments. The chunks that a program
is divided into which are not necessarily all the same sizes are
called segments. Segmentation gives user’s view of the process
which paging does not give. Here the user’s view is mapped to
physical memory.
There are two types of segmentation:
1.Virtual memory segmentation –
Each process is divided into a number of
segments, not all of which are resident at
any one point in time.
2.Simple segmentation –
Each process is divided into a number of
segments, all of which are loaded into
memory at run time, though not necessarily
contiguously.
There is no simple relationship between
logical addresses and physical addresses in
segmentation. A table, called the Segment
Table, stores the information about all such
segments.
Segment Table – It maps the two-dimensional
logical address into a one-dimensional
physical address. Each of its table entries has:
•Base Address: It contains the starting
physical address of the segment in main memory.
•Limit: It specifies the length of the segment.
Advantages of Segmentation –
•No Internal fragmentation.
•Segment Table consumes less space in
comparison to Page table in paging.
Disadvantage of Segmentation –
•As processes are loaded and removed from
the memory, the free memory space is broken
into little pieces, causing External
fragmentation.
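The base-and-limit translation described above can be sketched as follows. The segment numbers and (base, limit) values here are illustrative, not taken from the slides:

```python
# Sketch of segmentation address translation: a logical address is
# (segment number, displacement); each segment-table entry holds a base
# (starting physical address) and a limit (segment length).
# The table contents below are hypothetical example values.

SEGMENT_TABLE = {
    0: (1400, 1000),   # segment 0: base 1400, limit 1000
    1: (6300, 400),    # segment 1: base 6300, limit 400
    2: (4300, 1100),   # segment 2: base 4300, limit 1100
}

def translate(segment, displacement):
    base, limit = SEGMENT_TABLE[segment]
    if displacement >= limit:          # trap: addressing beyond the segment
        raise MemoryError("segmentation fault")
    return base + displacement         # physical address

print(translate(2, 53))   # 4300 + 53 = 4353
```

A displacement at or beyond the segment's limit raises an error, modeling the hardware trap that produces a segmentation fault.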
Paging
• Paging splits up the address space
into fixed-sized units called
pages.
• With paging, physical memory
is also split into some number
of units called page frames.
• A page table per process is
needed to translate virtual
addresses to physical addresses.
Paging
• Paging is a storage mechanism
used to retrieve processes
from the secondary storage
into the main memory in the
form of pages.
• The main idea behind
paging is to divide each
process into pages.
The main memory is likewise
divided into frames.
Advantages Of Paging
• Easy to use memory management algorithm
• No external fragmentation
• Swapping is easy between equal-sized pages and page frames
• Allocating memory is easy and cheap

Dis-Advantages Of Paging
• May cause Internal fragmentation
• Page tables consume additional memory.
• Multi-level paging may lead to memory reference overhead.
Example: A Simple Paging
• 128-byte physical memory with 16-byte page frames
• 64-byte address space with 16-byte pages

A simple 64-byte address space (four 16-byte pages, page 0 through page 3) placed into a 128-byte physical memory (eight 16-byte page frames):

Frame 0 (bytes 0–15): reserved for OS
Frame 1 (bytes 16–31): unused
Frame 2 (bytes 32–47): page 3 of the address space
Frame 3 (bytes 48–63): page 0 of the address space
Frame 4 (bytes 64–79): unused
Frame 5 (bytes 80–95): page 2 of the address space
Frame 6 (bytes 96–111): unused
Frame 7 (bytes 112–127): page 1 of the address space
Address Translation
• Two components in the virtual address:
• VPN: virtual page number
• Offset: offset within the page
A 64-byte address space needs a 6-bit virtual address, Va5 Va4 Va3 Va2 Va1 Va0; with 16-byte pages, the top two bits (Va5 Va4) form the VPN and the low four bits the offset.
• Example: virtual address 21 in the 64-byte address space is 010101 in binary:
VPN = 01, offset = 0101
Example: Address Translation
• The virtual address 21 in the 64-byte address space:
Virtual address: VPN 01, offset 0101 (binary 010101)
Address translation looks up VPN 01 in the page table and finds PFN 111 (page 1 of the address space sits in frame 7).
Physical address: PFN 111, offset 0101 (binary 1110101 = 117)
Paging vs. Segmentation
1. In paging, main memory is partitioned into frames (or blocks); in segmentation, main memory is partitioned into segments.
2. In paging, the logical address space is divided into pages by the compiler or memory management unit (MMU); in segmentation, it is divided into segments as specified by the programmer.
3. Paging suffers from internal fragmentation, or page breaks; segmentation suffers from external fragmentation.
4. In paging, the OS maintains a free-frame list, so there is no need to search for a free frame; in segmentation, the OS maintains the particulars of available memory.
5. In paging, the OS maintains a page map table for mapping between frames and pages; in segmentation, it maintains a segment map table for mapping purposes.
6. Paging does not support the user's view of memory; segmentation does.
7. In paging, the processor uses the page number and displacement (p, d) to calculate the absolute address; in segmentation, it uses the segment number and displacement (s, d).
Demand Paging
• It is a scheme in which a page is not loaded into
main memory from secondary memory
until it is needed.
• So, in demand paging, pages are loaded only on
demand, not in advance.
• The advantages are that less I/O is needed,
less memory is needed, response is faster,
and more users can be serviced.
Pure Demand Paging
• Pure demand paging is the form of demand paging
in which not even a single page is loaded into
memory initially.
• Therefore, the very first instruction causes
a page fault in this case.
• This type of demand paging may significantly
decrease the performance of a computer system
by increasing the effective access time of
memory.
Performance of Demand Paging
1. The page fault rate, p, satisfies 0 <= p <= 1.0
• If p = 0, there are no page faults
• If p = 1, every reference is a fault
2. Effective Access Time (EAT) is given as follows:
EAT = (1 - p) * memory access time + p * (page fault overhead + swap page out + swap page in + restart overhead)
Performance of Pure Demand Paging
EAT = (1 - p) * m + p * page-fault time
(where m is the memory access time)
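A worked example of the EAT formula, using hypothetical timings (these numbers are chosen for illustration and are not from the slides):

```python
# Worked EAT example with assumed values: 200 ns memory access time,
# 8 ms total page-fault service time (overhead + swap out + swap in +
# restart), and fault rate p = 0.001.

memory_access = 200              # ns
page_fault_service = 8_000_000   # ns (8 ms)
p = 0.001

eat = (1 - p) * memory_access + p * page_fault_service
print(eat)   # 199.8 + 8000 = 8199.8 ns
```

Even a 0.1% fault rate pushes the effective access time from 200 ns to roughly 8200 ns, which is why keeping the fault rate low matters so much.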
Translation Look Aside Buffer
• A translation lookaside buffer (TLB) is a memory cache that
stores recent translations of virtual memory to physical
addresses for faster retrieval.
• When a virtual memory address is referenced by a program, the
search starts in the CPU. First, instruction caches are checked.
• If the required memory is not in these very fast caches, the
system has to look up the memory’s physical address.
• At this point, TLB is checked for a quick reference to the
location in physical memory.
TLB
• Part of the chip's memory-management unit (MMU).
• A hardware cache of popular virtual-to-physical address translations.
Address translation with the MMU: the CPU presents a logical address, and the MMU first performs a TLB lookup (the TLB holds the popular virtual-to-physical translations). On a TLB hit, the physical address is produced directly; on a TLB miss, the page table in physical memory (which holds all virtual-to-physical entries, for pages 0 through n) must be consulted.

© 2022 KL University – The contents of this presentation are an intellectual and copyrighted property of KL University. ALL RIGHTS RESERVED
Translation Look Aside Buffer
• When an address is searched in the TLB and not found, the
page table in physical memory must be searched (a page-table walk).
• As virtual memory addresses are translated, the values referenced are
added to the TLB.
• When a value can be retrieved from the TLB, speed is enhanced
because the translation is stored in the TLB on the processor.
• Most processors include TLBs to increase the speed of virtual
memory operations, through the inherent latency-reducing proximity
as well as the high running frequencies of current CPUs.
Translation Look Aside Buffer
• TLBs also add the support required for multi-user computers to
keep memory separate, by having a user and a supervisor mode
as well as using permissions on read and write bits to enable
sharing.
• TLBs can suffer performance issues from multitasking and code
errors. This performance degradation is called a cache thrash.
• Cache thrash is caused by an ongoing computer activity that fails
to progress due to excessive use of resources or conflicts in the
caching system.
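The hit/miss behavior described above can be modeled as a small cache in front of the page table. This is a software sketch only: real TLBs are hardware structures with associativity, ASIDs, and hardware-defined replacement, and the table contents and capacity here are assumptions:

```python
# Minimal sketch of a TLB as a tiny cache in front of the page table.
# Illustrative values: a 4-entry page table and a 2-entry TLB.

PAGE_TABLE = {0: 3, 1: 7, 2: 5, 3: 2}   # full VPN -> PFN mapping
tlb = {}                                 # cached translations
TLB_CAPACITY = 2

def lookup(vpn):
    if vpn in tlb:                       # TLB hit: fast path
        return tlb[vpn], "hit"
    pfn = PAGE_TABLE[vpn]                # TLB miss: walk the page table
    if len(tlb) >= TLB_CAPACITY:         # simple eviction of the oldest entry
        tlb.pop(next(iter(tlb)))
    tlb[vpn] = pfn                       # cache the translation
    return pfn, "miss"

print(lookup(1))   # (7, 'miss') on first reference
print(lookup(1))   # (7, 'hit') on the repeat
```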
Free Space Management
• A file system is responsible for allocating free blocks to files, so it has
to keep track of all the free blocks present on the disk. There are two main
approaches by which the free blocks on the disk are managed.
1. Bit Vector
• In this approach, the free space list is implemented as a bitmap vector. It
contains one bit per block.
• If the block is free, the bit is 1; otherwise it is 0. Initially all the blocks are
free, so each bit in the bitmap vector is 1.
• As space allocation proceeds, the file system allocates blocks to
files and sets the respective bits to 0.
2. Linked List
• This is another approach to free space management. It links
together all the free blocks, keeping a pointer in the cache that
points to the first free block.
• Thus, all the free blocks on the disk are linked together with pointers.
Whenever a block gets allocated, its previous free block is linked to its next
free block.
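The bit-vector approach can be sketched as follows (the number of blocks is an arbitrary example), with the convention from the text: bit 1 means free, bit 0 means allocated:

```python
# Sketch of bit-vector free-space management: one bit per disk block,
# 1 = free, 0 = allocated. Eight blocks is an illustrative size.

bitmap = [1] * 8                  # all blocks free initially

def allocate():
    for block, bit in enumerate(bitmap):
        if bit == 1:              # first free block found
            bitmap[block] = 0     # mark it allocated
            return block
    return None                   # disk full: no free block

def free(block):
    bitmap[block] = 1             # mark the block free again

print(allocate())   # 0
print(allocate())   # 1
free(0)
print(allocate())   # 0 again (lowest-numbered free block)
```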
Free Space With Chunks Allocated
[virtual address: 16KB]
Free space (the heap) with three chunks allocated. Each allocated chunk carries an 8-byte header (size: 100, magic: 1234567) followed by its 100 bytes of data:
• First chunk: 100 bytes, still allocated
• Second chunk, pointed to by sptr: 100 bytes, still allocated (but about to be freed)
• Third chunk: 100 bytes, still allocated
• The remaining free chunk, pointed to by the free-list head: 3764 bytes (header: size: 3764, next: 0)
Free Space With free()
[virtual address: 16KB]
Example: free(sptr)
• The 100-byte chunk is put back onto the free list.
• The free list now starts with a small chunk: the head points to the freed chunk (header: size: 100, next: 16708), which in turn points to the free 3764-byte chunk (header: size: 3764, next: 0). The other two 100-byte chunks (header: size: 100, magic: 1234567) remain allocated.
• The free() function is used to deallocate memory that was allocated using malloc(), calloc() or realloc(). The syntax of free is simple: we simply call free with the pointer, and it cleans up the memory.

© 2022 KL University – The contents of this presentation are an intellectual and copyrighted property of KL University. ALL RIGHTS RESERVED
Thrashing
• If the number of frames allocated to a low-priority process falls below the
minimum number, we must suspend that process's execution.
• We should then page out its remaining pages, freeing all its allocated frames,
so swapping is now required. We can find some process in a system that does
not have "enough" frames.
• Although it is technically possible to reduce the number of allocated frames to the
minimum, there is some (larger) number of pages in active use.
• If the process does not have this number of frames, it will quickly page fault
again and again.
• The process continues to fault, replacing pages that it then faults on and brings
back in right away. Such a process spends more time paging than executing.
• This high paging activity is called thrashing. A process is said to be thrashing if it
is spending more time paging than executing.
Page Replacement Algorithms – FIFO
FIFO is one of the simplest page replacement algorithms. A
FIFO page replacement algorithm associates with each page
the time when that page was brought into memory. When a
page must be replaced, the oldest page is selected.
Consider the following reference string: 0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1, using the FIFO page replacement algorithm.

The total number of page faults is 9. Given the memory capacity (as the number of pages it can
hold) and a string representing the pages to be referred, write a function to find the number of
page faults.
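One way to write that function is sketched below. The slide states the fault count but not the frame count; a memory capacity of 3 frames is an assumption, and it reproduces the 9 faults given above:

```python
from collections import deque

# FIFO page replacement: evict the page that has been resident longest.
# Assumes 3 frames for the slide's reference string (frame count is not
# stated on the slide).

def fifo_faults(refs, frames):
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                              # hit: nothing to do
        faults += 1                               # page fault
        if len(resident) == frames:
            resident.discard(queue.popleft())     # evict the oldest page
        queue.append(page)
        resident.add(page)
    return faults

print(fifo_faults([0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1], 3))   # 9
```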
Optimal Page Replacement Algorithm

The Optimal Page Replacement algorithm has the lowest page fault
rate of all algorithms. The criterion of this algorithm is: "Replace the
page that will not be used for the longest period of time"
(the longest time in the future).
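A sketch of that criterion, run on the same reference string as the FIFO example with an assumed 3 frames (the optimal algorithm needs the future of the reference string, which is why it is unrealizable in practice and serves as a benchmark):

```python
# Sketch of the optimal (Belady) page replacement algorithm: on a fault
# with full frames, evict the resident page whose next use lies farthest
# in the future (pages never used again count as farthest).

def optimal_faults(refs, frames):
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue                              # hit
        faults += 1                               # page fault
        if len(resident) == frames:
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            resident.discard(max(resident, key=next_use))
        resident.add(page)
    return faults

print(optimal_faults([0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1], 3))   # 7
```

With 3 frames, optimal gives 7 faults versus FIFO's 9 on the same string, illustrating why it is the lower bound.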
LRU Page Replacement Algorithm

Page reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2, with a frame set of 4.
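LRU replaces the page that has gone unused for the longest time. A minimal sketch on the reference string above, with the stated frame set of 4:

```python
from collections import OrderedDict

# LRU page replacement: on a fault with full frames, evict the page
# whose last use is farthest in the past.

def lru_faults(refs, frames):
    recency = OrderedDict()     # ordered least recently used -> most recent
    faults = 0
    for page in refs:
        if page in recency:
            recency.move_to_end(page)        # hit: refresh recency
            continue
        faults += 1                          # page fault
        if len(recency) == frames:
            recency.popitem(last=False)      # evict least recently used
        recency[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # 6
```

The four initial references all fault; thereafter only pages 3 and 4 fault, giving 6 faults in total.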
Least Frequently Used Page Replacement Algorithm –
LFU

The Least Frequently Used algorithm "selects a page for
replacement if the page has not been used often in the past",
or "replace the page that has the smallest count".
Most Frequently Used Page Replacement Algorithm –
MFU

The Most Frequently Used algorithm "selects a page for
replacement if the page has been used often in the past", or
"replace the page that has the highest count".
Belady’s Anomaly
Belady’s Anomaly is the phenomenon of the number of
page faults increasing when the number of frames in
main memory is increased. "An algorithm suffers from
Belady’s Anomaly" does not mean that the number of page
faults will always increase when the number of frames in
main memory is increased; this unusual behavior is observed
only sometimes.
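FIFO is the classic algorithm that exhibits this. A sketch using the standard demonstration string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (a well-known textbook example, not from the slides): 3 frames produce 9 faults, but 4 frames produce 10.

```python
from collections import deque

# Demonstration of Belady's anomaly with FIFO: adding a frame
# increases the number of page faults on this reference string.

def fifo_faults(refs, frames):
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                              # hit
        faults += 1                               # page fault
        if len(resident) == frames:
            resident.discard(queue.popleft())     # evict the oldest page
        queue.append(page)
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, yet more faults
```

Stack algorithms such as LRU and optimal do not suffer from the anomaly, because the set of resident pages with n frames is always a subset of the set with n+1 frames.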
