
3140702

Operating System

Unit - 6
Memory Management
Topics to be covered
 Basic Memory Management: Definition
 Logical and Physical address map
 Memory allocation
 Paging
 Virtual Memory: Basics of Virtual Memory
 Hardware and control structures – Locality of reference
 Page fault
 Working Set
 Dirty page/Dirty bit – Demand paging (concepts only)
 Page Replacement Algorithms

Unit – 6: Memory Management 2 LDCE,IT


What is Memory?
 Computer memory is any physical device capable of storing
information temporarily or permanently.
 Types of memory
1. Random Access Memory (RAM) is volatile memory that loses its
contents when the computer or hardware device loses power.
2. Read Only Memory (ROM) is non-volatile memory that keeps its
contents even if the power is lost.
 A computer uses a special ROM called the BIOS (Basic Input Output
System), which permanently stores the software needed to access
computer hardware such as the hard disk, load an operating system into
RAM, and start executing it.

What is Memory? (cont…)
3. Programmable Read-Only Memory (PROM) is a memory chip on which you
can store a program. But once the PROM has been used, you cannot wipe
it clean and use it to store something else. Like ROMs, PROMs are
non-volatile. E.g. CD-R
4. Erasable Programmable Read-Only Memory (EPROM) is a special type of
PROM that can be erased by exposing it to ultraviolet light. E.g. CD-RW
5. Electrically Erasable Programmable Read-Only Memory (EEPROM) is a
special type of PROM that can be erased by exposing it to an electrical
charge. E.g. Pen drive

What is Memory Hierarchy?
 The hierarchical arrangement of storage in current
computer architectures is called the memory hierarchy.

Register → L1 Cache → L2 Cache → Main Memory → Local secondary storage
→ Remote secondary storage

• Toward the top of the hierarchy: faster, more expensive, less capacity.
• Toward the bottom: slower, cheaper, more capacity.

Memory abstraction
 The hardware and the OS memory manager make you see memory as a
single contiguous entity.
 How do they do that?
• Abstraction
 Is abstraction necessary?

No memory abstraction
 In this model the memory presented to the programmer was simply a
single block of physical memory
• having a set of addresses from 0 to some maximum
• with each address corresponding to a cell containing some number of
bits, commonly eight.

[Figure: three possible setups with no memory abstraction: (a) the OS
in RAM at the bottom of memory with the user program above it; (b) the
OS in ROM at the top of memory; (c) device drivers in ROM at the top
and the OS in RAM at the bottom. Even with no abstraction, we can have
several setups.]

No memory abstraction
 When a program executes an instruction like
• MOV REGISTER1, 1000
 If another program executes the same instruction at the same time,
the value stored by the first program will be overwritten.
 So only one process at a time can be running.


No memory abstraction
 What if we want to run multiple programs?
• OS saves entire memory on disk
• OS brings next program
• OS runs next program
 We can use swapping to run multiple programs concurrently.
[Figure: a user program is swapped out of RAM to the hard disk, and
the next program is swapped in.]
• The process of bringing in each process in its entirety into memory,
running it for a while and then putting it back on the disk is called
swapping.
Ways to implement swapping system
 Two different ways to implement swapping system
1. Multiprogramming with fixed partitions
2. Multiprogramming with dynamic partitions

Multiprogramming with Fixed partitions
• Here memory is divided into fixed-size partitions.
• Sizes can be equal or unequal for different partitions; generally
unequal partitions are used for better utilization.
• Each partition can accommodate exactly one process, meaning only a
single process can be placed in one partition.
• The partition boundaries are not movable.
[Figure: memory divided into the OS area and Partition 1 through
Partition 4.]
Multiprogramming with Fixed partitions
• Whenever any program needs to be loaded into memory, a free
partition big enough to hold the program is found and allocated to
that program or process.
• If there is no free partition available of the required size, then
the process needs to wait. Such a process is put in a queue.

Multiprogramming with Fixed partitions
 There are two ways to maintain the queue:
1. Using multiple input queues (a separate queue for each partition).
2. Using a single input queue (shared by all partitions).
[Figure: left, multiple input queues, one feeding each of Partition 1
through Partition 4 above the operating system; right, a single input
queue feeding all partitions.]

Multiprogramming with Dynamic partitions
• Here, memory is shared among the operating system and the various
simultaneously running processes.
• Initially, the entire available memory is treated as a single free
partition.
• A process is allocated exactly as much memory as it requires.
• Whenever any process enters the system, a chunk of free memory big
enough to fit the process is found and allocated. The remaining
unoccupied space is treated as another free partition.
[Figure: the OS at the bottom of memory with a user program above it.]

Multiprogramming with Dynamic partitions
• If enough free memory is not available to fit the process, the
process needs to wait until the required memory becomes available.
• Whenever any process terminates, it releases the space it occupied.
If the released free space is contiguous to another free partition,
both free partitions are merged into a single free partition.
• This gives better utilization of memory than fixed-size partitioning.

Memory compaction
 When swapping creates multiple holes in memory, it is possible to
combine them all into one big hole by moving all the processes
downward as far as possible. This technique is known as memory
compaction.
 It requires a lot of CPU time.

[Figure: before compaction, the Operating System, Process A, a hole,
and Process B; after compaction, Process B sits directly above
Process A and the free memory forms one big hole.]

Multiprogramming without memory abstraction
[Figure: Program 1 and Program 2 each use addresses 0–16380 in
isolation. If Program 2 is loaded unchanged at physical address 16384,
its JMP 28 still refers to absolute address 28, inside Program 1:
using absolute addresses is wrong here. We can use static relocation
at program load time instead.]
Static relocation
When a program is loaded at address 16384, the constant 16384 is added
to every program address during the load process, so Program 2's
JMP 28 becomes JMP 16412.
• Slow
• Requires extra information from the program (which words hold
relocatable addresses)

[Figure: Program 1 at addresses 0–16380 and the relocated Program 2 at
16384–32764, whose jump now targets 16412.]
Base and Limit register
• An address space is the set of addresses that a process can use to
address memory: a range of valid memory addresses available to a
program or process.
• Two registers, Base and Limit, delimit it:
1. Base register: start address of the program in physical memory.
2. Limit register: length of the program.
 For every memory access:
 Base is added to the address.
 The result is compared to Limit.
 Only the OS can modify the Base and Limit registers.
[Figure: Program 2 loaded at 16384 with Base = 16384; its JMP 28
becomes a jump to physical address 16412.]

Dynamic relocation
 Steps in dynamic relocation
1. Hardware adds the relocation register (base) to the virtual address
to get a physical address.
2. Hardware compares the address with the limit register; the address
must be less than or equal to the limit.
3. If the test fails, the processor takes an address trap and ignores
the physical address.

[Figure: the CPU issues a logical address; the Base register is added
to form the physical address and the result is checked against the
Limit register. If the check passes, memory is accessed; otherwise a
trap (address error) occurs.]
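The steps above can be sketched in Python. This is a minimal sketch, not a real MMU: it assumes the limit register holds the program length and performs the bound check on the logical address before adding the base. The register values mirror the Program 2 example, where Base = 16384.

```python
class AddressTrap(Exception):
    """Models the address-error trap taken when the limit check fails."""

def relocate(logical, base, limit):
    # Limit register: length of the program. Any logical address past it
    # lies outside the process's address space, so the hardware traps.
    if logical > limit:
        raise AddressTrap(f"address {logical} exceeds limit {limit}")
    # Base register: start address of the program in physical memory.
    return base + logical

# With Program 2 loaded at 16384, a reference to logical address 28
# lands at physical address 16384 + 28 = 16412.
```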

Managing free memory
 Two ways to keep track of memory usage (free memory)
1. Memory management with Bitmaps
2. Memory management with Linked List

Memory management with Bitmaps
A        B        C   D        E   …
0        8        16       24       32

1 1 1 1 1 0 0 0
1 1 1 1 1 1 1 1
1 1 0 0 1 1 1 1
1 1 1 1 1 0 0 0
: :

• With a bitmap, memory is divided into allocation units.
• Corresponding to each allocation unit there is a bit in the bitmap.
• A bit is 0 if the unit is free and 1 if the unit is occupied.
• The size of the allocation unit is an important design issue: the
smaller the unit, the larger the bitmap.

Memory management with Bitmaps
• The main problem is that when it has been decided to bring in a
k-unit process, the memory manager must search the bitmap to find a
run of k consecutive 0 bits in the map.
• Searching a bitmap for a run of a given length is a slow operation.
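The search described above can be sketched as follows; a minimal Python sketch, where the bitmap literal in the usage is the first row of the figure (1 = occupied, 0 = free):

```python
def find_run(bitmap, k):
    """Return the start index of the first run of k free (0) units,
    or -1 if no such run exists."""
    run_start = run_len = 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

def allocate(bitmap, k):
    """Find a run of k free units, mark them occupied, return the start."""
    start = find_run(bitmap, k)
    if start != -1:
        for i in range(start, start + k):
            bitmap[i] = 1
    return start
```

For the first row of the figure, `find_run([1, 1, 1, 1, 1, 0, 0, 0], 3)` returns 5, while a 4-unit request cannot be satisfied from that row alone.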

Memory management with Linked List
P 0 5 → H 5 3 → P 8 6 → P 14 4 → H 18 2 → P 20 6 → P 26 3 → H 29 3 → X

(Each entry holds: P for process or H for hole, the start address, and
the length; e.g., 'H 18 2' is a hole that starts at 18 and has length 2.)

• Another way to keep track of memory is to maintain a linked list of
allocated and free memory segments, where a segment either contains a
process or is an empty hole between two processes.
• Each entry in the list specifies a hole (H) or process (P), the
address at which it starts, the length, and a pointer to the next
entry.
• The segment list is kept sorted by address.

Memory management with Linked List

• Sorting this way has the advantage that when a


process terminates or is swapped out, updating the list is
straightforward.
• A terminating process normally has two neighbors (except when
it is at the very top or bottom of memory).

Memory management with Linked List

Neighbor combinations before and after process P terminates:
1. A P B → A H B (P is replaced by a hole)
2. A P H → A H (P is replaced by a hole and the two holes are merged)
3. H P B → H B (P is replaced by a hole and the two holes are merged)
4. H P H → H (P is replaced by a hole and the three holes are merged)
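The merge cases above can be sketched with a Python list of [kind, start, length] records kept in address order (a plain list standing in for the linked list; the record layout is illustrative):

```python
def free_segment(segments, i):
    """Turn the process segment at index i into a hole and merge it
    with any adjacent holes, covering the four neighbor cases above."""
    segments[i] = ['H', segments[i][1], segments[i][2]]
    # Merge with the following hole, if any.
    if i + 1 < len(segments) and segments[i + 1][0] == 'H':
        segments[i][2] += segments.pop(i + 1)[2]
    # Merge with the preceding hole, if any.
    if i > 0 and segments[i - 1][0] == 'H':
        segments[i - 1][2] += segments.pop(i)[2]
```

Freeing P 8 6 in the list P 0 5 → H 5 3 → P 8 6 → P 14 4 merges it with the hole before it, yielding P 0 5 → H 5 9 → P 14 4.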
Memory allocation algorithms
 The four memory allocation algorithms are as follows:
1. First fit
2. Next fit
3. Best fit
4. Worst fit

First fit
 Search starts from the starting location of the memory.
 First available hole that is large enough to hold the
process is selected for allocation.
 The hole is then broken up into two pieces, one for the process and
another for the unused memory.
 Example: processes of 212K, 417K, 112K and 426K arrive in order.
• Free holes, in address order: 100K, 500K, 200K, 300K, 600K.
• 212K goes into the 500K hole (leaving 288K), 417K into the 600K hole
(leaving 183K), and 112K into the 288K remainder (leaving 176K).
• Here the process of size 426K will not get any partition for
allocation.

First fit
 Fastest algorithm because it searches as little as possible.
 Memory loss is higher, as a very large hole may be selected for a
small process.

Next fit
 It works in the same way as first fit, except that it keeps the track
of where it is whenever it finds a suitable hole.
 The next time when it is called to find a hole, it starts searching
the list from the place where it left off last time.
 Processes of 212K, 417K, 112K and 426K arrive in order.
• Free holes: 100K, 500K, 200K, 300K, 600K.
• 212K goes into the 500K hole (leaving 288K); the search resumes from
there, so 417K goes into the 600K hole (leaving 183K) and 112K into
that 183K remainder (leaving 71K).
• Here the process of size 426K will not get any partition for
allocation.

Next fit
 Search time is smaller.
 Memory manager must have to keep track of last allotted hole to
process.
 It gives slightly worse performance than first fit.

Best fit
 Entire memory is searched here.
 The smallest hole, which is large enough to hold the process, is
selected for allocation.
 Processes of 212K, 417K, 112K and 426K arrive in order.
• Free holes: 100K, 500K, 200K, 300K, 600K.
• 212K goes into the 300K hole (leaving 88K), 417K into the 500K hole
(leaving 83K), 112K into the 200K hole (leaving 88K), and 426K into
the 600K hole (leaving 174K): all four processes are placed.

Best fit
 Search time is high, as it searches entire memory every time.
 Memory loss is less.


Worst fit
 Entire memory is searched here also. The largest hole that is large
enough to hold the process is selected for allocation.
 Processes of 212K, 417K, 112K and 426K arrive in order.
• Free holes: 100K, 500K, 200K, 300K, 600K.
• 212K goes into the 600K hole (leaving 388K), 417K into the 500K hole
(leaving 83K), and 112K into the 388K remainder (leaving 276K).
• Here the process of size 426K will not get any partition for
allocation.

Worst fit
 Search time is high, as it searches entire memory every time.
 This algorithm can be used only with dynamic partitioning.
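The four strategies can be compared with a small simulation; a sketch that tracks only hole sizes in address order (holes shrink in place, as in dynamic partitioning) and returns the index of the hole chosen for each request:

```python
def simulate(holes, requests, strategy):
    """Return, per request, the index of the hole it was placed in
    (None when no hole fits)."""
    holes = list(holes)            # work on a copy
    next_ptr = 0                   # rotating start point for next fit
    placements = []
    for size in requests:
        n = len(holes)
        if strategy == "first":
            order = range(n)
        elif strategy == "next":   # resume from the last allocation
            order = [(next_ptr + j) % n for j in range(n)]
        elif strategy == "best":   # smallest hole that fits
            order = sorted(range(n), key=lambda j: holes[j])
        elif strategy == "worst":  # largest hole
            order = sorted(range(n), key=lambda j: -holes[j])
        chosen = None
        for j in order:
            if holes[j] >= size:
                chosen = j
                break
        if chosen is not None:
            holes[chosen] -= size
            if strategy == "next":
                next_ptr = chosen
        placements.append(chosen)
    return placements
```

With holes of 100K, 500K, 200K, 300K and 600K and requests of 212K, 417K, 112K and 426K, only best fit places all four processes, matching the examples above.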

Virtual Memory
 Memory is hardware that your computer uses to load the
operating system and run programs.
 A computer consists of one or more RAM chips, each having several
memory modules.
 The amount of real memory in a computer is limited to the amount of
RAM installed. Common memory sizes are 1GB, 2GB, and 4GB.

Virtual Memory
 Because your computer has a finite amount of RAM, it is possible
to run out of memory when too many programs are running at
one time.
 This is where virtual memory comes in.
 Virtual memory increases the available memory of your
computer by enlarging the "address space," or places in memory
where data can be stored.
 It does this by using hard disk space for additional memory
allocation.
 However, since the hard drive is much slower than the RAM, data
stored in virtual memory must be mapped back to real memory
in order to be used.

Virtual Memory
• Each program has its own address space, which is broken up into
pages.
• Each page is a contiguous range of addresses.
• These pages are mapped onto physical memory, but not all pages are
required to be present in physical memory to run the program.
• The operating system keeps those parts of the program currently in
use in main memory, and the rest on the disk.
[Figure: pages of a virtual address space residing partly in RAM and
partly on the hard disk, alongside another process's memory.]
Virtual Memory
• In a system using virtual memory, the physical memory is divided
into page frames and the virtual address space is divided into
equally-sized partitions called pages.
• Virtual memory works fine in a multiprogramming system, with bits
and pieces of many programs in memory at once.
Paging
 Paging is a storage mechanism used to retrieve processes from
the secondary storage (Hard disk) into the main memory (RAM)
in the form of pages.
 The main idea behind the paging is to divide each process in the
form of pages. The main memory will also be divided in the form
of frames.
 One page of the process is to be stored in one of the frames of the
memory.
 The pages can be stored at the different locations of the memory
but the priority is always to find the contiguous frames or holes.

Paging
 Pages of the process are brought into the main memory only
when they are required otherwise they reside in the secondary
storage.
 The size of each frame must be equal. Considering the fact that
pages are mapped to frames in paging, the page size needs to be the
same as the frame size.
 Different operating systems define different frame sizes.

Paging

[Figure: a virtual address space of 44K (eleven 4K pages, Page 1 at
0K–4K up to Page 11 at 40K–44K) and a physical memory of 24K (six 4K
page frames, Frame 1 at 0K–4K up to Frame 6 at 20K–24K); virtual pages
are mapped onto page frames.]

Paging
[Figure: twelve page frames holding the pages of Processes 1–4.
Process 1 and Process 4 are moved out of memory and Process 5 enters.]

Paging
[Figure: Process 5 has six pages; the six free non-contiguous frames
in memory are used for them, since paging provides the flexibility of
storing a process at different places in memory.]

Paging

[Figure: the same 44K virtual address space with each mapped 4K region
labeled by the page frame (0–5) that holds it; physical memory spans
0K–24K.]

Paging
 Size of Virtual Address Space is greater than that of main
memory, so instead of loading entire address space in to memory
to run the process, MMU copies only required pages into main
memory.
 In order to keep the track of pages and page frames, OS
maintains a data structure called page table.

Logical Address vs Physical Address
 Logical Address is generated by the CPU while a program is running.
The logical address does not exist physically; therefore it is also
known as a Virtual Address.
 Physical Address identifies a physical location of required data in
memory. The user never directly deals with the physical address but
can access it through its corresponding logical address.
 The hardware device called Memory-Management Unit is used
for mapping (converting) logical address to its corresponding
physical address.

Conversion of virtual address to physical address
 When virtual memory is used, the virtual address is presented to
an MMU (Memory Management Unit) that maps the virtual
addresses onto the physical memory addresses.
[Figure: virtual address space 0K–44K (pages marked X are unmapped)
against physical memory 0K–24K (six page frames). Mapped pages:
0K–4K→frame 2, 4K–8K→frame 1, 12K–16K→frame 0, 16K–20K→frame 4,
20K–24K→frame 3, 32K–36K→frame 5.]

Conversion of virtual address to physical address
 We have a computer that generates 16-bit addresses, used here from 0
up to 44K. These are the virtual addresses.

 With 44KB of virtual address space we get 11 virtual pages, and with
24KB of physical memory we get 6 page frames.

Conversion of virtual address to physical address
 However, only 24KB of physical memory is available, so although 44KB
programs can be written, they cannot be loaded into memory in their
entirety and run.

Conversion of virtual address to physical address
 A complete copy of a program’s core image, up to 44 KB, must be
present on the disk.
 Only required pages are loaded in the physical memory.

 Transfers between RAM and disk are always in units of a page.

Internal operation of the MMU
[Figure: the incoming 16-bit logical address 8196 (binary
0010 000000000100) is split into a 4-bit virtual page number (2) and a
12-bit offset. Virtual page 2 is used as an index into the page table;
its entry holds frame number 110 (6) with present/absent bit 1. The
12-bit offset is copied directly from input to output, and the frame
bits form the high bits of the outgoing physical address 24580
(binary 110 000000000100).]


Conversion of virtual address to physical address
 The virtual address is split into a virtual page number (high order
bits) and an offset (low-order bits).
 With a 16-bit address and a 4KB page size, the upper 4 bits specify
the virtual page number (one of the 11 virtual pages used here) and
the lower 12 bits specify the byte offset (0 to 4095) within the
selected page.
 The virtual page number is used as an index into the Page table.

Conversion of virtual address to physical address
 If the present/absent bit is 0, it is page-fault; a trap to the
operating system is caused to bring required page into main
memory.
 If the present/absent bit is 1, the required page is in main memory;
the page frame number found in the page table is copied to the
high-order bits of the output register, along with the offset.
 Together, the page frame number and the offset form the physical
address.
 Physical Address = (Page frame number × Page size) + offset of the
virtual address.
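The translation can be sketched with the page table from the MMU figure; a minimal sketch where each entry is a (frame, present) pair and a cleared present/absent bit raises a page fault:

```python
PAGE_SIZE = 4096   # 12-bit offset

# Virtual page -> (page frame, present/absent bit), from the MMU figure.
PAGE_TABLE = {0: (2, 1), 1: (1, 1), 2: (6, 1), 3: (0, 1), 4: (4, 1),
              5: (3, 1), 6: (0, 0), 7: (0, 0), 8: (0, 0), 9: (5, 1),
              10: (0, 0), 11: (7, 1), 12: (0, 0), 13: (0, 0),
              14: (0, 0), 15: (0, 0)}

class PageFault(Exception):
    pass

def translate(vaddr, page_table=PAGE_TABLE, page_size=PAGE_SIZE):
    # Split into virtual page number (high bits) and offset (low bits).
    page, offset = divmod(vaddr, page_size)
    frame, present = page_table[page]
    if not present:
        raise PageFault(f"virtual page {page} is not in memory")
    # Frame number combined with the unchanged offset.
    return frame * page_size + offset

# Logical address 8196 = page 2, offset 4; page 2 is in frame 6,
# so the physical address is 6 * 4096 + 4 = 24580.
```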

Page table
 Page table is a data structure which translates virtual address
into equivalent physical address.
 The virtual page number is used as an index into the Page table to
find the entry for that virtual page and from the Page table
physical page frame number is found.
 Thus the purpose of page table is to map virtual pages onto page
frames.

Page table structure
[Page table entry fields: Page frame number | Protection |
Present/absent]

 Page frame Number: It gives the frame number in which the


current page you are looking for is present.
 Present/Absent bit: says whether the page you are looking for is
currently present or absent. If it is not present, that is called a
Page Fault. It is set to 0 if the corresponding page is not in memory.
Sometimes this bit is also known as the valid/invalid bit.
 The Protection bits: Protection bit says that what kind of
protection you want on that page. In the simplest form, 0 for
read/write and 1 for read only.
Page table structure
[Page table entry fields: Caching disabled | Referenced | Modified |
Protection | Present/absent | Page frame number]

 Modified bit: Modified bit says whether the page has been
modified or not. If the page in memory has been modified, it
must be written back to disk. This bit is also called as dirty bit as it
reflects the page’s state.
 Referenced bit: set whenever a page is referenced, either for
reading or writing. Its value helps the operating system in the page
replacement algorithm.
 Caching disabled bit: this feature is important for pages that map
onto device registers rather than memory; with this bit, caching can
be turned off.

Demand Paging
 In paging system, processes are started up with none of their
pages in memory.
 When the CPU tries to fetch the first instruction, it gets a page
fault; other page faults, for global variables and the stack, usually
follow quickly.
 After a while, the process has most of the pages it needs in main
memory and has few page faults.
 This strategy is called demand paging because pages are loaded
only on demand, not in advance.

Definitions
 Working Set: The set of pages that a process is currently using is
known as its working set.
 Thrashing: A program causing page faults every few instructions
is said to be thrashing.
 Pre-paging: Many paging systems try to keep track of each
process‘s working set and make sure that it is in memory before
letting the process run.
 Loading pages before allowing processes to run is called pre-
paging.

Issues in Paging
 In any paging system, two major issues must be faced:
1. The mapping from virtual address to physical address must be
fast.
2. If the virtual address space is large, the page table will be large.

Mapping from virtual address to physical address must be fast

• For every instruction, memory is referenced twice (once for the page
table entry and once for the data), so performance is reduced by half.
• The fix is to use a hardware TLB (Translation Lookaside Buffer).
[Figure: the CPU's logical address (P, D) is translated through the
page table (P1→F2, P2→F3, P3→F1) into the physical address (F3, D).]
Mapping from virtual address to physical address must be fast

• The TLB is a small hardware cache of page→frame entries; the full
page table is a data structure inside memory.
[Figure: for each instruction, the CPU first checks the TLB (holding,
e.g., P1→F2 and P2→F3); only on a miss is the page table in memory
(P1→F2, P2→F3, P3→F1) consulted.]
Mapping from virtual address to physical address using TLB
 Steps in TLB hit:
1. CPU generates virtual address.
2. It is checked in TLB (present).
3. The corresponding frame number is retrieved, which tells where in
main memory the page lies.

 Steps in TLB miss:


1. CPU generates virtual address.
2. It is checked in TLB (not present).
3. Now the page number is matched to page table residing in main
memory.
4. The corresponding frame number is retrieved, which tells where in
main memory the page lies.
5. The TLB is updated with the new page table entry (if there is no
free slot, one of the replacement techniques comes into the picture,
e.g., FIFO, LRU or MFU).
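The hit/miss steps above can be sketched as a small class; a sketch assuming a fully associative TLB with FIFO replacement (real TLBs use various policies), with the page table from the earlier figure as the slow path:

```python
from collections import OrderedDict

class TLB:
    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table  # full page table, "in memory"
        self.cache = OrderedDict()    # cached page -> frame entries
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.cache:        # steps 2-3: present in the TLB
            self.hits += 1
            return self.cache[page]
        self.misses += 1              # miss: fall back to the page table
        frame = self.page_table[page]
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # step 5: evict oldest entry
        self.cache[page] = frame      # update the TLB with the new entry
        return frame
```

With a 2-entry TLB over the table P1→F2, P2→F3, P3→F1, looking up P1, P1, P2, P3, P1 gives one hit and four misses: the fourth lookup evicts P1, so the fifth misses again.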

Virtual address space is large, the page table will be large
 Two different ways to deal with large page table problems:
1. Multilevel Page Table
2. Inverted Page Table

Multilevel Page Table
[Figure: the 32-bit logical address is split into a 10-bit PT1 field,
a 10-bit PT2 field, and a 12-bit offset. PT1 indexes the top-level
page table to find a pointer to one of the second-level page tables;
PT2 indexes that table to find the page frame (here P2→F202); the
frame number combined with the offset gives the physical address.]
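The 10-bit/10-bit/12-bit split can be sketched with bit operations; a sketch for a 32-bit address, with nested dicts standing in for the two table levels (the frame value in the usage is illustrative):

```python
def split(vaddr):
    """Split a 32-bit virtual address into the two 10-bit page table
    indices (PT1, PT2) and the 12-bit offset."""
    pt1 = (vaddr >> 22) & 0x3FF    # top 10 bits: top-level index
    pt2 = (vaddr >> 12) & 0x3FF    # next 10 bits: second-level index
    offset = vaddr & 0xFFF         # low 12 bits: byte within the page
    return pt1, pt2, offset

def translate(vaddr, top_level, page_size=4096):
    """top_level[pt1] yields a second-level table mapping pt2 -> frame."""
    pt1, pt2, offset = split(vaddr)
    frame = top_level[pt1][pt2]
    return frame * page_size + offset
```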

Inverted Page Table
[Figure: the current process ID is 245 and the reference is to page 2.
The (page, process ID) pair is hashed to select a chain in the
inverted table. The chain starts at frame 1, whose entry (page 2,
process 211) does not match, so the chaining pointer is followed to
frame 4, whose entry (page 2, process 245) matches; frame number 4
plus the offset form the physical address.]
Inverted Page Table
 Each entry in the page table contains the following fields.
1. Page number – It specifies the page number range of the logical
address.
2. Process id – An inverted page table contains the address space
information of all the processes in execution.
Since two different processes can have a similar set of virtual
addresses, it becomes necessary in an inverted page table to store the
process ID of each process to identify its address space uniquely.
This is done by using the combination of PID and page number. The
process ID thus acts as an address space identifier and ensures that a
virtual page of a particular process is mapped correctly to the
corresponding physical frame.

Inverted Page Table
 Each entry in the page table contains the following fields.
3. Control bits – These bits are used to store extra paging-related
information. These include the valid bit, dirty bit, reference bits,
protection and locking information bits.
4. Chained pointer – It may sometimes happen that two or more
processes share a part of main memory.
In this case, two or more logical pages map to the same page table
entry; a chaining pointer is then used to map the details of these
logical pages to the root page table.
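The lookup in the figure can be sketched as follows; a sketch where `buckets` maps a hash bucket to the first frame of its chain and each table entry is a (page, pid, next_frame) tuple (the field layout is illustrative):

```python
def ipt_lookup(page, pid, table, buckets):
    """Hash (pid, page) to pick a chain, then follow chaining pointers
    until an entry matches both the page number and the process ID."""
    frame = buckets[hash((pid, page)) % len(buckets)]
    while frame is not None:
        entry_page, entry_pid, nxt = table[frame]
        if entry_page == page and entry_pid == pid:
            return frame           # this frame number joins the offset
        frame = nxt                # IDs differ: follow the chain pointer
    raise KeyError(f"page {page} of process {pid}: page fault")
```

Mirroring the figure: frame 1 holds page 2 of process 211 and frame 4 holds page 2 of process 245, so a reference to page 2 by process 245 follows the chain from frame 1 to frame 4.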

Page replacement algorithms
 Following are different types of page replacement algorithms
1. Optimal Page Replacement Algorithm
2. FIFO Page Replacement Algorithm
3. The Second Chance Page Replacement Algorithm
4. The Clock Page Replacement Algorithm
5. LRU (Least Recently Used) Page Replacement Algorithm
6. NRU (Not Recently Used)

Optimal Page Replacement Algorithm
 The moment a page fault occurs, some set of pages will be in the
memory.
 One of these pages will be referenced on the very next instruction.
 Other pages may not be referenced until 10, 100, or perhaps 1000
instructions later.
 Each page can be labeled with the number of instructions that
will be executed before that page is first referenced.
 The optimal page algorithm simply says that the page with the
highest label should be removed.
 The only problem with this algorithm is that it is unrealizable.
 At the time of the page fault, the operating system has no way of
knowing when each of the pages will be referenced next.
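Because the whole reference string must be known in advance, the optimal policy can only be simulated offline. A minimal Python sketch of such a simulation, evicting the resident page whose next use lies farthest in the future:

```python
def optimal_faults(refs, n_frames):
    """Count page faults under the optimal policy: on a fault with all
    frames full, evict the page whose next reference is farthest away
    (or that is never referenced again)."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)           # a frame is still free
            continue

        def next_use(p):                  # index of p's next reference
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float("inf")       # never used again: ideal victim

        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 0, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # → 9
```

The reference string and fault count match the worked example on the next slide.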
Optimal Page Replacement Algorithm
 Page Reference String:
• 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 0, 2, 0, 1, 7, 0, 1
• Three frames

Page Requests  7 0 1 2 0 3 0 4 2 3 0 3 2 0 2 0 1 7 0 1
Frame 1        7 7 7 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1
Frame 2          0 0 0 0 0 0 4 4 4 0 0 0 0 0 0 0 0 0 0
Frame 3            1 1 1 3 3 3 3 3 3 3 3 3 3 3 3 7 7 7
Page Faults    F F F F   F   F     F           F F      (9 faults)
FIFO Page Replacement Algorithm
 The first in first out page replacement algorithm is the simplest
page replacement algorithm.
 The operating system maintains a list of all pages currently in
memory, with the most recently arrived page at the tail and least
recent at the head.
 On a page fault, the page at head is removed and the new page is
added to the tail.
 When a page replacement is required the oldest page in memory
needs to be replaced.
 The performance of the FIFO algorithm is not always good,
because the oldest page may still be referenced frequently.
 Hence removing the oldest page may simply cause another page
fault soon after.
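The bookkeeping described above fits in a few lines of Python: a deque whose head is the oldest page, plus a set for fast residency checks:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """FIFO page replacement: evict the page at the head of the queue
    (the oldest) and append the newly loaded page at the tail."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                           # hit
        faults += 1
        if len(queue) == n_frames:
            resident.discard(queue.popleft())  # evict the oldest page
        queue.append(page)
        resident.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # → 15
```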
FIFO Page Replacement Algorithm
 Page Reference String:
• 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
• Three frames

Page Requests  7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1        7 7 7 2 2 2 2 4 4 4 0 0 0 0 0 0 0 7 7 7
Frame 2          0 0 0 0 3 3 3 2 2 2 2 2 1 1 1 1 1 0 0
Frame 3            1 1 1 1 0 0 0 3 3 3 3 3 2 2 2 2 2 1
Page Faults    F F F F   F F F F F F     F F     F F F  (15 faults)
Second Chance Page Replacement Algorithm
 It is a modified form of the FIFO page replacement algorithm.
 It looks at the front of the queue as FIFO does, but instead of
immediately paging out that page, it checks to see if its
referenced bit is set.
• If it is not set (zero), the page is swapped out.
• Otherwise, the referenced bit is cleared, the page is inserted at
the back of the queue (as if it were a new page) and this process
is repeated.
Page loaded first                          Most recently loaded
A(1)  B(0)  C(1)  D(0)  E(1)
A's referenced bit is set: clear it and move A to the end of the queue
B(0)  C(1)  D(0)  E(1)  A(0)
B's referenced bit is clear: remove B and add the new page F at the end
C(1)  D(0)  E(1)  A(0)  F(1)
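The inspect-and-recycle step can be sketched in Python; the queue below reproduces the A through E example above, so B (the first page with a clear referenced bit) is the victim:

```python
from collections import deque

def second_chance_evict(queue):
    """queue holds [page, r_bit] pairs, oldest at the front. A page with
    R = 1 gets its bit cleared and moves to the tail; the first page
    found with R = 0 is evicted and returned."""
    while True:
        page, r = queue.popleft()
        if r:
            queue.append([page, 0])  # second chance: re-enter as "new"
        else:
            return page              # victim found

q = deque([["A", 1], ["B", 0], ["C", 1], ["D", 0], ["E", 1]])
victim = second_chance_evict(q)
print(victim)                # → B (A is spared and moved to the tail)
print([p for p, _ in q])     # → ['C', 'D', 'E', 'A']
```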
Second Chance Page Replacement Algorithm
 If all the pages have their referenced bit set, on the second
encounter of the first page in the list, that page will be swapped
out, as it now has its referenced bit cleared.
 If all the pages have their reference bit cleared, then second
chance algorithm degenerates into pure FIFO.
All referenced bits set:   A(1)  B(1)  C(1)  D(1)  E(1)
After one full pass every bit has been cleared:
                           A(0)  B(0)  C(0)  D(0)  E(0)
A reaches the front again with R = 0, so it is removed and F is added:
                           B(0)  C(0)  D(0)  E(0)  F(1)
Clock Page Replacement Algorithm
 In the second chance algorithm pages are constantly being moved
around on the list, so it does not work efficiently.
 A better approach is to keep all the page frames on a circular list
in the form of a clock.
[Figure: pages A through F on a circular list with their R bits
(A=1, B=0, C=1, D=0, E=1, F=1); a hand points to the oldest page.]
Clock Page Replacement Algorithm
 When a page fault occurs, the page being pointed to by the hand
is inspected.
[Figure: the same clock, A(1), B(0), C(1), D(0), E(1), F(1),
with the hand pointing at the page being inspected.]
Clock Page Replacement Algorithm
 If its R bit is 0, the page is evicted, the new page is inserted into
the clock in its place, and the hand is advanced one position.
[Figure: B, whose R bit was 0, has been evicted; the new page G is
inserted in its place and the hand advances one position.]
Clock Page Replacement Algorithm
 If R is 1, it is cleared and the hand is advanced to the next page.
[Figure: the hand advances around the clock, clearing the R bit of
each page it passes.]
This process is repeated until a page is found with R = 0.
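The same logic, without any list shuffling, can be sketched in Python; the page list never moves, only the hand and the R bits change (bits as in the figures above):

```python
def clock_evict(pages, r_bits, hand):
    """Advance the hand, clearing R bits, until a page with R = 0 is
    found. Returns (victim_index, new_hand); the caller installs the
    new page at victim_index."""
    while r_bits[hand]:
        r_bits[hand] = 0                 # second chance, in place
        hand = (hand + 1) % len(pages)
    return hand, (hand + 1) % len(pages)

pages  = ["A", "B", "C", "D", "E", "F"]
r_bits = [1, 0, 1, 0, 1, 1]
victim, hand = clock_evict(pages, r_bits, hand=0)
print(pages[victim], hand)  # → B 2 (A's bit is cleared, B is evicted)
```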
LRU (Least Recently Used) Page Replacement Algorithm
 A good approximation to the optimal algorithm is based on the
observation that pages that have been heavily used in last few
instructions will probably be heavily used again in next few
instructions.
 When a page fault occurs, throw out the page that has gone
unused for the longest time. This strategy is called LRU (Least
Recently Used) paging.
 To fully implement LRU, it is necessary to maintain a linked list of
all pages in memory, with the most recently used page at the
front and the least recently used page at the rear.
 The list must be updated on every memory reference.
 Finding a page in the list, deleting it, and then moving it to the
front are very time-consuming operations.
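In software, the costly linked-list bookkeeping can be approximated with an ordered map that is re-ordered on every reference; a Python sketch using OrderedDict:

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """LRU page replacement: the OrderedDict keeps pages in recency
    order, least recently used at the front, most recent at the end."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)    # refresh on every reference
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)  # evict the least recently used
        frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # → 12
```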
LRU (Least Recently Used) Page Replacement Algorithm
 Page Reference String:
• 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
• Three frames

Page Requests  7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1        7 7 7 2 2 2 2 4 4 4 0 0 0 1 1 1 1 1 1 1
Frame 2          0 0 0 0 0 0 0 0 3 3 3 3 3 3 0 0 0 0 0
Frame 3            1 1 1 3 3 3 2 2 2 2 2 2 2 2 2 7 7 7
Page Faults    F F F F   F   F F F F     F   F   F      (12 faults)
NRU (Not Recently Used) Page Replacement Algorithm
 NRU makes approximation to replace the page based on R
(referenced) and M (modified) bits.
 When a process is started up, both page bits for all pages are set
to 0 by operating system.
 Periodically, the R bit is cleared, to distinguish pages that have not
been referenced recently from those that have been.
 When page fault occurs, the operating system inspects all the
pages and divide them into 4 categories based on current values
of their R and M bits
1. Case 0 : not referenced, not modified
2. Case 1 : not referenced, modified
3. Case 2 : referenced, not modified
4. Case 3 : referenced, modified
 The NRU (Not Recently Used) algorithm removes a page at random
from the lowest numbered nonempty class.
NRU (Not Recently Used) Page Replacement Algorithm
 For example if,
1. Page-0 is of class-2 (referenced, not modified)
2. Page-1 is of class-1 (not referenced, modified)
3. Page-2 is of class-0 (not referenced, not modified)
4. Page-3 is of class-3 (referenced, modified)
 So the lowest-class page, page-2, needs to be replaced by NRU.
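The classification can be sketched in Python as class = 2·R + M; note that real NRU evicts a random page from the lowest nonempty class, while this sketch simply takes the first such page:

```python
def nru_victim(pages):
    """pages maps a page name to its (R, M) bits. The NRU class is
    2*R + M; evict a page from the lowest numbered nonempty class."""
    return min(pages, key=lambda p: 2 * pages[p][0] + pages[p][1])

pages = {"page-0": (1, 0),   # class 2: referenced, not modified
         "page-1": (0, 1),   # class 1: not referenced, modified
         "page-2": (0, 0),   # class 0: not referenced, not modified
         "page-3": (1, 1)}   # class 3: referenced, modified
print(nru_victim(pages))  # → page-2
```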
Belady’s Anomaly
 Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 Page faults with 3 frames > page faults with 4 frames: Belady’s anomaly.

Three frames:
Page Requests  1 2 3 4 1 2 5 1 2 3 4 5
Frame 1        1 1 1 4 4 4 5 5 5 5 5 5
Frame 2          2 2 2 1 1 1 1 1 3 3 3
Frame 3            3 3 3 2 2 2 2 2 4 4
Page Faults    F F F F F F F     F F    (9 faults)

Four frames:
Page Requests  1 2 3 4 1 2 5 1 2 3 4 5
Frame 1        1 1 1 1 1 1 5 5 5 5 4 4
Frame 2          2 2 2 2 2 2 1 1 1 1 5
Frame 3            3 3 3 3 3 3 2 2 2 2
Frame 4              4 4 4 4 4 4 3 3 3
Page Faults    F F F F     F F F F F F  (10 faults)
Belady’s Anomaly
 Belady’s anomaly is the phenomenon in which increasing the
number of page frames results in an increase in the number of
page faults for certain memory access patterns.
 This phenomenon is commonly experienced when using the first-
in first-out (FIFO) page replacement algorithm.
 In FIFO, the page faults may or may not increase as the page
frames increase, but in Optimal and in stack-based algorithms like
LRU, the page faults never increase as the page frames increase.
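The anomaly is easy to reproduce with a small FIFO simulator in Python; the same reference string yields more faults with four frames than with three:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Plain FIFO page replacement fault counter."""
    queue, faults = deque(), 0
    for page in refs:
        if page in queue:
            continue
        faults += 1
        if len(queue) == n_frames:
            queue.popleft()  # evict the oldest page
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # → 9
print(fifo_faults(refs, 4))  # → 10: more frames, more faults
```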
Sum
 A computer has four page frames. The time of loading, time of last
access and the R and M bit for each page given below. Which
page FIFO, LRU and NRU will replace.
Page Loaded Last Ref. R M
0 126 280 1 0
1 230 265 0 1
2 140 270 0 0
3 110 285 1 1

 FIFO: The page that arrived first must be removed first. Page-3
was loaded earliest (at time 110), so FIFO replaces page-3.
Sum
 A computer has four page frames. The time of loading, time of last
access and the R and M bit for each page given below. Which
page FIFO, LRU and NRU will replace.
Page  Loaded  Last Ref.  R  M
0     126     280        1  0
1     230     265        0  1
2     140     270        0  0
3     110     285        1  1
 LRU: When a page fault occurs, throw out the page that has gone
unused for the longest time.
 Page-1 has the oldest last-reference time (265) of the four, so
LRU suggests replacing page-1.
Sum
 A computer has four page frames. The time of loading, time of last
access and the R and M bit for each page given below. Which page
FIFO, LRU and NRU will replace.
Page Loaded Last Ref. R M
0 126 280 1 0
1 230 265 0 1
2 140 270 0 0
3 110 285 1 1
 NRU:-
• Page-0 is of class-2 (referenced, not modified)
• Page-1 is of class-1 (not referenced, modified)
• Page-2 is of class-0 (not referenced, not modified)
• Page-3 is of class-3 (referenced, modified)
• So lowest class page-2 needs to be replaced by NRU
Segmentation
 Segmentation is a memory management technique in which each job
is divided into several segments of different sizes, one for
each module that contains pieces that perform related functions.
 Each segment is actually a different logical address space of the
program.
 When a process is to be executed, its corresponding segments are
loaded into non-contiguous memory, though every segment is
loaded into a contiguous block of available memory.
 Segmentation works very similarly to paging, but segments are of
variable length whereas pages are of fixed size.
Segmentation
[Figure: process P divided into segments 1…N, mapped through a
segment map table into non-contiguous regions of physical memory;
the table lists each segment's number, size, and base address
(e.g. segment 1: size 300 at address 100; segment 3: size 100 at
address 900).]
Segmentation
 A program segment contains the program's main function, utility
functions, data structures, and so on.
 The operating system maintains a segment map table for every
process.
process.
 The segment map table contains a list of free memory blocks along
with segment numbers, their sizes, and the corresponding memory
locations in main memory.
 For each segment, the table stores the starting address of the
segment and the length of the segment.
 A reference to a memory location includes a value that identifies
a segment and an offset within it.
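Address translation against a segment map table can be sketched in Python; a logical address is a (segment, offset) pair checked against the segment's base and limit. The table values below are illustrative, loosely following the earlier example figure:

```python
def translate(seg_table, segment, offset):
    """seg_table maps a segment number to (base, limit). The offset
    must fall within the segment's limit; the physical address is
    base + offset."""
    base, limit = seg_table[segment]
    if offset >= limit:
        raise ValueError("segmentation fault: offset out of range")
    return base + offset

# hypothetical segment map table: segment -> (base address, size)
seg_table = {1: (100, 300), 2: (400, 500), 3: (900, 100)}
print(translate(seg_table, 3, 50))  # → 950
```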
Paging vs Segmentation

Paging: invented to get a large address space without having to
buy more physical memory.
Segmentation: invented to allow programs and data to be broken up
into logically independent address spaces and to add sharing and
protection.

Paging: the programmer is not aware that paging is used.
Segmentation: the programmer is aware that segmentation is used.

Paging: procedures and data cannot be distinguished and protected
separately.
Segmentation: procedures and data can be distinguished and
protected separately.

Paging: a change in data or a procedure requires recompiling the
entire program.
Segmentation: a change in data or a procedure requires recompiling
only the affected segment, not the entire program.

Paging: sharing of different procedures is not available.
Segmentation: sharing of different procedures is available.
Questions asked in GTU
1. Explain multiprogramming with fixed partitions.
2. Explain the linked list method for dynamic memory management.
3. How can free space be managed by the OS?
4. Explain swapping and fragmentation in detail.
5. What is paging? Explain the paging mechanism in the MMU
with an example.
6. Explain the TLB and virtual memory.
7. Discuss demand paging.
8. Define the following term: thrashing.
9. List the different page replacement algorithms and discuss them
in terms of page faults.
10. What is Belady’s anomaly? Explain with a suitable example.
Questions asked in GTU
11. Given six partitions of 300KB, 600KB, 350KB, 200KB, 750KB and
125KB (in order), how would the first-fit, best-fit and worst-fit
algorithms place processes of size 115KB, 500KB, 358KB, 200KB
and 375KB (in order)? Which algorithm makes the most efficient
use of memory?
12. Calculate the page fault rates for the reference string below
for the FIFO and Optimal page replacement algorithms. Assume the
memory size is 4 page frames and all frames are initially empty.
0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1
13. Consider the page reference string (7 0 1 2 0 3 0 4 2 3 0 3 2
1 2 0 1 7 0 1). How many page faults would occur for the
following page replacement algorithms? Consider 3 frames and 4 frames.
1. FIFO 2. LRU 3. Optimal
Questions asked in GTU
14. Consider the following reference string. Calculate the page
fault rates for the FIFO and Optimal page replacement algorithms.
Assume the memory size is 4 page frames.
1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2
OR
Consider the following page reference string:
1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2.
With four frames, how many page faults would occur for the FIFO
and Optimal page replacement algorithms? Which algorithm is more
efficient? (Assume all frames are initially empty.)