Memory Management: Virtual Memory
Paging, Segmentation
Virtual Memory: Demand Paging, Page Replacement Algorithms
Memory Management
• Background
• Swapping
• Contiguous Memory Allocation
• Paging
• Structure of the Page Table
• Segmentation
Background
• Memory is an array of bytes, each with its own address.
• A pair of base and limit registers defines the range of legal addresses for each process: the base register holds the smallest legal physical address (e.g., 300040) and the limit register holds the size of the range (e.g., 120900), so the process may access addresses from 300040 up to 420940.
Figure: Hardware address protection with base and limit registers.
Binding of Instructions and Data to Memory
• Address binding of instructions and data to memory addresses can happen at
three different stages
• Compile time: If memory location known a priori, absolute code can be
generated;
• If you know at compile time where the process will reside in memory, then absolute code can
be generated.
• Load time: Compiler must generate relocatable code if memory location is not
known at compile time. Binding is delayed until load time.
• If it is not known at compile time where the process will reside in memory, then the compiler
must generate relocatable code. Final binding is delayed until load time.
• Execution time: Binding delayed until run time if the process can be moved
during its execution from one memory segment to another. Need hardware
support for address maps (e.g., base and limit registers)
• If the process can be moved during its execution from one memory segment to another, then
binding must be delayed until run time.
Logical vs. Physical Address Space
● The concept of a logical address space that is bound to a separate physical
address space is central to proper memory management
● Logical address – generated by the CPU; also referred to as virtual address
● Physical address – address seen by the memory unit
● Logical and physical addresses are the same in compile-time and load-time
address-binding schemes; logical (virtual) and physical addresses differ in
execution-time address-binding scheme
● Logical address space is the set of all logical addresses generated by a program
● Physical address space is the set of all physical addresses corresponding to these logical
addresses
Memory-Management Unit (MMU)
● Hardware device that at run time maps virtual to physical address
● To start, consider simple scheme where the value in the relocation register is
added to every address generated by a user process at the time it is sent to
memory
● Base register now called relocation register
● The user program deals with logical addresses; it never sees the real physical
addresses
● Execution-time binding occurs when a reference is made to a location in
memory
● The logical address is bound to a physical address at the moment of the reference
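A minimal sketch (not from the slides) of the two MMU steps just described: the limit check for protection and the addition of the relocation register to form the physical address. The register values below are illustrative assumptions.

```python
# Sketch of an MMU with a relocation (base) register and a limit register.
# Register values are illustrative assumptions, not from the slides.
class SimpleMMU:
    def __init__(self, relocation: int, limit: int):
        self.relocation = relocation  # added to every logical address
        self.limit = limit            # size of the process's legal address range

    def translate(self, logical: int) -> int:
        # Protection check: the logical address must fall within [0, limit).
        if not (0 <= logical < self.limit):
            raise MemoryError(f"trap: addressing error for logical address {logical}")
        # Relocation: physical address = logical address + relocation register.
        return logical + self.relocation

mmu = SimpleMMU(relocation=14000, limit=120900)   # assumed values
print(mmu.translate(346))                          # -> 14346
```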
Dynamic relocation using a relocation register
Figure: Dynamic relocation using a relocation register. A user process generates a logical address (e.g., 375); the MMU adds the value in the relocation register (e.g., 14000) to produce the physical address (e.g., 14375), which is sent to memory.
The relocation-register scheme provides:
• Protection: each process can access only its own address range
• Dynamic address binding at execution time
• The ability to change the size of the operating system's space dynamically
Contiguous Allocation
• Main memory is usually divided into two partitions:
• The resident operating system, usually held in low memory with the
interrupt vector
• User processes, held in high memory
• Relocation registers used to protect user processes from each other,
and from changing operating-system code and data
• Base register contains value of smallest physical address
• Limit register contains range of logical addresses – each logical
address must be less than the limit register
• MMU maps logical address dynamically
Contiguous Allocation
• Two Partitioning Methodologies:
• Fixed size partition
• Variable size partition
Figure: Fixed-size and variable-size partitioning; the resident OS occupies one partition and user processes (P1, P2, P3, P4, P5) occupy the others.
Figure: Multiple-partition allocation over time; when a process (e.g., process 8) terminates, its partition becomes a hole that can be allocated to a new process (e.g., process 10).
Example: allocating requests of 212K, 417K, 112K, and 426K.
Best-Fit:
• 212K is put in the 300K partition.
• 417K is put in the 500K partition.
• 112K is put in the 200K partition.
• 426K is put in the 600K partition.
Worst-Fit:
• 212K is put in the 600K partition.
• 417K is put in the 500K partition.
• 112K is put in the 388K partition (the hole left over from the 600K partition).
• 426K must wait.
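The best-fit and worst-fit choices above can be reproduced with a short sketch. The free-partition sizes (100K, 500K, 200K, 300K, 600K) are assumed so that the results match the example; the slide itself only names the partitions each request lands in.

```python
# Hedged sketch of best-fit vs. worst-fit hole selection.
def allocate(holes, request, strategy):
    """Return the index of the chosen hole, or None if the request must wait."""
    candidates = [i for i, h in enumerate(holes) if h >= request]
    if not candidates:
        return None
    if strategy == "best":    # smallest hole that is large enough
        return min(candidates, key=lambda i: holes[i])
    if strategy == "worst":   # largest available hole
        return max(candidates, key=lambda i: holes[i])
    raise ValueError(strategy)

def run(strategy):
    holes = [100, 500, 200, 300, 600]          # assumed free partitions (in K)
    for req in [212, 417, 112, 426]:
        i = allocate(holes, req, strategy)
        if i is None:
            print(f"{strategy}: {req}K must wait")
        else:
            print(f"{strategy}: {req}K placed in {holes[i]}K hole")
            holes[i] -= req                    # remainder stays as a smaller hole

run("best")    # 212K->300K, 417K->500K, 112K->200K, 426K->600K
run("worst")   # 212K->600K, 417K->500K, 112K->388K, 426K must wait
```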
• The main idea behind paging is to divide each process into pages. Main memory is likewise
divided into frames.
• One page of the process is stored in one frame of memory. The pages of a process can be stored
at different locations in memory; the frames used do not have to be contiguous.
• Pages of the process are brought into the main memory only when they are required otherwise they
reside in the secondary storage.
• Paging is a memory-management scheme that permits the physical address space of
a process to be non contiguous.
• Paging eliminates the need for contiguous allocation of physical memory.
• Paging avoids external fragmentation and the need for compaction.
Paging
• The basic method for implementing paging
involves:
• Breaking physical memory into fixed-sized blocks
called frames
• Breaking logical memory into blocks of the same
size called pages.
• When a process is to be executed, its pages are
loaded into any available memory frames from
their source (a file system or the backing store).
• The backing store is divided into fixed-sized blocks
that are of the same size as the memory frames.
Paging
• The hardware support for paging
is illustrated in Figure 8.7.
• Every address generated by the CPU
is divided into two parts:
• A page number (p) and a page
offset (d).
• The page number is used as an
index into a page table.
• The page table contains the base
address of each page in physical
memory.
• This base address is combined
with the page offset to define the
physical memory address that is
sent to the memory unit.
Paging
• The paging model of memory is shown in Figure 8.8.
• The page size (like the frame size) is defined by the
hardware.
• The size of a page is typically a power of 2, varying
between 512 bytes and 16 MB per page, depending on
the computer architecture.
• The selection of a power of 2 as a page size makes the
translation of a logical address into a page number and
page offset particularly easy.
• If the size of the logical address space is 2^m,
and the page size is 2^n addressing units (bytes or
words), then the high-order m - n bits of a
logical address designate the page number, and
the n low-order bits designate the page offset.
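A small sketch of why a power-of-2 page size makes translation easy: the page number and offset are obtained with a shift and a mask. The 4-KB page size here is an assumption for illustration.

```python
# Illustrative sketch (assumed sizes): with a power-of-2 page size,
# the page number and offset are just the high and low bits of the address.
PAGE_BITS = 12                 # n = 12  -> 4-KB pages (assumption)
PAGE_SIZE = 1 << PAGE_BITS     # 2^n bytes
OFFSET_MASK = PAGE_SIZE - 1

def split(logical_address: int):
    page_number = logical_address >> PAGE_BITS   # high-order m - n bits
    offset = logical_address & OFFSET_MASK       # low-order n bits
    return page_number, offset

print(split(0x12345))   # -> (18, 837): page 0x12, offset 0x345
```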
Address Translation Scheme
• Address generated by CPU is divided into:
• Page number (p) – used as an index into a page table which contains base address of
each page in physical memory
• Page offset (d) – combined with base address to define the physical memory address
that is sent to the memory unit
• For a given logical address space of size 2^m and page size 2^n:
• Thus, the logical address is as follows:
• p is an index into the page table and d is the displacement within the page.
• As a concrete (although minuscule) example, consider the memory in Figure 8.9.
Logical address layout:
  page number (p)   |   page offset (d)
     m - n bits     |       n bits
  (high-order bits)     (low-order bits)
• Here, in the logical address, n= 2 and m = 4.
• Using a page size of 4 bytes and a physical memory of 32
bytes (8 frames), we show how the user's view of memory
can be mapped into physical memory.
• Logical address 0 is page 0, offset 0. Indexing into the
page table, we find that page 0 is in frame 5. Thus, logical
address 0 maps to physical address 20 [= (5 x 4) + 0].
• Logical address 3 (page 0, offset 3) maps to physical
address 23 [= (5 x 4) + 3].
• Logical address 4 is page 1, offset 0; according to the page
table, page 1 is mapped to frame 6. Thus, logical address
4 maps to physical address 24 [= (6 x 4) + 0].
• Logical address 13 (page 3, offset 1) maps to physical address 9:
page 3 is in frame 2, so (2 x 4) + 1 = 9.
• You may have noticed that paging itself is a form of
dynamic relocation.
• Every logical address is bound by the paging hardware to
some physical address. Using paging is similar to using a
table of base (or relocation) registers, one for each frame
of memory.
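A sketch reproducing the worked example above. The page-table entries are the ones stated or implied in the text (page 0 in frame 5, page 1 in frame 6, page 3 in frame 2); any other entry would be an assumption.

```python
# Sketch of the Figure 8.9 example: 4-byte pages, page table taken from
# the worked example above.
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 3: 2}

def translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]            # index the page table with the page number
    return frame * PAGE_SIZE + offset   # base address of the frame + offset

for la in (0, 3, 4, 13):
    print(la, "->", translate(la))      # 0->20, 3->23, 4->24, 13->9
```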
•Advantages of Paging
•Paging allows parts of a single process to be stored in a non-contiguous
fashion.
•With the help of Paging, the problem of external fragmentation is solved.
•Paging is one of the simplest algorithms for memory management.
•Disadvantages of Paging
•In Paging, sometimes the page table consumes more memory.
•Internal fragmentation is caused by this technique.
•There is an increase in time taken to fetch the instruction since now two memory
accesses are required.
Translation Look Aside Buffer
• The translation look-aside buffer (TLB) is a special hardware cache used for page-table lookups.
•It provides fast lookup: given a page number, it returns the corresponding frame number.
•The percentage of times that a particular page number is found in the TLB is called the hit ratio.
•An 80-percent hit ratio means that we find the desired page number in the TLB 80 percent of the time.
•If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory, then a
mapped-memory access takes 120 nanoseconds when the page number is in the TLB.
• If we fail to find the page number in the TLB (20 nanoseconds), then we must first access memory for
the page table and frame number (100 nanoseconds) and then access the desired byte in memory (100
nanoseconds), for a total of 220 nanoseconds.
•To find the effective memory-access time, we weight each case by its probability:
•effective access time = 0.80 x 120 + 0.20 x 220 = 140 nanoseconds.
•In this example, we suffer a 40-percent slowdown in memory-access time (from 100 to 140
nanoseconds).
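The effective-access-time calculation above as a small sketch, using the slide's timings (20 ns TLB search, 100 ns memory access). The same formula answers part (d) of the practice problem that follows.

```python
# Sketch of the effective-access-time (EAT) calculation from the example above.
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    hit_time = tlb_ns + mem_ns          # TLB hit: one memory access
    miss_time = tlb_ns + 2 * mem_ns     # TLB miss: page-table access + data access
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(0.80))   # 140.0 ns, as in the example
print(effective_access_time(0.98))   # 122.0 ns with a 98% hit ratio
```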
Problem on EAT
• Consider a paging system with the page table stored in
memory.
• 1. If a memory reference takes 100 nanoseconds, how long
does a paged memory reference take?
• 2. If TLB Hit Ratio is 80 percent,TLB search time is 20
nanoseconds,
• a. find memory access time if page is in TLB.
• b. find memory access time if page is not in TLB.
• c. Find effective access time
• d. Find EAT if TLB hit ratio 98%.
TLB hit ratio 80% = 0.80; TLB miss ratio 20% = 0.20
Solution (1-KB page size = 1024 B, so page # = address DIV 1024 and offset = address MOD 1024):
Address     Page #   Offset
3085        3        13
42095       41       111
215201      210      161
650000      634      784
2000001     1953     129
Problem 2
• Assuming a 1-KB page size, what are the page numbers and offsets for the following address references (provided as decimal numbers):
a. 2375   b. 19366   c. 30000   d. 256   e. 16385

Logical address (decimal)   Page # (decimal)   Offset (decimal)
2375                        2                  327
19366                       18                 934
30000                       29                 304
256                         0                  256
16385                       16                 1
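A one-loop sketch that reproduces the table above; with a 1-KB page size the page number and offset are simply the quotient and remainder of division by 1024.

```python
# Sketch: page number and offset with a 1-KB page size.
PAGE_SIZE = 1024
for address in (2375, 19366, 30000, 256, 16385):
    page, offset = divmod(address, PAGE_SIZE)
    print(f"{address:>6}: page {page}, offset {offset}")
```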
Problem 3
• Consider a logical address space of 256 pages with a 4-KB page size, mapped onto a
physical memory of 64 frames.
• a. How many bits are required in the logical address?
• b. How many bits are required in the physical address?
Logical address space: 256 pages (2^8), page size 4096 bytes (2^12).
Physical memory: 64 frames (2^6), frame size 4096 bytes (2^12).
Solution
• a. Size of the logical address space = 2^m = number of pages x page size = 256 × 4096 = 2^8 x 2^12 = 2^20, so 20 bits are required in the logical address.
• b. Size of the physical address space = number of frames x frame size = 64 × 4096 = 2^6 x 2^12 = 2^18, so 18 bits are required in the physical address.
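A small sketch of the bit-counting used in these solutions: the number of address bits is log2(number of pages or frames) plus log2(page/frame size). The Q1 values in the comments refer to the practice question below.

```python
# Sketch: number of address bits, given counts that are powers of two.
from math import log2

def address_bits(num_units: int, unit_size: int) -> int:
    # num_units = pages or frames; unit_size = bytes (or words) per page/frame
    return int(log2(num_units)) + int(log2(unit_size))

print(address_bits(256, 4096))   # Problem 3 logical:  8 + 12 = 20 bits
print(address_bits(64, 4096))    # Problem 3 physical: 6 + 12 = 18 bits
print(address_bits(64, 1024))    # Q1 (below) logical:  6 + 10 = 16 bits
print(address_bits(32, 1024))    # Q1 (below) physical: 5 + 10 = 15 bits
```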
• Q1) Consider a logical address space of 64 pages of 1024 words each, mapped onto
a physical memory of 32 frames.
• a. How many bits are required in the logical address?
• b. How many bits are required in the physical address?
• Q2) Consider a logical address space of 32 pages of 1024 words each, mapped onto
a physical memory of 16 frames.
• a. How many bits are required in the logical address?
• b. How many bits are required in the physical address?
• Solution to Q1: Logical memory = 64 pages = 2^6 pages, with 2^10 (1024) words per page, so 6 + 10 = 16 bits are required in the logical address; physical memory = 32 frames = 2^5 frames of 2^10 words, so 5 + 10 = 15 bits are required in the physical address.
• Consider a system with a page size of 2048 and the following page table (frame number and valid bit):
[0]  frame 4   T
[1]  frame 2   T
[2]  -         F
[3]  frame 3   T
Give the physical address for each of the following logical addresses, or
write "page fault" if the reference would cause a page fault:
a) 56   b) 5000   c) 6500
Solution
• a) 56
page# = 56 DIV 2048 = 0
offset = 56 MOD 2048 = 56
physical address = 4 * 2048 + 56 = 8248
• b) 5000
page# = 5000 DIV 2048 = 2
offset = 5000 MOD 2048 = 904
page fault
• c) 6500
page# = 6500 DIV 2048 = 3
offset = 6500 MOD 2048 = 356
physical address = 3 * 2048 + 356 = 6500
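A sketch of the lookup used in this solution: a page table with valid bits, a page size of 2048, and a page fault whenever the valid bit is F.

```python
# Sketch of the example above: page size 2048, page table with valid bits.
PAGE_SIZE = 2048
# frame number, or None if the page is not in memory (valid bit = F)
page_table = {0: 4, 1: 2, 2: None, 3: 3}

def reference(logical: int):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        return "page fault"
    return frame * PAGE_SIZE + offset

for la in (56, 5000, 6500):
    print(la, "->", reference(la))   # 8248, 'page fault', 6500
```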
Virtual Memory Management :
• Virtual memory – separation of user logical memory from physical memory.
• Only part of the program needs to be in memory for execution
• Logical address space can therefore be much larger than physical address space
• Allows address spaces to be shared by several processes
• Allows for more efficient process creation
• Virtual memory can be implemented via:
• Demand paging
• Demand segmentation
• A way of storing a process in memory (Figure 9.2: virtual address space)
• Demand paging
• Advantages
• Large virtual memory.
• More efficient use of memory.
• There is no limit on degree of multiprogramming.
• Disadvantages
• Number of tables and the amount of processor overhead for handling
page interrupts are greater than in the case of the simple paged
management techniques.
Page Fault
• The first reference to a page that is not in memory traps to the operating system:
this trap is a page fault
1. Operating system looks at another table to decide:
• Invalid reference ⇒ abort
• Just not in memory
2. Get empty frame
3. Swap page into frame
4. Reset tables
5. Set validation bit = v
6. Restart the instruction that caused the page fault
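A minimal sketch of the steps above for the case where a free frame is available (page replacement, needed when no frame is free, is covered later). All names and the backing-store contents are illustrative assumptions.

```python
# Sketch of demand-paging fault handling when a free frame exists.
def handle_page_fault(page, page_table, free_frames, backing_store):
    if page not in backing_store:                 # 1. invalid reference => abort
        raise RuntimeError("abort: invalid memory reference")
    frame = free_frames.pop()                     # 2. get an empty frame
    frame_contents[frame] = backing_store[page]   # 3. swap the page into the frame
    page_table[page] = (frame, True)              # 4./5. reset table, set valid bit = v
    # 6. the faulting instruction is then restarted

frame_contents = {}
backing_store = {0: "page-0 data", 1: "page-1 data"}   # assumed contents
page_table = {}                                        # page -> (frame, valid)
free_frames = [7, 8, 9]

handle_page_fault(0, page_table, free_frames, backing_store)
print(page_table)   # {0: (9, True)}
```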
Page Replacement
• If we increase our degree of multiprogramming, we are over-allocating memory.
• If we run six processes, each of which is ten pages in size but actually uses only five pages, we have higher CPU
utilization and throughput, with ten frames to spare (thirty of the forty available frames are in use).
• It is possible, however, that each of these processes, for a particular data set, may suddenly try to use
all ten of its pages, resulting in a need for sixty frames when only forty are available.
• The operating system could instead swap out a process, freeing all its frames and reducing the level
of multiprogramming.
• Over-allocation of memory manifests itself as follows. While a user process is executing, a page fault
occurs. The operating system determines where the desired page is residing on the disk but then finds
that there are no free frames on the free-frame list; all memory is in use (Figure 9.9).
• The operating system has several options at this point. It could terminate the user process. However,
demand paging is the operating system's attempt to improve the computer system's utilization and
throughput. Users should not be aware that their processes are running on a paged system-paging
should be logically transparent to the user. So this option is not the best choice.
• The operating system could instead swap out a process, freeing all its frames and reducing the level of
multiprogramming. This option is a good one in certain circumstances, and we consider it further in
Section 9.6. Here, we discuss the most common solution: page replacement.
Figure 9.9: Need for page replacement
Page Replacement
Page replacement takes the following approach.
If no frame is free, we find one that is not currently being used and free it. We
can free a frame by writing its contents to swap space and changing the page
table (and all other tables) to indicate that the page is no longer in memory
(Figure 9.10). We can now use the freed frame to hold the page for which the
process faulted.
We modify the page-fault service routine to include page replacement:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a
victim frame
c. Write the victim frame to the disk; change the page and frame tables
accordingly.
3. Read the desired page into the newly freed frame; change the page and
frame tables.
Page Replacement
• Notice that, if no frames are free, two page transfers (one out and one in) are required.
• This situation effectively doubles the page-fault service time and increases the effective
access time accordingly.
• We can reduce this overhead by using a modify bit (also called a dirty bit). When this scheme is
used, each page or frame has a modify bit associated with it in the hardware.
• The modify bit for a page is set by the hardware whenever any word or byte in the page is
written into, indicating that the page has been modified. When we select a page for
replacement, we examine its modify bit.
• If the bit is set, we know that the page has been modified since it was read in from the
disk. In this case, we must write the page to the disk.
• If the modify bit is not set, however, the page has not been modified since it was read into
memory.
• In this case, we need not write the memory page to the disk: it is already there. This
technique also applies to read-only pages (for example, pages of binary code). Such pages
cannot be modified; thus, they may be discarded when desired. This scheme can
significantly reduce the time required to service a page fault, since it reduces I/O time by
one-half if the page has not been modified.
• We evaluate an algorithm by running it on a particular string of memory references and
computing the number of page faults. The string of memory references is called a
reference string.
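A hedged sketch of the page-fault service routine with replacement and the modify-bit optimization described above. Victim selection is left abstract (any policy such as FIFO or LRU can be plugged in), and the disk I/O functions are illustrative stand-ins.

```python
# Sketch of page-fault service with replacement and the dirty-bit check.
def service_page_fault(page, page_table, free_frames, select_victim):
    if free_frames:                        # 2a. use a free frame if one exists
        frame = free_frames.pop()
    else:                                  # 2b. otherwise pick a victim frame
        victim = select_victim(page_table)
        frame, dirty = page_table.pop(victim)
        if dirty:                          # 2c. write the victim out only if modified
            write_to_disk(victim, frame)
    read_from_disk(page, frame)            # 3. read the desired page into the frame
    page_table[page] = (frame, False)      # the new page starts out clean
    return frame

# Illustrative stand-ins for the disk I/O:
def write_to_disk(page, frame): print(f"write page {page} from frame {frame} to disk")
def read_from_disk(page, frame): print(f"read page {page} into frame {frame}")

pt = {3: (5, True)}                        # page 3 resident in frame 5, dirty
service_page_fault(7, pt, [], select_victim=lambda table: next(iter(table)))
print(pt)                                  # {7: (5, False)}
```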
Page Replacement Algorithms
First-In-First-Out (FIFO) page replacement, 3 frames (frame contents shown at each fault):
Reference string:    7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1:             7 7 7 2 2 2 4 4 4 0 0 0 7 7 7
Frame 2:             - 0 0 0 3 3 3 2 2 2 1 1 1 0 0
Frame 3:             - - 1 1 1 0 0 0 3 3 3 2 2 2 1
Victim (out):        7 0 1 2 3 0 4 2 3 0 1 2
Fault (x) / hit (√): x x x x √ x x x x x x √ √ x x √ √ x x x
15 page faults
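A short FIFO simulation (a sketch, not the slide's own code) that reproduces the 15 faults counted above.

```python
# Sketch of FIFO page replacement.
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                      # hit
        faults += 1
        if len(frames) == num_frames:     # evict the oldest resident page
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))   # 15
```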
Optimal (OPT) Page Replacement
• OPT replaces the page that will not be used for the longest period of time; it gives the lowest possible number of faults and is the benchmark other algorithms are compared against. (FIFO, by contrast, can exhibit Belady's anomaly: adding frames can increase the number of faults.)
Reference string:    7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1:             7 7 7 2 2 2 2 2 7
Frame 2:             - 0 0 0 0 4 0 0 0
Frame 3:             - - 1 1 3 3 3 1 1
Victim (out):        7 1 0 4 3 2
Fault (x) / hit (√): x x x x √ x √ x √ √ x √ √ x √ √ √ x √ √
9 page faults
Least Recently Used (LRU)
❖ LRU stands for Least Recently Used.
❖ The Least Recently Used page replacement algorithm keeps track of page usage over a
short period of time.
❖ It works on the idea that pages that have been heavily used in the recent past are likely
to be heavily used again in the near future.
❖ It replaces the page that has not been used in memory for the longest time, using past
knowledge rather than future knowledge.
❖ LRU is widely used because it generally produces fewer page faults than FIFO.
❖ e.g., 12 faults on the reference string above: better than FIFO (15) but worse than OPT (9)
❖ Generally good algorithm and frequently used
Least Recently Used (LRU)
Least Recently Used Page Replacement
LRU chooses the page that has not been used for the longest period of time.
Reference string:    7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1:             7 7 7 2 2 4 4 4 0 1 1 1
Frame 2:             - 0 0 0 0 0 0 3 3 3 0 0
Frame 3:             - - 1 1 3 3 2 2 2 2 2 7
Victim (out):        7 1 2 3 0 4 0 3 2
Fault (x) / hit (√): x x x x √ x √ x x x x √ √ x √ x √ x √ √
12 page faults
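The same kind of sketch for LRU: track the time of last use and evict the resident page with the oldest timestamp. It reproduces the 12 faults counted above.

```python
# Sketch of LRU page replacement.
def lru_faults(reference_string, num_frames):
    frames, last_used, faults = set(), {}, 0
    for t, page in enumerate(reference_string):
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                # evict the resident page with the oldest "last used" time
                victim = min(frames, key=lambda p: last_used[p])
                frames.remove(victim)
            frames.add(page)
        last_used[page] = t               # record this use
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))   # 12
```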
Least Recently Used (LRU) Algorithm
Least Recently Used (LRU):
• Advantages –
• It is open for full analysis.
• In this, we replace the page which is least recently used, thus free
from Belady’s Anomaly.
• It is easy to identify a page that has not been used for a long time and
choose it for replacement.
• Disadvantages –
• It requires additional Data Structure to be implemented.
• Hardware assistance is high.
• Consider the following page reference string:
• 4 ,7, 6, 1, 7, 6, 1, 2, 7, 2
• How many page faults would occur for the optimal page replacement
algorithm, assuming three frames and all frames are initially empty.
• Consider the following page reference string:
• 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• How many page faults would occur for the optimal page replacement
algorithm, assuming three frames and all frames are initially empty.
• Q. Consider a reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2.
• The number of frames in memory is 3. Find the number of
page faults with respect to: