Memory Management: Virtual Memory

Virtual memory uses the hard disk to simulate additional RAM. Demand paging is a memory allocation scheme that loads pages into memory only as they are needed, allowing jobs to run with less physical memory. Tables track which pages are in memory, which have been modified, and which have been recently referenced; replacement policies such as FIFO, LRU, or MRU use this information to decide which pages to swap out. When a new page is needed and no frame is free, a resident page must be swapped to disk to open a frame for it.


Chapter 3

Memory Management:
Virtual Memory
Objectives
You will be able to describe: (General)
■ The basic functionality of the memory allocation methods covered in this chapter: paged, demand paging, segmented, and segmented/demand paged memory allocation
■ The influence that these page allocation methods have had on virtual memory
■ The difference between a first-in first-out page replacement policy, a least-recently-used page replacement policy, and a clock page replacement policy

2
Objectives (continued)
You will be able to describe: (General)
■ The mechanics of paging and how a memory allocation scheme determines which pages should be swapped out of memory
■ The concept of the working set and how it is used in memory allocation schemes
■ The impact that virtual memory had on multiprogramming
■ Cache memory and its role in improving system response time

3
What is virtual memory?
■ It is memory created by using the hard disk to simulate additional random-access memory (RAM)

4
What are the reasons for developing virtual memory?
■ Disadvantages of early schemes:
■ Required storing the entire program in memory
■ Fragmentation
■ Overhead due to relocation
■ The evolution of virtual memory helps to:
■ Remove the restriction of storing programs contiguously
■ Eliminate the need for the entire program to reside in memory during execution

5
Memory Allocation
■ Paged Memory Allocation
■ Demand Paged Memory Allocation
■ Segmented Memory Allocation
■ Segmented/Demand Paged Memory Allocation

6
Objectives:
You will be able to: (1)
■ Define paged memory allocation
■ Be familiar with the architecture, algorithm, and tools used by paged memory allocation
■ Compute the displacement and page number of a line in memory using paged memory allocation
■ Show patience and accuracy in solving problems

7
Paged Memory Allocation
■ Divides each incoming job into pages of equal size
■ Works well if page size, memory block size (page frames), and size of disk section (sector, block) are all equal
■ Before executing a program, the Memory Manager:
■ Determines the number of pages in the program
■ Locates enough empty page frames in main memory
■ Loads all of the program's pages into them

8
Paged Memory Allocation (continued)

Figure 3.1: Paged memory allocation scheme for a job of 350 lines

9
Paged Memory Allocation (continued)
■ The Memory Manager requires three tables to keep track of the job's pages:
■ Job Table (JT) contains information about
■ Size of the job
■ Memory location where its PMT is stored
■ Page Map Table (PMT) contains information about
■ Page number and its corresponding page frame memory address
■ Memory Map Table (MMT) contains
■ Location for each page frame
■ Free/busy status

10
Paged Memory Allocation (continued)

Table 3.1: A typical Job Table (a) initially has three entries, one for each job in process. When the second job (b) ends, its entry in the table is released and it is replaced by (c), information about the next job that is processed.

11
Paged Memory Allocation (continued)

Job 1 is 350 lines long and is divided into four pages; the first three hold 100 lines each and the fourth holds the remaining 50 lines.

Figure 3.2: Paged Memory Allocation Scheme

12
Paged Memory Allocation (continued)
■ Displacement (offset) of a line: determines how far away a line is from the beginning of its page
■ Used to locate that line within its page frame
■ How to determine the page number and displacement of a line:
■ Page number = the integer quotient from the division of the job space address by the page size
■ Displacement = the remainder from that same division

13
Paged Memory Allocation (continued)
■ Steps to determine the exact location of a line in memory:
■ Determine the page number and displacement of the line
■ Refer to the job's PMT and find out which page frame contains the required page
■ Get the address of the beginning of the page frame by multiplying the page frame number by the page frame size
■ Add the displacement (calculated in step 1) to the starting address of the page frame

14
Example
■ Determine the exact location of Line 215 of Job 1 if the page size is 100 bytes and 1 byte is equal to 1 line of code, assuming the PMT has the following information as shown in the table below:

Page No    Page Frame No
0          4
1          2
2          7
3          9

15
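A minimal C++ sketch of this address translation, using the PMT and 100-byte page size from the example above (the array and variable names are illustrative, not from the original slides):

#include <iostream>

int main() {
    // PMT for Job 1: page number -> page frame number (from the example table)
    int pmt[] = {4, 2, 7, 9};
    const int page_size = 100;             // 1 byte per line of code

    int line = 215;                        // job space address of the requested line
    int page = line / page_size;           // integer quotient  -> page 2
    int displacement = line % page_size;   // remainder         -> 15

    int frame = pmt[page];                 // PMT lookup        -> frame 7
    int address = frame * page_size + displacement;   // 7*100 + 15 = 715

    std::cout << "Page " << page << ", displacement " << displacement
              << ", exact memory address " << address << std::endl;
    return 0;
}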
Chapter 3
Memory Management:
Virtual Memory

Demand Page Memory Allocation


Review: Paged Memory Allocation
■ Advantages:
■ Allows jobs to be allocated in noncontiguous memory locations
■ Memory is used more efficiently; more jobs can fit
■ Disadvantages:
■ Address resolution causes increased overhead
■ Internal fragmentation still exists, though only in the last page
■ Requires the entire job to be stored in memory
■ Size of page is crucial (not too small, not too large)
17
Objectives:
■ Define demand paged memory allocation
■ Be familiar with the architecture, algorithm, and tables used by demand paged memory allocation
■ Determine the success and failure ratios of the swapping algorithm

18
What is Demand Paging?
■ It is a memory allocation scheme that loads pages into memory only as they are needed, allowing jobs to be run with less main memory
■ Uses virtual memory (the hard disk) as temporary storage for the jobs to be processed

19
What makes it possible to load only the needed pages of a program into memory?
■ Programs are written sequentially, so not all pages are needed at once. For example:
■ User-written error handling modules are processed only when a specific error is detected
■ Mutually exclusive modules
■ Certain program options are not always accessible

20
Mutually exclusive modules
The member functions below, for input, computation, and display, are never all needed at the same time, so not all of their pages must be in memory at once:

void rectangleType::GetInput()
{
    cout << "Enter length: ";
    cin >> length;
    cout << "Enter width: ";
    cin >> width;
}

double rectangleType::ComputeResult() const
{
    return length * width;
}

void rectangleType::DisplayResult() const
{
    cout << "The area of the rectangle is " << ComputeResult();
}

21
Architecture of Demand Paging
■ Requires use of a high-speed direct access storage device that can work directly with the CPU
■ A space on the hard disk used for virtual memory (2 GB)
■ Swapping algorithms (predefined policies)
■ Tables for monitoring the process

22
Demand Paging (continued)
■ The OS depends on the following tables:
■ Job Table
■ Page Map Table, with 3 new fields to determine
■ If the requested page is already in memory
■ If the page contents have been modified
■ If the page has been referenced recently
■ Used to determine which pages should remain in main memory and which should be swapped out
■ Memory Map Table

23
Demand Paging (continued)

Total job pages are 15, and the number of total available page frames is 12.

Figure 3.5: A typical demand paging scheme

24
Demand Paging (continued)
■ Swapping Process:
■ To move in a new page, a resident page must be swapped back into secondary storage; this involves
■ Copying the resident page to the disk (if it was modified)
■ Writing the new page into the empty page frame
■ Requires close interaction between hardware components, software algorithms, and policy schemes

25
Page Replacement Policies and Concepts
■ Policy that selects the page to be removed; crucial to system efficiency. Types include:
■ First-in first-out (FIFO) policy: removes the page that has been in memory the longest
■ Least-recently-used (LRU) policy: removes the page that has been least recently accessed
■ Most recently used (MRU) policy
■ Least frequently used (LFU) policy

26
Demand Paging (continued)
■ Page fault handler: the section of the operating system that determines
■ Whether there are empty page frames in memory
■ If so, the requested page is copied from secondary storage
■ Which page will be swapped out if all page frames are busy
■ The decision is directly dependent on the predefined policy for page removal

27
Demand Paging (continued)
■ Thrashing: an excessive amount of page swapping between main memory and secondary storage
■ Operation becomes inefficient
■ Caused when a page is removed from memory but is called back shortly thereafter
■ Can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages
■ Can happen within a job (e.g., in loops that cross page boundaries)
■ Page fault: a failure to find a page in memory

28
Demand Paging (continued)
■ Advantages:
■ Job is no longer constrained by the size of physical memory (concept of virtual memory)
■ Utilizes memory more efficiently than the previous schemes
■ Disadvantages:
■ Increased overhead caused by the tables and the page interrupts

29
Page Replacement Policies and
Concepts (continued)

Figure 3.7: FIFO Policy

30
Page Replacement Policies and
Concepts (continued)

Figure 3.8: Working of a FIFO algorithm for a job with


four pages (A, B, C, D) as it’s processed
by a system with only two available page
frames
31
Page Replacement Policies and Concepts (continued)

Figure 3.9: Working of an LRU algorithm for a job with four pages (A, B, C, D) as it's processed by a system with only two available page frames

32
Page Replacement Policies and Concepts (continued)
■ Efficiency (ratio of page interrupts to page requests) is slightly better for LRU as compared to FIFO
■ FIFO anomaly: no guarantee that buying more memory will always result in better performance
■ In the LRU case, increasing main memory will cause either a decrease in, or the same number of, interrupts
■ LRU uses an 8-bit reference byte and a bit-shifting technique to track the usage of each page currently in memory

33
FIFO Example

Page Requested: B A C A B D B A C D
Page Frame 1:
Page Frame 2:
Interrupt:
Time Snapshot:

Success Ratio: =

Failure Ratio: =

34
FIFO Example
Page Requested: A B A C A B D B A C D
Page Frame 1:
Page Frame 2:
Interrupt:
Time Snapshot: 1 2 3 4 5 6 7 8 9 10 11

Success Ratio: =

Failure Ratio: =

35
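The outcome of these fill-in exercises can be checked with a short simulation. Below is a minimal C++ sketch of the FIFO policy for two page frames, assuming a page interrupt (failure) is counted whenever the requested page is not already resident; the request string is the one from the example above:

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>

int main() {
    std::string requests = "ABACABDBACD";   // page request string from the example
    const std::size_t frames = 2;           // two available page frames
    std::vector<char> memory;               // front of the vector = oldest page (FIFO order)
    int faults = 0;

    for (char page : requests) {
        if (std::find(memory.begin(), memory.end(), page) == memory.end()) {
            ++faults;                          // page interrupt: page not resident
            if (memory.size() == frames)
                memory.erase(memory.begin()];  // remove the page resident the longest
            memory.push_back(page);
        }
    }
    int hits = static_cast<int>(requests.size()) - faults;
    std::cout << "Faults: " << faults << ", success ratio: "
              << hits << "/" << requests.size() << std::endl;
    return 0;
}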
LRU Example
Page Requested: A B A C A B D B A C D
Page Frame 1:
Page Frame 2:
Interrupt:
Time Snapshot: 1 2 3 4 5 6 7 8 9 10 11

Success Ratio: =

Failure Ratio: =

36
LRU Exercises
Page Requested: A B A C A B D E A C A B D
Page Frame 1:
Page Frame 2:
Page Frame 3:
Interrupt:
Time Snapshot: 1 2 3 4 5 6 7 8 9 10 11 12 13

Success Ratio: =
Failure Ratio: =

37
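Likewise, a minimal C++ sketch of the LRU policy, where the least recently accessed page is the victim; change the request string and frame count to match either the example or the exercise above:

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>

int main() {
    std::string requests = "ABACABDBACD";   // request string from the LRU example
    const std::size_t frames = 2;           // use "ABACABDEACABD" and 3 for the exercise
    std::vector<char> memory;               // back of the vector = most recently used
    int faults = 0;

    for (char page : requests) {
        auto it = std::find(memory.begin(), memory.end(), page);
        if (it != memory.end()) {
            memory.erase(it);                 // already resident: refresh its recency
        } else {
            ++faults;                         // page interrupt
            if (memory.size() == frames)
                memory.erase(memory.begin()); // evict the least recently used page
        }
        memory.push_back(page);               // page is now the most recently used
    }
    int hits = static_cast<int>(requests.size()) - faults;
    std::cout << "Faults: " << faults << ", success ratio: "
              << hits << "/" << requests.size() << std::endl;
    return 0;
}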
Page Replacement Policies and Concepts (continued)
• Initially, the leftmost bit of a page's reference byte is set to 1 and all bits to its right are set to zero
• Each time a page is referenced, its leftmost bit is set to 1
• The reference byte for each page is updated (shifted one position to the right) with every time tick

Figure 3.11: Bit-shifting technique in LRU policy

38
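A minimal C++ sketch of this bit-shifting technique, assuming each reference byte is shifted right on every time tick and the leftmost bit is set when the page is referenced; the page with the smallest byte value is then the least recently used (the page count and sequence of events are illustrative):

#include <cstdint>
#include <iostream>

std::uint8_t ref[4] = {0, 0, 0, 0};               // one 8-bit reference byte per page

void reference(int page) { ref[page] |= 0x80; }   // set leftmost bit on access

void tick() {                                     // every time tick: shift each byte right
    for (std::uint8_t &r : ref) r >>= 1;
}

int least_recently_used() {                       // smallest value = oldest usage
    int victim = 0;
    for (int p = 1; p < 4; ++p)
        if (ref[p] < ref[victim]) victim = p;
    return victim;
}

int main() {
    reference(0); tick();        // page 0 used, then one tick passes
    reference(1); tick();        // page 1 used more recently
    reference(2);                // page 2 used just now; page 3 never used
    std::cout << "LRU victim: page " << least_recently_used() << std::endl;   // page 3
    return 0;
}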
The Mechanics of Paging
■ Status bit: indicates if the page is currently in memory
■ Referenced bit: indicates if the page has been referenced recently
■ Used by LRU to determine which pages should be swapped out
■ Modified bit: indicates if the page contents have been altered
■ Used to determine if the page must be rewritten to secondary storage when it's swapped out

39
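A minimal sketch of what one PMT entry might look like with these three bits; the field names and widths are illustrative, and the actual table layout is the one shown in Table 3.3:

#include <cstdint>

// One entry of a Page Map Table in a demand paging scheme (illustrative layout).
struct PMTEntry {
    bool          status;      // 1 = page is currently in memory, 0 = on secondary storage
    bool          modified;    // 1 = contents altered since loading; must be rewritten on swap-out
    bool          referenced;  // 1 = referenced recently; used by LRU to pick a victim
    std::uint16_t frame;       // page frame number where the page resides (valid if status == 1)
};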
The Mechanics of Paging (continued)

Table 3.3: Page Map Table for Job 1 shown in Figure 3.5

40
The Mechanics of Paging (continued)

Table 3.4: Meanings of bits used in PMT

Table 3.5: Possible combinations of modified and referenced bits

41
The Working Set
■ Working set: the set of pages residing in memory that can be accessed directly without incurring a page fault
■ Improves performance of demand paging schemes
■ Requires the concept of "locality of reference"
■ System must decide
■ How many pages compose the working set
■ The maximum number of pages the operating system will allow for a working set

42
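A minimal C++ sketch of the idea, assuming one common formalization that the slides do not spell out: the working set at time t is the set of distinct pages referenced within the last delta references (the reference string and window size are illustrative):

#include <iostream>
#include <set>
#include <vector>

int main() {
    std::vector<int> refs = {1, 2, 1, 3, 1, 2, 4, 4, 4, 5};   // sample page reference string
    const std::size_t delta = 4;                              // working set window

    for (std::size_t t = 0; t < refs.size(); ++t) {
        std::size_t start = (t + 1 >= delta) ? t + 1 - delta : 0;
        std::set<int> working_set(refs.begin() + start, refs.begin() + t + 1);
        std::cout << "t=" << t << " working set size: " << working_set.size() << '\n';
    }
    return 0;
}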
The Working Set (continued)

Figure 3.12: An example of a time line showing the amount of time required to process page faults

43
Chapter 3
Memory Management:
Virtual Memory

Topics:
1. Segmented Memory Allocation
2. Segmented/Demand Page Memory Allocation
Objectives:
■ After the discussion, the students should be able to:
■ Define segmented and segmented/demand paged memory allocation
■ Be familiar with the architecture of segmented and segmented/demand paged memory allocation
■ Determine the exact location of a line in memory using segmented and segmented/demand paged memory allocation
■ Participate in the class discussion

45
Segmented Memory Allocation
■ What problem of demand paged memory allocation gave rise to the concept of segmented memory allocation?
■ Thrashing
■ An excessive amount of page swapping between main memory and secondary storage
■ Caused when a page is removed from memory but is called back shortly thereafter
■ Can happen within a job (e.g., in loops that cross page boundaries)

46
Segmented Memory Allocation (cont.)

for (j = 1; j < 100; j++)          /* Page 0 */
{
    k = j * j;
    m = a * j;
    printf("\n%d %d %d", j, k, m);
}                                  /* Page 1 */
printf("\n");

An example of demand paging that causes a page swap each time the loop is executed and results in thrashing. If only a single page frame is available, this program will have one page fault each time the loop is executed.
47
Segmented Memory Allocation (cont.)
■ What is segmented memory allocation?
■ Each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions
■ Main memory is no longer divided into page frames; instead it is allocated in a dynamic manner
■ Segments are set up according to the program's structural modules when a program is compiled or assembled

48
// Each function is a separate module; under segmentation each could become its own segment.
#include <iostream>
using namespace std;

double SmallestNumber(const double l[], int n) {
    double smallest = l[0];
    for (int x = 0; x < n; x++)
        if (smallest > l[x]) smallest = l[x];
    return smallest;
}

double BiggestNumber(const double l[], int n) {
    double biggest = l[0];
    for (int x = 0; x < n; x++)
        if (biggest < l[x]) biggest = l[x];
    return biggest;
}

void DisplayResult(double a, double b) {
    cout << "The smallest is " << a << " and the biggest number is " << b << endl;
}

int main() {
    double list[] = {10, 5, 20, 6, 11, 25};
    double smallest = SmallestNumber(list, 6);
    double biggest = BiggestNumber(list, 6);
    DisplayResult(smallest, biggest);
    return 0;
}

49
Segmented Memory Allocation (cont.)

Figure 3.13: Segmented memory allocation. Job 1 includes a main program, Subroutine A, and Subroutine B. It's one job divided into three segments.
50
Segmented Memory Allocation (continued)

Figure 3.14: The Segment Map Table tracks each segment for Job 1
51
Segmented Memory Allocation
(continued)
■ Memory Manager tracks segments in memory using the following three tables:
■ Job Table lists every job in process (one for
whole system)
■ Segment Map Table lists details about each
segment (one for each job)
■ Memory Map Table monitors allocation of main
memory (one for whole system)
■ Segments don’t need to be stored contiguously
■ The addressing scheme requires segment number
and displacement

52
Segmented Memory Allocation (continued)
■ Advantages:
■ Internal fragmentation is removed
■ Disadvantages:
■ Difficulty managing variable-length segments in secondary storage
■ External fragmentation

53
Segmented Memory Allocation (continued)
■ How does the computer know the exact address of a line of code in main memory?
■ exact address = line number (displacement) within the segment + the segment's starting address

54
Sample Problem
■ What is the exact address of line 340 of segment #0?

55
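A minimal sketch of segmented address resolution for this problem, assuming a hypothetical starting address for segment 0; the real value comes from the Segment Map Table in Figure 3.14, which is not reproduced here:

#include <iostream>

int main() {
    // Hypothetical SMT: segment number -> starting address in main memory (illustrative values).
    int smt[] = {4096, 7000, 2000};

    int segment = 0;
    int line = 340;                              // displacement: line number within the segment
    int exact_address = smt[segment] + line;     // starting address + displacement

    std::cout << "Exact address: " << exact_address << std::endl;   // 4096 + 340 = 4436
    return 0;
}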
Segmented Memory Allocation (continued)
■ Segmented memory allocation allows the sharing of code
■ For example, a segment of code loaded for Microsoft Word could also be used by Microsoft PowerPoint

56
Sample Problem
■ Given the following Segment Map Tables for two jobs:

SMT for Job 1                        SMT for Job 2
Segment Number   Memory Location     Segment Number   Memory Location
0                4096                0                2048
1                6144                1                6144
2                9216                2                9216
3                2048
4                7168

■ Which segments, if any, are shared between the two jobs?
■ If the segment now located at 7168 is swapped out and later reloaded at 8192, and the segment now at 2048 is swapped out and reloaded at 1024, what would the new segment tables look like?
57
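A minimal C++ sketch that checks which memory locations, and therefore which segments, the two SMTs above have in common; the table values are the ones given in the problem:

#include <iostream>
#include <vector>

int main() {
    // Segment number -> memory location, as given in the two SMTs above.
    std::vector<int> smt_job1 = {4096, 6144, 9216, 2048, 7168};
    std::vector<int> smt_job2 = {2048, 6144, 9216};

    for (std::size_t s1 = 0; s1 < smt_job1.size(); ++s1)
        for (std::size_t s2 = 0; s2 < smt_job2.size(); ++s2)
            if (smt_job1[s1] == smt_job2[s2])
                std::cout << "Job 1 segment " << s1 << " and Job 2 segment " << s2
                          << " share memory location " << smt_job1[s1] << '\n';
    return 0;
}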
Segmented/Demand Paged Memory Allocation
■ What problem of segmented memory allocation gave rise to the idea of segmented/demand paged memory allocation?
■ Dynamic allocation of variable-sized segments (and the compaction and external fragmentation it causes)
■ Subdivides segments into pages of equal size, smaller than most segments, and more easily manipulated than whole segments. It offers:
■ Logical benefits of segmentation
■ Physical benefits of paging
■ Removes the problems of compaction, external fragmentation, and secondary storage handling
■ The addressing scheme requires segment number, page number within that segment, and displacement within that page
58
Segmented/Demand Paged
Memory Allocation
■ What is segmented/demand paged memory allocation?
■ Subdivides segments into pages of equal size, smaller
than most segments, and more easily manipulated
than whole segments. It offers:
■ Logical benefits of segmentation
■ Physical benefits of paging
■ Removes the problems of compaction, external
fragmentation, and secondary storage handling
■ Removes the problem of thrashing

59
Segmented/Demand Paged Memory Allocation (continued)
■ This scheme requires the following four tables:
■ Job Table lists every job in process (one for the whole system)
■ Segment Map Table lists details about each segment (one for each job)
■ Page Map Table lists details about every page (one for each segment)
■ Memory Map Table monitors the allocation of the page frames in main memory (one for the whole system)

60
Segmented/Demand Paged Memory Allocation (continued)

Figure 3.16: Interaction of JT, SMT, PMT, and main memory in a segment/paging scheme
61
Segmented/Demand Paged Memory Allocation (continued)
■ Advantages:
■ Segment loaded on demand
■ Disadvantages:
■ Table handling overhead
■ Memory needed for page and segment tables
■ To minimize the number of references, many systems use associative memory to speed up the process
■ Its disadvantage is the high cost of the complex hardware required to perform the parallel searches

62
Segmented/Demand Paged Memory Allocation (continued)
■ How does the computer determine the exact address of a line of code in memory?
■ Compute the page number and displacement of the line of code
■ Determine which segment the line belongs to
■ Determine the frame number of the page using the SMT and PMT
■ Exact address = (frame number × frame size) + displacement

63
Sample Problem

What is the exact address of line 150 of Job 0, assuming that the line belongs to segment 2, the frame size is 100, and there is 1 byte per line?
64
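A minimal sketch of the calculation, assuming a hypothetical PMT for segment 2; the real frame numbers come from the job's tables, which are not reproduced here:

#include <iostream>

int main() {
    const int frame_size = 100;        // 1 byte per line
    int pmt_segment2[] = {6, 3, 8};    // hypothetical PMT for segment 2: page -> frame

    int line = 150;                              // line number within segment 2
    int page = line / frame_size;                // page 1 of segment 2
    int displacement = line % frame_size;        // 50
    int frame = pmt_segment2[page];              // hypothetical frame 3

    int exact_address = frame * frame_size + displacement;   // 3*100 + 50 = 350
    std::cout << "Exact address: " << exact_address << std::endl;
    return 0;
}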
Chapter 3
Memory Management:
Virtual Memory

Topics:
1. Virtual and Cache Memory
Objectives
■ After the discussion, the students should be able to:
■ Define virtual memory and cache memory
■ Be familiar with the architecture of a computer with virtual memory and cache memory
■ Compute the efficiency of the cache memory using the hit ratio and access time formulas
■ Appreciate the importance of virtual memory and cache memory

66
What is virtual memory?
■ It is a technique that allows programs to be executed
even though they are not stored entirely in memory.
■ Requires cooperation between the Memory Manager
and the processor hardware

67
Example: Word processor

[Diagram: a word processor's table sub-program and picture sub-program, held in main memory and virtual memory]

68
Virtual Memory (continued)
■ Advantages of virtual memory management:
■ Job size is not restricted to the size of main memory
■ Memory is used more efficiently
■ Allows an unlimited amount of multiprogramming
■ Eliminates external fragmentation and minimizes internal
fragmentation
■ Allows the sharing of code and data
■ Facilitates dynamic linking of program segments
■ Disadvantages:
■ Increased processor hardware costs
■ Increased overhead for handling paging interrupts
■ Increased software complexity to prevent thrashing

69
Virtual Memory (continued)

[Figure: architecture of the computer with virtual memory, contrasting the needed program with the entire program]

70
Cache Memory
■ A small high-speed memory unit that a processor can access more rapidly than main memory
■ Used to store frequently used data or instructions
■ Movement of data or instructions from main memory to cache memory uses a method similar to that used in paging algorithms

71
Cache Memory (continued)

Figure 3.17: Comparison of (a) traditional path used by early computers and (b) path used by modern computers to connect main memory and CPU via cache memory
72
Cache Memory (continued)
■ Types of cache memory
■ L1 cache – built directly into the CPU
■ L2 cache – an integral part of the CPU

73
Cache Memory (continued)
■ Efficiency Calculation
■ Hit Ratio
■ h = (number of requests found in the cache / total number of requests) * 100
■ Average Access Time
■ ta = tc + (1 - h) * tm
■ Where:
■ ta = average access time
■ tc = cache access time
■ h = hit ratio (expressed as a fraction in this formula)
■ tm = main memory access time

74
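A minimal worked example of these formulas, using assumed access times of 10 ns for cache and 100 ns for main memory and an assumed hit ratio of 90%, none of which come from the slides:

#include <iostream>

int main() {
    double requests_total = 1000.0;     // assumed workload
    double requests_in_cache = 900.0;   // assumed cache hits

    double h = requests_in_cache / requests_total;   // hit ratio as a fraction (0.9)
    double tc = 10.0;                                // cache access time (ns), assumed
    double tm = 100.0;                               // main memory access time (ns), assumed

    double ta = tc + (1.0 - h) * tm;                 // average access time = 10 + 0.1*100 = 20 ns
    std::cout << "Hit ratio: " << h * 100 << "%, average access time: " << ta << " ns" << std::endl;
    return 0;
}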
Summary
■ Paged memory allocation allows efficient use of memory by allocating jobs in noncontiguous memory locations
■ Increased overhead and internal fragmentation are problems in paged memory allocation
■ A job is no longer constrained by the size of physical memory in the demand paging scheme
■ LRU scheme results in slightly better efficiency as
compared to FIFO scheme
■ Segmented memory allocation scheme solves internal
fragmentation problem

75
Summary (continued)
■ Segmented/demand paged memory allocation removes
the problems of compaction, external fragmentation,
and secondary storage handling
■ Associative memory can be used to speed up the
process
■ Virtual memory allows programs to be executed even
though they are not stored entirely in memory
■ Job’s size is no longer restricted to the size of main
memory by using the concept of virtual memory
■ CPU can execute instruction faster with the use of
cache memory

76
