
Chapter 3

Memory Management:
Virtual Memory

Understanding Operating Systems, Fourth Edition


What is virtual memory?
 It is memory created by using the hard disk to simulate additional random-access memory

What were the reasons for developing virtual memory?
 Disadvantages of early schemes:
  Required storing the entire program in memory
  Fragmentation
  Overhead due to relocation
 The evolution of virtual memory helps to:
  Remove the restriction of storing programs contiguously
  Eliminate the need for the entire program to reside in memory during execution

Memory Allocation
 Paged Memory Allocation
 Demand Paged Memory Allocation
 Segmented Memory Allocation
 Segmented/Demand Paged Memory Allocation

Objectives (1):
You will be able to:
 Define paged memory allocation
 Describe the architecture, algorithm, and tools used by paged memory allocation
 Compute the displacement and page number of a line in memory using paged memory allocation
 Demonstrate patience and accuracy in solving a problem

Paged Memory Allocation
 Divides each incoming job into pages of equal size
 Works well if page size, memory block size (page frames), and size of disk section (sector, block) are all equal
 Before executing a program, the Memory Manager:
  Determines the number of pages in the program
  Locates enough empty page frames in main memory
  Loads all of the program's pages into them

Paged Memory Allocation
(continued)

Figure 3.1: Paged memory allocation scheme for a job of 350 lines
Paged Memory Allocation
(continued)
 The Memory Manager requires three tables to keep track of the job's pages:
  Job Table (JT) contains information about:
   Size of the job
   Memory location where its PMT is stored
  Page Map Table (PMT) contains information about:
   Page number and its corresponding page frame memory address
  Memory Map Table (MMT) contains:
   Location for each page frame
   Free/busy status

Paged Memory Allocation
(continued)

Table 3.1: A typical Job Table (a) initially has three entries, one for each job in process. When the second job (b) ends, its entry in the table is released and is replaced by (c), information about the next job to be processed.

Paged Memory Allocation
(continued)

Job 1 is 350 lines long and is divided into four pages of 100 lines each (the fourth page is only half full).

Figure 3.2: Paged Memory Allocation Scheme

Paged Memory Allocation
(continued)
 Displacement (offset) of a line: determines how far away a line is from the beginning of its page
  Used to locate that line within its page frame
 How to determine the page number and displacement of a line:
  Page number = the integer quotient from the division of the job space address by the page size
  Displacement = the remainder of that same division

Paged Memory Allocation
(continued)
 Steps to determine the exact location of a line in memory:
 1. Determine the page number and displacement of the line
 2. Refer to the job's PMT to find which page frame contains the required page
 3. Get the address of the beginning of the page frame by multiplying the page frame number by the page frame size
 4. Add the displacement (calculated in step 1) to the starting address of the page frame (a short code sketch follows)
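A minimal sketch of these four steps in C++ (illustrative only; the names resolveAddress and pmt are not from the text, and the PMT is simplified to an array indexed by page number):

#include <vector>

// Hypothetical PMT stored as an array: pmt[pageNumber] == pageFrameNumber.
// Assumes the page size equals the page frame size.
int resolveAddress(int jobSpaceAddress, int pageSize, const std::vector<int>& pmt)
{
    int pageNumber   = jobSpaceAddress / pageSize;  // step 1: integer quotient
    int displacement = jobSpaceAddress % pageSize;  // step 1: remainder
    int pageFrame    = pmt[pageNumber];             // step 2: consult the PMT
    int frameStart   = pageFrame * pageSize;        // step 3: start of the page frame
    return frameStart + displacement;               // step 4: add the displacement
}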

Example
 Determine the exact location of line 215 of Job 1 if the page size is 100 bytes and 1 byte is equal to 1 line of code, assuming the PMT has the information shown in the table below (a worked solution follows the table):

Page No Page Frame No


0 4
1 2
2 7
3 9
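A worked solution based on the table above: page number = 215 / 100 = 2 with displacement = 215 mod 100 = 15; page 2 is stored in page frame 7, so that frame begins at 7 × 100 = 700 and line 215 is at address 700 + 15 = 715.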

Chapter 3
Memory Management:
Virtual Memory

Demand Page Memory Allocation


Review: Paged Memory
Allocation
 Advantages:
  Allows jobs to be allocated in noncontiguous memory locations
  Memory is used more efficiently; more jobs can fit
 Disadvantages:
  Address resolution causes increased overhead
  Internal fragmentation still exists, though only in the last page
  Requires the entire job to be stored in memory
  Size of page is crucial (not too small, not too large)

Objectives:
 Define demand paged memory allocation
 Describe the architecture, algorithm, and tables used by demand paged memory allocation
 Determine the success and failure ratios of a swapping algorithm

What is Demand Paging?
 It is a memory allocation scheme that loads pages into memory only as they are needed, allowing jobs to be run with less main memory
 Uses virtual memory (space on the hard disk) as temporary storage for the jobs being processed

What makes it possible to load only the needed pages of a program into memory?
 Programs are written sequentially, so not all pages are necessary at once. For example:
  User-written error handling modules are processed only when a specific error is detected
  Mutually exclusive modules
  Certain program options are not always accessible

Mutually exclusive modules
#include <iostream>
using namespace std;

// Minimal class declaration added so the member definitions below compile.
class rectangleType {
public:
    void GetInput();
    double ComputeResult() const;
    void DisplayResult() const;
private:
    double length, width;
};

void rectangleType::GetInput()
{
    cout << "Enter length: ";
    cin >> length;
    cout << "Enter width: ";
    cin >> width;
}
double rectangleType::ComputeResult() const
{
    return length * width;   // area of the rectangle
}
void rectangleType::DisplayResult() const
{
    cout << "The area of the rectangle is " << ComputeResult();
}

Architecture of Demand Page
 Requires use of a high-speed direct access storage device that can work directly with the CPU
 A portion of the hard disk reserved for use as virtual memory (e.g., 2 GB)
 Swapping algorithms (predefined policies)
 Tables for monitoring the process

Demand Paging (continued)
 The OS depends on the following tables:
  Job Table
  Page Map Table, with three new fields that record:
   Whether the requested page is already in memory
   Whether the page contents have been modified
   Whether the page has been referenced recently
    Used to determine which pages should remain in main memory and which should be swapped out
  Memory Map Table

Demand Paging (continued)

Total job pages are 15, and the total number of available page frames is 12.

Figure 3.5: A typical demand paging scheme

Demand Paging (continued)
 Swapping process:
  To move in a new page, a resident page must be swapped back into secondary storage; this involves:
   Copying the resident page to the disk (if it was modified)
   Writing the new page into the empty page frame
  Requires close interaction between hardware components, software algorithms, and policy schemes

Page Replacement Policies
and Concepts
 Policy that selects the page to be removed; crucial to system efficiency. Types include:
  First-in first-out (FIFO) policy: removes the page that has been in memory the longest
  Least recently used (LRU) policy: removes the page that has been least recently accessed
  Most recently used (MRU) policy
  Least frequently used (LFU) policy
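A small illustrative simulation of the FIFO policy in C++ (not from the text; the queue-and-set bookkeeping is just one way to implement it). It counts the page interrupts produced by a reference string such as the ones in the examples that follow:

#include <cstddef>
#include <deque>
#include <iostream>
#include <set>
#include <string>

// Counts page interrupts (faults) for the FIFO policy with a fixed number of frames.
int fifoInterrupts(const std::string& requests, std::size_t frames)
{
    std::deque<char> order;    // resident pages in order of arrival
    std::set<char> resident;   // pages currently in memory
    int interrupts = 0;
    for (char page : requests) {
        if (resident.count(page))          // hit: page already in memory
            continue;
        ++interrupts;                      // miss: a page interrupt occurs
        if (order.size() == frames) {      // memory full: remove the oldest page
            resident.erase(order.front());
            order.pop_front();
        }
        order.push_back(page);
        resident.insert(page);
    }
    return interrupts;
}

int main()
{
    // Request string from the second FIFO example below, with two page frames.
    std::cout << "Interrupts: " << fifoInterrupts("ABACABDBACD", 2) << '\n';
}

Run on the request string A B A C A B D B A C D with two frames, this reports 9 interrupts (2 hits out of 11 requests).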

Demand Paging (continued)
 Page fault handler: the section of the operating system that determines:
  Whether there are empty page frames in memory
   If so, the requested page is copied from secondary storage
  Which page will be swapped out if all page frames are busy
   This decision depends directly on the predefined policy for page removal

Demand Paging (continued)
 Thrashing: an excessive amount of page swapping between main memory and secondary storage
  Makes operation inefficient
  Caused when a page is removed from memory but is called back shortly thereafter
  Can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages
  Can happen within a job (e.g., in loops that cross page boundaries)
 Page fault: a failure to find a page in memory

Demand Paging (continued)
 Advantages:
  A job is no longer constrained by the size of physical memory (the concept of virtual memory)
  Utilizes memory more efficiently than the previous schemes
 Disadvantages:
  Increased overhead caused by the tables and the page interrupts

Page Replacement Policies
and Concepts (continued)

Figure 3.7: FIFO Policy

Page Replacement Policies
and Concepts (continued)

Figure 3.8: Working of a FIFO algorithm for a job with four pages (A, B, C, D) as it's processed by a system with only two available page frames
Page Replacement Policies
and Concepts (continued)

Figure 3.9: Working of an LRU algorithm for a job with four pages (A, B, C, D) as it's processed by a system with only two available page frames
Page Replacement Policies
and Concepts (continued)
 Efficiency (the ratio of page interrupts to page requests) is slightly better for LRU than for FIFO
 FIFO anomaly: there is no guarantee that buying more memory will always result in better performance
 With LRU, increasing main memory will cause either a decrease in, or the same number of, interrupts
 LRU uses an 8-bit reference byte and a bit-shifting technique to track the usage of each page currently in memory

FIFO Example

Page Requested: B A C A B D B A C D
Page Frame 1:
Page Frame 2:
Interrupt:
Time Snapshot:

Success Ratio: =

Failure Ratio: =

FIFO Example
Page Requested: A B A C A B D B A C D
Page Frame 1:
Page Frame 2:
Interrupt:
Time Snapshot: 1 2 3 4 5 6 7 8 9 10 11

Success Ratio: =

Failure Ratio: =

LRU Example
Page Requested: A B A C A B D B A C D
Page Frame 1:
Page Frame 2:
Interrupt:
Time Snapshot: 1 2 3 4 5 6 7 8 9 10 11

Success Ratio: =

Failure Ratio: =

LRU Exercises
Page Requested: A B A C A B D E A C A B D
Page Frame 1:
Page Frame 2:
Page Frame 3:
Interrupt:
Time Snapshot: 1 2 3 4 5 6 7 8 9 10 11 12 13

Success Ratio: =
Failure Ratio: =

Page Replacement Policies
and Concepts (continued)
• Initially, the leftmost bit of a page's reference byte is set to 1 and all bits to its right are set to 0
• Each time the page is referenced, the leftmost bit is set to 1
• The reference byte of every page is updated (shifted right) with every time tick

Figure 3.11: Bit-shifting technique in the LRU policy
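A sketch of one common implementation of this bit-shifting (aging) technique, assuming an 8-bit reference byte per page (names are illustrative):

#include <cstdint>

// One 8-bit reference byte per page. At every time tick the byte is shifted
// right one position and, if the page was referenced during that tick, the
// leftmost bit is set to 1. Pages with the smallest byte values are the
// least recently used candidates for removal.
void ageReferenceBytes(std::uint8_t refByte[], const bool referenced[], int pageCount)
{
    for (int i = 0; i < pageCount; ++i) {
        refByte[i] >>= 1;          // push the existing history one bit to the right
        if (referenced[i])
            refByte[i] |= 0x80;    // record a reference in the leftmost bit
    }
}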


The Mechanics of Paging
 Status bit: indicates whether the page is currently in memory
 Referenced bit: indicates whether the page has been referenced recently
  Used by LRU to determine which pages should be swapped out
 Modified bit: indicates whether the page contents have been altered
  Used to determine whether the page must be rewritten to secondary storage when it's swapped out
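These three bits might be represented in a page map table entry along the following lines (a sketch with illustrative field names, not the book's exact table layout):

struct PMTEntry {
    int  pageFrame;   // page frame number; meaningful only when status is true
    bool status;      // true = page is in memory, false = page is on secondary storage
    bool referenced;  // true = page referenced recently (consulted by LRU)
    bool modified;    // true = contents changed; must be rewritten to disk when swapped out
};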

The Mechanics of Paging
(continued)

Table 3.3: Page Map Table for Job 1 shown in Figure 3.5.

The Mechanics of Paging
(continued)

Table 3.4: Meanings of the bits used in the PMT

Table 3.5: Possible combinations of the modified and referenced bits
The Working Set
 Working set: the set of pages residing in memory that can be accessed directly without incurring a page fault
  Improves the performance of demand paging schemes
  Requires the concept of "locality of reference"
  The system must decide:
   How many pages compose the working set
   The maximum number of pages the operating system will allow for a working set

The Working Set (continued)

Figure 3.12: An example of a time line showing the amount of time required to process page faults

Chapter 3
Memory Management:
Virtual Memory

Topics:
1. Segmented Memory Allocation
2. Segmented/Demand Paged Memory Allocation

Objectives:
 After the discussion, the students should be able to:
  Define segmented and segmented/demand paged memory allocation
  Describe the architecture of segmented and segmented/demand paged memory allocation
  Determine the exact location of a line in memory using segmented and segmented/demand paged memory allocation
  Participate in the class discussion

Segmented Memory Allocation
 What problem with demand paged memory allocation gave rise to the concept of segmented memory allocation?
  Thrashing
   An excessive amount of page swapping between main memory and secondary storage
   Caused when a page is removed from memory but is called back shortly thereafter
   Can happen within a job (e.g., in loops that cross page boundaries)

Segmented Memory Allocation
(cont.)

for (j = 1; j < 100; j++)      /* Page 0 */
{
    k = j * j;
    m = a * j;
    printf("\n%d %d %d", j, k, m);
}                               /* Page 1 */
printf("\n");

An example of demand paging that causes a page swap each time the loop is executed and results in thrashing. If only a single page frame is available, this program will have one page fault each time the loop is executed.
Segmented Memory Allocation
(cont.)
 What is segmented memory allocation?
  Each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions
  Main memory is no longer divided into page frames; instead it is allocated dynamically
  Segments are set up according to the program's structural modules when the program is compiled or assembled

#include <iostream>
#include <vector>
using namespace std;

// Each module (SmallestNumber, BiggestNumber, DisplayResult, main) could be
// placed in its own segment under segmented memory allocation.
double SmallestNumber(const vector<double>& l) {
    double smallest = l[0];
    for (size_t x = 1; x < l.size(); x++)
        if (smallest > l[x]) smallest = l[x];
    return smallest;
}

double BiggestNumber(const vector<double>& l) {
    double biggest = l[0];
    for (size_t x = 1; x < l.size(); x++)
        if (biggest < l[x]) biggest = l[x];
    return biggest;
}

void DisplayResult(double a, double b) {
    cout << "The smallest is " << a << " and the biggest number is " << b;
}

int main() {
    vector<double> list = {10, 5, 20, 6, 11, 25};
    double smallest = SmallestNumber(list);
    double biggest = BiggestNumber(list);
    DisplayResult(smallest, biggest);
    return 0;
}
Segmented Memory Allocation
(cont.)

Figure 3.13: Segmented memory allocation. Job 1 includes a main program, Subroutine A, and Subroutine B. It is one job divided into three segments.
Segmented Memory Allocation
(continued)

Figure 3.14: The Segment Map Table tracks each segment for Job 1
Segmented Memory Allocation
(continued)
 The Memory Manager tracks segments in memory using the following three tables:
  Job Table lists every job in process (one for the whole system)
  Segment Map Table lists details about each segment (one for each job)
  Memory Map Table monitors the allocation of main memory (one for the whole system)
 Segments don't need to be stored contiguously
 The addressing scheme requires the segment number and the displacement within that segment

Segmented Memory Allocation
(continued)
 Advantages:
 Internal fragmentation is removed

 Disadvantages:
 Difficulty managing variable-length segments in

secondary storage
 External fragmentation

Segmented Memory Allocation
(continued)
 How does the computer know the exact address of a line of code in main memory?
  exact address = line number within the segment (the displacement) + the segment's starting address (a short sketch follows)
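A minimal sketch of that lookup, assuming a Segment Map Table that records each segment's starting address and size (the names Segment, smt, and segmentAddress are illustrative, not from the text):

#include <stdexcept>
#include <vector>

struct Segment { int base; int size; };  // segment's starting address and length

// exact address = displacement within the segment + the segment's starting address,
// after checking that the displacement does not run past the end of the segment.
int segmentAddress(const std::vector<Segment>& smt, int segmentNumber, int displacement)
{
    const Segment& s = smt.at(segmentNumber);
    if (displacement >= s.size)
        throw std::out_of_range("displacement exceeds segment size");
    return s.base + displacement;
}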

Sample Problem
 What is the exact address of line 340 of segment #0?

Segmented Memory Allocation
(continued)
 Segmented memory allocation allows the sharing of code
  For example, a single copy of a code segment used by Microsoft Word can also be used by Microsoft PowerPoint

Sample Problem
 Given the following Segment Map Tables for two jobs:

SMT for Job 1                      SMT for Job 2
Segment Number   Memory Location   Segment Number   Memory Location
0                4096              0                2048
1                6144              1                6144
2                9216              2                9216
3                2048
4                7168

 Which segments, if any, are shared between the two jobs?
 If the segment now located at 7168 is swapped out and later reloaded at 8192, and the segment now at 2048 is swapped out and reloaded at 1024, what would the new segment tables look like?
Segmented/Demand Paged
Memory Allocation
 What problem with segmented memory allocation gave rise to the idea of segmented/demand paged memory allocation?
  Dynamic allocation of variable-length segments, which is difficult to manage and leads to external fragmentation
 This scheme subdivides segments into pages of equal size, smaller than most segments and more easily manipulated than whole segments. It offers:
  The logical benefits of segmentation
  The physical benefits of paging
 Removes the problems of compaction, external fragmentation, and secondary storage handling
 The addressing scheme requires the segment number, the page number within that segment, and the displacement within that page

Segmented/Demand Paged
Memory Allocation (continued)
 This scheme requires the following four tables:
  Job Table lists every job in process (one for the whole system)
  Segment Map Table lists details about each segment (one for each job)
  Page Map Table lists details about every page (one for each segment)
  Memory Map Table monitors the allocation of the page frames in main memory (one for the whole system)

Segmented/Demand Paged
Memory Allocation (continued)

Figure 3.16: Interaction of JT, SMT, PMT, and main memory in a segment/paging scheme
Segmented/Demand Paged
Memory Allocation (continued)
 Advantages:
  Segments are loaded on demand
 Disadvantages:
  Table handling overhead
  Memory needed for page and segment tables
 To minimize the number of table references, many systems use associative memory to speed up the process
  Its disadvantage is the high cost of the complex hardware required to perform the parallel searches

Segmented/Demand Paged
Memory Allocation (continued)
 How does the computer determine the exact address of a line of code in memory?
  Compute the page number and displacement of the line of code
  Determine which segment the line belongs to
  Determine the page frame number of that page using the SMT and PMT
  Exact address = frame number × frame size + displacement (a code sketch follows)
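A sketch of this translation, assuming each segment's SMT entry points to that segment's PMT and the PMT maps page numbers to page frames (all names here are illustrative, not from the text):

#include <vector>

// Each segment's SMT entry points to that segment's page map table;
// the PMT maps page numbers within the segment to page frame numbers.
using PMT = std::vector<int>;   // pmt[pageNumber] == pageFrameNumber
using SMT = std::vector<PMT>;   // smt[segmentNumber] == that segment's PMT

int translate(const SMT& smt, int segmentNumber, int lineInSegment, int frameSize)
{
    int pageNumber   = lineInSegment / frameSize;       // page within the segment
    int displacement = lineInSegment % frameSize;       // offset within that page
    int frame        = smt[segmentNumber][pageNumber];  // SMT -> PMT -> page frame
    return frame * frameSize + displacement;            // physical address
}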

Sample Problem

What is the exact address of line 150 of Job 0, assuming that the line belongs to segment 2, the frame size is 100, and each line occupies 1 byte?
Chapter 3
Memory Management:
Virtual Memory

Topics:
1. Virtual and Cache Memory

Objectives:
 After the discussion, the students should be able to:
  Define virtual memory and cache memory
  Describe the architecture of a computer with virtual memory and cache memory
  Compute the efficiency of cache memory using the hit ratio and access time formulas
  Appreciate the importance of virtual memory and cache memory

What is virtual memory?
 It is a technique that allows programs to be executed even though they are not stored entirely in memory
 Requires cooperation between the Memory Manager and the processor hardware

Example: Word processor

[Diagram: a word processor's table sub-program and picture sub-program remain in virtual memory on disk, while only the parts currently in use occupy main memory]
Virtual Memory (continued)
 Advantages of virtual memory management:
 Job size is not restricted to the size of main memory
 Memory is used more efficiently
 Allows an unlimited amount of multiprogramming
 Eliminates external fragmentation and minimizes internal
fragmentation
 Allows the sharing of code and data
 Facilitates dynamic linking of program segments
 Disadvantages:
 Increased processor hardware costs
 Increased overhead for handling paging interrupts
 Increased software complexity to prevent thrashing
Virtual Memory (continued)
[Diagram: only the needed part of the program resides in main memory while the entire program is kept in virtual memory on disk]

Architecture of the Computer with Virtual Memory

Cache Memory
 A small high-speed memory unit that a processor can access more rapidly than main memory
 Used to store frequently used data or instructions
 Movement of data or instructions from main memory to cache memory uses a method similar to that used in paging algorithms

Cache Memory (continued)

Figure 3.17: Comparison of (a) the traditional path used by early computers and (b) the path used by modern computers to connect main memory and the CPU via cache memory
Cache Memory (continued)
 Types of cache memory:
  L1 cache – built directly into the CPU
  L2 cache – an integral part of the CPU

Cache Memory (continued)
 Efficiency calculations:
  Hit ratio:
   h = (number of requests found in the cache / total number of requests) * 100
  Average access time:
   ta = tc + (1 - h) * tm   (with h expressed as a fraction)
  Where:
   ta = average access time
   tc = cache access time
   h = hit ratio
   tm = main memory access time
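A worked example with illustrative numbers (not from the text): if 8 of 10 requests are found in the cache, then h = (8/10) * 100 = 80%; with tc = 10 ns and tm = 100 ns, the average access time is ta = 10 + (1 - 0.80) * 100 = 30 ns, using h as a fraction in the formula.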

Summary
 Paged memory allocations allow efficient use of
memory by allocating jobs in noncontiguous memory
locations
 Increased overhead and internal fragmentation are
problems in paged memory allocations
 Job no longer constrained by the size of physical
memory in demand paging scheme
 LRU scheme results in slightly better efficiency as
compared to FIFO scheme
 Segmented memory allocation scheme solves internal
fragmentation problem

Summary (continued)
 Segmented/demand paged memory allocation removes
the problems of compaction, external fragmentation,
and secondary storage handling
 Associative memory can be used to speed up the
process
 Virtual memory allows programs to be executed even
though they are not stored entirely in memory
 Job’s size is no longer restricted to the size of main
memory by using the concept of virtual memory
 CPU can execute instruction faster with the use of
cache memory

