Chapter 3 - Memory Management (Virtual Memory Systems)
(figure 3.1)
In this example, each page frame can hold 100 bytes. This job, at 350 bytes long, is divided among four page frames. The resulting internal fragmentation (wasted space) is associated with Page 3, loaded into Page Frame 11.
© Cengage Learning 2018
Paged Memory Allocation (3 of 10)
Internal fragmentation: job’s last page frame only
Entire program: required in memory during its execution
Three tables for tracking pages: Job Table (JT), Page Map Table (PMT), and Memory Map Table (MMT)
– Stored in main memory: operating system area
Job Table: information for each active job
– Job size
– Memory location: job’s PMT
Paged Memory Allocation (4 of 10)
Page Map Table: information for each page
– Page number: beginning with Page 0
– Memory address
Memory Map Table: entry for each page frame
– Location
– Free/busy status
Paged Memory Allocation (5 of 10)
(table 3.1)
Three snapshots of the Job Table. Initially the Job Table has one entry for
each job (a). When the second job ends (b), its entry in the table is released.
Finally, it is replaced by the entry for the next job (c).
© Cengage Learning 2018
Paged Memory Allocation (6 of 10)
Line displacement (offset)
– Line distance: how far away a line is from the beginning of its page
– Line location: used to locate that line within its page frame
– Relative value
Determining page number and displacement of a line
– Divide job space address by the page size
– Page number: integer quotient
– Displacement: remainder
Paged Memory Allocation (7 of 10)
For example, lines 0, 100, 200, and 300 are the first lines of pages 0, 1, 2, and 3, respectively, so each has a displacement of zero. Likewise, if the operating system needs to access byte 214, it can first go to page 2 and then to byte 14 (the fifteenth byte of that page). The first byte of each page has a displacement of zero, and the last byte has a displacement of 99.
To find both values, divide the byte address by the page size: the integer quotient is the page number, and the remainder is the displacement.
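The division described above can be sketched in a few lines of Python; the 100-byte page size matches Figure 3.1, and the function name is illustrative, not from the text.

```python
# Sketch: computing page number and displacement for a byte address,
# assuming a page size of 100 bytes as in Figure 3.1.
PAGE_SIZE = 100

def locate(byte_address):
    """Return (page number, displacement) for a job-space address."""
    page, displacement = divmod(byte_address, PAGE_SIZE)
    return page, displacement

print(locate(214))  # (2, 14): page 2, byte 14 within that page
print(locate(300))  # (3, 0): first byte of page 3
```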
Paged Memory Allocation (8 of 10)
(figure 3.2)
This job is 350 bytes long and is divided into four pages of 100 bytes each that are loaded into four page frames in memory. Notice the internal fragmentation at the end of Page 3.
© Cengage Learning 2018
Paged Memory Allocation (9 of 10)
(figure 3.7)
First, Pages A and B are loaded into the two available page frames. When Page C is
needed, the first page frame is emptied so C can be placed there. Then Page B is
swapped out so Page A can be loaded there.
© Cengage Learning 2018
First-In-First-Out (3 of 3)
(figure 3.8)
Using a FIFO policy, this page trace analysis shows how each page requested is swapped into
the two available page frames. When the program is ready to be processed, all four pages are
in secondary storage. When the program calls a page that isn’t already in memory, a page
interrupt is issued, as shown by the gray boxes and asterisks. This program resulted in nine
page interrupts.
© Cengage Learning 2018
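The FIFO trace in Figure 3.8 can be simulated with a short sketch; the reference string below is an assumption chosen to produce nine interrupts in two frames, so substitute the actual trace from the figure if it differs.

```python
from collections import deque

def fifo_faults(requests, num_frames):
    """Count page interrupts under a FIFO replacement policy."""
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in requests:
        if page in frames:
            continue          # page already resident: no interrupt
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()  # evict the page loaded first
        frames.append(page)
    return faults

# Hypothetical page trace with two page frames.
print(fifo_faults("ABACABDBACD", 2))  # 9
```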
Least Recently Used (1 of 2)
Removes page: least recent activity
– Theory of locality
Efficiency
– Allocating additional main memory causes either a decrease in, or the same number of, page interrupts
Least Recently Used (2 of 2)
(figure 3.9)
Memory management using an LRU page removal policy for the program shown in
Figure 3.8. Throughout the program, 11 page requests are issued, but they cause only 8
page interrupts.
© Cengage Learning 2018
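An LRU policy can be simulated the same way; again the reference string is an assumption chosen to match the counts quoted above (11 requests, 8 interrupts in two frames).

```python
def lru_faults(requests, num_frames):
    """Count page interrupts under an LRU replacement policy."""
    frames = []                   # least recently used page at the front
    faults = 0
    for page in requests:
        if page in frames:
            frames.remove(page)   # refresh this page's recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)       # most recently used page at the end
    return faults

# Hypothetical page trace with two page frames.
print(lru_faults("ABACABDBACD", 2))  # 8
```

Running the same trace through both policies shows LRU issuing one interrupt fewer than FIFO, consistent with the comparison between Figures 3.8 and 3.9.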
The Mechanics of Paging (1 of 4)
Page swapping
– Memory Manager requires specific information: Page Map Table
(table 3.3)
Page Map Table for Job 1 shown in Figure 3.5. For the bit indicators,
1 = Yes and 0 = No.
© Cengage Learning 2018
The Mechanics of Paging (2 of 4)
Page Map Table: bit meaning
– Status bit: page currently in memory
– Referenced bit: page referenced recently
• Determines page to swap: LRU algorithm
– Modified bit: page contents altered
• Determines if page must be rewritten to secondary
storage when swapped out
Bits checked when swapping
– FIFO: modified and status bits
– LRU: all bits (status, modified, and referenced bits)
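A minimal sketch of a PMT entry makes the role of the three bits concrete; the class and method names are illustrative, not from the text.

```python
# Hypothetical Page Map Table entry showing how the three bits guide swapping.
class PMTEntry:
    def __init__(self):
        self.status = 0      # 1 = page is currently in memory
        self.referenced = 0  # 1 = page referenced recently (used by LRU)
        self.modified = 0    # 1 = page contents altered since loading

    def must_write_back(self):
        """A page being swapped out must be rewritten to secondary
        storage only if it was modified while resident."""
        return self.status == 1 and self.modified == 1

entry = PMTEntry()
entry.status = 1        # page loaded into memory
entry.modified = 1      # contents altered
print(entry.must_write_back())  # True
```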
The Mechanics of Paging (3 of 4)
(table 3.4)
The meaning of the zero and one bits in the Page Map Table.
© Cengage Learning 2018
Bit          Value  Meaning
Status       0      not in memory
Status       1      in memory
Modified     0      not modified
Modified     1      modified
Referenced   0      not called
Referenced   1      was called
The Mechanics of Paging (4 of 4)

Case    Modified?  Referenced?  What it Means
Case 1  0          0            Not modified AND not referenced
Case 2  0          1            Not modified BUT was referenced
Case 3  1          0            Was modified BUT not referenced (Impossible?)
Case 4  1          1            Was modified AND was referenced

(table 3.5)
Four possible combinations of modified and referenced bits and the meaning of each. Yes = 1, No = 0.
© Cengage Learning 2018
The Working Set
Set of pages residing in memory that can be accessed directly without incurring a page fault
– Improves performance of demand paging schemes
The set of pages that is currently being used
If the entire working set is in memory, no page faults will occur
Requires “locality of reference” concept
– Structured programs: only a small fraction of pages needed during any execution phase
System needs definitive values:
– Number of pages comprising working set
– Maximum number of pages allowed for a working set
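One common way to make the working set definitive is a sliding window over the page trace: the working set at time t is the set of pages referenced in the last few requests. A hedged sketch, with a hypothetical trace and window size:

```python
def working_set(reference_string, t, window):
    """Pages referenced in the last `window` requests ending at time t."""
    start = max(0, t - window + 1)
    return set(reference_string[start:t + 1])

refs = "ABACABDBACD"   # hypothetical page trace
# Working set after request 6, looking back over a window of 4 requests.
print(sorted(working_set(refs, 6, 4)))  # ['A', 'B', 'C', 'D']
```

If the frames allotted to a job can hold this set, the job runs without page faults during that phase; the window size is the tuning parameter the system must choose.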
Segmented Memory Allocation (1 of 6)
Each job divided into several segments: different sizes
– One segment for each module: related functions
Reduces page faults
Main memory: allocated dynamically
Program’s structural modules: determine segments
– Each segment numbered when program compiled/assembled
– Segment Map Table (SMT) generated
Segmented Memory Allocation (2 of 6)
(figure 3.14)
Segmented memory allocation. Job 1 includes a main program and two
subroutines. It is a single job that is structurally divided into three segments of
different sizes.
© Cengage Learning 2018
Segmented Memory Allocation (3 of 6)
(figure 3.15)
The Segment Map Table tracks each segment for this job. Notice that Subroutine B has not yet been loaded into memory.
© Cengage Learning 2018
Segmented Memory Allocation (4 of 6)
(figure 3.16)
During execution, the main program calls Subroutine A, which triggers the SMT to look up its location in memory.
© Cengage Learning 2018
Segmented Memory Allocation (5 of 6)
Memory Manager: tracks segments in memory
– Job Table: one for whole system
• Every job in process
– Segment Map Table: one for each job
• Details about each segment
– Memory Map Table: one for whole system
• Main memory allocation
Instructions within each segment: ordered
sequentially
Segments: not necessarily stored contiguously
Segmented Memory Allocation (6 of 6)
Two-dimensional addressing scheme
– Segment number and displacement
Disadvantage
– External fragmentation
Major difference between paging and
segmentation
– Pages: physical units; invisible to the program
– Segments: logical units; visible to the program;
variable sizes
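Two-dimensional addressing can be sketched as a table lookup plus a bounds check; the segment base addresses and sizes below are hypothetical.

```python
# Sketch of two-dimensional (segment number, displacement) addressing.
smt = {0: 4000, 1: 7000, 2: 2000}    # segment number -> base address in memory
seg_sizes = {0: 350, 1: 200, 2: 100} # segment number -> segment size in bytes

def resolve(segment, displacement):
    """Translate (segment, displacement) to a physical address."""
    if displacement >= seg_sizes[segment]:
        raise ValueError("displacement outside segment")  # protection check
    return smt[segment] + displacement

print(resolve(1, 25))  # 7025
```

The bounds check is what makes variable-sized segments visible to the program: a displacement past the end of a segment is an addressing error, whereas fixed-size pages need no such per-unit check.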
Segmented/Demand Paged Memory Allocation (1 of 4)
(figure 3.17)
How the Job Table, Segment Map Table, Page Map Table, and main memory interact in a segment/paging scheme.
© Cengage Learning 2018
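The table interaction in Figure 3.17 can be sketched as a two-step translation: the SMT locates the segment's PMT, and the PMT maps the page to a frame. The table contents below are hypothetical.

```python
# Hedged sketch of address translation in a segmented/demand-paged scheme.
PAGE_SIZE = 100

# Hypothetical tables for one job: SMT points each segment at its PMT,
# and each PMT maps page numbers to page frames.
smt = {0: "pmt0", 1: "pmt1"}
pmts = {"pmt0": {0: 5, 1: 9}, "pmt1": {0: 2}}

def translate(segment, displacement):
    """Translate (segment, displacement) to a physical address."""
    page, offset = divmod(displacement, PAGE_SIZE)  # which page of the segment
    frame = pmts[smt[segment]][page]                # SMT -> PMT -> frame
    return frame * PAGE_SIZE + offset

print(translate(0, 114))  # page 1 of segment 0 is in frame 9 -> 914
```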
Segmented/Demand Paged Memory Allocation (4 of 4)
Disadvantages
– Overhead: managing the tables
– Time required: referencing tables
Associative memory
– Several registers allocated to each job
• Segment and page numbers: associated with main memory
– Primary advantage (large associative memory)
• Increased speed
– Disadvantage
• High cost of complex hardware
Virtual Memory (1 of 3)
Virtual memory is a technique that allows execution of a program
that is bigger than the physical memory of the computer system.
Virtual memory gives the illusion that the system has a much
larger memory than is actually available.
(table 3.6)
Comparison of the advantages and disadvantages of virtual memory
with paging and segmentation.
© Cengage Learning 2018
Virtual Memory (4 of 4)
Advantages
– Job size: not restricted to size of main memory
– More efficient memory use
– Unlimited amount of multiprogramming possible
– Code and data sharing allowed
– Dynamic linking of program segments facilitated
Disadvantages
– Higher processor hardware costs
– More overhead: handling paging interrupts
– Increased software complexity: prevent thrashing
Cache Memory (1 of 2)
Small, high-speed intermediate memory unit
Computer system’s performance increased
– Faster processor access compared to main memory
– Stores frequently used data and instructions
Cache levels
– L2: connected to CPU; contains copy of bus data
– L1: pair built into CPU; stores instructions and data
Data/instructions: move between main memory and cache
– Methods similar to paging algorithms
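The movement of data into and out of the cache can be sketched with an LRU-style replacement rule, mirroring the paging algorithms above; the class, capacity, and loader function are all illustrative assumptions.

```python
from collections import OrderedDict

# Sketch of a small cache that evicts its least recently used entry.
class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # address -> data, LRU entry first

    def access(self, address, load_from_memory):
        if address in self.entries:
            self.entries.move_to_end(address)      # cache hit: refresh recency
            return self.entries[address]
        data = load_from_memory(address)           # cache miss: fetch from memory
        if len(self.entries) == self.capacity:
            self.entries.popitem(last=False)       # evict the LRU entry
        self.entries[address] = data
        return data

cache = Cache(capacity=2)
cache.access(100, lambda a: a * 2)
cache.access(200, lambda a: a * 2)
cache.access(100, lambda a: a * 2)   # hit: 100 becomes most recent
cache.access(300, lambda a: a * 2)   # miss: evicts 200, not 100
print(list(cache.entries))  # [100, 300]
```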
Cache Memory (2 of 2)
(figure 3.19)
The traditional path used by early computers was direct: from secondary storage to main memory to the CPU registers, but speed was slowed by the slow connections (top). With cache memory directly accessible from the CPU registers (bottom), a much faster response is possible.
© Cengage Learning 2018
The big picture. This is a comparison of the memory allocation schemes discussed in Chapters 2 and 3.
© Cengage Learning 2018