Chapter 3 of 'Understanding Operating Systems' discusses memory management with a focus on virtual memory systems, including various memory allocation methods such as paged, demand paging, and segmented memory allocation. It outlines the mechanics of paging, page replacement policies, and the role of cache memory in improving system performance. The chapter also covers the advantages and disadvantages of these methods, particularly in terms of efficiency and overhead in managing memory resources.

Understanding Operating Systems

Seventh Edition

Chapter 3
Memory Management:
Virtual Memory Systems
Learning Objectives

After completing this chapter, you should be able to


describe:
• The basic functionality of the memory allocation
methods covered in this chapter: paged, demand
paging, segmented, and segmented/demand paged
memory allocation
• The influence that these page allocation methods
have had on virtual memory

Understanding Operating Systems, 7e 2


Learning Objectives (cont'd.)
• The difference between a first-in first-out page
replacement policy, a least-recently-used page
replacement policy, and a clock page replacement
policy
• The mechanics of paging and how a memory
allocation scheme determines which pages should
be swapped out of memory



Learning Objectives (cont'd.)
• The concept of the working set and how it is used in
memory allocation schemes
• Cache memory and its role in improving system
response time



Introduction
• Evolution of virtual memory
– Paged, demand paging, segmented,
segmented/demand paging
– Foundation of current virtual memory methods
• Areas of improvement from the need for:
– Continuous program storage
– Placement of entire program in memory during
execution
• Enhanced Memory Manager performance: cache
memory



Paged Memory Allocation
• Incoming job: divided into pages of equal size
• Best condition
– Pages, sectors, and page frames: same size
• Exact sizes: determined by disk’s sector size
• Memory manager tasks: prior to program execution
– Determine number of pages in program
– Locate enough empty page frames in main memory
– Load all program pages into page frames



Paged Memory Allocation (cont’d.)
• Program: stored in noncontiguous page frames
– Advantages: more efficient memory use; compaction
scheme eliminated (no external fragmentation)
– New problem: keeping track of job’s pages (increased
operating system overhead)



Paged Memory Allocation (cont'd.)
(figure 3.1)
In this example, each page frame can hold 100 bytes. This job, at 350 bytes long, is divided among four page frames with internal fragmentation in the last page frame.
© Cengage Learning 2014



Paged Memory Allocation (cont’d.)
• Internal fragmentation: job’s last page frame only
• Entire program: required in memory during its
execution
• Three tables for tracking pages: Job Table (JT),
Page Map Table (PMT), and Memory Map Table
(MMT)
– Stored in main memory: operating system area
• Job Table: information for each active job
– Job size
– Memory location: job’s PMT



Paged Memory Allocation (cont’d.)
• Page Map Table: information for each page
– Page number: beginning with Page 0
– Memory address
• Memory Map Table: entry for each page frame
– Location
– Free/busy status



Paged Memory Allocation (cont'd.)

(table 3.1)
This section of the Job Table initially has one entry for each job (a).
When the second job ends (b), its entry in the table is released and then
replaced by the entry for the next job (c).
© Cengage Learning 2014



Paged Memory Allocation (cont'd.)
• Line displacement (offset)
– Line distance: from beginning of its page
– Line location: within its page frame
– Relative value
• Determining page number and displacement of a
line
– Divide job space address by the page size
– Page number: integer quotient
– Displacement: remainder



(figure 3.2)
This job is 350 bytes long and is divided into four pages of 100 bytes each that are loaded into four page frames in memory.
© Cengage Learning 2014



Paged Memory Allocation (cont'd.)
• Instruction: determining exact location in memory
Step 1: Determine page number/displacement of line
Step 2: Refer to the job’s PMT
• Determine page frame containing required page
Step 3: Obtain beginning address of page frame
• Multiply page frame number by page frame size
Step 4: Add the displacement (calculated in Step 1)
to the starting address of the page frame
• Address resolution (address translation)
– Job space address (logical) → physical address
(absolute)



(figure 3.3)
This system has page frame and page sizes of 512 bytes each. The PMT
shows where the job’s two pages are loaded into available page frames in
main memory.
© Cengage Learning 2014



Paged Memory Allocation (cont'd.)
• Advantages
– Efficient memory use: job allocation in noncontiguous
memory
• Disadvantages
– Increased overhead: address resolution
– Internal fragmentation: last page
• Page size: crucial
– Too small: very long PMTs
– Too large: excessive internal fragmentation



Paged Memory Allocation
 Each incoming job is divided into pages of
equal size
 Main memory is divided into sections called
page frames
 Some OSs choose a job page size = memory
page frame size = disk sector size (e.g., 512 bytes)
 Exact size is usually determined by the disk’s
sector size
Paged Memory Allocation
Before executing a program, the memory
manager:

 Determines the number of pages in the program


 Locates enough empty page frames in memory
 Loads all the program’s pages
Paged Memory Allocation

Advantages:
 Pages do not have to be loaded contiguously
and are loaded into any available page frame
 Memory is used more efficiently, because an
empty page frame can be used by any job
 No compaction is really necessary as there is
no external fragmentation
Paged Memory Allocation
Disadvantages:
 Some internal fragmentation in the last pages of some
jobs loaded
 Memory manager now needs a mechanism to keep
track of a job’s pages => enlarging the size and
complexity of the OS software => overhead
 Entire job still needs to be loaded during execution
 The program must be divided into pages by the
compiler during compilation time
 There is an additional effort necessary to access the
bytes in the page frames (calculation of the page
frame number and offset)
Paged Memory Allocation
 The Memory Manager uses tables to keep
track of which job pages are loaded into
which memory page frames
 Three tables that perform this function are:
• Job Table (JT)
• Page Map Table (PMT)
• Memory Map Table (MMT)
 All 3 tables reside in the part of main
memory reserved for the OS
Paged allocation example

Job 1 (350 bytes) divided into pages:
 First 100 bytes → Page 0
 Second 100 bytes → Page 1
 Third 100 bytes → Page 2
 Remaining 50 bytes → Page 3 (rest of the frame is wasted space)

Main memory (page frames 0–12):
 Frames 0–1: OS
 Frame 5: Job 1 – Page 2
 Frame 8: Job 1 – Page 0
 Frame 10: Job 1 – Page 1
 Frame 11: Job 1 – Page 3
 All other frames: free
Paged Memory Allocation
Example
 Page 3 of job 1 uses only 50 of 100 bytes
available in page frame 11, thus causing
some internal fragmentation
 After Job 1 is loaded, there are 7 free
page frames:
Thus, a job larger than 700 bytes cannot
be loaded until Job 1 ends
Job Table (example)

Job Size    PMT Location
350         3096
200         3100
500         3150

Page Map Table for Job 1

Page No.    Page Frame No.
0           8
1           10
2           5
3           11

Memory Map Table (example)

Page Frame No.    Location (starting line)    Status
0                 0                           Busy
1                 100                         Busy
2                 200                         Free
3                 300                         Free
4                 400                         Free
5                 500                         Busy
6                 600                         Free
7                 700                         Free
8                 800                         Busy
9                 900                         Free
10                1000                        Busy
11                1100                        Busy
12                1200                        Free
Calculating page frame address and offset
 Compiler divides each job into pages:
• page 0: bytes 0 - 99
• page 1: bytes 100 - 199
• page 2: bytes 200 - 299
• page 3: bytes 300 - 399
 e.g., suppose the OS needs to access byte 214
 When accessing a byte, the OS calculates the page
number and offset from the byte’s job-space address
• page = byte address / page size => 214 / 100 => page 2
• offset = byte address % page size => 214 % 100 = 14
(% = modulo division)
Calculating page frame address and offset
 The OS refers to the job’s PMT to determine which page
frame in memory contains the requested page
page 2 => page frame 5
 Get the address of the beginning of the page frame by
multiplying the page frame number by the page frame size
addr_page_frame = page_frame_num * page_size
= 5 * 100 = 500
 Now add the offset to the starting address of the page frame
to compute the precise location in memory of the byte
instr_addr_in_mem = addr_page_frame + offset
= 500 + 14 = 514
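The four steps above can be sketched in Python (a minimal sketch; the function and variable names are illustrative, not from the text):

```python
def resolve_address(job_addr, page_size, pmt):
    """Translate a job-space (logical) address into a physical address.

    pmt maps page numbers to page frame numbers, standing in for the
    job's Page Map Table."""
    page = job_addr // page_size       # Step 1a: integer quotient -> page number
    offset = job_addr % page_size      # Step 1b: remainder -> displacement
    frame = pmt[page]                  # Step 2: look up the PMT
    frame_start = frame * page_size    # Step 3: beginning of the page frame
    return frame_start + offset        # Step 4: add the displacement

# Job 1 from the example: 100-byte pages, pages 0-3 in frames 8, 10, 5, 11
pmt_job1 = {0: 8, 1: 10, 2: 5, 3: 11}
print(resolve_address(214, 100, pmt_job1))  # byte 214 -> frame 5 -> 514
```

The same function reproduces the later 512-byte example: resolve_address(518, 512, {0: 5, 1: 3}) gives 1542.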
Calculating page frame address and offset

 Every time an instruction is executed (or data


value is used), this translation has to be done
 Calculation is normally done by HW
 The OS maintains the tables
 The process is called address resolution
Paged Allocation Example (512-byte pages)

Job 1 (address: instruction/data):
 000: BEGIN
 025: LOAD R1, 518
 518: 3792 (data value)

Main memory: page frames of 512 bytes each, starting at
addresses 0, 512, 1024, 1536, 2048, 2560, 3072, 3584, 4096, ...
 Frame 3 (starts at 1536): Job 1 – Page 1
 Frame 5 (starts at 2560): Job 1 – Page 0

PMT for Job 1

Page No.    Page Frame No.
0           5
1           3
Calculating page frame address and offset

 Compiler divides each job into pages:


• page 0: bytes 0 - 511
• page 1: bytes 512 -1023
 At address 25, the program needs to load into register 1
the data value at address 518
 When accessing a byte, the OS calculates the page
number and offset from the byte’s job-space address
• page = byte address / page size => 518 / 512 => page 1
• offset = byte address % page size => 518 % 512 = 6
Calculating page frame address and offset
 The OS refers to the job’s PMT and determines which
page frame in memory contains which job page
page 1 => page frame 3
 Get the address of the beginning of the page frame by
multiplying the page frame number by the page frame
size
addr_page_frame = page_frame_num *page_size
= 3 * 512 = 1536
 Now add the offset to the starting address of the page
frame to compute the precise location in memory of the
byte
instr_addr_in_mem = addr_page_frame + offset
= 1536 + 6 = 1542
Demand Paging Memory Allocation
• Loads only a part of the program into memory
– Removes restriction: entire program in memory
– Requires high-speed page access
• Exploits programming techniques
– Modules: written sequentially
• All pages: not needed simultaneously
– Examples
• Error-handling module instructions
• Mutually exclusive modules
• Certain program options: mutually exclusive or not
always accessible



Demand Paging (cont'd.)
• Virtual memory
– Appearance of vast amounts of physical memory
• Less main memory required than paged memory
allocation scheme
• Requires high-speed direct access storage devices
(DASDs): e.g., hard drives or flash memory
• Swapping: how and when pages passed between
memory and secondary storage
– Depends on predefined policies



Demand Paging Memory Allocation
(cont'd.)
• Algorithm implementation: tables, e.g., Job Table,
Page Map Table, and Memory Map Table
• Page Map Table
– First field: page requested already in memory?
– Second field: page contents modified?
– Third field: page referenced recently?
– Fourth field: frame number



(figure 3.5)
Demand paging requires that the Page Map Table for each job keep track of each page as it is loaded or removed from main memory. Each PMT tracks the status of the page, whether it has been modified, whether it has been recently referenced, and the page frame number for each page currently in main memory. (Note: For this illustration, the Page Map Tables have been simplified. See Table 3.3 for more detail.)
© Cengage Learning 2014



Demand Paging Memory Allocation
(cont'd.)
• Swapping process
– Resident memory page: exchanged with secondary
storage page
• Resident page: copied to disk (if modified)
• New page: written into available page frame
– Requires close interaction between:
• Hardware components
• Software algorithms
• Policy schemes



Demand Paging Memory Allocation
(cont'd.)
• Hardware components:
– Generate the address: required page
– Find the page number
– Determine page status: already in memory
• Page fault: failure to find page in memory
• Page fault handler: part of operating system
– Determines if empty page frames in memory
• Yes: requested page copied from secondary storage
• No: swapping (dependent on the predefined policy)



Demand Paging Memory Allocation
(cont'd.)
• Tables updated when page swap occurs
– PMT for both jobs (page swapped out; page swapped
in) and the MMT
• Thrashing
– Excessive page swapping: inefficient operation
– Main memory pages: removed frequently; called back
soon thereafter
– Occurs across jobs
• Large number of jobs: limited free pages
– Occurs within a job
• Loops crossing page boundaries



(figure 3.6)
An example of demand paging that causes a page swap each time the
loop is executed and results in thrashing. If only a single page frame is
available, this program will have one page fault each time the loop is
executed.
© Cengage Learning 2014



Page Replacement Policies
and Concepts
• Page replacement policy
– Crucial to system efficiency
• Two well-known algorithms
– First-in first-out (FIFO) policy
• Best page to remove: page in memory longest
– Least Recently Used (LRU) policy
• Best page to remove: page least recently accessed



First-In First-Out
• Removes page: longest in memory
• Failure rate
– Ratio of page interrupts to page requests
• More memory: does not guarantee better
performance



(figure 3.7)
First, Pages A and B are loaded into the two available page frames. When
Page C is needed, the first page frame is emptied so C can be placed there.
Then Page B is swapped out so Page A can be loaded there.
© Cengage Learning 2014



(figure 3.8)
Using a FIFO policy, this page trace analysis shows how each page requested
is swapped into the two available page frames. When the program is ready to
be processed, all four pages are in secondary storage. When the program
calls a page that isn’t already in memory, a page interrupt is issued, as shown
by the gray boxes and asterisks. This program resulted in nine page interrupts.
© Cengage Learning 2014
Least Recently Used
• Removes page: least recent activity
– Theory of locality
• Efficiency
– Additional main memory: causes either decrease in or
same number of interrupts
– Does not experience FIFO Anomaly (Belady Anomaly)



(figure 3.9)
Memory management using an LRU page removal policy for the program
shown in Figure 3.8. Throughout the program, 11 page requests are issued,
but they cause only 8 page interrupts.
© Cengage Learning 2014



Least Recently Used (cont'd.)
• Clock replacement variation
– Circular queue: pointer steps through active pages’
reference bits; simulates a clockwise motion
– Pace: computer’s clock cycle
• Bit-shifting variation
– 8-bit reference byte and bit-shifting technique: tracks
pages’ usage (currently in memory)



The Mechanics of Paging
• Page swapping
– Memory manager requires specific information:
Page Map Table

(table 3.3)
Page Map Table for Job 1 shown in Figure 3.5.
A 1 = Yes and 0 = No.
© Cengage Learning 2014



The Mechanics of Paging (cont'd.)
• Page Map Table: bit meaning
– Status bit: page currently in memory
– Referenced bit: page referenced recently
• Determines page to swap: LRU algorithm
– Modified bit: page contents altered
• Determines if page must be rewritten to secondary
storage when swapped out
• Bits checked when swapping
– FIFO: modified and status bits
– LRU: all bits (status, modified, and reference bits)
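A minimal sketch of the write-back decision these bits drive (the dict layout is illustrative, not the actual PMT format):

```python
def must_write_back(pmt_entry):
    """A swapped-out page is rewritten to secondary storage only if it
    is resident (status bit = 1) and was altered (modified bit = 1)."""
    return pmt_entry["status"] == 1 and pmt_entry["modified"] == 1

print(must_write_back({"status": 1, "modified": 1, "referenced": 1}))  # True
print(must_write_back({"status": 1, "modified": 0, "referenced": 1}))  # False
```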



(table 3.4)
The meaning of these bits used in the Page Map Table.
© Cengage Learning 2014

(table 3.5)
Four possible combinations of modified and referenced bits and the
meaning of each.
© Cengage Learning 2014



The Working Set
• Set of pages residing in memory: accessed directly
without incurring a page fault
– Demand paging schemes: improves performance
• Requires “locality of reference” concept
– Structured programs: only small fraction of pages
needed during any execution phase
• System needs definitive values:
– Number of pages comprising working set
– Maximum number of pages allowed for a working set
• Time-sharing and network systems
– Must track every working set’s size and identity
(figure 3.13)
Time line showing the amount of time required to process page faults for
a single program. The program in this example takes 120 milliseconds
(ms) to execute but an additional 900 ms to load the necessary pages
into memory. Therefore, job turnaround is 1020 ms.
© Cengage Learning 2014



Demand Paging
 Problem: Thrashing => when there is an
excessive amount of page swapping, the
operation becomes inefficient
 It is caused when a page is removed from
memory, but then called back shortly thereafter
 Thrashing can occur across jobs when a large
number of jobs are vying for relatively few
pages (the ratio of job pages to free
memory page frames is high)
 Or, it can happen within a job (eg, loops that
cross page boundaries)
Page replacement concepts and policies

 The page replacement policy is crucial to the


efficiency of the system
 Several algorithms exist - most widely used are
• FIFO = First-in First-out and
• LRU = Least-recently used

 Other page removal algorithms are:


• MRU = Most recently used
• LFU = Least frequently used
Page replacement concepts and policies
FIFO (First-in First-out)

 Used by Windows NT, OS/2


 The FIFO page replacement policy will remove
the pages that have been in memory the longest
 Pages are never swapped between page frames
 There is no guarantee that adding more memory
will result in better performance: FIFO anomaly
FIFO

Page requests:    A  B  A  C  A  B  D  B  A  C  D
Page interrupts:  *  *     *  *  *  *     *  *  *
Page frame 1:     A  A  A  C  C  B  B  B  A  A  D
Page frame 2:        B  B  B  A  A  D  D  D  C  C

Pages in secondary storage after each request
(initially A, B, C, D):
BCD, CD, CD, AD, BD, CD, AC, AC, BC, BD, AB
FIFO
 9 page interrupts for 11 page requests
 Failure rate:
= page interrupts / page requests = 9 / 11 = 82%
 Success rate:
 = successful page requests / page requests = 2/11 = 18%
 No guarantee that adding more memory will always result
in better performance (FIFO anomaly); in some cases,
performance can actually decrease
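The trace above can be checked with a short FIFO simulator (a sketch; the names are illustrative):

```python
from collections import deque

def fifo_faults(requests, num_frames):
    """Count page interrupts under a FIFO page replacement policy."""
    resident = set()
    arrival = deque()                            # pages in order of arrival
    faults = 0
    for page in requests:
        if page in resident:
            continue                             # a hit changes nothing under FIFO
        faults += 1
        if len(resident) == num_frames:
            resident.discard(arrival.popleft())  # evict the oldest page
        resident.add(page)
        arrival.append(page)
    return faults

print(fifo_faults(list("ABACABDBACD"), 2))  # 9 interrupts, as in the trace
```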
Least Recently Used (LRU)

 Used by Windows 95 / 98
 The LRU page replacement policy swaps
out the pages that show the least amount of
recent activity
LRU

Page requests:    A  B  A  C  A  B  D  B  A  C  D
Page interrupts:  *  *     *     *  *     *  *  *
Page frame 1:     A  A  A  A  A  A  D  D  A  A  D
Page frame 2:        B  B  C  C  B  B  B  B  C  C

Pages in secondary storage after each request
(initially A, B, C, D):
BCD, CD, CD, BD, BD, CD, AC, AC, CD, BD, AB
LRU

 8 page interrupts for 11 page requests


 Failure rate
= page interrupts / page requests = 8/11 = 73%
 Success rate
= successful page requests / page requests = 3/11 = 27%
 An increase in memory will cause either a decrease or
the same number of page interrupts. It will never cause
an increase in page interrupts
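The LRU trace can be checked the same way (a sketch; the names are illustrative):

```python
def lru_faults(requests, num_frames):
    """Count page interrupts under an LRU page replacement policy."""
    resident = []                      # ordered: least recently used first
    faults = 0
    for page in requests:
        if page in resident:
            resident.remove(page)      # refresh recency on a hit
        else:
            faults += 1
            if len(resident) == num_frames:
                resident.pop(0)        # evict the least recently used page
        resident.append(page)          # most recently used goes last
    return faults

print(lru_faults(list("ABACABDBACD"), 2))  # 8 interrupts, as in the trace
```

Adding a third frame can only lower (or keep) this count, consistent with LRU not exhibiting the FIFO anomaly.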
FIFO vs LRU (exercise)

Page requests: A  B  A  C  A  B  D  B  A  C  D  B

PF1:
PF2:
PF3:

• Perform a FIFO page replacement policy (hint: redraw the table and
use * to represent a page interrupt). Calculate the failure rate.
• Perform an LRU page replacement policy (hint: redraw the table and
use * to represent a page interrupt). Calculate the failure rate.
Mechanics of Paging

 Use of flags in the Page Map Table (PMT):


• FIFO uses only the status and the modified bit
• LRU uses the status, modified and referenced bit
 The OS has to reset the referenced bit periodically
PMT for Job 1

Job Page No.    Status    Modified    Referenced    Page Frame No.
0               1         1           1             5
1               1         0           0             9
2               1         0           0             7
3               1         1           0             12
Segmented Memory Allocation
• Each job divided into several segments: different
sizes
– One segment for each module: related functions
• Reduces page faults
– Loops: not split over two or more pages
• Main memory: allocated dynamically
• Program’s structural modules: determine segments
– Each segment numbered when program
compiled/assembled
– Segment Map Table (SMT) generated



(figure 3.14)
Segmented memory allocation. Job 1 includes a main program and two
subroutines. It is a single job that is structurally divided into three
segments of different sizes.
© Cengage Learning 2014



(figure 3.15)
The Segment Map Table tracks each segment for this job. Notice that Subroutine B has not yet been loaded into memory.
© Cengage Learning 2014



(figure 3.16)
During execution, the main program calls Subroutine A, which triggers the SMT to look up its location in memory.
© Cengage Learning 2014



Segmented Memory Allocation
(cont'd.)
• Memory Manager: tracks segments in memory
– Job Table: one for whole system
• Every job in process
– Segment Map Table: one for each job
• Details about each segment
– Memory Map Table: one for whole system
• Main memory allocation
• Instructions within each segment: ordered
sequentially
• Segments: not necessarily stored contiguously



Segmented Memory Allocation
(cont'd.)
• Two-dimensional addressing scheme
– Segment number and displacement
• Disadvantage
– External fragmentation
• Major difference between paging and segmentation
– Pages: physical units; invisible to the program
– Segments: logical units; visible to the program;
variable sizes



Segmented/Demand Paged
Memory Allocation
• Subdivides segments: equal-sized pages
– Smaller than most segments
– More easily manipulated than whole segments
– Segmentation’s logical benefits
– Paging’s physical benefits
• Segmentation problems removed
– Compaction, external fragmentation, secondary
storage handling
• Three-dimensional addressing scheme
– Segment number, page number (within segment), and
displacement (within page)
Segmented/Demand Paged
Memory Allocation (cont'd.)
• Scheme requires four tables
– Job Table: one for the whole system
• Every job in process
– Segment Map Table: one for each job
• Details about each segment
– Page Map Table: one for each segment
• Details about every page
– Memory Map Table: one for the whole system
• Monitors main memory allocation: page frames



(figure 3.17)
How the Job Table, Segment Map Table, Page Map Table, and main
memory interact in a segment/paging scheme.
© Cengage Learning 2014



Segmented/Demand Paged
Memory Allocation (cont'd.)
• Disadvantages
– Overhead: managing the tables
– Time required: referencing tables
• Associative memory
– Several registers allocated to each job
• Segment and page numbers: associated with main
memory
– Page request: initiates two simultaneous searches
• Associative registers
• SMT and PMT



Segmented/Demand Paged
Memory Allocation (cont'd.)
• Associative memory
– Primary advantage (large associative memory)
• Increased speed
– Disadvantage
• High cost of complex hardware



Segmented Memory Allocation
 Demand paging loads program pages (sectors) into
memory
 Segmented memory allocation loads logical groupings of
code (segments) into memory
 Modules may be of different sizes, and therefore the
memory segments will be of different sizes
 Pages, on the other hand, are all the same size, and a
particular module may have to be divided across several pages
 The segments can be stored anywhere in memory
 Only the contents of the segments are contiguous
 With memory segments, external fragmentation may
return
Segmented Memory Allocation (example)
Job 1 is structurally divided into three segments, each numbered
from byte 0:
 Segment 0: Main Program (bytes 0–349)
 Segment 1: Subroutine A (bytes 0–199)
 Segment 2: Subroutine B (bytes 0–99)
How does Segmented Memory Allocation work ?

 The compiler (assembler) sets up the segments


according to the program’s structural modules
 The OS needs 3 tables to manage the memory:
• JT, Job table (one for the whole system) lists
the jobs
• SMT, Segment map table, lists details about
each segment (one for each job)
• MMT, Memory map table, monitors the
allocation of main memory (one for the whole
system)
How does Segmented Memory Allocation work ?

• Elements of the Segment Map Table:

• Segment number
• Length of segment
• Access rights of the segment
• Status, referenced, modified
• Address in memory (if loaded)
Segment Map Table (example)

Segment No.         Size    Status    Access    Memory Address
0 (Main Program)    350     Y         E         4000
1 (Subroutine A)    200     Y         E         7000
2 (Subroutine B)    100     N         E         ______

Main memory layout:
 0: OS
 3000: empty
 4000: Main program (Segment 0)
 other programs
 7000: Subroutine A (Segment 1)
 other programs
Subroutine B (Segment 2) has not been loaded.
Segmented Memory Allocation
 To access a byte within a segment the OS
performs an operation similar to the one used
for paged memory management (segments
instead of pages)
 A 2-dimensional addressing scheme is used:
(segment number, offset)
 Segments are logical units that are visible to
the user’s program
 Pages are physical units that are invisible to the
user’s program
 Disadvantage: need for compaction because of
external fragmentation
Calculating segment address and offset
 Compiler divides each job into segments:
• Segment 0: bytes 0-349
• Segment 1: bytes 350 - 549
• Segment 2: bytes 550 - 649
 Suppose the main program needs to load into
register 1 the instruction at address 353
 When accessing a byte the OS determines from
the absolute address of a byte the segment and
calculates the offset
• Segment = 1
• Offset = 353 – 350 = 3
Calculating segment address and offset
 The OS refers to the job’s SMT to find the address
in memory at which the segment was loaded
• Segment 1 => address 7000
 Now the OS adds the offset to the starting address
of the segment to compute the precise location of
the byte in memory
• seg_start_addr + offset = actual_mem_loc
7000 + 3 = 7003
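The segment lookup can be sketched in Python (segment boundaries and the SMT values come from the example above; the names are illustrative):

```python
def resolve_segment_address(job_addr, segments, smt):
    """Translate a job-space address under segmentation.

    segments: (segment number, first job-space byte, size) triples
    smt: maps a segment number to its load address in main memory"""
    for seg, start, size in segments:
        if start <= job_addr < start + size:
            offset = job_addr - start          # displacement within the segment
            return smt[seg] + offset           # segment start + offset
    raise ValueError("address outside the job")

segs = [(0, 0, 350), (1, 350, 200), (2, 550, 100)]
smt = {0: 4000, 1: 7000}                        # Segment 2 not yet loaded
print(resolve_segment_address(353, segs, smt))  # segment 1, offset 3 -> 7003
```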
Memory Management Schemes
 Paged Memory Allocation
 Demand Paging
 Segmented Memory Allocation
 Segmented / Demand Paged Memory Allocation
Segmented / Demand Paged
Memory Allocation
 The segments are divided into pages of equal
size (smaller than the segments)
 The problems of external fragmentation, and
thus of compaction are solved
 Disadvantages:
• Table-handling overhead
• Memory needed for the management tables
(space)
• Internal fragmentation may return in last page
How does segmented / demand paged
memory allocation work ?
 The algorithms used by demand paging and
segmented memory management schemes
are applied with only minor modifications
 Modifications are necessary for the four
tables this scheme is using:
• JT, Job Table, lists every job
• MMT, Memory Map Table, monitors the
allocation of the page frames
How does segmented / demand paged
memory allocation work ?
• PMT, Page Map Table, lists details about every
page:
• page number
• page frame number
• status, modified, referenced
• SMT, Segment MapTable, lists details about
every segment:
• segment number
• length
• address of PMT
• access rights
Segmented / Paging Scheme
How does segmented / demand paged
memory allocation work ?
 To access a byte in memory the OS uses a 3-
dimensional addressing scheme:
(segment number, page number, offset)

Disadvantages:
 => memory overhead for tables
 => time overhead for referencing an address
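A minimal sketch of the three-level lookup (the nested-dict tables are illustrative, not the real SMT/PMT layout):

```python
def resolve_seg_paged(seg, page, offset, smt, page_size):
    """Segment number -> that segment's PMT; page number -> page frame;
    then frame start + offset gives the physical address."""
    pmt = smt[seg]                     # SMT entry points to the segment's PMT
    frame = pmt[page]                  # PMT entry gives the page frame
    return frame * page_size + offset

smt = {0: {0: 8, 1: 10}, 1: {0: 5}}   # made-up tables for illustration
print(resolve_seg_paged(1, 0, 14, smt, 512))  # frame 5 -> 5*512 + 14 = 2574
```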
How does segmented / demand paged
memory allocation work ?
 To speed up the access time many systems
are using associative memory
 This is a name given to several registers
that are allocated to each job that is active
 Their task is to associate several segments
and page numbers belonging to the active
job with their main memory addresses
 Working Set
 These associative registers reside in main
memory and the exact number varies from
system to system
How does segmented / demand
paged memory allocation work ?
 In general, when a job is allocated to the CPU, its
SMT is loaded into memory, while the PMTs are
loaded only as needed
 As pages are swapped, all tables are updated
 When a page is first requested, the SMT is searched to
locate its PMT
 The PMT is loaded and searched to determine the
page’s location in memory
 If the page isn’t in memory, a page interrupt is issued,
the page is loaded, and the table is updated
 The associative memory contains the information
related to the most-recently-used pages
How does segmented / demand paged
memory allocation work ?
 When further page requests are issued, two searches
begin:
• through associative memory
• through SMTs and PMTs
 If the search through associative memory is successful
then the search through the tables is stopped
 A system with 8 associative registers per job will, for
example, use them to store the SMT and PMT for the last
8 pages referenced by that job
 Details kept by associative memory might include
segment number, page number, page frame number, etc
 Disadvantage: Complex, expensive HW is required for
associative memory
Virtual Memory
• Made possible by swapping pages in/out of memory
• Program execution: only a portion of the program in
memory at any given moment
• Requires cooperation between:
– Memory Manager: tracks each page or segment
– Processor hardware: issues the interrupt and resolves
the virtual address



(table 3.6)
Comparison of the advantages and disadvantages of virtual memory with
paging and segmentation.
© Cengage Learning 2014



Virtual Memory (cont'd.)
• Advantages
– Job size: not restricted to size of main memory
– More efficient memory use
– Unlimited amount of multiprogramming possible
– Code and data sharing allowed
– Dynamic linking of program segments facilitated
• Disadvantages
– Higher processor hardware costs
– More overhead: handling paging interrupts
– Increased software complexity: prevent thrashing



Cache Memory
• Small, high-speed intermediate memory unit
• Computer system’s performance increased
– Faster processor access compared to main memory
– Stores frequently used data and instructions
• Cache levels
– L2: connected to CPU; contains copy of bus data
– L1: pair built into CPU; stores instructions and data
• Data/instructions: move between main memory and
cache
– Methods similar to paging algorithms



(figure 3.19)
Comparison of (a) the traditional path used by early computers between
main memory and the CPU and (b) the path used by modern computers
to connect the main memory and the CPU via cache memory.
© Cengage Learning 2014



Cache Memory (cont'd.)
• Four cache memory design factors
– Cache size, block size, block replacement algorithm,
and rewrite policy
• Optimal cache and replacement algorithm
– 80-90% of all requests in cache possible



Cache Memory (cont'd.)
• Cache hit ratio
– Percentage of total memory requests found in the cache
• Average memory access time
– (avg. cache access time × hit ratio) + (avg. main
memory access time × miss ratio)


Summary
• Operating system: Memory Manager
– Allocating memory storage: main memory, cache
memory, and registers
– Deallocating memory: execution completed



(table 3.7)
The big picture. Comparison of the memory allocation schemes discussed in
Chapters 2 and 3.
© Cengage Learning 2014



(table 3.7) (cont’d.)
The big picture. Comparison of the memory allocation schemes discussed in
Chapters 2 and 3.
© Cengage Learning 2014

