T3 - Memory Management-Virtual Memory

The document discusses virtual memory concepts including demand paging, segmentation, and cache memory. Virtual memory allows a process's logical address space to be larger than physical memory using techniques like paging and segmentation to map logical addresses to physical memory locations stored on disk when not in RAM.


• Virtual memory – separation of user logical memory from physical memory
• Only part of the program needs to be in memory for execution
• Logical address space can therefore be much larger than physical address space
• Allows address spaces to be shared by several processes
• Allows for more efficient process creation
• More programs running concurrently
• Less I/O needed to load or swap processes
Virtual Memory That is Larger Than Physical Memory

• Virtual address space – logical view of how a process is stored in memory
• Usually starts at address 0, with contiguous addresses until the end of the space
• Meanwhile, physical memory organized in page frames
• MMU must map logical to physical
• Virtual memory can be implemented via:
• Demand paging
• Demand segmentation
Paged Memory Allocation

 Divides each incoming job into pages of equal size
 Works well if page size, memory block size (page frames), and size of disk section (sector, block) are all equal
 Before executing a program, the Memory Manager:
– Determines the number of pages in the program
– Locates enough empty page frames in main memory
– Loads all of the program's pages into them
Paged memory allocation scheme for a job of 350 bytes

Programs that are too long to fit on a single page are split into equal-sized pages that can be stored in free page frames. In this example, each page frame can hold 100 bytes. Job 1 is 350 bytes long and is divided among four page frames, leaving internal fragmentation in the last page frame.

The simplified example in the image above shows how the Memory Manager keeps track of a program that is four pages long. To simplify the arithmetic, we've arbitrarily set the page size at 100 bytes. Job 1 is 350 bytes long and is being readied for execution.
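The arithmetic behind this example can be sketched in a few lines: with a 100-byte page size, a byte address within the job splits into a page number (integer division) and a displacement (remainder). This is a minimal illustration; the names are ours, not from the source.

```python
# Sketch: splitting the 350-byte Job 1 into 100-byte pages (example figures
# from the text above; function and variable names are illustrative).
import math

PAGE_SIZE = 100
JOB_SIZE = 350

# Number of pages needed; the last page is only partially used
num_pages = math.ceil(JOB_SIZE / PAGE_SIZE)          # 4 pages
internal_frag = num_pages * PAGE_SIZE - JOB_SIZE     # 50 bytes wasted in last frame

def page_of(address):
    """Split a byte address into (page number, displacement)."""
    return address // PAGE_SIZE, address % PAGE_SIZE

print(num_pages, internal_frag)   # 4 50
print(page_of(214))               # (2, 14): byte 214 is on page 2, offset 14
```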
Paged Memory Allocation (cont.)

 The Memory Manager requires three tables to keep track of the job's pages:
– Job Table (JT) contains information about
o Size of the job
o Memory location where its PMT is stored
– Page Map Table (PMT) contains information about
o Page number and its corresponding page frame memory address
– Memory Map Table (MMT) contains
o Location for each page frame
o Free/busy status
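To see how a PMT is used, here is a hedged sketch of resolving a logical address to a physical one through a Page Map Table lookup. The table contents below are invented for illustration, not taken from the source.

```python
# Illustrative PMT: page number -> page frame number (contents invented)
PAGE_SIZE = 100
pmt = {0: 8, 1: 10, 2: 5, 3: 11}   # Job 1's four pages scattered in memory

def logical_to_physical(address):
    """Resolve a logical byte address via the Page Map Table."""
    page, offset = address // PAGE_SIZE, address % PAGE_SIZE
    frame = pmt[page]                      # PMT lookup
    return frame * PAGE_SIZE + offset      # physical address

print(logical_to_physical(214))  # page 2 -> frame 5, so 5*100 + 14 = 514
```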
Demand Paging
• A demand paging system loads pages only on demand, not in advance
• Instead of swapping the entire process into memory, a lazy swapper is used
• A lazy swapper brings only the necessary pages into memory
• A swapper that deals with pages is called a pager
Demand Paging (cont.)

• Could bring the entire process into memory at load time
• Or bring a page into memory only when it is needed
• Less I/O needed, no unnecessary I/O
• Less memory needed
• Faster response
• More users
Page Table When Some Pages Are Not in Main Memory
When you choose one option from the menu of an application program such as this
one, the other modules that aren’t currently required (such as Help) don’t need to be
moved into memory immediately.

Demand paging requires that the Page Map Table for each job keep track of each page as it is loaded or removed from main memory. Each PMT tracks the status of the page, whether it has been modified, whether it has been recently referenced, and the page frame number for each page currently in main memory.
Swapping Process:
 To move in a new page, a resident page must be swapped back into secondary storage; this involves
– Copying the resident page to the disk (if it was modified)
– Writing the new page into the empty page frame
 Requires close interaction between hardware components, software algorithms, and policy schemes
Although demand paging is a solution to inefficient memory
utilization, it is not free of problems. When there is an excessive
amount of page swapping between main memory and secondary
storage, the operation becomes inefficient. This phenomenon is
called thrashing.
Thrashing: an excessive amount of page swapping between main memory and secondary storage
– Operation becomes inefficient
– Caused when a page is removed from memory but is called back shortly thereafter
– Can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages
– Can happen within a job (e.g., in loops that cross page boundaries)
Page fault: a failure to find a page in memory
Page Replacement Algorithms
In a computer operating system that uses paging for virtual memory management, page
replacement algorithms decide which memory pages to page out, sometimes called swap
out, or write to disk, when a page of memory needs to be allocated.
The policy that selects the page to be removed is crucial to system efficiency. Types of page replacement algorithms:

 First-in first-out (FIFO): removes the page that has been in memory the longest.
FIFO is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example: consider the page reference string 1, 3, 0, 3, 5, 6 with 3 page frames. Find the number of page faults.
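The example above can be checked with a short FIFO simulation (a sketch; the function name is ours):

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                      # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:            # page fault
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6], 3))  # 5 faults
```

Walking through it: 1, 3, 0 each fault into an empty frame, 3 hits, then 5 evicts 1 and 6 evicts 3, for 5 faults in total.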
 Least-recently-used (LRU): removes the page that has been least recently used, i.e., the page that has not been referenced for the longest time. This algorithm is the opposite of the optimal page replacement algorithm: instead of looking into the future, it looks at the past.

In the worked example from the accompanying figure: Number of Page Faults = 9, Number of Hits = 6.
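LRU can be simulated the same way by tracking recency. The reference string behind the 9-fault/6-hit figure is not reproduced in the text, so the sketch below reuses the earlier FIFO example string purely for illustration:

```python
def lru_counts(reference_string, num_frames):
    """Return (faults, hits) under LRU replacement."""
    frames = []                  # least recently used page at the front
    faults = hits = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            frames.remove(page)          # refresh this page's recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)            # evict the least recently used page
        frames.append(page)              # most recently used at the back
    return faults, hits

print(lru_counts([1, 3, 0, 3, 5, 6], 3))  # (5, 1): 5 faults, 1 hit
```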
SEGMENTATION
- In an operating system, segmentation is a memory management technique in which memory is divided into variable-size parts
- Each part is known as a segment, which can be allocated to a process
- The details about each segment are stored in a table called a segment table
Segmented Memory Allocation
- Each job is divided into several segments of different sizes, one
for each module that contains pieces to perform related functions
- Main memory is no longer divided into page frames, but rather allocated in a dynamic manner
- Segments are set up according to the program’s structural
modules when a program is compiled or assembled
Segment Table – maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:
 Base Address: the starting physical address where the segment resides in memory.
 Limit: the length of the segment.
Translation of a two-dimensional logical address to a one-dimensional physical address: the address generated by the CPU is divided into:
 Segment number (s): the number of bits required to represent the segment.
 Segment offset (d): the number of bits required to represent the size of the segment.
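The translation can be sketched as a base/limit lookup and check. The segment table contents below are invented for illustration:

```python
# Illustrative segment table: segment number -> (base address, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(s, d):
    """Map a logical address (segment s, offset d) to a physical address."""
    base, limit = segment_table[s]
    if d >= limit:                      # offset falls outside the segment
        raise MemoryError("segmentation fault: offset exceeds limit")
    return base + d

print(translate(1, 53))   # 6300 + 53 = 6353
```

The limit check is what lets segmentation trap out-of-bounds references before they touch another process's memory.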
Advantages of Segmentation –
 No internal fragmentation.
 The segment table consumes less space than the page table in paging.
Disadvantage of Segmentation –
 As processes are loaded and removed from memory, the free memory space is broken into little pieces, causing external fragmentation.
Segmented/Demand Page Memory Allocation

• Subdivides segments into equal-sized pages
– Smaller than most segments
– More easily manipulated than whole segments
– Segmentation's logical benefits
– Paging's physical benefits
• Segmentation problems removed
– Compaction, external fragmentation, secondary storage handling
• Three-dimensional addressing scheme
– Segment number, page number (within segment), and displacement (within page)
• Scheme requires four tables
– Job Table: one for the whole system
• Every job in process
– Segment Map Table: one for each job
• Details about each segment
– Page Map Table: one for each segment
• Details about every page
– Memory Map Table: one for the whole system
• Monitors main memory allocation: page frames

How the Job Table, Segment Map Table, Page Map Table, and main memory interact in a segment/paging scheme.
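The three-dimensional (segment, page, displacement) lookup can be sketched by chaining a Segment Map Table to per-segment Page Map Tables. All table contents below are invented for illustration:

```python
PAGE_SIZE = 100

# Segment Map Table: segment number -> that segment's Page Map Table.
# Each PMT maps page number -> page frame number (contents invented).
smt = {
    0: {0: 7, 1: 2},     # segment 0 has two pages
    1: {0: 9},           # segment 1 has one page
}

def translate(segment, page, displacement):
    """Resolve (segment, page, displacement) via two table lookups."""
    frame = smt[segment][page]
    return frame * PAGE_SIZE + displacement

print(translate(0, 1, 30))  # frame 2 -> 2*100 + 30 = 230
```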
Cache Memory

• Small, high-speed intermediate memory unit
• Increases the computer system's performance
– Faster processor access compared to main memory
– Stores frequently used data and instructions
• Cache levels
– L2: connected to the CPU; contains a copy of bus data
– L1: a pair built into the CPU; stores instructions and data
• Data/instructions move between main memory and cache
– Using methods similar to paging algorithms
Comparison of (a) the traditional path used by
early computers between main memory and
the CPU and (b) the path used by modern
computers to connect the main memory and
the CPU via cache memory.
• Four cache memory design factors
– Cache size, block size, block replacement algorithm, and rewrite policy
• An optimal cache and replacement algorithm can satisfy 80–90% of all requests in the cache
• Cache hit ratio – the percentage of memory requests satisfied by the cache
• Average memory access time = (cache access time × hit ratio) + (main memory access time × (1 − hit ratio))
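The average memory access time formula can be evaluated directly. The timing figures below are illustrative, not from the source:

```python
def avg_access_time(cache_time, memory_time, hit_ratio):
    """AMAT = hit_ratio * cache_time + (1 - hit_ratio) * memory_time.
    A simple weighted-average model; timing inputs are illustrative."""
    return hit_ratio * cache_time + (1 - hit_ratio) * memory_time

# Illustrative figures: 10 ns cache, 100 ns main memory, 90% hit ratio
print(avg_access_time(10, 100, 0.9))  # 19.0 ns
```

Even a modest hit ratio moves the average far closer to cache speed than to main memory speed, which is why a small cache pays off.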
