Unit-4 Memory MT

The document covers various memory management techniques, including swapping, contiguous memory allocation, segmentation, and paging. It discusses the structure of page tables, address translation, and the concept of virtual memory, emphasizing demand paging and page replacement algorithms. Additionally, it addresses fragmentation, allocation of frames, and kernel memory allocation strategies.


Main Memory

Chapter 8: Memory Management


 Background
 Swapping
 Contiguous Memory Allocation
 Segmentation
 Paging
 Structure of the Page Table
Base and Limit Registers
 A pair of base and limit registers defines the logical address space
 CPU must check every memory access generated in user mode to be sure it is between base and limit for that user
Swapping

 A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
 Backing store – fast disk large enough to accommodate copies of all memory images for all users
 Roll out, roll in – swapping variant used for priority-based scheduling algorithms
Schematic View of Swapping
Contiguous Allocation

 Main memory must support both OS and user processes
 Contiguous allocation is one early method
 Main memory is usually divided into two partitions:
 Resident operating system, usually held in low memory with the interrupt vector
 User processes then held in high memory
 Each process contained in a single contiguous section of memory
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?

 First-fit: Allocate the first hole that is big enough
 Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
 Produces the smallest leftover hole
 Worst-fit: Allocate the largest hole; must also search entire list
 Produces the largest leftover hole

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
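The first-fit and best-fit strategies above can be sketched over a free-hole list. This is a minimal sketch, assuming each hole is a `(start, size)` pair; the helper names are illustrative, not from the text.

```python
def first_fit(holes, n):
    """Return the start of the first hole of size >= n, or None."""
    for i, (start, size) in enumerate(holes):
        if size >= n:
            # Allocate the front of the hole; keep or drop the leftover.
            if size > n:
                holes[i] = (start + n, size - n)
            else:
                holes.pop(i)
            return start
    return None


def best_fit(holes, n):
    """Return the start of the smallest hole of size >= n, or None."""
    candidates = [(size, i) for i, (_, size) in enumerate(holes) if size >= n]
    if not candidates:
        return None
    _, i = min(candidates)          # smallest adequate hole
    start, size = holes[i]
    if size > n:
        holes[i] = (start + n, size - n)
    else:
        holes.pop(i)
    return start
```

For a request of 40 from holes at (0, 100), (200, 50), (300, 500), first-fit takes the hole at 0, while best-fit takes the hole at 200 and leaves the smallest leftover.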
Fragmentation
 External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous
 Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used
Paging
 Physical address space of a process can be noncontiguous; process is allocated physical memory whenever the latter is available
 Avoids external fragmentation
 Divide physical memory into fixed-sized blocks called frames
 Size is a power of 2, between 512 bytes and 16 Mbytes
 Divide logical memory into blocks of the same size called pages
 To run a program of size N pages, need to find N free frames and load program
 Set up a page table to translate logical to physical addresses

Address Translation Scheme
 Address generated by CPU is divided into:
 Page number (p) – used as an index into a page table which contains the base address of each page in physical memory
 Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit

| page number p (m – n bits) | page offset d (n bits) |

 For a given logical address space of size 2^m and page size 2^n
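The split into (p, d) and the final physical address can be sketched in a few lines. This assumes m = 16 address bits, n = 10 offset bits (1 KB pages), and a made-up page table; none of these values come from the text.

```python
N = 10                         # offset bits -> page size 2^n
PAGE_SIZE = 1 << N             # 1024 bytes

page_table = {0: 5, 1: 2, 2: 7}    # page -> frame (illustrative mapping)


def translate(logical):
    p = logical >> N               # page number: high m - n bits
    d = logical & (PAGE_SIZE - 1)  # page offset: low n bits
    frame = page_table[p]          # look up the frame for this page
    return frame * PAGE_SIZE + d   # frame base + offset
```

For example, logical address 1044 is page 1, offset 20; with page 1 in frame 2, the physical address is 2 × 1024 + 20 = 2068.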


Paging Model of Logical and Physical Memory
Paging Example
Free Frames

Before allocation After allocation


Paging Hardware
Structure of the Page Table
Hierarchical Page Tables
 Break up the logical address space into multiple page
tables
 A simple technique is a two-level page table
 We then page the page table
Two-Level Page-Table Scheme
Two-Level Paging Example
 A logical address (on a 32-bit machine with 1K page size) is divided into:
 a page number consisting of 22 bits
 a page offset consisting of 10 bits
 Since the page table is paged, the page number is further divided into:
 a 12-bit outer page number (p1)
 a 10-bit inner page number (p2)
 Thus, a logical address is as follows:

| p1 (12 bits) | p2 (10 bits) | offset d (10 bits) |

 where p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table
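The bit-slicing in this example can be sketched directly: 12 bits of p1, 10 bits of p2, and a 10-bit offset add up to the 32-bit address.

```python
def split(addr):
    """Split a 32-bit logical address into (p1, p2, d) per the example."""
    p1 = addr >> 20              # top 12 bits: index into the outer page table
    p2 = (addr >> 10) & 0x3FF    # next 10 bits: index into the inner page table
    d = addr & 0x3FF             # low 10 bits: offset within the page
    return p1, p2, d
```

For instance, an address assembled as (3 << 20) | (5 << 10) | 7 splits back into p1 = 3, p2 = 5, d = 7.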
Address-Translation Scheme
Segmentation
 Memory-management scheme that supports user view of memory
 A program is a collection of segments
 A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
User’s View of a Program
Logical View of Segmentation

(Figure: segments 1–4 of the user space are mapped to noncontiguous regions of physical memory.)


Segmentation Architecture
 Logical address consists of a two tuple: <segment-number, offset>
 Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
 base – contains the starting physical address where the segment resides in memory
 limit – specifies the length of the segment
 Segment-table base register (STBR) points to the segment table's location in memory
 Segment-table length register (STLR) indicates number of segments used by a program; segment number s is legal if s < STLR
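The base/limit check described above can be sketched as follows. The table contents and the use of `MemoryError` to stand in for a hardware trap are assumptions for illustration.

```python
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit) entries
STLR = len(segment_table)          # number of segments in use


def translate(s, d):
    """Map <segment-number s, offset d> to a physical address."""
    if s >= STLR:                  # segment number out of range -> trap
        raise MemoryError("trap: invalid segment number")
    base, limit = segment_table[s]
    if d >= limit:                 # offset beyond segment length -> trap
        raise MemoryError("trap: offset out of bounds")
    return base + d                # one-dimensional physical address
```

With this table, <2, 53> maps to 4300 + 53 = 4353, while <1, 400> traps because the offset equals the segment's limit.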
Segmentation Hardware
Chapter 10: Virtual Memory
 Background
 Demand Paging
 Page Replacement
 Allocation of Frames
 Thrashing
 Operating System Examples
Background
 Virtual memory – separation of user logical memory from physical memory
 Only part of the program needs to be in memory for execution
 Logical address space can therefore be much larger than physical address space
 Allows address spaces to be shared by several processes
Virtual Memory That is Larger Than Physical Memory
Demand Paging
 A demand-paging system is similar to a paging system with swapping
 A lazy swapper is used: it never swaps a page into memory unless that page will be needed
 A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process
 Page is needed ⇒ reference to it
 invalid reference ⇒ abort
 not-in-memory ⇒ bring to memory
Transfer of a Paged Memory to Contiguous Disk Space
Page Table When Some Pages Are Not in Main Memory
Page Fault

 If there is ever a reference to a page that is not in memory, the first reference will trap to the OS ⇒ page fault
 OS looks at another table to decide:
 Invalid reference ⇒ abort
 Just not in memory ⇒ service the fault:
 Get empty frame
 Swap page into frame
 Reset tables, validation bit = 1
 Restart the instruction that caused the fault; special care is needed for instructions that modify state as they execute, such as:
 block move
 auto increment/decrement location

Steps in Handling a Page Fault
What happens if there is no free frame?

 Page replacement – find some page in memory, but not really in use, and swap it out
 algorithm
 performance – want an algorithm which will result in the minimum number of page faults
 Same page may be brought into memory several times


Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on disk.

2. Find a free frame:


- If there is a free frame, use it.
- If there is no free frame, use a page replacement
algorithm to select a victim frame.

3. Read the desired page into the (newly) free frame. Update the
page and frame tables.

4. Restart the process.


Page Replacement
Page Replacement Algorithms
 We must solve two major problems in demand paging by developing:
 A frame-allocation algorithm – how many frames are allocated to each process
 A page-replacement algorithm – which frames are selected to be replaced
 Which algorithm? We want the lowest page-fault rate
 Evaluate an algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string
 Reference strings can be generated by a random generator or taken from a system memory-reference record
 In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
First-In-First-Out (FIFO) Algorithm
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 3 frames (3 pages can be in memory at a time per process): 9 page faults
 4 frames: 10 page faults
 Note that adding a frame increased the number of page faults (Belady's anomaly)
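The FIFO counts above can be reproduced with a small simulator; this is a sketch, with the function name chosen for illustration.

```python
from collections import deque


def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement over a reference string."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                   # hit: no replacement needed
        faults += 1
        if len(frames) == nframes:
            victim = queue.popleft()   # oldest resident page is replaced
            frames.remove(victim)
        frames.add(page)
        queue.append(page)
    return faults


refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# fifo_faults(refs, 3) -> 9; fifo_faults(refs, 4) -> 10 (Belady's anomaly)
```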
FIFO Page Replacement
Optimal Algorithm
 Replace the page that will not be used for the longest period of time
 4 frames example with reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults
 How do you know this? (it requires knowledge of future references)
 Used for measuring how well your algorithm performs
Optimal Page Replacement
Least Recently Used (LRU) Algorithm
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 – with 4 frames, 8 page faults
 Counter implementation
 Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter
 When a page needs to be replaced, look at the counters to find the page with the smallest (oldest) value
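The counter implementation can be sketched with a logical clock; this is an illustrative sketch, not the hardware mechanism itself.

```python
def lru_faults(refs, nframes):
    """Count page faults for LRU replacement using per-page counters."""
    last_used, faults = {}, 0          # page -> clock value at last reference
    for clock, page in enumerate(refs):
        if page not in last_used:
            faults += 1
            if len(last_used) == nframes:
                # Victim is the page with the smallest (oldest) counter.
                victim = min(last_used, key=last_used.get)
                del last_used[victim]
        last_used[page] = clock        # copy the clock into the counter
    return faults


refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# lru_faults(refs, 4) -> 8
```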
LRU Page Replacement
Allocation of Frames
 Each process needs a minimum number of pages
 Two major allocation schemes:
 fixed allocation
 proportional allocation
Allocation of Frames
 Equal allocation – e.g., if 100 frames and 5 processes, give each process 20 frames
 Proportional allocation – allocate according to the size of the process:

  s_i = size of process p_i
  S = Σ s_i
  m = total number of frames
  a_i = allocation for p_i = (s_i / S) × m

  Example: m = 64, s_1 = 10, s_2 = 127
  a_1 = (10 / 137) × 64 ≈ 5
  a_2 = (127 / 137) × 64 ≈ 59
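The proportional-allocation arithmetic in the example works out as follows; rounding to whole frames is an assumption here, since a fraction of a frame cannot be allocated.

```python
m = 64                    # total number of frames
sizes = [10, 127]         # s_1, s_2
S = sum(sizes)            # 137

# a_i = (s_i / S) * m, rounded to whole frames
alloc = [round(s / S * m) for s in sizes]
# alloc -> [5, 59]
```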
Global vs. Local Allocation

 Global replacement – process selects a replacement frame from the set of all frames; one process can take a frame from another
 Local replacement – each process selects from only its own set of allocated frames
Thrashing
 If a process does not have “enough” pages, the page-fault rate is very high. This leads to:
 low CPU utilization
 operating system thinks that it needs to increase the degree of multiprogramming
 another process added to the system
 Thrashing ≡ a process is busy swapping pages in and out


Allocating Kernel Memory

 When a process executing in user mode requests additional memory, the kernel allocates pages from the list of free page frames
 The process by which the kernel of the operating system allocates memory for its internal operations and data structures is called kernel memory allocation
 Allocating kernel memory is a critical task; therefore, it must be performed correctly and efficiently
Buddy Memory Allocation System
 Memory is allocated in blocks whose sizes are powers of 2
 In the buddy system, whenever a request is made for memory allocation, the memory allocator finds a block of appropriate size
 If the block found is larger than required, it is repeatedly divided into two smaller “buddy” blocks until a block of the desired size is obtained
 Once a block of the desired size is found, the allocator marks it as allocated and returns a pointer to the requesting process
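The splitting step can be sketched as below. The maximum and minimum block sizes, and the fact that only the chosen block size is returned (rather than a real free-list update), are assumptions for illustration.

```python
def buddy_block_size(request, max_block=1024, min_block=64):
    """Return the power-of-2 block size handed out for `request` bytes."""
    size = max_block
    # Halve the block while the smaller buddy still covers the request
    # and does not go below the minimum block size.
    while size // 2 >= request and size // 2 >= min_block:
        size //= 2
    return size
```

For example, a 100-byte request is satisfied from a 128-byte block (1024 → 512 → 256 → 128), while a 300-byte request stops at 512.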
