Module 4

Base and Limit Registers

 A pair of base and limit registers define the logical address space.
 The CPU must check every memory access generated in user mode to be sure it is between base and limit for that user.
Hardware Address Protection
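A minimal sketch (not from the slides) of the base/limit check that the hardware performs on every user-mode access; the base and limit values are hypothetical, and the trap is simulated by exiting the program.

```c
/* Sketch of hardware address protection with base and limit registers. */
#include <stdio.h>
#include <stdlib.h>

typedef unsigned long addr_t;

static addr_t base  = 300040;   /* start of this process's partition (assumed) */
static addr_t limit = 120900;   /* size of the partition (assumed)             */

/* Returns the address if it is legal, otherwise "traps" to the OS. */
addr_t check_access(addr_t address)
{
    if (address < base || address >= base + limit) {
        fprintf(stderr, "trap: addressing error at %lu\n", address);
        exit(EXIT_FAILURE);          /* hardware would trap to the OS */
    }
    return address;                  /* access is allowed to proceed  */
}

int main(void)
{
    printf("%lu is legal\n", check_access(300041));   /* inside the range */
    check_access(299999);                             /* below base: trap */
    return 0;
}
```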
Logical vs. Physical Address Space
 The concept of a logical address space that is bound
to a separate physical address space is central to
proper memory management
 Logical address – generated by the CPU; also
referred to as virtual address
 Physical address – address seen by the memory unit
 Logical address space is the set of all logical
addresses generated by CPU
 Physical address space is the set of all physical
addresses generated by MMU
Dynamic relocation using a Relocation Register
The base register is now called the relocation register; the value in the relocation register is added to every address generated by a user process at the time it is sent to memory (see the sketch below).
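A minimal sketch (with assumed values) of dynamic relocation: the MMU adds the relocation register to each logical address, so the program never sees physical addresses.

```c
/* Sketch of dynamic relocation using a relocation (base) register. */
#include <stdio.h>

typedef unsigned long addr_t;

static addr_t relocation_register = 14000;   /* partition start (assumed) */

addr_t relocate(addr_t logical)
{
    return logical + relocation_register;    /* physical = logical + base */
}

int main(void)
{
    /* logical address 346 maps to physical address 14346 */
    printf("logical 346 -> physical %lu\n", relocate(346));
    return 0;
}
```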
Dynamic Loading
 Without dynamic loading, the entire program and all data of a process must be in physical memory for execution.
 The size of a process is therefore limited to the size of physical memory.
 With dynamic loading,
 A routine is not loaded until it is called.
 All routines are kept on disk in a relocatable load format.
 The main program is loaded into main memory first and executed.
 When another routine is needed, the relocatable linking loader is called to load the desired routine.
 The program's address tables are updated to reflect this change.
 Dynamic loading is useful when large amounts of code are needed to handle infrequently occurring cases.
 No special support from the operating system is required; it is implemented through program design.
 The OS can help by providing libraries to implement dynamic loading (see the sketch below).
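As one concrete way (not prescribed by these slides) to load a routine only when it is first needed, the sketch below uses the POSIX dlopen()/dlsym() interface; the library name libstats.so and the routine compute_stats are hypothetical, and the program is linked with -ldl.

```c
/* Sketch of loading a routine on demand with POSIX dlopen()/dlsym(). */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* The routine is not loaded until this point, i.e. until it is needed. */
    void *handle = dlopen("./libstats.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine's address in the newly loaded module. */
    double (*compute_stats)(const double *, int) =
        (double (*)(const double *, int)) dlsym(handle, "compute_stats");
    if (!compute_stats) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    double data[] = { 1.0, 2.0, 3.0 };
    printf("result = %f\n", compute_stats(data, 3));

    dlclose(handle);
    return 0;
}
```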
Dynamic Linking
 Static linking – system libraries and program code are combined by the loader into the binary program image.
 Dynamic linking – linking is postponed until execution time.
 This is used with system libraries, such as language subroutine libraries.
 A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
 When a stub is executed, it checks whether the needed routine is already in memory, and loads it if it is not.
 Either way, the stub replaces itself with the address of the routine and executes the routine (see the sketch below).
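A minimal sketch (not from the slides) of the stub idea: the first call goes through a stub that resolves the routine and replaces itself, modelled here with a function pointer; all names are hypothetical.

```c
/* Sketch of a dynamic-linking stub that patches itself on first use. */
#include <stdio.h>

static void real_library_routine(void)
{
    printf("running the memory-resident library routine\n");
}

static void stub(void);                   /* forward declaration           */
static void (*lib_entry)(void) = stub;    /* call site initially hits stub */

static void stub(void)
{
    /* In a real system the stub would ask the loader to bring the library
     * into memory if it is not already resident. */
    printf("stub: resolving routine, patching the entry point\n");
    lib_entry = real_library_routine;     /* stub replaces itself          */
    lib_entry();                          /* and executes the routine      */
}

int main(void)
{
    lib_entry();   /* first call: goes through the stub  */
    lib_entry();   /* later calls: direct to the routine */
    return 0;
}
```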
Swapping
 A process can be swapped temporarily out of memory
to a backing store, and then brought back into
memory for continued execution.
 Total physical memory space of processes can exceed
physical memory.
 Backing store – fast disk large enough to
accommodate copies of all memory images for all
users; must provide direct access to these memory
images.
 Roll out, roll in – swapping variant used for priority-
based scheduling algorithms; lower-priority process is
swapped out so higher-priority process can be loaded
and executed.
 Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped (for example, swapping a 100 MB process to a disk with a 50 MB/s transfer rate takes roughly 2 seconds each way).
Schematic View of Swapping
Contiguous Allocation
 Multiple-partition allocation
 Degree of multiprogramming limited by number of partitions
 Hole – block of available memory; holes of various size are scattered
throughout memory
 When a process arrives, it is allocated memory from a hole large enough
to accommodate it
 Process exiting frees its partition, adjacent free partitions combined
 Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
Dynamic Storage-Allocation Problem

 First-fit: Allocate the first hole that is big enough.
 Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless the list is ordered by size.
 Produces the smallest leftover hole.
 Worst-fit: Allocate the largest hole; must also search the entire list.
 Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization; a first-fit sketch follows below.
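A minimal sketch (not from the slides) of first-fit allocation over a list of free holes; the hole and request sizes are arbitrary illustrative values.

```c
/* Sketch of first-fit allocation over a free-hole list. */
#include <stdio.h>

#define NHOLES 5

/* size of each free hole, in KB */
static int hole_size[NHOLES] = { 100, 500, 200, 300, 600 };

/* First-fit: return the index of the first hole big enough, or -1. */
int first_fit(int request)
{
    for (int i = 0; i < NHOLES; i++) {
        if (hole_size[i] >= request) {
            hole_size[i] -= request;   /* leftover stays as a smaller hole */
            return i;
        }
    }
    return -1;                         /* no hole large enough */
}

int main(void)
{
    int requests[] = { 212, 417, 112, 426 };
    for (int i = 0; i < 4; i++) {
        int h = first_fit(requests[i]);
        if (h >= 0)
            printf("request %d KB -> hole %d\n", requests[i], h);
        else
            printf("request %d KB must wait\n", requests[i]);
    }
    return 0;
}
```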
Memory Management
 Internal Fragmentation
 External Fragmentation
 Compaction
External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous.

Internal Fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition, but not being used.
COMPACTION
 Compaction is to shuffle memory contents so as to place all free memory together in one large block.
 It combines all the free areas into one contiguous area.
(Figure: memory contents before and after compaction, showing the OS, Process 3, and Process 4 with free memory gathered into one block.)
 Fragmentation can be reduced by
 Compaction
 Paging
 Segmentation
PAGING

 Paging allows the physical address space of a process to be noncontiguous.


Paging Technique
 Address generated by CPU is divided into:
 Page number (p) – used as an index into a page table
which contains base address of each page in physical
memory
 Page offset (d) – is the displacement within the page

 For a given logical address space of size 2^m and page size 2^n, the number of pages is 2^(m-n).
E.g., for a logical address space of 2^5 and a page size of 2^2, the number of pages is 2^3 = 8 (a sketch of the address split follows below).
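A minimal sketch (not from the slides) of splitting a logical address into page number and offset for a page size of 2^n bytes; the page size and address are assumed values.

```c
/* Sketch of extracting page number (p) and page offset (d). */
#include <stdio.h>

#define N_BITS 12                       /* page size = 2^12 = 4096 bytes  */
#define PAGE_SIZE (1u << N_BITS)
#define OFFSET_MASK (PAGE_SIZE - 1)

int main(void)
{
    unsigned int logical = 0x12345;     /* hypothetical logical address   */

    unsigned int page   = logical >> N_BITS;      /* high-order m-n bits  */
    unsigned int offset = logical & OFFSET_MASK;  /* low-order n bits     */

    printf("logical 0x%x -> page %u, offset 0x%x\n", logical, page, offset);
    /* The page number indexes the page table, which gives the frame number;
     * physical address = frame_number * PAGE_SIZE + offset. */
    return 0;
}
```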
PAGING HARDWARE
Free Frames

(Figure: the free-frame list before allocation and after allocation.)


Paging Implementation
Implementation of the Page Table:

1. The page table is kept in main memory.

2. A page-table base register (PTBR) points to the page table.

3. A page-table length register (PTLR) indicates the size of the page table.

4. In this scheme every data/instruction access requires two memory accesses (one for the page table, one for the data), so a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB) is used; a sketch follows below.
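A minimal sketch (not from the slides) of address translation with a tiny TLB in front of an in-memory page table; the table contents, TLB size, and FIFO replacement policy are all assumptions for illustration.

```c
/* Sketch of paging translation with a small TLB. */
#include <stdio.h>

#define TLB_ENTRIES 4
#define N_PAGES     16
#define PAGE_SIZE   4096u

static int page_table[N_PAGES] = { 5, 6, 1, 2, 9, 0, 3, 4,
                                   7, 8,10,11,12,13,14,15 };

struct tlb_entry { int valid, page, frame; };
static struct tlb_entry tlb[TLB_ENTRIES];   /* all invalid initially   */
static int next_slot;                        /* simple FIFO replacement */

unsigned int translate(unsigned int page, unsigned int offset)
{
    /* 1. Check the TLB (done in parallel, associatively, in hardware). */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == (int)page)
            return tlb[i].frame * PAGE_SIZE + offset;      /* TLB hit  */

    /* 2. TLB miss: consult the page table in main memory. */
    int frame = page_table[page];

    /* 3. Cache the translation for future references. */
    tlb[next_slot] = (struct tlb_entry){ 1, (int)page, frame };
    next_slot = (next_slot + 1) % TLB_ENTRIES;

    return frame * PAGE_SIZE + offset;
}

int main(void)
{
    printf("physical = %u\n", translate(3, 100));   /* miss, then cached */
    printf("physical = %u\n", translate(3, 200));   /* TLB hit           */
    return 0;
}
```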
Paging Hardware With TLB

Translation look-aside buffer (TLB)


Protection
 Memory protection is implemented by associating a protection bit with each frame.
 A valid–invalid bit is attached to each entry in the page table:
 “valid” indicates that the associated page is in the process's logical address space, and is thus a legal page.
 “invalid” indicates that the page is not in the process's logical address space (a small sketch follows below).
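A minimal sketch (not from the slides) of a page-table entry carrying a valid–invalid bit and a read-only protection bit, checked before every access; the field widths and values are assumptions.

```c
/* Sketch of per-entry protection and valid-invalid bits. */
#include <stdio.h>

struct pte {
    unsigned int frame     : 20;   /* frame number                      */
    unsigned int valid     : 1;    /* 1 = page is in the address space  */
    unsigned int read_only : 1;    /* protection bit for this frame     */
};

/* Returns 0 if the access is legal, -1 if it must trap to the OS. */
int check_access(const struct pte *e, int is_write)
{
    if (!e->valid)
        return -1;                 /* invalid page: trap (illegal address) */
    if (is_write && e->read_only)
        return -1;                 /* protection violation: trap           */
    return 0;
}

int main(void)
{
    struct pte legal   = { .frame = 7, .valid = 1, .read_only = 1 };
    struct pte illegal = { .frame = 0, .valid = 0, .read_only = 0 };

    printf("read  legal page:   %s\n", check_access(&legal, 0)   ? "trap" : "ok");
    printf("write legal page:   %s\n", check_access(&legal, 1)   ? "trap" : "ok");
    printf("read  invalid page: %s\n", check_access(&illegal, 0) ? "trap" : "ok");
    return 0;
}
```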
Shared Pages
Segmentation
• Segmentation is a memory management technique in which memory is divided into variable-sized parts. Each part is known as a segment, which can be allocated to a process.
• Segment number (s): identifies the segment; its width is the number of bits needed to name every segment.
• Segment offset (d): the displacement within the segment; its width is the number of bits needed to address the largest segment.
• Base Address: contains the starting physical address where the segment resides in memory.
• Segment Limit: specifies the length of the segment; an offset must be less than the limit (see the translation sketch after this list).
• Segment Table Base Register
– Location of the segment table
• Segment Table Length Register
- no. of segments used by a program
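A minimal sketch (not from the slides) of segment-table address translation: the offset is checked against the segment limit and then added to the base; the table contents are arbitrary illustrative values.

```c
/* Sketch of segmentation address translation with a limit check. */
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned long base, limit; };

static struct segment segment_table[] = {
    { 1400, 1000 },   /* segment 0 */
    { 6300,  400 },   /* segment 1 */
    { 4300,  400 },   /* segment 2 */
};

unsigned long translate(unsigned int s, unsigned long d)
{
    if (d >= segment_table[s].limit) {            /* offset beyond segment */
        fprintf(stderr, "trap: addressing error (s=%u, d=%lu)\n", s, d);
        exit(EXIT_FAILURE);
    }
    return segment_table[s].base + d;             /* physical address */
}

int main(void)
{
    printf("(2, 53)  -> %lu\n", translate(2, 53));    /* 4300 + 53 = 4353 */
    printf("(1, 399) -> %lu\n", translate(1, 399));   /* legal            */
    translate(0, 1222);                               /* traps: 1222 >= 1000 */
    return 0;
}
```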
Segmented Paging
Virtual memory
1. Virtual memory is a technique that allows the execution of processes that may not be completely in main memory.
 It is a storage allocation scheme in which secondary
memory can be addressed as though it were part of the main
memory.
 Only part of the program needs to be in memory for
execution.
 Logical address space can therefore be much larger than
physical address space.
 Allows address spaces to be shared by several processes.
 Allows for more efficient process creation.
2. Virtual memory can be implemented through:
 Demand paging
 Demand segmentation
Virtual Memory Larger Than Physical Memory
Demand Paging
 Entire process cannot be loaded
into memory at once
 Page is loaded only when it is
needed

 Page is needed ⇒ reference it
 Not in memory ⇒ bring it into memory

 Lazy swapper – never swaps a


page into memory unless page
will be needed
 Swapper that deals with pages is
a pager
FRAME ALLOCATION ALGORITHMS
 Equal Allocation:
The easiest way to split m frames among n processes is
to share equally – m/n frames.
 Proportional Allocation:
We allot available memory to each process according
to its size.
Frames allotted to a process = (No. of frames required by the process ÷ Total No. of frames required by all processes) × No. of available frames; a worked sketch follows below.
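A minimal sketch (with assumed numbers) of proportional allocation: each process gets a share of the available frames proportional to its demand.

```c
/* Sketch of proportional frame allocation. */
#include <stdio.h>

int main(void)
{
    int available_frames = 62;                 /* frames to hand out (assumed) */
    int need[] = { 10, 127 };                  /* frames needed per process    */
    int nproc = 2;

    int total = 0;
    for (int i = 0; i < nproc; i++)
        total += need[i];                      /* total frames required        */

    for (int i = 0; i < nproc; i++) {
        /* share = need_i / total * available; integer division truncates */
        int frames = need[i] * available_frames / total;
        printf("process %d (needs %3d) gets %d frames\n", i, need[i], frames);
    }
    return 0;
}
```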
THRASHING
 A process is thrashing if it spends more time paging than executing.
Page Fault
 If a process references a page that is not available in main memory, the reference traps to the operating system: this is a page fault.
1. Operating system looks at another table to decide:
 Invalid reference  abort
 Just not in memory
2. Get empty frame
3. Swap page into frame via scheduled disk operation
4. Reset tables to indicate page now in memory
Set validation bit = v
5. Restart the instruction that caused the page fault
Steps in Handling a Page Fault
Basic Page Replacement
1. Find the location of the desired page on disk

2. Find a free frame:


- If there is a free frame, use it
- If there is no free frame, use a page replacement
algorithm to select a victim frame
- Write victim frame to disk

3. Bring the desired page into the (newly) free frame;


update the page and frame tables

4. Continue the process by restarting the instruction that


caused the trap
Page Replacement
Need For Page Replacement
FIFO Page Replacement

15 page faults
First-In-First-Out (FIFO) Algorithm
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per
process)
Frame contents after each replacement (15 page faults in total):
frame 1: 7, 2, 4, 0, 7
frame 2: 0, 3, 2, 1, 0
frame 3: 1, 0, 3, 2, 1

 Fault counts can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
 Adding more frames can cause more page faults!
 Belady’s Anomaly
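A minimal sketch (not from the slides) that simulates FIFO replacement and counts page faults for the reference string above with 3 frames; it reproduces the 15 faults quoted on the slide.

```c
/* Sketch of FIFO page replacement counting page faults. */
#include <stdio.h>

int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16];                 /* resident pages (nframes <= 16)  */
    int next = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;

        faults++;
        if (used < nframes)
            frames[used++] = refs[i];        /* free frame available    */
        else {
            frames[next] = refs[i];          /* replace the oldest page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = { 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 };
    int n = sizeof refs / sizeof refs[0];
    printf("FIFO page faults with 3 frames: %d\n", fifo_faults(refs, n, 3));
    return 0;
}
```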
FIFO Illustrating Belady's Anomaly
Least Recently Used (LRU) Algorithm
 Use past knowledge rather than future knowledge.
 Replace the page that has not been used for the longest period of time.
 Associate the time of last use with each page.

 12 faults – better than FIFO but worse than OPT


 Generally good algorithm and frequently used
 But how to implement?
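One common answer (not taken from the slides) is to keep a counter of the time of last use for each resident page and evict the page with the oldest timestamp; the sketch below uses this counter approach and reproduces the 12 faults quoted above for 3 frames.

```c
/* Sketch of LRU page replacement using a logical clock of last use. */
#include <stdio.h>

#define NFRAMES 3

int lru_faults(const int *refs, int n)
{
    int page[NFRAMES], last_use[NFRAMES];
    int used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (page[j] == refs[t]) { hit = j; break; }

        if (hit >= 0) {
            last_use[hit] = t;              /* refresh time of last use  */
            continue;
        }

        faults++;
        if (used < NFRAMES) {               /* still a free frame        */
            page[used] = refs[t];
            last_use[used++] = t;
        } else {
            int victim = 0;                 /* least recently used frame */
            for (int j = 1; j < NFRAMES; j++)
                if (last_use[j] < last_use[victim]) victim = j;
            page[victim] = refs[t];
            last_use[victim] = t;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = { 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 };
    printf("LRU page faults with 3 frames: %d\n",
           lru_faults(refs, sizeof refs / sizeof refs[0]));
    return 0;
}
```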
Optimal Algorithm

 Replace the page that will not be used for the longest period of time.
 9 faults is optimal for the example on the next slide.
 How do you know this? You can't read the future.
 Used for measuring how well your algorithm performs.


Optimal Page Replacement
