Chapter 6-1
Chapter 6. Main
Memory
Semester : IV
Course Title: Operating Systems Principles and Programming
Course Code: 18ECSC202
1
Earlier known as
B. V. B. College of Engineering &
Technology
Chapter 6. Main
Memory
DON’T JUST READ PPT
READ TEXT BOOK AND REFER LESSON PLAN
2
Chapter Content
1. Memory Management Strategies
2. Swapping
3. Contiguous memory allocation
4. Segmentation
5. Paging.
6. Structure of Page Table
3
Introduction
• Memory size
• Access time
• Cost per unit
[Figure: the CPU issues a logical address (LA), which is translated (T) into a physical address (PA) before reaching memory]
4
Introduction
• Program must be brought (from disk) into memory and placed within a process
for it to be run
• Main memory and registers are the only storage that the CPU can access directly
• The memory unit sees only i) a stream of addresses plus read requests, or ii) addresses plus
data and write requests
• Register is accessed in one CPU clock (or less)
• Main memory can take many cycles, causing a stall
• Cache is placed between main memory and CPU registers
• Protection of memory required to ensure correct operation
6
Base and Limit Registers
• A pair of base and limit registers define the logical address space
• CPU must check every memory access generated in user mode to be sure
it is between base and limit for that user
7
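The base/limit check described above can be modeled in a short sketch (illustrative only; the register values below are made-up example numbers, not from the slides):

```python
# Model of the hardware base/limit check performed on every
# user-mode memory access. Values are hypothetical examples.

BASE = 300040      # base register: smallest legal physical address
LIMIT = 120900     # limit register: size of the process's address range

def legal_access(address: int) -> bool:
    """Return True if a user-mode access to `address` lies within
    [BASE, BASE + LIMIT); otherwise the hardware traps to the OS."""
    return BASE <= address < BASE + LIMIT

print(legal_access(300040))   # first legal address -> True
print(legal_access(420940))   # BASE + LIMIT itself -> False (one past the end)
```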
Logical vs. Physical Address Space
• Logical address – generated by the CPU; also
referred to as virtual address
• Physical address – address seen by the memory
unit
• Logical and physical addresses are the same in
compile-time and load-time but differ during
execution-time.
8
Binding of Instructions and Data to
Memory
• Address binding of instructions and data to memory addresses
can happen at three different stages
• Compile time: if the memory location is known a priori, absolute code can be
generated; must recompile if the starting location changes
• Load time: must generate relocatable code if the memory location is not known
at compile time; binding is done when the program is loaded
• Execution time: binding is delayed until run time if the process can be moved
during execution; needs hardware support for address maps (e.g., base and limit registers)
9
Multistep Processing of a User Program
10
Swapping
11
Schematic View of Swapping
12
Swapping
• A process can be swapped temporarily out of memory to a backing store,
and then brought back into memory for continued execution
• Backing store – fast disk large enough to accommodate copies
of all memory images for all users; must provide direct access to these memory images
• Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; a lower-priority process is swapped out so a
higher-priority process can be loaded and executed
• Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped
• System maintains a ready queue of ready-to-run processes
which have memory images on disk
13
Swapping (Cont.)
• Does the swapped out process need to swap back in to same
physical addresses?
• Depends on address binding method
• Plus consider pending I/O to / from process memory space
• Modified versions of swapping are found on many systems (i.e.,
UNIX, Linux, and Windows)
• Swapping normally disabled
• Started if more than threshold amount of memory allocated
• Disabled again once memory demand reduced below threshold
14
Context Switch Time including
Swapping
• If the next process to be put on the CPU is not in memory, we need to
swap out a process and swap in the target process
• Context switch time can then be very high
• 100MB process swapping to hard disk with transfer rate of
50MB/sec
• Swap out time of 2s
• Plus swap in of same sized process
• Total context switch swapping component time of 4000ms (4
seconds)
• Time can be reduced, if size of memory swapped is reduced –
by knowing how much memory really being used
15
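The swap-time arithmetic from the example above, spelled out (same numbers as the slide):

```python
# Swapping a 100 MB process to a disk with a 50 MB/s transfer
# rate, ignoring seek and rotational latency.

process_size_mb = 100
transfer_rate_mb_per_s = 50

swap_out_s = process_size_mb / transfer_rate_mb_per_s   # 2.0 s
swap_in_s = swap_out_s                                  # same-sized process
total_s = swap_out_s + swap_in_s

print(total_s)  # 4.0 seconds of context-switch swapping overhead
```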
Context Switch Time and Swapping
(Cont.)
• Other constraints as well on swapping
• Pending I/O – can’t swap out as I/O would occur to wrong process
• Standard swapping not used in modern operating systems
• But modified version common
• Swap only when free memory extremely low
16
Contiguous memory allocation
17
Memory Allocation
P1
Main Secondary
CPU Memory Memory
18
Contiguous Allocation
• Contiguous allocation is one early method
• Main memory is usually partitioned into two regions:
• Resident operating system, usually held in low memory with interrupt
vector
• User processes then held in high memory
Each process is contained in a single contiguous section of memory
Advantages
• Fast access
• Address translation is easy
Disadvantages
• External fragmentation
[Figure: free holes of 4 KB, 2 KB, and 4 KB; a 5 KB process P1 cannot fit into any single hole]
19
Non-Contiguous Allocation
• A process may be placed in several non-adjacent blocks of memory (the
allocated blocks can be tracked with a linked list)
Advantages
• No external fragmentation
[Figure: free holes of 4 KB, 2 KB, and 1 KB]
20
Multiple-partition allocation
• Multiple-partition allocation
• Degree of multiprogramming limited by number of partitions
• Fixed-partition
• Variable-partition sizes for efficiency (sized to a given process’ needs)
• Hole – block of available memory; holes of various size are scattered throughout memory
• When a process arrives, it is allocated memory from a hole large enough to accommodate it
• When a process exits, its partition is freed; adjacent free partitions are combined
• Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
21
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
• First-fit: Allocate the first hole that is big enough
• Best-fit: Allocate the smallest hole that is big enough; must search the entire
list, unless the list is ordered by size
• Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole; must also search the entire list
• Produces the largest leftover hole
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
22
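The three placement strategies can be sketched in a few lines of Python (an illustrative model, not from the slides; the hole sizes reuse the partition list from the homework problem later in this chapter):

```python
# Minimal model of first-fit, best-fit, and worst-fit placement.
# `holes` is a list of free-hole sizes; allocation shrinks the
# chosen hole in place and returns its index (or None if no fit).

def allocate(holes, request, strategy):
    candidates = [i for i, h in enumerate(holes) if h >= request]
    if not candidates:
        return None
    if strategy == "first":
        chosen = candidates[0]                              # first hole big enough
    elif strategy == "best":
        chosen = min(candidates, key=lambda i: holes[i])    # smallest adequate hole
    else:  # "worst"
        chosen = max(candidates, key=lambda i: holes[i])    # largest hole
    holes[chosen] -= request
    return chosen

holes = [100, 500, 200, 300, 600]                # sizes in KB
print(allocate(list(holes), 212, "first"))       # -> 1 (the 500 KB hole)
print(allocate(list(holes), 212, "best"))        # -> 3 (the 300 KB hole)
print(allocate(list(holes), 212, "worst"))       # -> 4 (the 600 KB hole)
```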
Fragmentation
• External Fragmentation – total memory space exists to
satisfy a request, but it is not contiguous
• Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is
memory internal to a partition, but not being used
• First-fit analysis reveals that, given N allocated blocks, another 0.5 N
blocks are lost to fragmentation
• i.e., 1/3 of memory may be unusable -> the 50-percent rule
23
Fragmentation (Cont.)
• Reduce external fragmentation by compaction
• Shuffle memory contents to place all free memory together in one
large block
• Compaction is possible only if relocation is dynamic, and is done at
execution time
• I/O problem
• Latch job in memory while it is involved in I/O
• Do I/O only into OS buffers
24
Homework
• Given five memory partitions of 100Kb, 500Kb, 200Kb, 300Kb, 600Kb
(in order), how would the first-fit, best-fit, and worst-fit algorithms
place processes of 212 Kb, 417 Kb, 112 Kb, and 426 Kb (in order)?
Which algorithm makes the most efficient use of memory?
28
Segmentation
29
Segmentation
• Memory-management scheme that supports user view of memory
• A program is a collection of segments
• A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
30
User’s View of a Program
31
Logical View of Segmentation
[Figure: segments 1–4 of the user's program mapped onto non-contiguous regions of physical memory]
32
[Figure: segmentation hardware. The virtual address is split into a segment number and an offset; the segment number indexes a table of (base, limit, valid-bit) entries; the offset is checked against the limit and added to the base to form the physical address]
33
34
Consider the following segment table (given in the accompanying figure):
What are the physical addresses for the following logical addresses?
a. 0,430 – segment 0, offset 430: physical address = base + offset = 219 + 430 = 649
b. 1,10
c. 2,500 – invalid reference (the offset exceeds the segment length)
d. 3,400
e. 4,112
35
Consider the Intel address-translation scheme
shown in Figure 8.22.
36
Segmentation Architecture
• Logical address consists of a two-tuple:
<segment-number, offset>
38
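Segment-table translation of a `<segment-number, offset>` pair can be sketched as follows. The base value 219 for segment 0 matches the worked answer on the earlier slide; the limit of 600 and the second row are hypothetical fill-ins for illustration:

```python
# Sketch of segment-table address translation.
# Segment 0's base (219) matches the slide's worked example;
# the limit values and segment 1 are assumed for illustration.

SEGMENT_TABLE = [
    (219, 600),    # segment 0: (base, limit) -- limit assumed
    (2300, 14),    # segment 1: hypothetical entry
]

def translate(seg, offset):
    base, limit = SEGMENT_TABLE[seg]
    if offset >= limit:            # offset past the segment -> trap to OS
        raise ValueError("addressing error: offset exceeds segment limit")
    return base + offset           # physical address

print(translate(0, 430))   # 219 + 430 = 649, as on the earlier slide
```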
Paging
• Divide physical memory into
fixed-sized blocks called frames
• Size is power of 2, between 512
bytes and 16 Mbytes
• Divide logical memory into
blocks of same size called pages
• To run a program of size N
pages, need to find N free
frames and load program
• Set up a page table to translate
logical to physical addresses
• Avoids external fragmentation, but
still suffers internal fragmentation
39
Address Translation Scheme
• Address generated by CPU is divided into:
• Page number (p) – used as an index into a page table which contains
base address of each page in physical memory
• Page offset (d) – combined with base address to define the physical
memory address that is sent to the memory unit
page number (p): high-order m − n bits | page offset (d): low-order n bits
(for a logical address space of size 2^m and page size 2^n)
40
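The split into page number and offset is just bit extraction, which can be sketched as (here n = 10, i.e. a 1 KB page, matching the exercises later in this chapter):

```python
# Split a logical address into page number p and offset d
# for a page size of 2**n bytes (n = 10 -> 1 KB pages).

n = 10

def split(addr):
    p = addr >> n               # high-order bits: page number
    d = addr & ((1 << n) - 1)   # low-order n bits: offset
    return p, d

print(split(2485))   # -> (2, 437): 2485 = 2 * 1024 + 437
```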
Paging Hardware
41
Free Frames
42
Paging (Cont.)
• Calculating internal fragmentation
• Page size = 2,048 bytes
• Process size = 72,766 bytes
• 35 pages + 1,086 bytes
• Internal fragmentation of 2,048 - 1,086 = 962 bytes
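The slide's internal-fragmentation arithmetic, spelled out:

```python
# A 72,766-byte process with 2,048-byte pages: the last page is
# only partly used, and the unused remainder is internal fragmentation.

page_size = 2048
process_size = 72766

full_pages, last_page_bytes = divmod(process_size, page_size)
frames_needed = full_pages + (1 if last_page_bytes else 0)   # 36 frames
internal_frag = page_size - last_page_bytes                  # 2048 - 1086

print(full_pages, last_page_bytes, internal_frag)  # 35 1086 962
```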
1. Assuming a 1 KB page size, what are the page numbers and offsets for the
following address references (provided as decimal numbers)?
a. 2485
b. 19512
c. 30100
d. 264
e. 16456
With a 1 KB page, the offset is the low-order 10 bits of the address and the
page number is the remaining high-order bits (6 bits for a 16-bit address):
Address | Binary              | Page #       | Offset
2485    | 0000 1001 1011 0101 | 000010 = 2   | 01 1011 0101 = 437
19512   | 0100 1100 0011 1000 | 010011 = 19  | 00 0011 1000 = 56
30100   | 0111 0101 1001 0100 | 011101 = 29  | 01 1001 0100 = 404
264     | 0000 0001 0000 1000 | 000000 = 0   | 01 0000 1000 = 264
16456   | 0100 0000 0100 1000 | 010000 = 16  | 00 0100 1000 = 72
48
Assuming a 2KB page size, what are the page numbers and offsets for the following
address references (provided as decimal numbers):
a. 2485
b. 19512
c. 30100
d. 264
e. 16456
Hint: Binary | Page # (5 bits) | Offset (11 bits)
50
Implementation of Page Table
• Page table is kept in main memory
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates size of the page
table
• In this scheme every data/instruction access requires two
memory accesses
• One for the page table and one for the data / instruction
• The two memory access problem can be solved by the use
of a special fast-lookup hardware cache called associative
memory or translation look-aside buffers (TLBs)
51
Implementation of Page Table (Cont.)
• Some TLBs store address-space identifiers (ASIDs) in each
TLB entry
• TLBs typically small (64 to 1,024 entries)
• On a TLB miss, value is loaded into the TLB for faster access
next time
• Replacement policies must be considered
• Some entries can be wired down for permanent fast access
52
Associative Memory
53
Paging Hardware With TLB
54
Effective Access Time
• Associative lookup = ε time units
• Can be < 10% of memory access time
• Hit ratio = α
• Hit ratio – percentage of times that a page number is found in the
associative registers; ratio related to number of associative registers
• Effective Access Time (EAT), with times measured in memory-access units:
EAT = (1 + ε)α + (2 + ε)(1 – α)
• Consider α = 80%, ε = 20 ns for TLB search, 100 ns for memory access:
EAT = 0.80 × (100 + 20) + 0.20 × (200 + 20) = 96 + 44 = 140 ns
56
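The EAT computation above, worked through with the slide's numbers:

```python
# Effective access time with a TLB: a hit costs one TLB lookup plus
# one memory access; a miss costs one TLB lookup plus two accesses
# (page table, then the data itself).

alpha = 0.80     # TLB hit ratio
epsilon = 20     # ns, TLB lookup time
mem = 100        # ns, one memory access

eat = alpha * (epsilon + mem) + (1 - alpha) * (epsilon + 2 * mem)
print(eat)   # about 140 ns: 0.8 * 120 + 0.2 * 220
```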
Memory Protection
• Memory protection implemented by associating protection
bit with each frame to indicate if read-only or read-write
access is allowed
• Can also add more bits to indicate page execute-only, and so on
• Valid-invalid bit attached to each entry in the page table:
• “valid” indicates that the associated page is in the process’ logical
address space, and is thus a legal page
• “invalid” indicates that the page is not in the process’ logical
address space
• Or use page-table length register (PTLR)
• Any violations result in a trap to the kernel
57
Valid (v) or Invalid (i) Bit In A Page Table
58
Structure of the Page Table
• Memory structures for paging can get huge using straight-forward
methods
• Consider a 32-bit logical address space as on modern computers
• Page size of 4 KB (2^12 bytes)
• Page table would have 1 million entries (2^32 / 2^12 = 2^20)
• If each entry is 4 bytes -> 4 MB of physical memory for the
page table alone
• That amount of memory used to cost a lot
• Don’t want to allocate that contiguously in main memory
• Hierarchical Paging
• Hashed Page Tables
• Inverted Page Tables
59
60
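The page-table-size arithmetic from the slide above, as a quick check:

```python
# 32-bit logical addresses, 4 KB pages, 4-byte page-table entries:
# how big is one flat (single-level) page table?

entries = 2**32 // 2**12          # 2**20 = 1,048,576 entries ("1 million")
table_bytes = entries * 4         # 4 bytes per entry

print(entries, table_bytes // 2**20)  # 1048576 entries, 4 MB
```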
Review Questions
1. Assuming a 1 KB page size, what are the page numbers and offsets for the
following address references (provided as decimal numbers):
a. 1475 b. 19876 c. 30500 d. 1023 e. 18415
2. Given five memory partitions of 100Kb, 500Kb, 200Kb, 300Kb, 600Kb (in
order), how would the first-fit, best-fit, and worst-fit algorithms place
processes of 212 Kb, 417 Kb, 112 Kb, and 426 Kb (in order)? Which
algorithm makes the most efficient use of memory?
3. Scanning primary memory upward from address 0, we find holes of the
following sizes: 10K, 15K, 5K, 32K, and 2K. In what order would the
allocations happen for segments of sizes 2K, 7K, 27K, and 1K
under a. the first-fit policy, b. the next-fit policy, c. the best-fit policy?
61
• On a simple paged system, TLB holds the most active page
entries and the full page table is stored in the main memory.
If the references satisfied by the TLB take 100 ns, and
references through the main memory page table take 180 ns,
what must be the hit-ratio to achieve an effective memory
access time of 300 ns?
EAT = (1 + ε)α + (2 + ε)(1 – α)
62
Thank You
63