Lecture 7 - Main Memory Final
Chapter Three
Memory Management
Background
Swapping
Contiguous Allocation
Paging
Segmentation
Background
Program must be brought into memory and placed within a process
for it to be executed.
Input Queue - collection of processes on the disk that are waiting
to be brought into memory for execution.
Main memory and registers are only storage CPU can access
directly.
Register access in one CPU clock (or less)
Main memory can take many cycles
Cache sits between main memory and CPU registers
Memory needs to be allocated efficiently to pack as many processes
into memory as possible.
Problem
How to manage the relative speed of accessing physical memory.
How to ensure correct operation: the operating system must be protected from access by user processes, and user processes must be protected from one another.
One possible solution to the above
Make sure that each process has a separate address space.
Problem
Determine the range of legal addresses that the process may access.
Ensure that the process can access only these legal addresses.
This protection can be done using two registers:
o Base register: holds the smallest legal physical address.
o Limit register: specifies the size of the range. Together, the two registers define the legal addresses.
o Hardware compares every address generated in user mode against these register values; this ensures memory protection. A sketch of this check follows.
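A minimal sketch of this comparison in C; the structure and helper name are made up for illustration and do not correspond to any particular hardware:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative register pair; names and types are invented for the sketch. */
    typedef struct {
        uint32_t base;   /* smallest legal physical address */
        uint32_t limit;  /* size of the legal range          */
    } protection_regs;

    /* The hardware check: an address generated in user mode is legal
     * only if it lies in [base, base + limit). */
    bool address_is_legal(protection_regs r, uint32_t addr)
    {
        return addr >= r.base && addr < r.base + r.limit;
    }

In real hardware a failed comparison does not return a flag; it traps to the operating system, which treats it as a fatal addressing error.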
Base and Limit Registers
o A pair of base and limit registers define the logical address space
Figure: base register = 300040 and limit register = 120900, so the valid addresses run from 300040 up to 300040 + 120900 = 420940.
Can you explain what this figure depicts?
Address Binding
o Usually a program resides on disk in the form of an executable binary file.
o It is brought to memory for execution (it may be moved between
disk and memory in the meantime).
o When a process is executed it accesses instructions and data
from memory. When execution is completed the memory space
will be freely available.
o A user program may be placed at any part of the memory.
o A user program passes through a number of steps before being
executed.
o Addresses may be represented in different ways during these
steps.
Symbolic addresses: addresses in the source program (e.g. count).
Re-locatable addresses: e.g. "14 bytes from the beginning of this module".
Absolute addresses: e.g. 74014.
(What's the purpose of binding?)
Binding of instructions and data to memory
• Address binding of instructions and data to memory addresses can happen at different stages (compile time, load time, or execution time).
• Need hardware support for address maps (e.g. base and limit registers).
(What are the stages of address binding?)
Multi-step Processing of a User Program
Logical vs. Physical Address Space
• The concept of a logical address space that is bound to a
separate physical address space is central to proper memory
management.
– Logical Address (or virtual address): generated by the CPU.
• The set of logical addresses generated by a program is called the logical address space.
– Physical Address: the address seen by the memory unit.
• The set of physical addresses corresponding to the logical address space is called the physical address space.
(What's the difference between logical and physical addresses?)
• Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
Memory Management Unit (MMU)
– The run-time mapping from virtual to physical addresses is done by a hardware device called the MMU.
– The user program deals with logical addresses; it never sees the real physical address.
Dynamic relocation using a relocation register
Can you explain this figure? What does it depict?
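A minimal sketch of this mapping, assuming a relocation value of 14000 and a limit of 34000 (illustrative numbers echoing the classic textbook figure, not anything on this slide):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative MMU state: the dispatcher loads these two registers
     * when it schedules the process. */
    typedef struct {
        uint32_t relocation; /* relocation (base) register           */
        uint32_t limit;      /* size of the logical address space    */
    } mmu_regs;

    /* Every logical address is first checked against the limit register,
     * then the relocation register is added to form the physical address. */
    uint32_t relocate(mmu_regs m, uint32_t logical)
    {
        if (logical >= m.limit) {
            fprintf(stderr, "trap: addressing error\n");
            exit(EXIT_FAILURE);
        }
        return logical + m.relocation;
    }

    int main(void)
    {
        mmu_regs m = { .relocation = 14000, .limit = 34000 };
        printf("%u\n", relocate(m, 346));   /* prints 14346 */
        return 0;
    }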
Dynamic Loading
o In the discussion so far we have assumed that a program and its associated resources must be loaded into memory before execution, which is not efficient in resource utilization.
o But in dynamic loading:
Routine is not loaded until it is called.
Better memory-space utilization; unused routine is never
loaded.
Useful when large amounts of code are needed to handle
infrequently occurring cases.
No special support from the operating system is required;
implemented through program design.
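Dynamic loading itself needs no OS support, as noted above; as a loose illustration only, POSIX dlopen/dlsym can load a routine the first time it is called (compile with -ldl). The library name libhelpers.so and the routine rarely_used are made-up placeholders:

    #include <dlfcn.h>
    #include <stdio.h>

    /* The routine is loaded only on its first call; until then no memory
     * is spent on it. */
    void call_rarely_used(void)
    {
        static void (*rarely_used)(void) = NULL;

        if (rarely_used == NULL) {                      /* not loaded yet */
            void *handle = dlopen("libhelpers.so", RTLD_LAZY);
            if (handle == NULL) {
                fprintf(stderr, "load failed: %s\n", dlerror());
                return;
            }
            rarely_used = (void (*)(void)) dlsym(handle, "rarely_used");
            if (rarely_used == NULL) {
                fprintf(stderr, "symbol not found: %s\n", dlerror());
                return;
            }
        }
        rarely_used();                                  /* routine is now resident */
    }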
Dynamic Linking
o Linking postponed until execution time.
o Small piece of code, stub, used to locate the appropriate
memory-resident library routine.
o Stub replaces itself with the address of the routine, and executes
the routine.
o The operating system is needed to check whether the routine is in the process's memory address space.
o Dynamic linking is particularly useful for libraries.
o This system is also known as shared libraries.
(What's an overlay?)
Overlays
Keep in memory only those instructions and data that are
needed at any given time.
Needed when process is larger than amount of
memory allocated to it.
Implemented by user, no special support from OS;
programming design of overlay structure is complex.
Swapping
A process can be swapped temporarily out of memory to a
backing store and then brought back into memory for
continued execution.
– Backing Store - fast disk large enough to accommodate copies
of all memory images for all users; must provide direct access to
these memory images.
– Roll out, roll in - swapping variant used for priority based
scheduling algorithms; lower priority process is swapped out, so
higher priority process can be loaded and executed.
– Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped(might
be?).
– In swapping:
• Context-switch time must be taken into consideration.
• Swapping a process with pending I/O needs care.
• Modified versions of swapping are found on many systems, e.g. UNIX and Microsoft Windows.
• The system maintains a ready queue of ready-to-run processes which have memory images on disk.
(What needs to be considered in swapping? Why is a process with pending I/O an issue? Give a possible solution.)
Schematic View of Swapping
Contiguous Allocation
• Main memory usually divided into two partitions
Resident Operating System, usually held in low memory
with interrupt vector.
User processes then held in high memory.
Single partition allocation
Relocation register scheme used to protect user
processes from each other, and from changing OS
code and data.
Relocation register contains value of smallest physical
address; limit register contains range of logical addresses
- each logical address must be less than the limit register.
When CPU scheduler selects a process for execution, the
dispatcher loads the relocation and limit registers with
the correct values. Every address generated by CPU is
compared against these values.
This ensures memory protection: the OS and other processes cannot be modified by the running process.
Contiguous Allocation (cont.)
Multiple partition Allocation
o Hole - block of available memory; holes of various sizes are
scattered throughout memory.
o When a process arrives, it is allocated memory from a hole
large enough to accommodate it.
o Operating system maintains information about which
partitions are
• allocated partitions
• free partitions (hole)
Figure: successive memory maps showing the OS plus allocated partitions; process 8 terminates, leaving a hole that is later partly filled by process 10, while processes 5 and 9 remain allocated.
• Dynamic Partitioning
• Partitions are of variable length and number
• Process is allocated exactly as much memory as
required
• Eventually get holes in the memory. This is called
external fragmentation
• Must use compaction to shift processes so they are
contiguous and all free memory is in one block
Contiguous Allocation (Cont.)
• Example: 2560K of memory is available, with a resident OS of 400K. Memory is allocated to processes P1…P5 following FCFS; shaded regions in the figure are holes.

  Process   Memory   Time
  P1        600K     10
  P2        1000K     5
  P3        300K     20
  P4        700K      8
  P5        500K     15

• Initially P1, P2 and P3 create the first memory map: OS 0–400K, P1 400K–1000K, P2 1000K–2000K, P3 2000K–2300K, with a hole from 2300K to 2560K.
• P2 terminates, leaving a hole at 1000K–2000K; P4 (700K) is allocated at 1000K–1700K.
• P1 terminates, leaving a hole at 400K–1000K; P5 (500K) is allocated at 400K–900K.
Dynamic Storage Allocation Problem
How should a request of size n be satisfied from the list of free holes? (Can you describe this figure?)
o First-fit
– Allocate the first hole that is big enough.
o Best-fit
– Allocate the smallest hole that is big enough; must search the entire list.
– The smallest amount of fragmentation is left.
o Worst-fit
– Allocate the largest hole; must also search the entire list.
– Produces the largest leftover hole.
– Worst performer overall.
o First-fit and best-fit are better than worst-fit in terms of speed and storage utilization. A sketch of the three strategies follows.
(Which algorithm has the least scanning overhead? Why?)
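A minimal sketch of the three placement strategies, operating on a plain array of hole sizes; this is an illustration, not the allocator of any real OS:

    /* Each strategy returns the index of the chosen hole in holes[0..n-1]
     * (sizes in KB), or -1 if no hole is large enough. */
    int first_fit(const int holes[], int n, int request)
    {
        for (int i = 0; i < n; i++)
            if (holes[i] >= request)
                return i;                 /* first hole that is big enough */
        return -1;
    }

    int best_fit(const int holes[], int n, int request)
    {
        int best = -1;
        for (int i = 0; i < n; i++)       /* must scan the entire list */
            if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
                best = i;                 /* smallest hole that still fits */
        return best;
    }

    int worst_fit(const int holes[], int n, int request)
    {
        int worst = -1;
        for (int i = 0; i < n; i++)       /* must also scan the entire list */
            if (holes[i] >= request && (worst < 0 || holes[i] > holes[worst]))
                worst = i;                /* largest hole, largest leftover */
        return worst;
    }

For example, with the six partitions of Exercise 5 ({300, 600, 350, 200, 750, 125} KB), a 115 KB request goes to index 0 under first-fit, to index 5 (the 125 KB hole) under best-fit, and to index 4 (the 750 KB hole) under worst-fit.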
Fragmentation
• External Fragmentation – total memory space exists to satisfy
a request, but it is not contiguous.
• 50% rule: given N allocated blocks, another 0.5N blocks will be lost to fragmentation; that is, one-third of memory may be unusable.
• Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is memory
internal to a partition, but not being used.
• Consider a hole of 18,464 bytes and a process that requires 18,462 bytes.
• If we allocate exactly the required block, we are left with a hole of 2 bytes.
• The overhead to keep track of this tiny free partition will be substantially larger than the hole itself.
• Solution: allocate the very small free partition as part of the larger request.
Solution to fragmentation
1. Compaction
2. Paging
3. Segmentation
1. Compaction: Reduce external fragmentation by compaction
– Shuffle memory contents to place all free memory together in one large block.
– Compaction is possible only if relocation is dynamic, and is done at execution
time.
– Whether compaction is done, and how, depends on its cost.
Figure: before compaction, the OS occupies 0–400K, P5 400K–900K, P4 1000K–1700K and P3 2000K–2300K, leaving three holes of 100K, 300K and 260K (660K in total). After compaction, P4 is moved to 900K–1600K and P3 to 1600K–1900K, so all free memory forms a single 660K hole from 1900K to 2560K.
2. Paging
o Basic Idea: logical address space of a process can be made
noncontiguous; process is allocated physical memory wherever
it is available.
o Divide physical memory into fixed-sized blocks called frames
(size is power of 2, between 512 bytes and 8,192 bytes)
o Divide logical memory into blocks of same size called pages
o Keep track of all free frames
o To run a program of size n pages, need to find n free frames and
load program
o Set up a page table to translate logical to physical addresses
• Note: internal fragmentation is still possible!
o Paging is a form of dynamic relocation. (Explain why.)
Address Translation Scheme
• Address generated by CPU is divided into:
• Page number (p) – used as an index into a page table which
contains base address of each page in physical memory.
• Page offset (d) – combined with base address to define the
physical memory address that is sent to the memory unit.
• Page number is an index to the page table.
• The page table contains base address of each page in physical
memory.
• The base address is combined with the page offset to define the
physical address that is sent to the memory unit.
• The size of a page is typically a power of 2.
• 512 –8192 bytes per page.
• If the size of the logical address space is 2^m and the page size is 2^n addressing units, then:
• the higher-order m − n bits of a logical address designate the page number, and
• the n lower-order bits designate the page offset.
Logical address layout:  | page number p (m − n bits) | page offset d (n bits) |
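A small sketch of this translation in C, for a toy configuration: page size 4 bytes (so n = 2 offset bits) and the page table used in the paging example below (page 0 → frame 5, 1 → 6, 2 → 1, 3 → 2). The function name and layout are illustrative only:

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 2                         /* page size = 4 bytes */
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    /* Page table from the example below. */
    static const uint32_t page_table[] = { 5, 6, 1, 2 };

    uint32_t logical_to_physical(uint32_t logical)
    {
        uint32_t p = logical >> OFFSET_BITS;      /* page number: high m-n bits */
        uint32_t d = logical & (PAGE_SIZE - 1);   /* page offset: low n bits    */
        return page_table[p] * PAGE_SIZE + d;     /* frame base + offset        */
    }

    int main(void)
    {
        printf("%u %u %u %u\n",
               logical_to_physical(0), logical_to_physical(3),
               logical_to_physical(4), logical_to_physical(13)); /* 20 23 24 9 */
        return 0;
    }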
Address Translation Architecture
Explain what is depicted in this figure.
Paging Example(cont….)
Assume:
o page size = 4 bytes
o physical memory = 32 bytes (8 frames)
How can a logical memory address be mapped into physical memory?
Logical address 0 (containing 'a')
i. is on page 0,
ii. at offset 0.
Indexing into the page table, you can see that page 0 is in frame 5, so logical address 0 is mapped to physical address 20, i.e. 20 = (5 × 4) + 0.
Similarly:
Logical address 3 maps to 5 × 4 + 3 = 23.
Logical address 4 (page 1, offset 0) maps to 6 × 4 + 0 = 24.
Logical address 13 (page 3, offset 1) maps to 2 × 4 + 1 = 9.
Example of Paging
(Can you describe this figure?)
Page size = 4 bytes; physical memory = 32 bytes (8 frames).
With the page table shown in the figure:
Logical address 0 maps to 1 × 4 + 0 = 4.
Logical address 3 maps to 1 × 4 + 3 = 7.
Logical address 4 maps to 4 × 4 + 0 = 16.
Logical address 13 maps to 7 × 4 + 1 = 29.
More on Paging(contd.)
o In the paging scheme, pages are allocated as units.
o In this scheme there is no external fragmentation.
o But internal fragmentation is inevitable.
o Example:
― assume page size=2048 bytes.
― a process of 61442 bytes needs 30 pages plus 2 bytes.
Since units are managed in terms of pages 31 pages are
allocated.
― Internal fragmentation = 2046 bytes!
o In the worst case a process needs n pages plus 1 byte.
So it will be allocated n+1 pages.
Fragmentation = (page size − 1 byte), which is almost an entire frame.
(In the paging scheme, is there any chance for a user process to access the addresses of the OS or of another user process? Why?)
Free Frames
When a process arrives, its size in pages is examined.
Each page of the process needs one frame.
If n frames are available, they are allocated and the page table is updated with the frame numbers.
Figure: the free-frame list before allocation and after allocation. (Can you describe this figure?)
Implementation of Page Table
• Two options: Page table can be kept in registers or main memory
• The page table is kept in main memory because of its size.
• Example: logical address space = 2^32 (and up to 2^64 on some systems)
• Page size = 2^12
• Number of page-table entries = 2^32 / 2^12 = 2^20
• If each entry is 4 bytes, the page table alone is 4 MB.
• Page-table base register (PTBR) points to the page table.
• Page-table length register (PTLR) indicates size of the page table.
• PTBR, and PTLR are maintained in the registers.
• A context switch means changing the contents of the PTBR and PTLR.
• In this scheme every data/instruction access requires two memory accesses. One
for the page table and one for the data/instruction.
• Memory access is slowed by a factor of 2.
• The two memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers (TLBs)
Translation look-aside buffer (TLB)
TLB is an associative memory – parallel search
A set of associative registers is built of especially
high-speed memory.
Each register consists of two parts: key and value
When the associative registers are presented with an item, it is compared with all keys simultaneously (each entry holds a page # as the key and a frame # as the value).
If the key is found, the corresponding value field is output.
The associative registers hold only a few of the page-table entries.
• When a logical address is generated, its page number is presented to the set of associative registers.
• If the page number is found, the corresponding frame number is returned.
• Otherwise, a memory reference to the page table is made; that page # / frame # pair is then added to the associative registers.
Address translation (A′, A″):
• If A′ is in an associative register, get the frame # out.
• Otherwise, get the frame # from the page table in memory; this may take about 10% longer than the normal access time.
The percentage of time that the page # is found in the associative registers is called the hit ratio.
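As an illustration of the key/value search described above, a tiny software model of a TLB lookup (the real comparison happens in parallel in hardware; the entry count and structure names here are made up):

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 8          /* entry count is arbitrary for the sketch */

    typedef struct {
        bool     valid;
        uint32_t page;             /* key   */
        uint32_t frame;            /* value */
    } tlb_entry;

    static tlb_entry tlb[TLB_ENTRIES];

    /* Returns true on a hit and stores the frame number in *frame.
     * On a miss the caller walks the page table in memory and then
     * installs the (page, frame) pair into the TLB. */
    bool tlb_lookup(uint32_t page, uint32_t *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {   /* hardware does this in parallel */
            if (tlb[i].valid && tlb[i].page == page) {
                *frame = tlb[i].frame;            /* hit */
                return true;
            }
        }
        return false;                             /* miss */
    }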
Paging Hardware With TLB
Effective Access Time
Associative lookup = ε time units; assume the memory cycle time is 1 microsecond.
Hit ratio (α) – the percentage of times that a page number is found in the associative registers; the ratio is related to the number of associative registers.
Effective access time: a hit costs one memory access plus the lookup; a miss costs two memory accesses (one for the page table, one for the data/instruction) plus the lookup, so
EAT = α(1 + ε) + (1 − α)(2 + ε) = 2 + ε − α.
With a sufficiently high hit ratio, the slowdown relative to plain memory access can be as small as about 22% (a worked example follows).
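A minimal worked example, assuming a TLB lookup of ε = 0.2 time units on top of a 1-unit memory cycle (these specific numbers are an assumption, not from the slide):

    \[ \text{EAT} = \alpha\,(1+\varepsilon) + (1-\alpha)\,(2+\varepsilon) = 2 + \varepsilon - \alpha \]
    \[ \alpha = 0.80:\quad \text{EAT} = 2.2 - 0.80 = 1.40 \;\Rightarrow\; 40\% \text{ slowdown} \]
    \[ \alpha = 0.98:\quad \text{EAT} = 2.2 - 0.98 = 1.22 \;\Rightarrow\; 22\% \text{ slowdown} \]

The 22% figure quoted above corresponds to the 98% hit-ratio case.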
Memory Protection
Memory protection is implemented by associating a protection bit with each frame.
A valid–invalid bit is attached to each entry in the page table:
"valid" indicates that the associated page is in the process's logical address space and is thus a legal page.
"invalid" indicates that the page is not in the process's logical address space.
Figure: valid (v) or invalid (i) bit in a page table.
Shared Pages
Shared code
One copy of read-only (reentrant) code can be shared among processes (e.g. text editors, compilers, window systems).
Shared code must appear in the same location in the logical address space of all processes.
User's View of a Program / Logical View of Segmentation
Figure: a program viewed as a collection of segments (1–4), and the same segments laid out at separate places in physical memory.
Segmentation Architecture
o Logical address consists of a two tuple
<segment-number, offset>
o Segment Table
• Maps two-dimensional user-defined addresses into one-
dimensional physical addresses. Each table entry has
– Base - contains the starting physical address where
the segments reside in memory.
– Limit - specifies the length of the segment.
• Segment-table base register (STBR) points to the
segment table’s location in memory.
• Segment-table length register (STLR) indicates the number of segments used by a program.
Note: a segment number s is legal if s < STLR. A short translation sketch follows.
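A compact sketch of this translation, using the limit/base pairs from the shared-segments figure later in this section (editor: limit 25286, base 43062; data 1: limit 4425, base 68348); the helper name and table layout are illustrative:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { uint32_t limit, base; } segment;

    /* Values taken from the shared-segments figure below. */
    static const segment seg_table[] = {
        { 25286, 43062 },   /* segment 0: editor */
        {  4425, 68348 },   /* segment 1: data 1 */
    };
    #define STLR (sizeof seg_table / sizeof seg_table[0])

    uint32_t seg_to_physical(uint32_t s, uint32_t d)
    {
        if (s >= STLR || d >= seg_table[s].limit) {   /* illegal segment or offset */
            fprintf(stderr, "trap: addressing error\n");
            exit(EXIT_FAILURE);
        }
        return seg_table[s].base + d;                 /* physical address */
    }

    int main(void)
    {
        printf("%u\n", seg_to_physical(1, 100));      /* prints 68448 */
        return 0;
    }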
Segmentation Architecture (cont.)
o Relocation is dynamic - by segment table
o Sharing
―Code sharing occurs at the segment level.
―Shared segments must have same segment number.
o Allocation - dynamic storage allocation problem
―use best fit/first fit, may cause external fragmentation.
o Protection
• protection bits associated with segments
• read/write/execute privileges
• array in a separate segment - hardware can check for
illegal array indexes.
Shared segments
Figure: shared segments example. The segment table of process P1 maps segment 0 (the editor) to base 43062 with limit 25286, and segment 1 (data 1) to base 68348 with limit 4425; in physical memory the editor begins at address 43062 and data 1 occupies addresses 68348 up to 72773.
Exercises
1) Consider a logical address space of 64 pages of 1,024 words each, mapped
onto a physical memory of 32 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?
2) Why are page sizes always powers of 2?
3) Describe a mechanism by which one segment could belong to the address
space of two different processes.
4) Sharing segments among processes without requiring that they have the
same segment number is possible in a dynamically linked segmentation
system.
a. Define a system that allows static linking and sharing of segments
without requiring that the segment numbers be the same.
b. Describe a paging scheme that allows pages to be shared without
requiring that the page numbers be the same.
Exercises ( …)
5) Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750
KB, and 125 KB (in order), how would the first-fit, best-fit, and worst-fit
algorithms place processes of size 115 KB, 500 KB, 358 KB, 200 KB, and
375 KB (in order)? Rank the algorithms in terms of how efficiently they
use memory.
6) Most systems allow a program to allocate more memory to its address
space during execution. Allocation of data in the heap segments of
programs is an example of such allocated memory. What is required to
support dynamic memory allocation in the following schemes?
a. Contiguous memory allocation
b. Pure segmentation
c. Pure paging
7) Consider a logical address space of 256 pages with a 4-KB page size,
mapped onto a physical memory of 64 frames.
a. How many bits are required in the logical address?
b. How many bits are required in the physical address?
Exercises (…)
8) Compare the memory organization schemes of contiguous memory
allocation, pure segmentation, and pure paging with respect to the
following issues:
a. External fragmentation
b. Internal fragmentation
c. Ability to share code across processes
9) Program binaries in many systems are typically structured as follows.
Code is stored starting with a small, fixed virtual address, such as 0. The
code segment is followed by the data segment that is used for storing
the program variables. When the program starts executing, the stack is
allocated at the other end of the virtual address space and is allowed
to grow toward lower virtual addresses. What is the significance of this
structure for the following schemes?
a. Contiguous memory allocation
b. Pure segmentation
c. Pure paging