OS Unit 3
UNIT 3
Memory Management
9/18/2017
MAIN MEMORY MANAGEMENT
For a program to run, it must be resident in main memory.
Main issues in memory management:
Memory released by a process is usually not accounted for immediately; it becomes garbage. Compaction or garbage collection is the responsibility of the OS.
The OS provides an address translation mechanism to map a logical address to the physical address used to access the desired data or instruction.
BINDING OF INSTRUCTIONS AND DATA TO MEMORY
Address binding of instructions and data to memory addresses can happen at three different stages:
Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: Relocatable code must be generated if the memory location is not known at compile time.
Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).
MULTISTEP PROCESSING OF A USER PROGRAM
LOGICAL VS. PHYSICAL ADDRESS SPACE
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
Logical address – generated by the CPU; also referred to as a virtual address.
Physical address – the address seen by the memory unit.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
MEMORY-MANAGEMENT UNIT (MMU)
The hardware device that maps virtual addresses to physical addresses.
COMPILER GENERATED BINDINGS
The advantage of relocation can be seen in light of the binding of addresses to variables in a program.
COMPILER GENERATED BINDINGS
The fixed address allocated by the compiler for the variable x is known as a binding.
If x is bound to a fixed location, then we can execute a program P only when x is placed in its allocated memory location.
LINKING AND LOADING CONCEPTS
Stage 1: In the first stage the HLL source program is compiled and an object code is produced. Depending upon the program, this object code may by itself be sufficient to generate a relocatable process. However, many programs are compiled in parts, so this object code may have to link up with other object modules. At this stage the compiler may also insert stubs at points where run-time library modules may be linked.
Stage 2: The linker makes substitutions for the stubs with run-time library code, which is relocatable code.
Stage 3: The loader places the program in memory.
The Program Counter is set to the absolute address of the first instruction of the program.
Data can also be fetched if we know its absolute address.
If that part of memory is currently in use, then this program cannot be run.
Note that if we have the program instructions, we should be able to execute the program starting at any location.
MEMORY RELOCATION CONCEPT
DYNAMIC RELOCATION USING A RELOCATION
REGISTER
MEMORY RELOCATION CONCEPT
With this flexibility, we can allocate any area in memory to load this process.
DYNAMIC LOADING
A routine is not loaded until it is called.
Better memory-space utilization; an unused routine is never loaded.
Useful when large amounts of code are needed to handle infrequently occurring cases.
No special support from the operating system is required; it is implemented through program design.
DYNAMIC LINKING
Linking is postponed until execution time.
A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
The stub replaces itself with the address of the routine, and executes the routine.
The operating system is needed to check whether the routine is in the process's memory address space.
Dynamic linking is particularly useful for libraries.
SWAPPING
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
Backing store – a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
Roll out, roll in – a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).
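Since transfer time dominates and scales with the amount of memory swapped, a back-of-envelope calculation makes the point. The figures below (a 10 MB image, a 40 MB/s backing store) are hypothetical, chosen only for illustration.

```python
# Assumed figures for illustration only.
image_mb = 10           # size of the process memory image
rate_mb_per_s = 40      # backing-store transfer rate

transfer_s = image_mb / rate_mb_per_s   # time to swap out OR swap in
swap_s = 2 * transfer_s                 # roll out + roll in

print(transfer_s, swap_s)  # 0.25 0.5
```

Doubling the image size doubles the swap time, which is why swapping only the memory actually in use pays off.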
SCHEMATIC VIEW OF SWAPPING
CONTIGUOUS ALLOCATION
Main memory is usually divided into two partitions:
The resident operating system, usually held in low memory with the interrupt vector.
User processes, held in high memory.
Single-partition allocation:
A relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data.
The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses – each logical address must be less than the limit register.
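The relocation/limit check can be sketched as follows; a minimal model, with illustrative register values, of what the hardware does on every access.

```python
class ProtectionFault(Exception):
    """Models the hardware trap on an out-of-range address."""
    pass

def translate(logical, relocation, limit):
    """Map a logical address via the relocation (base) register,
    trapping when the address is not below the limit register."""
    if logical >= limit:
        raise ProtectionFault(f"logical address {logical} >= limit {limit}")
    return relocation + logical

# relocation = 14000, limit = 3000 (illustrative values)
print(translate(100, 14000, 3000))   # 14100
```

An access at logical address 3500 would trap instead of silently reading another process's memory.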
A BASE AND A LIMIT REGISTER DEFINE A LOGICAL
ADDRESS SPACE
HW ADDRESS PROTECTION WITH BASE AND LIMIT
REGISTERS
CONTIGUOUS ALLOCATION
Multiple-partition allocation:
Hole – a block of available memory; holes of various sizes are scattered throughout memory.
When a process arrives, it is allocated memory from a hole large enough to accommodate it.
The operating system maintains information about: a) allocated partitions, b) free partitions (holes).
DYNAMIC STORAGE-ALLOCATION PROBLEM
How to satisfy a request of size n from a list of free holes:
First-fit: Allocate the first hole that is big enough.
Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
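The three policies can be sketched side by side; a simplified model where each free hole is just its size and we return the index of the chosen hole.

```python
def first_fit(holes, n):
    """Index of the first hole that is big enough, else None."""
    for i, h in enumerate(holes):
        if h >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole that is big enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole, if it is big enough, else None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= n else None

# Illustrative free list: holes of 100, 500, 200, 300, 600 units.
holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# 1 3 4  (first-fit: the 500 hole; best-fit: the 300 hole; worst-fit: the 600 hole)
```

Note how best-fit leaves the smallest remainder (88 units) and worst-fit the largest (388 units), exactly the trade-off described above.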
FRAGMENTATION
External fragmentation – total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
Reduce external fragmentation by compaction:
Shuffle memory contents to place all free memory together in one large block.
Compaction is possible only if relocation is dynamic and is done at execution time.
I/O problem:
Latch the job in memory while it is involved in I/O.
We next consider the Best Fit and First Fit (memory allocation) policies.
MEMORY ALLOCATION POLICIES
The Best Fit policy scans all available holes and chooses the one with a size closest to the requirement. It requires a scan of the whole memory and is slow.
First Fit and Next Fit are the fastest and are hence the preferred methods.
As an example, consider a memory in which:
• The OS resides in 6 units.
• User processes share 14 units.
The user process data:
FCFS POLICY
Statement: "Jobs are processed in the order they arrive."
FCFS MEMORY ALLOCATION
THE FIRST FIT POLICY
Statement: "Jobs are always processed from one end, finding the first block of free space which is large enough to accommodate the incoming process."
Given data:
THE FIRST FIT POLICY OF MEMORY ALLOCATION
THE BEST FIT POLICY
Statement: "Jobs are selected after scanning the main memory for all the available holes; having information about all the holes in memory, the hole which is closest to the size of the requirement of the process is allocated."
Given data:
THE BEST FIT POLICY OF MEMORY ALLOCATION
FIXED AND VARIABLE PARTITION
Fixed partition: memory is divided into chunks of the same size, for example 4K, 8K, or 16K bytes.
Allocation: if a single chunk can hold the program/data, it is allocated. If one chunk cannot hold the program/data, then multiple chunks are allocated to accommodate it.
FIXED AND VARIABLE PARTITION
Variable partition: memory is divided into chunks of various sizes. For example, some chunks could be 8K, some 16K, or even more.
Allocation: the program/data are allocated to the chunks that can accommodate the incoming program/data.
BUDDY SYSTEM
The buddy system of partitioning relies on the fact that space allocations can be conveniently handled in sizes that are powers of 2.
There are two ways in which the buddy system allocates space:
- Suppose we have a hole whose size is the closest power of two. In that case, that hole is used for the allocation.
- Otherwise, we look for a hole of the next larger power-of-2 size, split it into two equal halves, and allocate one of these halves.
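The splitting rule can be sketched as follows; a simplified model that tracks only hole sizes (it omits addresses and the coalescing of freed buddies).

```python
def buddy_request(hole_sizes, need):
    """Serve a request of `need` units from power-of-two hole sizes.
    A hole of exactly the right power of two is used as-is; otherwise
    a bigger hole is split into equal halves until one half just fits.
    Returns (allocated_size, remaining_holes)."""
    size = 1
    while size < need:          # smallest power of two >= need
        size *= 2
    holes = sorted(hole_sizes)
    for h in holes:
        if h >= size:
            holes.remove(h)
            while h > size:     # split, keeping the buddy half as a hole
                h //= 2
                holes.append(h)
            return size, sorted(holes)
    return None, sorted(holes)

# A 100-unit request rounds up to 128; the 256 hole is split into two
# 128 buddies, one of which is allocated.
print(buddy_request([256, 1024], 100))  # (128, [128, 1024])
```

The leftover 128 hole is the "buddy" of the allocated block; when the block is freed, the two can be coalesced back into 256.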
BUDDY SYSTEM
We assume that initially we have a space of 1024 K. We also assume that processes arrive and are allocated following the time sequence shown in the figure.
We assume a scan of memory from the beginning; we always use the first hole which accommodates the process.
PAGING
The logical address space of a process can be noncontiguous; a process is allocated physical memory whenever the latter is available.
Divide physical memory into fixed-sized blocks called frames (the size is a power of 2, between 512 bytes and 8192 bytes).
Divide logical memory into blocks of the same size called pages.
Keep track of all free frames.
To run a program of size n pages, we need to find n free frames and load the program.
Set up a page table to translate logical to physical addresses.
Paging suffers from internal fragmentation.
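The frame count and the internal fragmentation in the last frame follow directly from the definitions above; a small worked calculation, assuming 4 KB frames:

```python
PAGE_SIZE = 4096  # assumed 4 KB frames (a power of 2, as above)

def pages_needed(program_bytes):
    """Frames needed for a program; the unused tail of the last frame
    is internal fragmentation."""
    pages = -(-program_bytes // PAGE_SIZE)        # ceiling division
    internal_frag = pages * PAGE_SIZE - program_bytes
    return pages, internal_frag

# A 10000-byte program needs 3 frames; 12288 - 10000 = 2288 bytes
# of the last frame go unused.
print(pages_needed(10000))  # (3, 2288)
```

On average half a frame per process is wasted this way, which is one argument for keeping frames small.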
ADDRESS TRANSLATION SCHEME
The address generated by the CPU is divided into:
Page number (p) – used as an index into a page table which contains the base address of each page in physical memory.
Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.
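The (p, d) split is just bit arithmetic; a sketch assuming a 1024-byte page size and an illustrative page table:

```python
PAGE_SIZE = 1024          # assumed page size of 2**10 bytes
OFFSET_BITS = 10

def split(logical):
    """Split a CPU-generated logical address into (p, d)."""
    p = logical >> OFFSET_BITS        # page number indexes the page table
    d = logical & (PAGE_SIZE - 1)     # offset within the page
    return p, d

page_table = {0: 5, 1: 6, 2: 1}       # page -> frame (illustrative)

def to_physical(logical):
    p, d = split(logical)
    frame = page_table[p]
    return frame * PAGE_SIZE + d

# Logical 2100 = page 2, offset 52; page 2 maps to frame 1.
print(split(2100), to_physical(2100))  # (2, 52) 1076
```

Because the page size is a power of 2, the split needs only a shift and a mask, which is why hardware does it essentially for free.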
ADDRESS TRANSLATION ARCHITECTURE
PAGING EXAMPLE
PAGING EXAMPLE
FREE FRAMES
IMPLEMENTATION OF PAGE TABLE
The page table is kept in main memory.
The page-table base register (PTBR) points to the page table.
The page-table length register (PTLR) indicates the size of the page table.
In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory, or translation look-aside buffers (TLBs).
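How a TLB saves the extra access can be sketched with a tiny software cache in front of the (memory-resident) page table. This is a model only; real TLBs are associative hardware, and the hit/miss counters here are just for illustration.

```python
class TLB:
    """Tiny lookup cache in front of a memory-resident page table."""
    def __init__(self, page_table):
        self.page_table = page_table  # in main memory: costs an access
        self.cache = {}               # the fast lookup
        self.hits = self.misses = 0

    def frame_for(self, page):
        if page in self.cache:
            self.hits += 1            # TLB hit: no page-table access
            return self.cache[page]
        self.misses += 1              # TLB miss: walk the page table
        frame = self.page_table[page]
        self.cache[page] = frame
        return frame

tlb = TLB({0: 3, 1: 7})
refs = [0, 0, 1, 0, 1]                # a reference string with locality
frames = [tlb.frame_for(p) for p in refs]
print(frames, tlb.hits, tlb.misses)   # [3, 3, 7, 3, 7] 3 2
```

With locality of reference, most translations hit the TLB, so the effective access time approaches one memory access rather than two.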
PAGING HARDWARE WITH TLB
MEMORY PROTECTION
Memory protection is implemented by associating a protection bit with each frame.
A valid-invalid bit is attached to each entry in the page table:
"valid" indicates that the associated page is in the process's logical address space, and is thus a legal page.
"invalid" indicates that the page is not in the process's logical address space.
VALID (V) OR INVALID (I) BIT IN A PAGE TABLE
CONCEPT OF VIRTUAL STORAGE
The directly addressable main memory is limited and is quite small in comparison to the logical addressable space.
The actual size of main memory is referred to as the physical memory. The logical addressable space is referred to as virtual memory.
The disk supplements main memory, as shown in the figure. In other words, the processor is fooled into believing that it is accessing a large addressable space; hence the name virtual storage space. The disk area may map to the virtual space requirements and even beyond.
VIRTUAL MEMORY: PAGING
Once we have addressable segments in the secondary memory, we need to bring them within the main memory for physical access by the process. The mechanism of paging helps here.
VIRTUAL MEMORY: PAGING
The primary idea is to always keep in focus that area of memory from which instructions are executed. Once that area is identified, it is loaded into the primary memory in fixed-size pages.
Also, we require that the virtual space is divided into pages of the same size as the frames. This helps to keep the internal fragmentation small.
PAGING : IMPLEMENTATION
DEMAND PAGING
Bring a page into memory only when it is needed:
Less I/O needed
Less memory needed
Faster response
More users
PAGING: REPLACEMENT
When a page is no longer needed, it can be replaced. Consider the example shown in the figure: process P29 has all its pages present in main memory.
Process P6 does not have all its pages in main memory. If a page is present, we record 1 against its entry. The OS also records whether a page has been referenced for a read or for a write; in both cases a reference is recorded.
PAGING: REPLACEMENT
If a page frame is written into, then a modified bit is set. In our example, frames 4, 9, 40, 77, and 79 have been referenced, and page frames 9 and 13 have been modified.
Sometimes the OS may also keep some protection information in the form of rwe (read/write/execute) bits. If a reference is made to a certain virtual address and its corresponding page is not present in main memory, then we say a page fault has occurred.
Consider a process P which gets an allocation of four pages to execute. Further, we assume that the OS collects some information (depicted in the figure) about the use of these pages as this process progresses in execution.
PAGE REPLACEMENT
PAGE REPLACEMENT POLICIES
Let us examine the information depicted in the figure in some detail to determine how it may help in evolving a page replacement policy.
Note that we have the following information available about P.
Based on the choice of policy and the data collected for P, we shall be able to decide which page to swap out to bring in a new page.
PAGE REPLACEMENT POLICIES
LRU policy: LRU expands to "least recently used". This policy suggests that we remove the page whose last usage is farthest from the current time. Note that the current time is 135 and the least recently used page is page 23: it was last used at time unit 125, and every other page was used more recently. So page 23 is the least recently used page, and it should be swapped out if the LRU replacement policy is employed.
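The LRU choice is simply "minimum last-use time"; a sketch using data in the spirit of the example (page 23 last used at 125, the other page numbers and times are illustrative):

```python
def lru_victim(last_used):
    """Pick the page whose last use is farthest in the past."""
    return min(last_used, key=last_used.get)

# Current time 135; page 23 was last used at 125, every other page
# more recently (the other entries are made up for illustration).
last_used = {23: 125, 41: 130, 57: 134, 66: 132}
print(lru_victim(last_used))  # 23
```

In hardware, this "last used" timestamp is approximated (e.g., with reference bits), since keeping exact times per page is expensive.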
OPTIMAL (NFU) PAGE REPLACEMENT
PAGE HIT AND PAGE MISS
If the referenced page is present in main memory, we have a page hit; when a page fault occurs, we say we have a page miss.
PAGE TABLE WHEN SOME PAGES ARE NOT IN MAIN
MEMORY
PAGE FAULT
The first reference to a page that is not in memory will trap to the operating system: a page fault.
1. The operating system looks at another table to decide:
   - Invalid reference: abort.
   - Just not in memory: continue.
2. Get an empty frame.
3. Swap the page into the frame.
4. Reset the tables.
5. Set the validation bit to v.
6. Restart the instruction that caused the page fault.
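The six steps can be sketched as a toy handler; the data structures (a per-page dict with a valid bit, a free-frame list, a backing-store map) are illustrative stand-ins for the real OS tables.

```python
def access(page, page_table, free_frames, backing_store):
    """Follow the steps above: trap on an invalid reference, fetch the
    page from the backing store into an empty frame, fix the table,
    then restart (re-run) the access."""
    entry = page_table.get(page)
    if entry is None:
        raise KeyError("invalid reference: abort")      # step 1
    if entry["valid"]:
        return entry["frame"]                           # page hit
    frame = free_frames.pop()                           # step 2
    backing_store[frame] = page                         # step 3: swap in
    entry.update(frame=frame, valid=True)               # steps 4-5
    return access(page, page_table, free_frames, backing_store)  # step 6

table = {7: {"valid": False, "frame": None}}
frame = access(7, table, free_frames=[9], backing_store={})
print(frame, table[7]["valid"])  # 9 True
```

The recursive call models step 6, restarting the faulting instruction, which now finds the page resident.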
STEPS IN HANDLING PAGE FAULT
PAGING: HW SUPPORT
Recall that we need hardware within the CPU to support paging. The CPU generates a logical address which must be translated to a physical address. The figure indicates the basic address generation and translation.
PAGING: HW SUPPORT
The sequence of steps in the translation of an address is as follows:
- The process generates a logical address. This address is interpreted in two parts: a page number and an offset.
- The page number is used to index the page table to find out whether the page is present in physical memory and, if so, in which frame.
- A page fault is generated if the page is not in physical memory: a trap.
SEGMENTATION
Segmentation also supports the virtual memory concept.
One view of segmentation could be that each part of a process (its code segment, its stack requirements for data and nested procedure calls, its different object modules, etc.) has a contiguous space. This view is uni-dimensional.
SEGMENTATION
A memory-management scheme that supports the user view of memory.
A program is a collection of segments. A segment is a logical unit such as:
- main program
- procedure
- function
- method
- object
- local variables, global variables
- common block
- stack
- symbol table, arrays
USER VIEW OF PROGRAM
SEGMENTATION
Each segment has requirements that vary over time: stacks grow or shrink, and the memory requirements of object and data segments may change during the process's lifetime.
We therefore have a two-dimensional view of a process's memory requirement: each process segment can acquire a variable amount of space over time.
LOGICAL VIEW OF SEGMENTATION
SEGMENTATION ARCHITECTURE
A logical address consists of a two-tuple:
<segment-number, offset>
Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the segment resides in memory
limit – specifies the length of the segment
Protection. With each entry in the segment table associate:
validation bit (= 0 means an illegal segment)
read/write/execute privileges
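The base/limit lookup can be sketched directly from the definitions above; the segment numbers and (base, limit) pairs below are illustrative.

```python
class SegFault(Exception):
    """Models the trap on an offset beyond the segment's limit."""
    pass

# segment table: segment number -> (base, limit); illustrative values
seg_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """<segment-number, offset> -> physical address, trapping when the
    offset is not within the segment's limit."""
    base, limit = seg_table[segment]
    if offset >= limit:
        raise SegFault(f"offset {offset} >= limit {limit}")
    return base + offset

print(translate(2, 53))   # 4353
```

An offset of, say, 1200 into segment 2 would trap, since it exceeds the 1100-byte limit; this is how segmentation enforces per-segment protection.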
ADDRESS TRANSLATION ARCHITECTURE
SEGMENTATION EXAMPLE
SHARING OF SEGMENTS
SEGMENTATION
Segmentation: paging with variable page size.
Advantages:
- memory protection added to the segment table, as in paging
- sharing of memory, similar to paging (but per segment area rather than per page)
Drawbacks:
- allocation algorithms as for memory partitions
- segment table look-ups are needed to identify address values
A user may develop a code segment and share it amongst many applications.
Paging may generate many page faults; segmentation offers no such problems.
SEGMENTATION AND PAGING
In practice, there are segments for the code(s), data, and stack.
Each segment carries the rwe (read/write/execute) information as well.
SEGMENTATION & PAGING
A clever scheme with the advantages of both would be segmentation with paging. In such a scheme each segment would have a descriptor with its pages identified. Such a scheme is shown in the figure.
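The combined scheme can be sketched as a two-level lookup: the segment table first, then that segment's own page table. The page size, segment limits, and page-to-frame maps below are all illustrative.

```python
PAGE_SIZE = 256  # assumed page size for this sketch

# Each segment descriptor carries its limit and its own page table
# (page -> frame); all values are made up for illustration.
segments = {
    0: {"limit": 600, "pages": {0: 11, 1: 4, 2: 9}},
    1: {"limit": 300, "pages": {0: 2, 1: 7}},
}

def translate(seg, offset):
    """Segment table first, then that segment's page table."""
    desc = segments[seg]
    if offset >= desc["limit"]:
        raise ValueError("segment overflow")          # per-segment limit
    page, d = divmod(offset, PAGE_SIZE)               # split the offset
    frame = desc["pages"][page]                       # per-segment pages
    return frame * PAGE_SIZE + d

# Offset 300 in segment 0 = page 1, offset 44; page 1 maps to frame 4.
print(translate(0, 300))  # 1068
```

The per-segment limit preserves segmentation's protection, while paging within each segment removes the need for contiguous physical space.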