
Operating Systems

Memory Management
Module-4

Part 1
Memory Management
• Memory Management Strategies
• Background
• Swapping
• Contiguous Memory Allocation
• Paging
• Structure of Page Table
• Segmentation

MEMORY MANAGEMENT
Main Memory Management Strategies
• Every program to be executed must be loaded into main memory; instructions must be fetched from memory before they can be executed.
• In a multi-tasking OS, memory management is more complex: as processes are swapped in and out of the CPU, their code and data must be swapped in and out of memory as well.
Basic Hardware
• Main memory, cache, and the CPU registers are the only storage that the CPU can access directly.
• A program and its data must be brought into memory from disk for the process to run. Each process has a separate memory space and must access only its own range of legal addresses. Memory protection is required to ensure correct operation, and this protection is provided by hardware.
• Two registers are used - a base register and a limit register.
– The base register holds the smallest legal physical memory address.
– The limit register specifies the size of the range.
• For example, if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive), since 300040 + 120900 = 420940 is the first address beyond its space.
Figure: A base and a limit register define a logical address space
• The base and limit registers can be loaded only by the operating system, using a special privileged instruction. Since privileged instructions can be executed only in kernel mode, only the operating system can load the base and limit registers.
• The CPU must check every memory access generated in user mode to be sure it falls between base and base + limit for that process.
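A minimal sketch of this check, with the registers modeled as ordinary variables purely for illustration (in real hardware the comparison is done by the CPU on every access, not in software):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the CPU's base and limit registers. */
static uint32_t base_register  = 300040;
static uint32_t limit_register = 120900;

/* Illegal accesses trap to the operating system; here we just abort. */
static void trap_addressing_error(uint32_t addr) {
    fprintf(stderr, "addressing error at %u\n", addr);
    exit(EXIT_FAILURE);
}

/* Every user-mode access must satisfy base <= addr < base + limit. */
static void check_access(uint32_t addr) {
    if (addr < base_register || addr >= base_register + limit_register)
        trap_addressing_error(addr);
}

int main(void) {
    check_access(300040);   /* legal: first address of the process    */
    check_access(420939);   /* legal: last address (base + limit - 1) */
    check_access(420940);   /* illegal: traps to the operating system */
    return 0;
}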

Address Binding
• User programs typically refer to memory addresses with symbolic names.
These symbolic names must be mapped or bound to physical memory
addresses.
• Address binding of instructions to memory-addresses can happen at 3
different stages.
– Compile Time - If it is known at compile time where a program will reside in
physical memory, then absolute code can be generated by the compiler,
containing actual physical addresses. However, if the load address changes at
some later time, then the program will have to be recompiled.
– Load Time - If the location at which a program will be loaded is not known at
compile time, then the compiler must generate relocatable code, which
references addresses relative to the start of the program. If that starting address
changes, then the program must be reloaded but not recompiled.
– Execution Time - If a program can be moved around in memory during the
course of its execution, then binding must be delayed until execution time.

Multistep Processing of a User Program

Logical vs. Physical Address Space
• The address generated by the CPU is a logical
address, whereas the memory address where programs
are actually stored is a physical address.
• The set of all logical addresses used by a program
composes the logical address space, and the set of all
corresponding physical addresses composes the
physical address space.

• The run-time mapping of logical to physical addresses is handled by the memory-management unit (MMU).
– One of the simplest schemes is a modification of the base-register scheme.
– The base register is now termed a relocation register.
– The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
– The user program deals with logical addresses; it never sees the real physical addresses.

Figure: Dynamic relocation using a relocation-register
Dynamic Loading
• With dynamic loading, the entire program does not need to be in memory to execute.
• A routine is not loaded until it is called.
• Better memory-space utilization: an unused routine is never loaded.
• All routines are kept on disk in relocatable load format.
• Useful when large amounts of code are needed to handle infrequently occurring cases.
• No special support from the operating system is required.
– Implemented through program design.
– The OS can help by providing libraries to implement dynamic loading, as in the sketch below.
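A rough sketch of dynamic loading on a POSIX system, using dlopen/dlsym to bring a routine into memory only when it is first needed (the library name libstats.so and the routine heavy_report are invented for illustration):

#include <dlfcn.h>
#include <stdio.h>

/* The routine lives in a shared library on disk; it occupies no memory
 * in this process until this function is actually called. */
static void run_infrequent_case(void) {
    void *lib = dlopen("libstats.so", RTLD_LAZY);   /* load on demand */
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return;
    }

    void (*heavy_report)(void) = (void (*)(void)) dlsym(lib, "heavy_report");
    if (heavy_report)
        heavy_report();      /* routine is now in memory and executed */
    else
        fprintf(stderr, "dlsym failed: %s\n", dlerror());

    dlclose(lib);            /* the library may be unloaded afterwards */
}

int main(void) {
    run_infrequent_case();
    return 0;
}

On Linux this would be linked with -ldl; the point is only that the infrequently used routine consumes no memory until the call actually happens.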
Dynamic Linking and Shared Libraries
• With static linking, library modules are fully included in the executable, wasting both disk space and main memory, because every program that uses a certain routine from the library carries its own copy of that routine in its executable code.
• With dynamic linking, only a stub is linked into the executable module; it contains a reference to the actual library module, which is bound in at run time.

Dynamic Linking and Shared Libraries
• The stub is a small piece of code used to locate the appropriate memory-resident library routine.
• This method saves disk space, because the library routines do not need to be fully included in the executable modules, only the stubs.
• An added benefit of dynamically linked libraries (DLLs, also known as shared libraries or shared objects on UNIX systems) is easy upgrades and updates.
Shared libraries
• A library may be replaced by a new version, and all programs that reference the library will automatically use the new one.
• Version information is included in both the program and the library, so that programs do not accidentally execute incompatible versions.
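The stub idea can be pictured with a small, self-contained sketch (a deliberate simplification, not how a real dynamic linker or its procedure-linkage table is implemented): the first call goes through a stub that locates the real routine and overwrites a function pointer, so later calls bypass the stub entirely.

#include <stdio.h>

/* Stand-in for a routine that really lives in a shared library. */
static void real_print_banner(void) {
    puts("hello from the shared library");
}

/* Stand-in for the dynamic linker's symbol lookup. */
static void (*resolve_symbol(const char *name))(void) {
    (void)name;              /* a real linker searches the loaded libraries */
    return real_print_banner;
}

static void print_banner_stub(void);

/* Callers go through this pointer; it initially points at the stub. */
static void (*print_banner)(void) = print_banner_stub;

/* The stub runs only on the first call: it resolves the real routine,
 * replaces the pointer, and then forwards the call. */
static void print_banner_stub(void) {
    print_banner = resolve_symbol("print_banner");
    print_banner();
}

int main(void) {
    print_banner();   /* first call: resolved through the stub   */
    print_banner();   /* later calls: go straight to the routine */
    return 0;
}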
Swapping
• A process must be loaded into memory in order to
execute.
• If there is not enough memory available to keep all
running processes in memory at the same time, then some
processes that are not currently using the CPU may have
their memory swapped out to a fast local disk called the
backing store.
• Swapping is the process of moving a process from memory to the backing store and moving another process from the backing store into memory. Swapping is very slow compared to other operations.
• A variant of swapping policy is used for priority-based
scheduling algorithms.
– If a higher-priority process arrives and wants service, the memory
manager can swap out the lower-priority process and then load
and execute the higher-priority process.
– When the higher-priority process finishes, the lower-priority
process can be swapped back in and continued. This variant of
swapping is called roll out, roll in.

• Swapping depends upon address binding:
– If binding is done at load time, then the process cannot easily be moved to a different location.
– If binding is done at execution time, then a process can be swapped into a different memory space, because the physical addresses are computed during execution.
• The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.

Disadvantages:
• Context-switch time is fairly high.
• If we want to swap a process, we must be sure that it is completely idle. Two solutions:
– Never swap a process with pending I/O.
– Execute I/O operations only into OS buffers.
Figure: Swapping of two processes using a disk as a backing store
Example:
• Assume that the user process is 10 MB in size and the backing store is a standard hard disk with a transfer rate of 40 MB per second.
• The actual transfer of the 10-MB process to or from main memory takes 10000 KB / (40000 KB per second) = 1/4 second = 250 milliseconds.
• Assuming that no head seeks are necessary, and assuming an average latency of 8 milliseconds, the swap time is 258 milliseconds. Since we must both swap out and swap in, the total swap time is about 516 milliseconds.
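The same arithmetic written out as a small check (the 10 MB size, 40 MB/s transfer rate, and 8 ms latency are simply the figures assumed above):

#include <stdio.h>

int main(void) {
    double size_mb    = 10.0;   /* process size                */
    double rate_mb_s  = 40.0;   /* disk transfer rate          */
    double latency_ms = 8.0;    /* average rotational latency  */

    double transfer_ms = size_mb / rate_mb_s * 1000.0;   /* 250 ms           */
    double one_way_ms  = transfer_ms + latency_ms;       /* 258 ms           */
    double total_ms    = 2.0 * one_way_ms;               /* out + in: 516 ms */

    printf("one-way swap: %.0f ms, total: %.0f ms\n", one_way_ms, total_ms);
    return 0;
}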
Contiguous Allocation
• The main memory must accommodate both the operating
system and the various user processes. Therefore we need
to allocate the parts of the main memory in the most
efficient way possible.
• Memory is usually divided into 2 partitions:
– Resident operating system, usually held in low memory
with interrupt vector
– User processes then held in high memory
• Each process is contained in a single contiguous section of
memory.

Memory Mapping and Protection
• Memory protection means protecting the OS from user processes and protecting user processes from one another.
• Memory protection is done using:
– Relocation register: contains the value of the smallest physical address.
– Limit register: contains the range of logical addresses.
• Each logical address must be less than the value in the limit register.
• The MMU maps the logical address dynamically by adding the value in the relocation register; this mapped address is sent to memory.
• When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values, as sketched below.
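A minimal sketch, with the registers modeled as plain variables purely for illustration, of what the dispatcher and the MMU do with these values:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the hardware relocation and limit registers. */
static uint32_t relocation_register;
static uint32_t limit_register;

/* Done by the dispatcher on every context switch. */
static void dispatch(uint32_t process_base, uint32_t process_size) {
    relocation_register = process_base;
    limit_register      = process_size;
}

/* Done by the MMU on every memory reference: the logical address must be
 * below the limit, and the relocation value is then added to it. */
static uint32_t translate(uint32_t logical) {
    if (logical >= limit_register) {
        fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + relocation_register;
}

int main(void) {
    dispatch(300040, 120900);              /* schedule a process         */
    printf("%u\n", translate(0));          /* -> 300040                  */
    printf("%u\n", translate(120899));     /* -> 420939, last legal byte */
    translate(120900);                     /* traps: beyond the limit    */
    return 0;
}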
• Because every address generated by the CPU is checked against these registers, we can protect the OS and other user programs from being modified by the running process.
• The relocation-register scheme provides an effective way to allow the OS size to change dynamically.
• Transient OS code: code that comes and goes as needed; keeping it out of memory when it is not needed saves memory space and the overhead of unnecessary swapping.
Memory Allocation
• Two types of memory partitioning are:
– Fixed-sized partitioning
– Variable-sized partitioning

Fixed-sized Partitioning
• The memory is divided into fixed-sized partitions.
• Each partition may contain exactly one process.
• The degree of multiprogramming is bounded by the number of partitions.
• When a partition is free, a process is selected from the input queue and loaded into the free partition.
• When the process terminates, the partition becomes available for another process, as in the sketch below.
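A toy sketch of fixed-sized partitioning (the partition count and sizes are invented; a real system would track far more state per partition and per process):

#include <stdio.h>

#define NPART 4   /* the degree of multiprogramming is bounded by this */

/* Each fixed partition holds exactly one process or is free. */
struct partition {
    int size_kb;
    int owner_pid;   /* -1 means the partition is free */
};

static struct partition parts[NPART] = {
    {100, -1}, {500, -1}, {200, -1}, {600, -1}
};

/* Load a process from the input queue into the first free partition
 * that is large enough; return the partition index, or -1 if it must wait. */
static int load_process(int pid, int size_kb) {
    for (int i = 0; i < NPART; i++) {
        if (parts[i].owner_pid == -1 && parts[i].size_kb >= size_kb) {
            parts[i].owner_pid = pid;
            return i;
        }
    }
    return -1;
}

/* When a process terminates, its partition becomes available again. */
static void terminate_process(int pid) {
    for (int i = 0; i < NPART; i++)
        if (parts[i].owner_pid == pid)
            parts[i].owner_pid = -1;
}

int main(void) {
    printf("P1 -> partition %d\n", load_process(1, 150));  /* uses the 500 KB partition   */
    printf("P2 -> partition %d\n", load_process(2, 90));   /* uses the 100 KB partition   */
    terminate_process(1);
    printf("P3 -> partition %d\n", load_process(3, 400));  /* reuses the 500 KB partition */
    return 0;
}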

Variable Partition

