
UNIT 4

MEMORY MANAGEMENT AND VIRTUAL MEMORY


INDEX:

• LOGICAL ADDRESS SPACE vs. PHYSICAL ADDRESS SPACE
• SWAPPING
• CONTIGUOUS ALLOCATION
• PAGING
• SEGMENTATION
• SEGMENTATION WITH PAGING
• DEMAND PAGING
• PAGE REPLACEMENT
• PAGE REPLACEMENT ALGORITHMS

INTRODUCTION:
• Memory is a collection of data stored in a specific format. It holds
instructions and process data.
• Memory comprises a large array of words or bytes.
• Each word or byte has its own address (location).
• Every program, along with the information it needs, must be loaded
into main memory during execution.
• The CPU fetches instructions from memory according to the value of
the program counter.
• To achieve a degree of multiprogramming and proper utilization of
memory, memory management is important. (imp point)

WHAT IS MEMORY MANAGEMENT?

• Memory management is the task of subdividing memory among
different processes.
• It is the method by which the operating system manages operations
between main memory and disk during process execution.
• The main aim of memory management is to achieve efficient
utilization of memory while supporting a degree of multiprogramming.

WHAT IS THE NEED FOR MEMORY MANAGEMENT?

1. To allocate and de-allocate memory before and after process execution.
2. To keep track of the memory space used by processes.
3. To minimize fragmentation.
4. To make proper use of main memory.
5. To maintain data integrity during process execution.

LOGICAL ADDRESS SPACE vs. PHYSICAL ADDRESS SPACE

• Logical address space and physical address space are key concepts
in memory management in computer systems.

DEFINITION OF LOGICAL ADDRESS SPACE:

• An address generated by the CPU is called a logical address.
• A logical address is also called a virtual address.
• Logical addresses are generated during the compile-time stage of
address binding.
• The set of all logical addresses generated by a program is known as
the logical address space.
DEFINITION OF PHYSICAL ADDRESS SPACE:

• A physical address is the address seen by the memory unit (RAM).
• The physical address corresponding to a logical address is
generated during load time.
• The set of all physical addresses corresponding to the logical
addresses is called the physical address space.
• The run-time mapping from virtual addresses to physical addresses
is done by a hardware device known as the Memory Management
Unit (MMU).

ASPECT                   | LOGICAL ADDRESS SPACE                     | PHYSICAL ADDRESS SPACE
GENERATION               | Created by the CPU during program         | Managed by the operating system and
                         | execution                                 | hardware (MMU)
VISIBILITY               | Seen and used by programs and processes   | Not directly seen by user programs
TRANSLATION              | Must be translated to physical addresses  | Directly accessed by the memory unit
FLEXIBILITY              | Allows relocation, swapping, and virtual  | Constrained by the physical memory
                         | memory                                    | installed
SIZE                     | Can be larger than physical memory due    | Limited by the actual physical RAM
                         | to virtual memory                         | size
NATURE                   | Also called virtual addresses             | Refers to physical memory addresses
USAGE                    | Used by CPU and processes to access       | Used by memory hardware to load
                         | memory                                    | instructions and data
VISIBILITY TO PROGRAMMER | Visible and used in code                  | Hidden from the programmer
MAPPING                  | Translated to physical addresses by the   | Directly used by the hardware
                         | MMU                                       |
IMPORTANCE               | Enables virtual memory, process           | Directly influences memory access
                         | isolation, and memory management          | efficiency
Table 4.1: Summary of Logical Address Space vs. Physical Address Space
Memory-Management Unit (MMU):

• The MMU is a hardware device that maps virtual addresses to
physical addresses.
• In the MMU scheme, the value in the relocation register is added to
every address generated by a user process at the time it is sent to
memory.
• The user program deals only with logical addresses; it never sees
the real physical addresses.

Fig 4.1: Dynamic Relocation using Relocation Register.

• From Fig 4.1 above:
• The relocation register is also known as the base register.
• If logical addresses range from 0 to max, then physical addresses
range from R to R+max, where R is the value in the relocation
register.
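The base-register mapping above can be sketched in a few lines of Python. This is a simulation only; the function name and the explicit limit check are illustrative, not part of any real MMU interface:

```python
# Simulate dynamic relocation with a relocation (base) register.
# If logical addresses run 0..max, then physical = R + logical,
# where R is the value held in the relocation register.

def to_physical(logical: int, relocation: int, limit: int) -> int:
    """Translate a logical address to a physical address.

    relocation -- value R in the relocation (base) register
    limit      -- size of the logical address space (valid: 0..limit-1)
    """
    if not 0 <= logical < limit:
        # A real MMU would raise an addressing trap here.
        raise MemoryError(f"logical address {logical} outside 0..{limit - 1}")
    return relocation + logical

# A process with logical addresses 0..4095, loaded at physical 14000:
print(to_physical(0, 14000, 4096))    # 14000  (R + 0)
print(to_physical(346, 14000, 4096))  # 14346  (R + 346)
```

Note how the user program only ever names the offsets 0 and 346; the physical locations 14000 and 14346 exist only on the memory side of the translation.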

ADDRESS BINDING: (imp)

• Address binding is the process of mapping program addresses
(logical/virtual addresses) to physical addresses in the computer's
memory.
• This mapping can occur at different stages of a program's lifecycle:
compile time, load time, or execution time.
• The purpose of address binding is to ensure that the instructions
and data of a program are correctly accessed during execution,
regardless of where the program is actually located in memory.

IMPORTANCE OF ADDRESS BINDING:

• Memory management: helps use memory efficiently by allowing
relocation and virtual memory.
• Process isolation: ensures that processes do not interfere with each
other's memory.
• Flexibility: enables features such as swapping and dynamic memory
allocation, enhancing system performance and stability.

TYPES OF ADDRESS BINDING:

1. Compile-Time Binding:
• Occurs when the memory location of the program is known at
compile time.
• The compiler generates absolute code if the memory location is
known a priori.
• If the location changes, the program must be recompiled.
• E.g., purchase of a railway ticket (seat fixed in advance).

2. Load-Time Binding:
• Used when the memory location is not known at compile time.
• The compiler generates relocatable code, and the binding is
done when the program is loaded into memory.
• Allows different memory locations without recompiling.
• E.g., booking of a flight ticket (seat assigned at check-in).

3. Execution-Time Binding:
• Necessary if the process can move during execution.
• Binding is delayed until run time and managed dynamically by
the hardware, typically using a Memory Management Unit (MMU).
• Allows greater flexibility and more efficient memory use.
• E.g., exchanging a seat during the flight.

Fig 4.2.: Multistep Processing of a User Program

• Fig 4.2 above shows the different stages of a program's life cycle at
which address binding can take place.
DYNAMIC LOADING:
• Dynamic loading is a mechanism by which a program loads a
library or other module into memory at run time, rather than at the
start of the program.
• This technique helps optimize memory usage and improve
performance, since only the necessary modules are loaded, and only
when they are needed.
BENEFITS OF DYNAMIC LOADING:
1. A routine is not loaded until it is called.
2. Better memory-space utilization: an unused routine is never loaded.
3. Useful when large amounts of code are needed to handle
infrequently occurring cases.
4. No special support from the operating system is required; it is
implemented through program design.
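As an illustration of the idea at the language level rather than the OS level, Python's importlib can defer loading a module until the routine that needs it is actually called (the function name and the choice of the statistics module are just for the example):

```python
import importlib

def median_of(values):
    # The statistics module is imported only when this routine runs,
    # not at program start -- the essence of dynamic loading: code for
    # an infrequently used case stays out of memory until needed.
    stats = importlib.import_module("statistics")
    return stats.median(values)

print(median_of([3, 1, 2]))  # 2
```

If median_of is never called, the statistics module is never loaded, matching benefit 2 above.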

DYNAMIC LINKING AND SHARED LIBRARIES:

• Dynamic linking is the process of linking a program with one or
more shared libraries during the program's execution rather than at
compile time. The shared libraries are linked when the program is
loaded into memory.
• Stub: a small piece of code used to locate the appropriate
memory-resident library routine.
• The stub replaces itself with the address of the routine and then
executes the routine.
• Dynamic linking is typically managed by the operating system's
loader/linker.

SHARED LIBRARIES:
• Shared libraries, known as dynamic link libraries (DLLs) on
Windows and shared objects (.so) on Unix-like systems, are
collections of routines, functions, and resources that multiple
programs can use simultaneously.
• These libraries are loaded into memory at run time and are shared
among different programs, allowing them to use common
functionality without including the library's code directly in each
program.
KEY CHARACTERISTICS / BENEFITS OF SHARED LIBRARIES:
1. Reusability:
• Code in shared libraries can be used by multiple programs,
promoting code reuse and reducing redundancy.
2. Memory Efficiency:
• Since the shared library code is loaded into memory only once,
it reduces the overall memory footprint when multiple
programs use the same library.
3. Modularity:
• Programs can be modularized, with common functionality
separated into shared libraries, making them easier to maintain
and update.
4. Updatability:
• Shared libraries can be updated independently of the
programs that use them, allowing easier distribution of bug
fixes and updates.
5. Dynamic Linking:
• Programs link to shared libraries at run time.
• The executable contains references to the shared library.
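A minimal sketch of calling into a shared library at run time, assuming a Unix-like system where Python's ctypes can reach the C library already mapped into the process:

```python
import ctypes

# CDLL(None) returns a handle to the symbols already loaded into the
# running process on Unix-like systems; the shared C library (libc)
# is among them.
libc = ctypes.CDLL(None)

# Call abs() from the shared C library. The routine's machine code
# lives in one shared object mapped into every process that uses
# libc -- it is not duplicated in each executable.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]
print(libc.abs(-42))  # 42
```

The Python interpreter itself never contained the code for abs; it was resolved at run time from the memory-resident shared library, which is exactly the job the stub performs in the scheme described above.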

OVERLAYS:
• Overlaying is a memory management technique that enables
programs larger than the available physical memory to run, by
loading only the currently needed portions (overlays) of the
program into memory at any given time.
• This allows the system to handle larger applications within the
constraints of limited memory resources.
• A classic example of overlays is a two-pass assembler, in which
Pass 1 and Pass 2 are mutually exclusive.
• Assume the available main memory size is 150KB and the total
code size is 200KB. The components of the code are:

Pass 1.............................................70KB
Pass 2.............................................80KB
Symbol table...............................30KB
Common routines.....................20KB
---------------------------------------
TOTAL CODE SIZE……………..200KB
---------------------------------------
• Since the total code size is 200KB and main memory is only 150KB,
the two passes cannot be kept in memory together.
• With the overlay technique (adding a 10KB overlay driver that
stays resident):
o For Pass 1, the memory needed is 70KB + 30KB + 20KB +
10KB = 130KB.
o For Pass 2, the memory needed is 80KB + 30KB + 20KB +
10KB = 140KB.
o So a partition of at least 140KB is enough to run this code.
o Fig 4.3 below shows the overlay technique for the two-pass
assembler.
o From Fig 4.3:
o An overlay driver is a specialized piece of code that manages
memory overlays in systems with limited memory resources.
o Overlay drivers are responsible for loading and unloading
different parts (overlays) of a program into memory as needed,
ensuring that only the currently required overlay is resident in
memory.
o The overlay driver code is user-written code.
Fig 4.3.: Overlays for 2-pass assembler
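The overlay arithmetic above can be checked with a short script (the 10KB entry is the resident overlay driver from the example; the variable names are illustrative):

```python
# Component sizes in KB from the two-pass assembler example.
PASS1          = 70
PASS2          = 80
SYMBOL_TABLE   = 30
COMMON_ROUTINE = 20
OVERLAY_DRIVER = 10   # resident driver that swaps overlays in and out
MAIN_MEMORY    = 150

# Each overlay needs its pass plus everything that must stay resident.
pass1_need = PASS1 + SYMBOL_TABLE + COMMON_ROUTINE + OVERLAY_DRIVER
pass2_need = PASS2 + SYMBOL_TABLE + COMMON_ROUTINE + OVERLAY_DRIVER
partition  = max(pass1_need, pass2_need)

print(pass1_need)  # 130
print(pass2_need)  # 140
print(partition)   # 140 -- fits in the 150KB of available memory
```

The total code (200KB) never fits at once, but because the two passes are mutually exclusive, a 140KB partition suffices when overlays are used.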
