Memory Management

Memory management is a crucial function of operating systems that allocates memory to processes, tracks usage, and ensures efficient memory utilization. It involves concepts such as logical and physical addresses, static and dynamic loading, and swapping techniques to manage limited RAM space. Fragmentation, both internal and external, can occur, leading to inefficient memory usage, which compaction aims to address by consolidating free memory.


Dr. H. Summia Parveen
Memory Management
The task of subdividing memory among different processes is called memory management.
Memory management is the operating system's method of managing operations between main memory and disk during process execution.
The main aim of memory management is to achieve efficient utilization of memory.
Memory management keeps track of each and every memory location, regardless of whether it is allocated to a process or free.
It determines how much memory is to be allocated to processes and decides which process will get memory at what time.
It tracks whenever memory is freed or deallocated and updates the status accordingly.

The hotel manager (Operating System) ensures:
1. Guests get rooms based on their needs
2. No two guests get the same room (avoiding conflicts)
3. Rooms are freed when guests leave (efficient space usage)

What is Main Memory?
• Main memory is a repository of rapidly available information shared by the CPU and I/O devices. It is where programs and data are kept while the processor is actively using them.
• Main memory is closely coupled with the processor, so moving instructions and data into and out of the processor is extremely fast.
• Main memory is also known as RAM (Random Access Memory). This memory is volatile: RAM loses its data when a power interruption occurs.
Logical Address Space
• A logical address is generated by the CPU while a program is running. Because it does not exist physically, it is also known as a virtual address. The physical address describes the precise position of the required data in memory. Before a logical address can be used, the MMU must map it to the corresponding physical address. In operating systems, logical and physical addresses are used together to manage and access memory.

Physical Address Space
• A physical address is the actual location in the computer’s
main memory (RAM) where data is stored. Unlike a logical (or
virtual) address, which is used by programs, the physical address
refers to a real place in memory hardware.
• Programs do not use physical addresses directly; they use logical (virtual) addresses. The CPU generates logical addresses when a program runs. The Memory Management Unit (MMU) translates logical addresses into physical addresses behind the scenes.

Memory Management Unit (MMU)
The Memory Management Unit (MMU) plays a pivotal role in this
interplay. It acts as an intermediary, translating logical addresses to
physical addresses. This enables programs to operate in a seemingly
large logical address space, while efficiently utilizing the available
physical memory.
• Example: Consider a scenario where a program attempts to access memory address 'x' in its logical address space. The MMU translates this to the corresponding physical address 'y' and retrieves the data from the actual RAM location. This abstraction allows for efficient multitasking and memory allocation.
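
To make the translation concrete, here is a minimal sketch in C of a base-and-limit (relocation register) scheme, one of the simplest ways hardware can map logical addresses to physical ones. The names mmu_regs_t and translate are hypothetical, and real MMUs use page tables rather than a single base register, but the idea of bounds-checking the logical address and adding a base is the same.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-process relocation registers: the MMU adds the base
 * (relocation) register to every logical address and checks it against
 * the limit register before the access reaches RAM. */
typedef struct {
    uint32_t base;   /* start of the process's partition in physical memory */
    uint32_t limit;  /* size of the partition in bytes */
} mmu_regs_t;

/* Translate a logical address; returns false on a protection fault. */
bool translate(const mmu_regs_t *mmu, uint32_t logical, uint32_t *physical) {
    if (logical >= mmu->limit)
        return false;                 /* address outside the process's space */
    *physical = mmu->base + logical;  /* relocation: base + offset */
    return true;
}

int main(void) {
    mmu_regs_t mmu = { .base = 0x40000, .limit = 0x10000 };
    uint32_t phys;
    if (translate(&mmu, 0x1234, &phys))
        printf("logical 0x1234 -> physical 0x%X\n", phys);  /* 0x41234 */
    return 0;
}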

•Logical address = where the program thinks the data is.
•Physical address = where the data actually is in memory.

•All logical addresses = Logical Address Space


•All physical memory locations = Physical Address Space

Static Loading
• Static Loading involves loading all the necessary program components
into the main memory before the program's execution begins. This means
that both the executable code and data are loaded into predetermined
memory locations. This allocation is fixed and does not change during the
program's execution. While it ensures direct access to all required
resources, it may lead to inefficiencies in memory usage, especially if the
program doesn't utilize all the loaded components.

Dynamic Loading
• A program's components are loaded into the main memory only when
they are specifically requested during execution. This results in a more
efficient use of memory resources as only the necessary components
occupy space. Dynamic loading is particularly advantageous for programs
with extensive libraries or functionalities that may not be used in every
session.
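
As an illustration of on-demand loading, the sketch below uses the POSIX dlopen/dlsym interface to bring a shared library into the process only when it is actually needed. The library name libm.so.6 and the symbol cos are just convenient examples; on Linux this would typically be built with something like gcc demo.c -ldl.

#include <dlfcn.h>   /* POSIX dynamic loading interface */
#include <stdio.h>

int main(void) {
    /* Nothing from the math library occupies this process's memory
     * until dlopen is called. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the one symbol we need, then call it through the pointer. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);   /* the library may now be unloaded from memory */
    return 0;
}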

Static Linking
•Static linking means all necessary libraries and modules are included
directly in the final executable during compile time.
•The library code is copied into the executable file itself.
•This creates a self-contained executable, which:
• Does not depend on external libraries at runtime
• Is portable – can run on any system without needing extra files
•Benefits:
•Reliable: Program always has what it needs to run
•Portable: No need to install separate libraries
•Simpler deployment (just send one file)
•Drawbacks:
• Larger file size, because libraries are bundled in
• Redundancy: If many programs use the same library, it gets copied into
each one
• Updating a shared library means recompiling all statically linked programs to get the fix or update
What Happens When You Run a .exe File?
•When you click on an executable file (.exe), the operating system loads all the
necessary parts of the program into the process’s virtual address space (i.e.,
the memory space the program thinks it has).
•Most programs also need to use functions from system libraries (like printing
to the screen, reading files, etc.).
•Static linking happens during compilation (when the source code is turned into
a binary).
•It takes: relocatable object files (compiled pieces of the program and libraries) and command-line arguments (linker flags, library paths, etc.)
•The linker then combines everything into one final, fully linked executable
file.
•For static linking, the linking process happens before the program runs—not at
runtime.
•Once linked, the .exe file has everything needed and can be loaded and
executed directly.
Dynamic Linking

• When you run a dynamically linked program, a small built-in function (the dynamic loader) runs first.
• This function's job is to load the dynamic libraries (DLLs or shared libraries) the program needs.
• It figures out which functions and variables the program is using from those libraries.
• It loads the libraries into memory and connects everything so the program can use them.
• The program doesn't know the exact memory location of the libraries.
• So, libraries are written using Position-Independent Code (PIC): the library code can work from any memory address, wherever the system loads it.

Swapping
• Swapping is when data or programs are moved between the
computer's RAM (main memory) and secondary storage (like a hard
disk or SSD).
• The purpose of swapping is to manage limited RAM space and let
the system run more programs than the RAM can hold at once.
• Swapping happens only when there’s not enough space in RAM
for new data.
• It’s like moving things into storage to make room for something
important. While it allows running more programs, it can slow
down the system because the CPU has to access slower secondary
storage.

Types of Swapping
1. Swap-out: a program or data is moved from RAM to secondary storage (usually a hard disk or SSD) to free up space.
2. Swap-in: a program or data that was swapped out is moved back from secondary storage into RAM when it is needed again.

How Swapping Works

• When RAM is full and a new program needs to run:
   • The operating system picks a program or data that is currently in RAM but not actively used.
   • It swaps it out to secondary storage, making room in RAM for the new program.
• When the swapped-out program is needed again:
   • It is swapped back into RAM to continue running.
   • If RAM is still full, another program might get swapped out to make space.
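
Below is a minimal sketch, in C, of the swap-out decision described above: when RAM is full, pick the resident process that has been idle the longest and move it to the swap area. The structure proc_t and the function pick_victim are hypothetical; real kernels swap at page granularity and use far more sophisticated policies.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical bookkeeping for resident processes. */
typedef struct {
    int  pid;
    bool in_ram;
    long last_used;   /* logical timestamp of the process's last activity */
} proc_t;

/* Choose the least recently used resident process as the swap-out victim. */
int pick_victim(proc_t procs[], int n) {
    int victim = -1;
    for (int i = 0; i < n; i++) {
        if (procs[i].in_ram &&
            (victim < 0 || procs[i].last_used < procs[victim].last_used))
            victim = i;
    }
    return victim;   /* -1 means nothing is resident */
}

int main(void) {
    proc_t procs[] = {
        { .pid = 10, .in_ram = true, .last_used = 500 },
        { .pid = 11, .in_ram = true, .last_used = 120 },
        { .pid = 12, .in_ram = true, .last_used = 900 },
    };
    int v = pick_victim(procs, 3);
    printf("swap out pid %d to free RAM\n", procs[v].pid);   /* pid 11 */
    return 0;
}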

Contiguous memory allocation
In contiguous memory allocation, each process is contained in a single contiguous section of memory. The memory given to a process stays together in one place; it is not spread across scattered partitions throughout the whole memory space.
• Contiguous memory allocation is a memory management technique in which, whenever a user process requests memory, a single contiguous block of memory is given to that process according to its requirement.
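For example (with made-up numbers), a process that requests 300 KB must receive one unbroken 300 KB region of RAM; two separate free regions of 200 KB and 100 KB cannot be combined to satisfy the request under contiguous allocation.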

Fixed Partition Scheme

Each process is allotted a fixed-size contiguous block in main memory. The complete memory is divided in advance into contiguous blocks of fixed size, and each time a process arrives, it is allotted one of the free blocks. Irrespective of the size of the process, each process is allotted a block of the same size of memory space. This technique is also called static partitioning.
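
A minimal sketch of the idea in C, assuming four equal 256 KB partitions; the names partition_t and allocate_fixed are hypothetical. Note how a small process still consumes an entire partition, which is the source of the internal fragmentation discussed later.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

#define NUM_PARTITIONS 4
#define PARTITION_SIZE (256 * 1024)   /* every block is 256 KB */

/* One fixed-size partition: either free or holding exactly one process. */
typedef struct {
    bool in_use;
    int  pid;
} partition_t;

/* Give the arriving process any free block; block sizes never change,
 * so even a tiny process occupies a whole 256 KB partition. */
int allocate_fixed(partition_t parts[], int pid, size_t request) {
    if (request > PARTITION_SIZE)
        return -1;                    /* process too big for any block */
    for (int i = 0; i < NUM_PARTITIONS; i++) {
        if (!parts[i].in_use) {
            parts[i].in_use = true;
            parts[i].pid = pid;
            return i;                 /* index of the allotted partition */
        }
    }
    return -1;                        /* no free partition: the process must wait */
}

int main(void) {
    partition_t parts[NUM_PARTITIONS] = {0};
    int slot = allocate_fixed(parts, 42, 10 * 1024);       /* a 10 KB process */
    printf("pid 42 got partition %d; 246 KB of it stays unused\n", slot);
    return 0;
}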

Variable-size Partition Scheme

• No fixed blocks or partitions are made in memory. Instead, each process is allotted a variable-sized block depending upon its requirements. Whenever a new process wants some space in memory, that amount of space is allotted to it if available. Hence, the size of each block depends on the size and requirements of the process that occupies it.
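
A minimal sketch of the variable-size scheme in C, using a first-fit search over a list of free holes. The structure hole_t and the function first_fit are hypothetical; best-fit and worst-fit placement differ only in which hole they choose.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical free-hole list for a variable-size partition scheme:
 * each entry describes one contiguous run of free memory. */
typedef struct {
    size_t start;
    size_t size;
} hole_t;

/* First-fit: carve the request out of the first hole that is large
 * enough. Returns the start address, or (size_t)-1 if nothing fits. */
size_t first_fit(hole_t holes[], int n, size_t request) {
    for (int i = 0; i < n; i++) {
        if (holes[i].size >= request) {
            size_t addr = holes[i].start;
            holes[i].start += request;   /* shrink the hole ... */
            holes[i].size  -= request;   /* ... by exactly what was asked for */
            return addr;
        }
    }
    return (size_t)-1;
}

int main(void) {
    hole_t holes[] = { { .start = 0, .size = 100 }, { .start = 400, .size = 500 } };
    size_t a = first_fit(holes, 2, 180);           /* too big for the first hole */
    printf("180-byte process placed at address %zu\n", a);   /* prints 400 */
    return 0;
}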

What is Fragmentation?

• When processes are moved into and out of main memory, the available free space in primary memory is broken into smaller pieces. Memory then cannot be allocated to a process when every available piece is smaller than the amount of memory the process requires, so those blocks of memory stay unused. This issue is called fragmentation.

Internal Fragmentation

• Internal fragmentation occurs when the memory block assigned to a process is larger than the amount of memory required by the process. In such a situation, part of the block is left unutilized, because it cannot be used by any other process. Internal fragmentation can be decreased by assigning the smallest free partition that is still large enough for the process.
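For example, if a process needing 18 KB is placed in a fixed 32 KB partition, then 32 KB − 18 KB = 14 KB of that partition remains unused and unavailable to any other process; those 14 KB are internal fragmentation.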

External Fragmentation

• External fragmentation occurs when a storage medium, such as a hard disk or solid-state drive, has many small blocks of free space scattered throughout it. This can happen when a system creates and deletes files frequently, leaving many small blocks of free space on the medium. When the system needs to store a new file, it may be unable to find a single contiguous block of free space large enough and must instead store the file in multiple smaller blocks. This causes external fragmentation and performance problems when accessing the file.
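The same thing happens in RAM: for example, memory may contain scattered free holes of 4 KB, 6 KB and 8 KB (18 KB in total), yet a request for 12 KB of contiguous space still cannot be satisfied, because no single hole is large enough.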

Compaction
• Moving all the processes toward the top or the bottom of memory so that the free memory is gathered into a single contiguous block is called compaction. Compaction is costly and often undesirable to perform because it interrupts all the running processes in memory while they are moved.
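
A minimal sketch of compaction in C: slide every allocated block toward address 0 so that all the free space ends up as one contiguous hole at the top. The structure block_t and the function compact are hypothetical, and a real operating system would also have to copy each block's contents and update the owning process's base register while that process is paused.

#include <stdio.h>
#include <stddef.h>

/* One allocated block of contiguous memory belonging to a process. */
typedef struct {
    int    pid;
    size_t start;   /* current start address of the block */
    size_t size;    /* block size in bytes */
} block_t;

/* Blocks are assumed to be sorted by their start address. */
void compact(block_t blocks[], int n) {
    size_t next_free = 0;
    for (int i = 0; i < n; i++) {
        /* In a real OS the contents would be copied here (e.g. memmove)
         * and the process's relocation register updated to next_free. */
        blocks[i].start = next_free;
        next_free += blocks[i].size;
    }
    /* Everything from next_free upward is now a single contiguous hole. */
}

int main(void) {
    block_t blocks[] = {
        { .pid = 1, .start = 0,   .size = 100 },
        { .pid = 2, .start = 300, .size = 200 },   /* hole at 100..299 */
        { .pid = 3, .start = 700, .size = 150 },   /* hole at 500..699 */
    };
    compact(blocks, 3);
    for (int i = 0; i < 3; i++)
        printf("pid %d now starts at %zu\n", blocks[i].pid, blocks[i].start);
    return 0;
}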

