
Unit 3

Memory Management and Protection

"Memory management" refers to the function that controls how the
computer's primary memory is allocated and used by different
processes, tracking which memory locations are free or assigned to a
program. "Memory protection" is a security mechanism that prevents
unauthorized access to memory locations, ensuring each program can
only access its designated memory space and cannot interfere with
others.
What is Memory Management?
Memory management mostly involves management of main memory.
In a multiprogramming computer, the Operating System resides in a
part of the main memory, and the rest is used by multiple processes.
The task of subdividing the memory among different processes is
called Memory Management. Memory management is a method in
the operating system to manage operations between main memory and
disk during process execution. The main aim of memory management
is to achieve efficient utilization of memory.
Why is Memory Management Required?
● To allocate and de-allocate memory before and after process
execution.
● To keep track of the memory space used by processes.
● To minimize fragmentation.
● To ensure proper utilization of main memory.
● To maintain data integrity while processes execute.
Logical and Physical Address Space
● Logical Address Space: An address generated by the CPU while
a program is running is known as a "Logical Address". It is also
called a virtual address because it does not physically exist; the
CPU uses it as a reference to reach the actual memory location.
The set of all logical addresses generated from a program's
perspective is the logical address space, and its size corresponds
to the size of the process. A process accesses memory using
logical addresses, which are translated into physical addresses
at run time, so a logical address can change (for example, when
the process is relocated).

● Physical Address Space: An address seen by the memory unit
(i.e., the one loaded into the memory address register of the
memory) is known as a "Physical Address", also called a real
address. The set of all physical addresses corresponding to a
process's logical addresses is its physical address space. The
run-time mapping from virtual to physical addresses is performed
by a hardware device called the Memory Management
Unit (MMU), which computes each physical address. The physical
address of a given memory cell always remains constant.
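The base-and-limit relocation scheme described above can be sketched in a few lines. This is a minimal illustrative model, not a real OS API: the function name `translate` and the numbers are invented for the example.

```python
# Minimal sketch of MMU-style dynamic relocation with a base and a
# limit register (illustrative names and values).

def translate(logical_addr, base, limit):
    """Map a CPU-generated logical address to a physical address.

    Raises MemoryError (modelling a hardware trap) if the address
    falls outside the process's logical address space [0, limit).
    """
    if not 0 <= logical_addr < limit:
        raise MemoryError(f"trap: logical address {logical_addr} out of range")
    return base + logical_addr

# A process loaded at physical address 14000 with a 3000-byte space:
print(translate(346, base=14000, limit=3000))   # 14346
```

The limit check models memory protection: any access outside the process's own space traps before it can touch another process's memory.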
What is Swapping in the Operating System?
Swapping in an operating system is a process that moves data or
programs between the computer’s main memory (RAM) and a
secondary storage (usually a hard disk or SSD). This helps manage
the limited space in RAM and allows the system to run more
programs than it could otherwise handle simultaneously.
It’s important to remember that swapping is used only when data isn’t
available in RAM. Although the swapping process degrades system
performance (disk transfers are slow), it allows larger and more
numerous processes to run concurrently than physical memory alone
could hold.
The CPU scheduler determines which processes are swapped in and
which are swapped out. Consider a multiprogramming environment
that employs a priority-based scheduling algorithm. When a high-
priority process enters the input queue, a low-priority process is
swapped out so the high-priority process can be loaded and executed.
When this process terminates, the low-priority process is swapped
back into memory to continue its execution. This variant of swapping
is sometimes called roll out, roll in. The below figure shows the
swapping process in the operating system:

Swapping comprises two operations: swap-out and swap-in.
● Swap-out moves a process from RAM to the hard disk.
● Swap-in transfers a process from the hard disk back into main
memory (RAM).
Process of Swapping
● When the RAM is full and a new program needs to run, the
operating system selects a program or data that is currently in
RAM but not actively being used.
● The selected data is moved to secondary storage, freeing up
space in RAM for the new program.
● When the swapped-out program is needed again, it can be
swapped back into RAM, replacing another inactive program or
data if necessary.
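The steps above can be modelled with a toy simulation. This is a sketch only: real systems swap pages or whole process images, and the class name `Swapper` and the least-recently-used victim choice are assumptions made for illustration.

```python
# Toy model of swap-out/swap-in: RAM holds a fixed number of
# processes; loading into full RAM swaps out the least recently
# used resident process to backing store.

from collections import OrderedDict

class Swapper:
    def __init__(self, ram_slots):
        self.ram = OrderedDict()   # resident processes, oldest first
        self.disk = set()          # swapped-out processes
        self.ram_slots = ram_slots

    def run(self, proc):
        if proc in self.ram:
            self.ram.move_to_end(proc)                # recently used
            return
        if len(self.ram) == self.ram_slots:
            victim, _ = self.ram.popitem(last=False)  # swap-out the LRU process
            self.disk.add(victim)
        self.disk.discard(proc)                       # swap-in if it was on disk
        self.ram[proc] = True

sw = Swapper(ram_slots=2)
for p in ["A", "B", "C", "A"]:
    sw.run(p)
print(sorted(sw.ram), sorted(sw.disk))   # ['A', 'C'] ['B']
```

Running A, B, C in two slots forces A out to disk; running A again swaps it back in at the expense of B, exactly the swap-out/swap-in cycle described above.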
Real Life Example of Swapping
Imagine you have a desk (RAM) that is too small to hold all your
books and papers (programs). You keep the most important items on
the desk and store the rest in a cabinet (secondary storage). When you
need something from the cabinet, you swap it with something on your
desk. This way, you can work with more items than your desk alone
could hold.

Contiguous Memory Allocation
Contiguous memory allocation is a method used in operating systems
for managing memory where each process is allocated a single
contiguous block of memory: all the memory for a given process is
located next to each other, forming one continuous chunk.
Key Features of Contiguous Memory Allocation:

1. Simple and Fast: It is a simple memory management technique


because it does not require complex algorithms for memory
allocation and deallocation. It’s easy to implement, and
accessing the memory is quick because all addresses are
continuous.
2. Fixed or Variable Size: The allocated memory can either be
fixed in size or variable. In a fixed-size allocation, each process
gets a predetermined amount of memory, while in a variable
size allocation, memory is allocated based on the process’s
needs.
3. Fragmentation:
○ External Fragmentation: Over time, as processes are
loaded and removed, the system may experience external
fragmentation, meaning that the memory becomes
fragmented into small, unusable blocks. This happens
because processes come and go, leaving gaps between the
allocated memory blocks.
○ Internal Fragmentation: If the allocated memory block is
larger than the process needs, unused memory within the
block is wasted (this is internal fragmentation).
4. Limited Flexibility: The main limitation is that if a process
requires more memory than the initially allocated block, it
cannot easily expand without relocating the process. Also, it
might be difficult to find a large enough contiguous block of
free memory, especially in systems with high fragmentation.
Example:

Imagine a system with 100 MB of available RAM. A process requires
20 MB of memory. In contiguous memory allocation, the operating
system will allocate one continuous block of 20 MB for the process. If
another process needs memory, the operating system will search for
another free contiguous block of appropriate size. If free space is
fragmented into smaller blocks, a new process might not get the
required memory.
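The search the operating system performs can be sketched with a first-fit strategy, one common policy for choosing among free holes. The function `first_fit` and the hole list are illustrative, not a real allocator.

```python
# First-fit sketch of contiguous allocation: scan a list of free holes
# (sizes in MB) and carve the request out of the first hole that fits.

def first_fit(holes, request):
    for i, hole in enumerate(holes):
        if hole >= request:
            holes[i] -= request    # shrink the hole by the amount allocated
            return i               # index of the hole that was used
    return None                    # no single contiguous hole is big enough

holes = [100]                      # 100 MB of free RAM, one contiguous block
print(first_fit(holes, 20))        # 0: the 20 MB process fits here
print(holes)                       # [80]
print(first_fit(holes, 90))        # None: only 80 MB remain contiguously
```

Other policies (best-fit, worst-fit) differ only in which qualifying hole they pick; all of them fail the same way when no single hole is large enough.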

Memory allocation
Memory allocation in an operating system refers to the process of
assigning memory spaces to programs or processes during their
execution. The operating system must efficiently allocate and manage
memory to ensure that processes get the memory they need without
wasting resources or interfering with each other.
Types of Memory Allocation:

There are several ways to allocate memory in an operating system,
and they typically fall under two main categories:
1. Static Memory Allocation:
○ Fixed at compile-time: In static memory allocation, the
memory required for a program is determined at compile
time, and it is fixed during program execution.
○ No runtime changes: The size of the memory required by
the program cannot be altered during execution, which can
result in wasted memory (if more memory is allocated than
needed) or memory shortages (if not enough memory is
allocated).
○ Example: Global variables and static arrays in C/C++
programs.
2. Dynamic Memory Allocation:
○ Allocated at runtime: Dynamic memory allocation occurs
at runtime, meaning the operating system allocates
memory for a process as needed. It allows processes to
request and release memory during their execution.
○ Flexible: This method provides flexibility because
memory is allocated only when required, reducing waste,
and the size of memory can be adjusted as needed.
○ Example: Functions like malloc() and free() in
C/C++ allow a program to allocate and release memory
during execution.

Memory fragmentation
Memory fragmentation in an operating system refers to the
phenomenon where free memory is split into small, non-contiguous
blocks over time, which can lead to inefficient memory utilization.
Fragmentation occurs when memory is allocated and deallocated
dynamically, leaving gaps of unused memory that are not large
enough to satisfy new memory allocation requests.
There are two main types of fragmentation:
1. External Fragmentation:

External fragmentation happens when free memory is divided into
small, scattered blocks outside of any process's allocated space. Even
though there might be enough total free memory, it is not contiguous,
and therefore a process may not be able to get the required memory
in one continuous block.
Example:

● Suppose a system's free memory is fragmented into blocks of
20 MB, 5 MB, and 30 MB. If a process needs 35 MB, there is
enough total free memory (55 MB), but there isn't a single
contiguous 35 MB block available. This is external
fragmentation.
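This situation is easy to check mechanically. The sketch below uses holes of 20, 5 and 30 MB; the helper name `can_allocate` is invented for illustration.

```python
# With fragmented free memory, total free space can exceed a request
# while no single hole satisfies it.

def can_allocate(holes, request):
    # A contiguous allocation needs one hole at least as large as the request.
    return any(hole >= request for hole in holes)

holes = [20, 5, 30]
print(sum(holes))                 # 55 MB free in total
print(can_allocate(holes, 35))    # False: external fragmentation
print(can_allocate(holes, 30))    # True: the 30 MB hole fits exactly
```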
2. Internal Fragmentation:

Internal fragmentation occurs when memory is allocated in fixed-size
blocks, and a process uses less memory than the allocated block. The
leftover unused memory within the allocated block is wasted because
it cannot be used by other processes.
Example:

● If a process is allocated a 10 MB block but only needs 8 MB,


there will be 2 MB of unused memory in the allocated block.
This wasted space is called internal fragmentation.
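The wasted space is simple arithmetic: blocks allocated times block size, minus what the process actually needs. The block size and needs below are illustrative.

```python
# Internal fragmentation sketch: with fixed-size allocation units,
# the unused tail of the last block is wasted.

import math

BLOCK_MB = 10  # fixed allocation unit, as in the example above

def internal_waste(need_mb):
    blocks = math.ceil(need_mb / BLOCK_MB)   # whole blocks allocated
    return blocks * BLOCK_MB - need_mb       # MB wasted inside them

print(internal_waste(8))    # 2: one 10 MB block holds an 8 MB need
print(internal_waste(23))   # 7: three blocks (30 MB) hold a 23 MB need
```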
Causes of Fragmentation:

● Dynamic Allocation: Memory fragmentation is primarily
caused by dynamic memory allocation and deallocation, as
processes are created, terminated, or resized.
● Fixed-Size Allocation: Systems that allocate memory in fixed
sizes are prone to internal fragmentation because processes
might not need the entire block.
● Process Creation/Termination: As processes come and go,
they leave behind gaps (external fragmentation), which may not
be usable for new processes.
Impact of Fragmentation:

● Memory Wastage: Fragmentation can lead to inefficient use of
memory, as free space may be scattered across the system and
may not be usable by processes.
● Performance Degradation: In the case of external
fragmentation, processes may not find large enough blocks of
free memory, leading to delays in memory allocation or the need
for memory compaction, which can slow down system
performance.
● Increased Overhead: The operating system may need to
perform additional tasks, such as memory compaction or
swapping, to manage fragmented memory, adding overhead.
Solutions to Fragmentation:

1. Compaction: The operating system can periodically rearrange
the memory to eliminate gaps and create larger contiguous
blocks of free memory. This process is called compaction, but it
can be costly in terms of time and resources.
2. Paging: A technique that breaks the physical memory into
fixed-size blocks called pages. This helps reduce fragmentation
because processes are allocated memory in small, fixed-size
pages, minimizing internal fragmentation. It also allows for non-
contiguous memory allocation, helping to alleviate external
fragmentation.
3. Segmentation: In segmentation, memory is divided into
variable-sized segments based on logical divisions like code,
data, and stack. This reduces fragmentation by better matching
the size of memory blocks to the process's needs.
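Compaction, the first of these solutions, can be sketched as sliding all allocations to one end so the free holes merge. The memory-map representation (a list of owner/size pairs) is invented for illustration.

```python
# Compaction sketch: pack allocated regions together so scattered
# free holes merge into one large contiguous block.

def compact(memory_map):
    """memory_map: list of (owner, size_mb) pairs, owner "free" for holes.
    Returns a new map with allocations packed first and one free block."""
    used = [(owner, size) for owner, size in memory_map if owner != "free"]
    free_total = sum(size for owner, size in memory_map if owner == "free")
    return used + [("free", free_total)]

before = [("P1", 20), ("free", 10), ("P2", 15), ("free", 25)]
print(compact(before))   # [('P1', 20), ('P2', 15), ('free', 35)]
```

The cost hinted at above is visible here: in a real system, "packing" means physically copying P2's 15 MB to a new address and updating its base register, which is why compaction is expensive.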
