Module4 Os

The document provides an overview of memory management in operating systems, detailing concepts such as contiguous and non-contiguous memory allocation, fragmentation, paging, and segmentation. It explains the importance of managing memory effectively due to limited resources and discusses various memory allocation algorithms. Additionally, it covers virtual memory and demand paging as methods to enhance system performance and manage multiple processes.


Department: Information Science & Engineering

Course Name: OPERATING SYSTEMS (Module 4)


Module4
MEMORY MANAGEMENT

 Contiguous Memory Allocation
 Fragmentation
 Paging
 Segmentation
 Virtual Memory: Demand Paging
 Page Replacement
 Page Replacement Algorithms
 Allocation of Frames
 Thrashing
Memory Management in Operating System
• Memory is an important part of the computer, used to store data.
• Its management is critical to the computer system because the amount of main memory available is limited, and at any time many processes compete for it.
• Moreover, to increase performance, several processes are executed simultaneously. We must therefore keep several processes in main memory, which makes effective management even more important.
Introduction
• Main memory and the registers built
into the processor itself are the only
storage that the CPU can access
directly
• If the data are not in memory, they
must be moved there before the CPU
can operate on them.
• For each process, the base and limit
registers define its logical address
space
• Base register holds the smallest legal
physical memory address
• Limit register specifies the size of the
range of accessible addresses
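The base/limit protection check described above can be sketched in Python (the register values below are made up for illustration, not taken from the slides):

```python
def check_and_relocate(logical_address, base, limit):
    """Sketch of the hardware check: a logical address is legal only if it
    lies in [0, limit); the base register then relocates it into physical
    memory.  Returns (is_legal, physical_address)."""
    legal = 0 <= logical_address < limit
    physical = base + logical_address if legal else None
    return legal, physical

# Hypothetical process: base register = 300040, limit register = 120900
print(check_and_relocate(50, base=300040, limit=120900))      # legal, 300090
print(check_and_relocate(130000, base=300040, limit=120900))  # illegal -> trap
```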
Logical and Physical Address in Operating System
• What is a Logical Address?
• A logical address, also known as a virtual
address, is an address generated by the
CPU during program execution.
• It is the address seen by the process and
is relative to the program’s address space.
• The process accesses memory using
logical addresses, which are translated by
the operating system into physical
addresses.
• Because a logical address does not exist physically, it is also referred to as a virtual address.
• What is a Physical Address?
• A physical address is the actual address in the main memory where data is
stored. It is a location in physical memory, as opposed to a virtual address.
• The Memory Management Unit (MMU) translates logical addresses into physical addresses.
• The term “physical address” describes the precise position of the required data in memory. The MMU must map each logical address to a physical address before use, because the user program works only with logical addresses and believes it is operating in that logical address space, while the program requires physical memory to execute.
• The Memory Management Unit (MMU) plays a pivotal role in this
interplay. It acts as an intermediary, translating logical addresses to physical
addresses. This enables programs to operate in a seemingly large logical
address space, while efficiently utilizing the available physical memory.
Contiguous Memory Management
• Contiguous memory allocation is a memory allocation strategy. As the name implies, we utilize
this technique to assign contiguous blocks of memory to each task. Thus, whenever a process
asks to access the main memory, we allocate a continuous segment from the empty region to
the process based on its size. In this technique, memory is allotted in a continuous way to the
processes.
1. Fixed Partition Scheme
• In the fixed partition scheme
memory is divided into fixed
number of partitions.
• Fixed means number of
partitions are fixed in the
memory. In the fixed partition,
in every partition only one
process will be accommodated.
• Degree of multi-programming is
restricted by number of
partitions in the memory.
• Maximum size of the process is restricted by the maximum size of the partition. Every partition is associated with limit registers.
Internal fragmentation: wastage of memory within a fixed partition.
Internal Fragmentation
• Internal Fragmentation is a problem
that occurs due to poor memory
allocation and it results in wastage of
memory.
• When a process is loaded into the system, it requests the memory essential for its working.
• The operating system allocates a partition to the process, but if the partition is larger than the process requires, the extra space within the partition goes unused.
• This unused memory inside an allocated partition is the internal fragmentation found in the operating system.
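As a quick worked example (the partition and process sizes below are hypothetical), internal fragmentation is simply the partition size minus the process size:

```python
PARTITION_SIZE = 4096  # fixed partition size in KB (illustrative)

def internal_fragmentation(process_size_kb):
    """Unused space left inside the fixed partition holding the process."""
    if process_size_kb > PARTITION_SIZE:
        raise ValueError("process larger than maximum partition size")
    return PARTITION_SIZE - process_size_kb

print(internal_fragmentation(3000))  # 1096 KB wasted inside the partition
```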
• Disadvantages of the fixed partition scheme
• Maximum process size <= maximum partition size.
• The degree of multiprogramming is limited by the number of partitions.
• Internal fragmentation is found in the fixed partition scheme.

• To overcome the problem of internal fragmentation, the variable partition scheme is used instead of the fixed partition scheme.
2. Variable Partition Scheme
• In the variable partition scheme, memory initially consists of a single contiguous free block.
• Whenever a process request arrives, a partition of the required size is carved out of memory. If smaller processes keep arriving, the larger free partitions are split into smaller ones.
• Memory is thus divided into partitions according to process size, and partition sizes vary.
• One partition is allocated to each active process.
External fragmentation is found in the variable partition scheme. To overcome it, the compaction technique is used, or non-contiguous memory management techniques are used.
External Fragmentation
• The total amount of free primary memory is sufficient to hold a process, but it cannot be used because it is non-contiguous.
External fragmentation occurs when dynamic memory allocation leaves behind small, unusable gaps between allocated regions. If there is too much external fragmentation, the amount of usable memory is substantially reduced: there is enough total free space to satisfy a request, but it is not contiguous. This is known as external fragmentation.
Solution of External Fragmentation
• 1. Compaction
• Moving all the processes toward the top or the bottom of memory so that the free memory forms a single contiguous block is called compaction. Compaction is undesirable to implement because it interrupts all the running processes in memory.
• 2. Non-contiguous memory allocation (discussed later)
MEMORY ALLOCATION ALGORITHMS:
• 1. First Fit: The first fit approach allocates the first free partition or hole large enough to accommodate the process. The search finishes as soon as the first suitable free partition is found.
Advantage
• Fastest algorithm, because it searches as little as possible.
Disadvantage
• The leftover unused areas after allocation are wasted if they are too small, so a later request for a larger amount of memory may not be satisfiable.
First fit
2. Best Fit :
The best fit deals with allocating the smallest free partition which
meets the requirement of the requesting process.
This algorithm first searches the entire list of free partitions and
considers the smallest hole that is adequate.
It then tries to find a hole which is close to actual process size
needed.
Advantage
Memory utilization is better than first fit because it allocates the smallest adequate free partition.
Disadvantage
It is slower and may even tend to fill up memory with tiny useless
holes.
Best Fit
3. Worst Fit:
The worst fit approach locates the largest available free portion, so that the portion left over is big enough to be useful. It is the reverse of best fit.
Advantage: Reduces the rate of production of small gaps.
Disadvantage: If a process requiring a larger amount of memory arrives at a later stage, it cannot be accommodated, because the largest hole has already been split and occupied.
Worst Fit
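The three allocation strategies can be compared with a small Python sketch (the hole sizes and the 212 KB request below are illustrative values, not from the slides):

```python
def allocate(holes, request, strategy):
    """Return the index of the hole chosen for `request` under the given
    strategy, or None if no hole is large enough.  `holes` is a list of
    free-partition sizes in a variable-partition memory."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":   # first hole big enough (lowest index)
        return min(candidates, key=lambda c: c[1])[1]
    if strategy == "best":    # smallest adequate hole
        return min(candidates)[1]
    if strategy == "worst":   # largest hole
        return max(candidates)[1]
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]     # free hole sizes in KB (made up)
print(allocate(holes, 212, "first"))  # index 1 -> the 500 KB hole
print(allocate(holes, 212, "best"))   # index 3 -> the 300 KB hole
print(allocate(holes, 212, "worst"))  # index 4 -> the 600 KB hole
```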
PAGING
• Paging is a technique that divides memory into fixed-sized blocks.
• The main memory is divided into blocks known as Frames and
• The logical memory(process) is divided into blocks known as Pages.
• Paging requires extra time for the address conversion, so we use a
special hardware cache memory known as TLB.
• This concept of Paging in OS includes dividing each process in the
form of pages of equal size and also, the main memory is divided in
the form of frames of fixed size.
• It is also important to have the pages and frames of equal size for
mapping and maximum utilization of the memory.
If a process has n pages in secondary memory, then n free frames must be available in main memory for mapping.

page size = frame size.

Example to understand Paging in OS
Example to understand Paging in OS

NOTE:
1. All the pages of a process must be loaded into main memory. Pages can be placed in any frames; they need not be in order.
2. The last page of a process may lead to a small amount of internal fragmentation.
Contiguous memory Vs Non Contiguous
memory
1. We cannot store the starting address of each page in a relocation register, since that would be costly.
2. We need to maintain one table, the page table, which records where each page is stored.
3. This page table is also stored in memory.
4. One process has one page table; two processes have two page tables. Each process has its own page table.
The page number is used as an index into the page table. The page table contains the base address of each page in physical memory.
The page size (like the frame size) is
defined by the hardware.
The size of a page is typically a power of
2, varying between 512 bytes and 16
MB per page, depending on the
computer architecture
Translation of Logical Address into Physical Address
• The CPU always generates a logical address.
• In order to access the main memory always a physical address is
needed.
• The logical address generated by CPU always consists of two parts:
1. Page Number(p)
2. Page Offset (d)
• where,
• Page Number specifies the page of the process from which the CPU wants to read data; it is also used as an index into the page table.
• Page offset is mainly used to specify the specific word on the page
that the CPU wants to read.
• The physical address consists of two parts:
1. Page offset(d)
2. Frame Number(f)
• The Frame number is used to indicate the specific frame where the required page
is stored.
• Page Offset indicates the specific word that has to be read from that page.
The above diagram indicates the translation of the Logical address into the Physical
address. The PTBR in the above diagram means page table base register and it
basically holds the base address for the page table of the current process.
The PTBR is mainly a processor register and is managed by the operating system.
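A minimal sketch of the translation described above, assuming a 1 KB page size and a made-up page table:

```python
PAGE_SIZE = 1024  # bytes; page size = frame size, a power of 2

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split the logical address into (page number p, offset d), look up
    the frame number f in the page table, and rebuild the physical
    address as f * PAGE_SIZE + d."""
    p, d = divmod(logical_address, PAGE_SIZE)
    f = page_table[p]          # one memory access in real hardware
    return f * PAGE_SIZE + d

print(translate(2100))  # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220
```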
Drawback
• There is a problem with this approach and that is with the time
required to access a user memory location.
• Suppose we want to access location i. We must first index into the page table, using the value in the PTBR offset by the page number for i; this task requires one memory access. It gives us the frame number, which is combined with the page offset to produce the actual address; we can then access the desired location in memory. Thus, two memory accesses are needed to access a byte.
Translation Look-aside Buffer (TLB)
• The standard solution to the previous problem is to use a special, small, fast-lookup hardware cache commonly known as the translation look-aside buffer (TLB).
• TLB is associative and high-speed memory.
• Each entry in the TLB mainly consists of two parts: a key(that is the
tag) and a value.
• When associative memory is presented with an item, then the item is
compared with all keys simultaneously. In case if the item is found
then the corresponding value is returned.
• The search with TLB is fast though the hardware is expensive.
• The number of entries in the TLB is small and generally lies in
between 64 and 1024.
TLB is used with Page Tables in the following ways:
• The TLB contains only a few of the page-table entries. Whenever the logical address is generated by the
CPU then its page number is presented to the TLB.

• If the page number is found, its frame number is immediately available and is used to access memory. The whole task may take less than 10 percent longer than it would if an unmapped memory reference were used.

• In case if the page number is not in the TLB (which is known as TLB miss), then a memory reference to the
Page table must be made.

• When the frame number is obtained it can be used to access the memory. Additionally, page number and
frame number is added to the TLB so that they will be found quickly on the next reference.

• In case if the TLB is already full of entries then the Operating system must select one for replacement.
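The TLB lookup path above can be modeled with a toy cache (the capacity of 2 and the FIFO-style replacement here are simplifying assumptions; real TLBs use hardware-specific policies and sizes):

```python
from collections import OrderedDict

PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7, 3: 1}   # hypothetical full page table

class TLB:
    """Tiny TLB model: a cache of (page -> frame) entries; on a miss we
    walk the page table and insert the entry, evicting the oldest entry
    when the TLB is full."""
    def __init__(self, capacity=2):
        self.entries = OrderedDict()
        self.capacity = capacity
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.entries:              # TLB hit
            self.hits += 1
            return self.entries[page]
        self.misses += 1                      # TLB miss: walk the page table
        frame = page_table[page]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # select a victim entry
        self.entries[page] = frame
        return frame

tlb = TLB()
for addr in [100, 1100, 150, 1200, 2200]:
    p, _ = divmod(addr, PAGE_SIZE)
    tlb.lookup(p)
print(tlb.hits, tlb.misses)   # 2 hits, 3 misses
```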
Advantages of Paging
• Paging allows parts of a single process to be stored in a non-contiguous fashion.
• With the help of Paging, the problem of external fragmentation is
solved.
• Paging is one of the simplest algorithms for memory management.
Disadvantages of Paging
• In Paging, sometimes the page table consumes more memory.
• Internal fragmentation is caused by this technique.
• There is an increase in time taken to fetch the instruction since now
two memory accesses are required.
Segmentation
• A process is divided into segments. The chunks that a program is divided into, which are not necessarily all of the same size, are called segments.
• Segmentation gives the user’s view of the process which paging does
not provide. Here the user’s view is mapped to physical memory.
• Segmentation divides processes into smaller subparts known
as modules. The divided segments need not be placed in contiguous
memory. Since there is no contiguous memory allocation, internal
fragmentation does not take place.
• The length of the segments of the program and memory is decided by
the purpose of the segment in the user program.
• Types of Segmentation
Segmentation can be divided into two types:
• Virtual Memory Segmentation: Virtual Memory Segmentation
divides the processes into n number of segments. All the segments
are not divided at a time. Virtual Memory Segmentation may or may
not take place at the run time of a program.
• Simple Segmentation: Simple Segmentation also divides the
processes into n number of segments but the segmentation is done
all together at once. Simple segmentation takes place at the run time
of a program. Simple segmentation may scatter the segments into the
memory such that one segment of the process can be at a different
location than the other(in a noncontinuous manner).
Why Segmentation is required?
• Segmentation came into existence because of the problems in the
paging technique.
• In the case of the paging technique, a function or piece of code is
divided into pages without considering that the relative parts of code
can also get divided.
• Hence, for the process in execution, the CPU must load more than
one page into the frames so that the complete related code is there
for execution.
• Paging took more pages for a process to be loaded into the main
memory. Hence, segmentation was introduced in which the code is
divided into modules so that related code can be combined in one
single block.
Example of Segmentation
• Let us assume we have five segments namely: Segment-0, Segment-1,
Segment-2, Segment-3, and Segment-4. Initially, before the execution
of the process, all the segments of the process are stored in the
physical memory space. We have a segment table as well. The
segment table contains the beginning entry address of each segment
(denoted by base). The segment table also contains the length of
each of the segments (denoted by limit).
• As shown in the image , the base address of Segment-0 is 1400 and
its length is 1000, the base address of Segment-1 is 6300 and its
length is 400, the base address of Segment-2 is 4300 and its length
is 400, and so on.
Advantages of Segmentation
• No internal fragmentation
• Average Segment Size is larger than the actual page size.
• Less overhead
• It is easier to relocate segments than entire address space.
• The segment table is of lesser size as compared to the page table in
paging
Disadvantages
• It can have external fragmentation.
• it is difficult to allocate contiguous memory to variable sized partition.
• Costly memory management algorithms.
Virtual Memory: Demand Paging
• Virtual memory is a mechanism used to manage memory using hardware and software.
• It is a section of a hard disk that's set up to emulate the computer's
RAM.
• It helps in running multiple applications with low main memory and
increases the degree of multiprogramming in systems.
• It is commonly implemented using demand paging.
• Virtual memory is a part of the system's secondary memory that acts
and gives us a feel as if it is a part of the main memory. Virtual
memory allows a system to execute heavier applications or multiple
applications simultaneously without exhausting the RAM (Random
Access Memory).
Demand Paging
• In paging All the pages of the process and full page table must be in main
memory
• In Demand Paging-> Some pages of the process and full page table must
be in main memory
• In demand paging, the operating system loads only the necessary pages of
a program into memory at runtime, instead of loading the entire program
into memory at the start.
• In this technique a page is brought into memory for its execution only
when it is demanded
• It is a combination of paging and swapping
• It is not necessary that all the pages or segments are present in the main
memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented
using Demand Paging or Demand segmentation
• Virtual memory serves two purposes.
• First, it allows us to extend the use of physical memory by using
disk.
• Second, it allows us to have memory protection (Protection Bit),
because each virtual address is translated to a physical address.
All the pages are initially loaded into the backing store (hard disk).
By the mechanism of swapping, a page is loaded from the hard disk only when main memory requests it.
Because main memory is small and cannot hold large programs, only a few pages are loaded into main memory.
After a process completes its execution, it is simply swapped out and a new process is swapped in.

The demand paging system is similar to a paging system with swapping, where processes mainly reside in secondary memory (usually the hard disk). Demand paging solves the paging problem by swapping pages in only on demand. This is also known as a lazy swapper (it never swaps a page into memory unless it is needed).
Valid-Invalid Bit
• Some form of hardware support is used to distinguish between the
pages that are in the memory and the pages that are on the disk.
• With each page-table entry, a valid-invalid bit is associated, where 1 (valid) indicates the page is in memory and 0 (invalid) indicates it is not.
• Initially, the valid-invalid bit is set to 0 for all table entries.
• If the bit is set to “valid (v)”, the associated page is both legal and in memory.
• If the bit is set to “invalid (i)”, the page is either not in the logical address space of the process, or it is valid but currently on the disk.
A page fault trap is caused when we access a page marked as invalid. It causes a trap to the operating system, the result of the operating system's failure to bring the desired page into memory.
1. First, we determine whether the reference was a valid or an invalid memory access by checking an internal table for this process.
2. If the reference was valid but the page has not yet been brought in, we check the free-frame list and search for a free frame.
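The valid-invalid bit and the page-fault path can be sketched as follows (the frame counts and page numbers are illustrative, and disk I/O is reduced to a dictionary update):

```python
# Hypothetical setup: 3 free frames, a 4-page process, no page resident yet.
memory_frames = {}            # frame number -> page number
free_frames = [0, 1, 2]
page_table = {p: {"frame": None, "valid": False} for p in range(4)}

def access(page):
    """Return the frame holding `page`, servicing a page fault if needed."""
    entry = page_table[page]
    if entry["valid"]:                      # valid bit set: no fault
        return entry["frame"]
    # --- page fault trap ---
    if not free_frames:
        raise RuntimeError("no free frame: run a page-replacement algorithm")
    frame = free_frames.pop(0)              # take a frame from the free list
    memory_frames[frame] = page             # "read" the page in from disk
    entry["frame"], entry["valid"] = frame, True
    return frame

print(access(2))   # page fault: page 2 loaded into frame 0
print(access(2))   # valid bit now set: no fault
```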
Advantages of Demand Paging
1. Efficient use of physical memory: Demand paging allows for more efficient use of memory because only the necessary pages are loaded into memory at any given time.
2. Support for larger programs: Programs can be larger than the physical memory
available on the system because only the necessary pages will be loaded into
memory.
3. Faster program start: Because only part of a program is initially loaded into
memory, programs can start faster than if the entire program were loaded at
once.
Disadvantages of Demand Paging
1. Page Fault Overload: The process of swapping pages between memory and
disk can cause a performance overhead, especially if the program frequently
accesses pages that are not currently in memory.
2. Degraded performance: If a program frequently accesses pages that are not
currently in memory, the system spends a lot of time swapping out pages,
which degrades performance.
Page Replacement
• Page Fault-over view
• When a page referenced by the CPU is not found in the main memory,
it is called as a page fault.
• When a page fault occurs, the required page has to be fetched from
the secondary memory into the main memory.

Page replacement is required when-
• All the frames of main memory are already occupied.
• Thus, a page has to be replaced to create room for the required page.
Basic Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement
algorithm to select a victim frame
c. Write the victim frame to the disk; change the page
and frame tables accordingly.
3. Read the desired page into the newly freed frame;
change the page and frame tables.
4. Restart the user process.
Page replacement Algorithm
• Common Page Replacement Techniques
• First In First Out (FIFO)
• Optimal Page replacement
• Least Recently Used
First In First Out (FIFO) page replacement
• The FIFO algorithm is the simplest of all the page replacement
algorithms. In this, we maintain a queue of all the pages that are in
the memory currently.
• The oldest page in memory is at the front of the queue and the most recent page is at the rear.
• Whenever a page fault occurs, the operating system looks at the front
end of the queue to know the page to be replaced by the newly
requested page
• It also adds this newly requested page at the rear end and removes
the oldest page from the front end of the queue.
• Example: Consider the page reference string
as 3, 1, 2, 1, 6, 5, 1, 3 with 3-page frames. Let’s try to find the number
of page faults:

Total page faults = 7


Belady’s Anomaly: for FIFO, increasing the number of available frames can sometimes increase the number of page faults.
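The FIFO example above can be checked with a short simulation:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: on a fault with full
    frames, evict the page that has been resident the longest."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                      # hit
        faults += 1
        if len(frames) == num_frames:     # replace the oldest page
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

# The example from the slides: 3 frames, 7 faults
print(fifo_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))  # 7
```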
LRU (Least-Recently-Used) Page Replacement
• In this algorithm, the page that has been not used for longest period
of time is selected for replacement.
• Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3,
2, 1, 2, 0 for a memory with three frames and calculate number of
page faults by using Least Recently Used (LRU) Page replacement
algorithms.

Number of Page Hits = 7
Number of Page Faults = 13
The Page Hit Percentage = 7 * 100 / 20 = 35%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%
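The LRU counts above can be verified with a short simulation:

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU: evict the page that has gone unused
    for the longest period of time."""
    frames, faults = [], 0          # list ordered from LRU to MRU
    for page in reference_string:
        if page in frames:
            frames.remove(page)     # hit: refresh to most-recently-used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)
    return faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
faults = lru_faults(ref, 3)
print(faults, len(ref) - faults)    # 13 faults, 7 hits -> 65% / 35%
```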
Optimal Page Replacement in OS
• Optimal page replacement is the best page replacement algorithm as
this algorithm results in the least number of page faults.
• In this algorithm, the pages are replaced with the ones that will not
be used for the longest duration of time in the future.
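A sketch of optimal replacement; running it on the FIFO reference string from the earlier example shows why optimal is a lower bound (5 faults versus FIFO's 7):

```python
def optimal_faults(reference_string, num_frames):
    """Count page faults under optimal replacement: on a fault with full
    frames, evict the resident page whose next use is farthest in the
    future (or that is never used again)."""
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                          # hit
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        victim = max(frames, key=lambda p: future.index(p)
                     if p in future else float("inf"))
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))  # 5
```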
Allocation of frames
• Frame allocation algorithms are used if you have multiple processes;
it helps decide how many frames to allocate to each process
• Frame allocation algorithms –
The two algorithms commonly used to allocate frames to a process
are:
1. Equal allocation
2. Proportional allocation
3. Global vs Local Allocation
1. Equal allocation: In a system with x frames and y processes, each process gets an equal number of frames, i.e. ⌊x/y⌋. For instance, if the system has 48 frames and 9 processes, each process will get 5 frames. The three frames not allocated to any process can be used as a free-frame buffer pool.
Disadvantage: In systems with processes of varying sizes,
it does not make much sense to give each process equal
frames. Allocation of a large number of frames to a small
process will eventually lead to the wastage of a large
number of allocated unused frames.
Proportional allocation
• The proportional frame allocation algorithm allocates frames based
on the size that is necessary for the execution and the number of
total frames the memory has.

For instance, in a system with 62 frames, if there is a process of 10KB and another process of
127KB, then the first process will be allocated (10/137)*62 = 4 frames and the other process will get
(127/137)*62 = 57 frames.
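Both allocation schemes reduce to simple integer arithmetic; the numbers from the two examples above can be reproduced as:

```python
def equal_allocation(frames, processes):
    """Each process gets frames // processes frames; the remainder can
    serve as a free-frame buffer pool."""
    return frames // processes

def proportional_allocation(frames, sizes):
    """Each process i gets floor(size_i / total_size * frames) frames."""
    total = sum(sizes)
    return [size * frames // total for size in sizes]

print(equal_allocation(48, 9))                 # 5 (3 frames left over)
print(proportional_allocation(62, [10, 127]))  # [4, 57]
```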
Global vs Local Allocation
Thrashing
• A process that is spending more time paging than executing is said to
be thrashing.
• In other words, it means, that the process doesn't have enough
frames to hold all the pages for its execution, so it is swapping pages
in and out very frequently to keep executing. Sometimes, the pages
which will be required in the near future have to be swapped out.
• Initially, when the CPU utilization is low, the process scheduling
mechanism, to increase the level of multiprogramming loads multiple
processes into the memory at the same time, allocating a limited
amount of frames to each process.
• As memory fills up, processes start to spend a lot of time waiting for required pages to be swapped in, again leading to low CPU utilization because most processes are waiting for pages. The scheduler therefore loads even more processes to raise CPU utilization, and as this continues, at some point the complete system comes to a stop.
• This phenomenon is illustrated in
figure in which CPU utilization is
plotted against degree of
multiprogramming.
• As degree of multiprogramming
increases, CPU utilization goes on
increasing although more slowly
until a maximum is reached.
• Beyond this point, if the degree of multiprogramming is increased further, thrashing occurs and CPU utilization drops sharply.
• At this point, to increase CPU
utilization and stop thrashing degree
of multiprogramming should be
reduced.
• To prevent thrashing, a strategy called page-fault frequency (PFF) is used. Since thrashing is accompanied by a high page-fault rate, we control the page-fault rate directly: when it is too high, we know the process needs more frames; when it is too low, the process has too many frames.
• We can establish upper and lower bounds on the desired page-fault rate (Figure 6). If the actual page-fault rate exceeds the upper limit, we allocate another frame to that process.
• If the page fault rate falls below the lower limit, we remove a frame
from that process. Thus we can directly measure and control the page
fault rate to prevent thrashing
