Unit 2 (Memory Management)

Free Space Management

A file system is responsible for allocating free blocks to files, so it must keep track of all the free blocks present on the disk. The main approaches used to manage the free blocks on the disk are described below.

1. Bit Vector
In this approach, the free-space list is implemented as a bit map (bit vector). It contains one bit per disk block.

If a block is free, its bit is 1; otherwise it is 0. Initially all the blocks are free, so every bit in the bit map is 1.

As space allocation proceeds, the file system allocates blocks to files and sets the corresponding bits to 0.

Simply: if a block is free, its bit is 1; if it is allocated, its bit is 0.

Advantages:

1. It is simple to implement.

2. It occupies little memory (one bit per block).
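The bit-vector bookkeeping described above can be sketched in a few lines of Python. This is an illustrative toy, not a real file system; the class and method names are invented for this example:

```python
# Bit-vector free-space management: bit 1 = block free, 0 = allocated,
# matching the convention described above.

class BitVectorFreeList:
    def __init__(self, num_blocks):
        self.bits = [1] * num_blocks   # initially every block is free

    def allocate(self):
        """Find the first free block, mark it allocated, return its index."""
        for i, bit in enumerate(self.bits):
            if bit == 1:
                self.bits[i] = 0
                return i
        raise MemoryError("no free blocks")

    def free(self, i):
        """Mark block i free again."""
        self.bits[i] = 1

fl = BitVectorFreeList(8)
a = fl.allocate()     # block 0
b = fl.allocate()     # block 1
fl.free(a)
print(fl.allocate())  # reuses block 0
```

Note that finding a free block is a linear scan of the bit map, which is why real systems keep the bit map in memory and search it word-at-a-time.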

2. Linked List
This is another approach to free-space management. All the free blocks are linked together, and a pointer to the first free block is kept in cache.

Thus all the free blocks on the disk are linked together with pointers. Whenever a block gets allocated, its previous free block is linked to its next free block.

For example, suppose there are a total of 4 free blocks, i.e. blocks 1, 3, 7 and 2, linked in that order.

Advantages:

1. Memory is saved, since the list is stored in the free blocks themselves.

Disadvantages:

1. The pointers consume space in every free block.

2. Traversing the list requires reading each free block in turn.
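A minimal sketch of the linked-list approach, using the 4-block example above (the class is invented for illustration; a real file system stores each "next" pointer inside the free block itself rather than in a dictionary):

```python
# Linked-list free-space management: each free block records the index
# of the next free block; a head pointer (kept in cache in a real
# system) points to the first free block.

class LinkedFreeList:
    def __init__(self, free_blocks):
        # Link the free blocks together in the given order.
        self.next = {}
        self.head = free_blocks[0] if free_blocks else None
        for cur, nxt in zip(free_blocks, free_blocks[1:]):
            self.next[cur] = nxt
        if free_blocks:
            self.next[free_blocks[-1]] = None

    def allocate(self):
        """Take the block at the head; the head moves to the next free block."""
        if self.head is None:
            raise MemoryError("no free blocks")
        block = self.head
        self.head = self.next.pop(block)
        return block

# The example above: free blocks 1, 3, 7, 2 linked together.
fl = LinkedFreeList([1, 3, 7, 2])
print(fl.allocate())  # 1
print(fl.allocate())  # 3
```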
3. Grouping
The first free block stores the addresses of n-1 other free blocks, with the last entry pointing to the next such block of addresses.

Advantages:
The addresses of a large number of free blocks can be found quickly.

4. Counting
In addition to the pointer to the next free block, each entry keeps a count of how many blocks are contiguously free after that block.
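The counting idea can be illustrated with a small helper that compresses a list of free block numbers into (start, count) runs. The function name and input are invented for this sketch:

```python
# Counting approach: instead of listing every free block, store
# (first_free_block, number_of_contiguous_free_blocks) pairs.

def to_counted_runs(free_blocks):
    """Compress a sorted list of free block numbers into (start, count) runs."""
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            # b extends the current contiguous run
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

print(to_counted_runs([2, 3, 4, 8, 9, 15]))  # [(2, 3), (8, 2), (15, 1)]
```

Since disk space is usually allocated and freed in contiguous clusters, a few runs can describe many free blocks, which is the point of the technique.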

Unit-3
What do you mean by memory management?

Memory is an important part of the computer, used to store data. Its management is critical to the computer system because the amount of main memory available is very limited. At any time, many processes compete for it. Moreover, to increase performance, several processes are executed simultaneously. For this, we must keep several processes in main memory, so it is even more important to manage them effectively.

Role of Memory management


Following are the important roles of memory management in a computer system:
o The memory manager keeps track of the status of memory locations, whether free or allocated. It addresses primary memory by providing abstractions so that software perceives a large memory allocated to it.
o The memory manager permits computers with a small amount of main memory to execute programs larger than the available memory. It does this by moving information back and forth between primary memory and secondary memory, using the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process from being corrupted by another process. If this is not ensured, the system may exhibit unpredictable behavior.
o The memory manager should enable sharing of memory space between processes. Thus, two programs can reside at the same memory location, although at different times.

Memory Management Techniques:


The memory management techniques can be classified into the following main categories:

o Contiguous memory management schemes
o Non-contiguous memory management schemes

Contiguous memory management schemes:
In a contiguous memory management scheme, each program occupies a single contiguous block of storage locations, i.e., a set of memory locations with consecutive addresses.

Single contiguous memory management schemes:

The single contiguous memory management scheme is the simplest memory management scheme, used in the earliest generation of computer systems. In this scheme, the main memory is divided into two contiguous areas or partitions. The operating system resides permanently in one partition, generally at the lower memory addresses, and the user process is loaded into the other partition.

Advantages of Single contiguous memory management schemes:

o Simple to implement.
o Easy to manage and design.
o In a single contiguous memory management scheme, once a process is loaded, it is given the full processor's time, and no other process will interrupt it.

Disadvantages of Single contiguous memory management schemes:


o Wastage of memory space due to unused memory as the process is unlikely to use all
the available memory space.
o The CPU remains idle, waiting for the disk to load the binary image into the main
memory.
o A program cannot be executed if it is too large to fit into the entire available main memory space.
o It does not support multiprogramming, i.e., it cannot handle multiple programs
simultaneously.

Multiple Partitioning:
The single contiguous memory management scheme is inefficient, as it limits computers to executing only one program at a time, resulting in wastage of memory space and CPU time. The problem of inefficient CPU use can be overcome using multiprogramming, which allows more than one program to run concurrently. To switch between two processes, the operating system needs to load both processes into main memory. The operating system divides the available main memory into multiple parts to load multiple processes into main memory. Thus multiple processes can reside in main memory simultaneously.

The multiple partitioning schemes can be of two types:

o Fixed Partitioning
o Dynamic Partitioning

Fixed Partitioning
The main memory is divided into several fixed-sized partitions in a fixed partition
memory management scheme or static partitioning. These partitions can be of the
same size or different sizes. Each partition can hold a single process. The number of
partitions determines the degree of multiprogramming, i.e., the maximum number of
processes in memory. These partitions are made at the time of system generation
and remain fixed after that.

Advantages of Fixed Partitioning memory management schemes:

o Simple to implement.
o Easy to manage and design.

Disadvantages of Fixed Partitioning memory management schemes:


o This scheme suffers from internal fragmentation.
o The number of partitions is specified at the time of system generation.

Dynamic Partitioning
Dynamic partitioning was designed to overcome the problems of the fixed partitioning scheme. In a dynamic partitioning scheme, each process occupies only as much memory as it requires when loaded for processing. Requesting processes are allocated memory until the entire physical memory is exhausted or the remaining space is insufficient to hold the requesting process. In this scheme the partitions are of variable size, and the number of partitions is not defined at system generation time.

Advantages of Dynamic Partitioning memory management schemes:

o Simple to implement.
o Easy to manage and design.

Disadvantages of Dynamic Partitioning memory management schemes:

o This scheme suffers from external fragmentation, as free memory becomes scattered into small non-contiguous holes.
o Merging these holes back together (compaction) adds run-time overhead.
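As a sketch of how an allocator might place a process under dynamic partitioning, here is a toy first-fit routine over a list of free holes. The hole list, sizes, and the choice of first-fit placement are illustrative assumptions, not taken from the text above:

```python
# First-fit allocation over variable-size partitions (dynamic
# partitioning). 'holes' is a list of (start, size) free regions in KB.

def first_fit(holes, request):
    """Allocate 'request' KB from the first hole large enough; return its start."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size == request:
                holes.pop(i)                       # hole consumed exactly
            else:
                holes[i] = (start + request, size - request)
            return start
    return None                                    # no hole large enough

holes = [(0, 100), (250, 40), (400, 200)]          # free memory in KB
print(first_fit(holes, 60))   # 0   -> first hole shrinks to (60, 40)
print(first_fit(holes, 150))  # 400 -> third hole shrinks to (550, 50)
print(holes)                  # the leftover holes are external fragmentation
```

The leftover small holes after a few allocations are exactly the external fragmentation mentioned in the disadvantages above.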

Non-Contiguous memory management schemes:
In a non-contiguous memory management scheme, the program is divided into different blocks and loaded at different portions of memory that need not be adjacent to one another. This scheme can be classified depending upon the size of the blocks and whether the blocks reside in main memory or not.

What is paging?
Paging is a technique that eliminates the requirement of contiguous allocation of main memory. In it, the main memory is divided into fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that of a page, to make full use of main memory and avoid external fragmentation.

Advantages of paging:

o Pages reduce external fragmentation.


o Simple to implement.
o Memory efficient.
o Due to the equal size of frames, swapping becomes very easy.
o It is used for faster access of data.

What is Segmentation?
Segmentation is a technique that eliminates the requirement of contiguous allocation of main memory. In it, the main memory is divided into variable-size blocks of physical memory called segments. It is based on the way the programmer structures their programs. With segmented memory allocation, each job is divided into several segments of different sizes, one for each module. Functions, subroutines, the stack, arrays, etc., are examples of such modules.

Swapping in Operating System


Swapping is a memory management scheme in which any process can be
temporarily swapped from main memory to secondary memory so that the main
memory can be made available for other processes. It is used to improve main
memory utilization. In secondary memory, the place where the swapped-out process
is stored is called swap space.

The purpose of the swapping in operating system is to access the data present in the
hard disk and bring it to RAM so that the application programs can use it. The thing
to remember is that swapping is used only when data is not present in RAM.

Although the process of swapping affects the performance of the system, it helps to
run larger and more than one process. This is the reason why swapping is also
referred to as memory compaction.

The concept of swapping is divided into two operations: swap-out and swap-in.

o Swap-out is the method of removing a process from RAM and adding it to the hard disk.
o Swap-in is the method of removing a program from the hard disk and putting it back into main memory (RAM).
Paging in OS (Operating System)
In Operating Systems, Paging is a storage mechanism used to retrieve processes from the
secondary storage into the main memory in the form of pages.

The main idea behind the paging is to divide each process in the form of pages. The main
memory will also be divided in the form of frames.

One page of the process is to be stored in one of the frames of the memory. The pages can be
stored at the different locations of the memory but the priority is always to find the
contiguous frames or holes.

Pages of the process are brought into the main memory only when they are required
otherwise they reside in the secondary storage.

Different operating systems define different frame sizes. The size of every frame must be equal. Since pages are mapped to frames in paging, the page size must be the same as the frame size.
Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will then be divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system that is P1, P2, P3 and P4 of 4 KB each. Each process is
divided into pages of 1 KB each so that one page can be stored in one frame.

Initially, all the frames are empty therefore pages of the processes will get stored in the
contiguous way.

Frames, pages and the mapping between the two is shown in the image below.
Let us consider that, P2 and P4 are moved to waiting state after some time. Now, 8 frames
become empty and therefore other pages can be loaded in that empty place. The process P5 of
size 8 KB (8 pages) is waiting inside the ready queue.

Given that we have 8 non-contiguous frames available in memory, and that paging provides the flexibility of storing a process's pages at different places, we can load the pages of process P5 in place of P2 and P4.
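The scenario above can be simulated in a few lines. This is a toy model (the `load` helper and frame list are invented for illustration) showing that P5's 8 pages fit into the non-contiguous frames freed by P2 and P4:

```python
# Toy simulation: 16 frames of 1 KB; P1..P4 of 4 pages each are loaded,
# then P2 and P4 move to the waiting state and P5 (8 pages) takes the
# freed, non-contiguous frames.

frames = [None] * 16               # frame i holds the name of its owner

def load(name, num_pages):
    """Place num_pages pages of a process into the first free frames."""
    placed = []
    for i, owner in enumerate(frames):
        if owner is None and len(placed) < num_pages:
            frames[i] = name
            placed.append(i)
    return placed

for p in ("P1", "P2", "P3", "P4"):
    load(p, 4)                     # frames fill contiguously at first

for i, owner in enumerate(frames): # P2 and P4 are swapped out
    if owner in ("P2", "P4"):
        frames[i] = None

print(load("P5", 8))  # [4, 5, 6, 7, 12, 13, 14, 15] -> non-contiguous
```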
Memory Management Unit
The purpose of Memory Management Unit (MMU) is to convert the logical address into the
physical address. The logical address is the address generated by the CPU for every page
while the physical address is the actual address of the frame where each page will be stored.

When a page is to be accessed by the CPU by using the logical address, the operating system
needs to obtain the physical address to access that page physically.

The logical address has two parts.

1. Page Number
2. Offset

Memory management unit of OS needs to convert the page number to the frame number.
Example

Considering the above image, let's say that the CPU demands the 10th word of the 4th page of process P3. Since page number 4 of process P3 is stored at frame number 9, the 10th word of the 9th frame will be returned as the physical address.
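The page-to-frame translation described above can be sketched as a short function. This is an illustrative model of what the MMU does in hardware; the page-table contents are made up (page 4 mapped to frame 9, matching the example):

```python
# MMU sketch: split a logical address into (page number, offset),
# look up the frame in the page table, and rebuild the physical address.

PAGE_SIZE = 1024  # 1 KB pages/frames, as in the example above

def translate(logical_address, page_table):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]        # page -> frame lookup
    return frame_number * PAGE_SIZE + offset

# Hypothetical page table: page 4 of the process lives in frame 9.
page_table = {4: 9}
print(translate(4 * PAGE_SIZE + 10, page_table))  # 9*1024 + 10 = 9226
```

Because the page size is a power of two, a real MMU performs the split with a shift and a mask rather than division.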

Example: Suppose the user process's size is 2048 KB, and the standard hard disk where swapping takes place has a data transfer rate of 1 MB per second (1024 KB per second). Now we calculate how long it will take to transfer the process from main memory to secondary memory.

1. User process size is 2048 KB
2. Data transfer rate is 1 MB/s = 1024 KB/s
3. Time = process size / transfer rate
4.      = 2048 / 1024
5.      = 2 seconds
6.      = 2000 milliseconds
7. Taking both swap-out and swap-in into account, the process will take 4000 milliseconds.
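The arithmetic above can be checked with a two-line calculation (the variable names are ours):

```python
# Swap-time calculation: 2048 KB process, 1 MB/s = 1024 KB/s transfer rate.

process_size_kb = 2048
transfer_rate_kbps = 1024

swap_out_ms = process_size_kb / transfer_rate_kbps * 1000
total_ms = 2 * swap_out_ms          # swap-out plus swap-in
print(swap_out_ms, total_ms)        # 2000.0 4000.0
```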

Advantages of Swapping
1. It helps the CPU to manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks simultaneously. Therefore,
processes do not have to wait very long before they are executed.
4. It improves the main memory utilization.

Disadvantages of Swapping
1. If the computer system loses power, the user may lose all information related
to the program in case of substantial swapping activity.
2. If the swapping algorithm is not good, swapping can increase the number of page faults and decrease overall processing performance.

Note:

o In a single tasking operating system, only one process occupies the user
program area of memory and stays in memory until the process is complete.
o In a multitasking operating system, a situation arises when all the active processes cannot fit in the main memory; then a process is swapped out from the main memory so that other processes can enter it.


Segmentation in OS (Operating System)
In Operating Systems, Segmentation is a memory management technique in which
the memory is divided into the variable size parts. Each part is known as a segment
which can be allocated to a process.

The details about each segment are stored in a table called a segment table.
Segment table is stored in one (or many) of the segments.

The segment table mainly contains two pieces of information about each segment:

1. Base: the base address of the segment.
2. Limit: the length of the segment.

Why Segmentation is required?


Till now, we were using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all processes into pages regardless of the fact that a process may have related parts of functions which need to be loaded in the same page.

The operating system doesn't care about the user's view of the process. It may divide the same function into different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.

It is better to have segmentation which divides the process into the segments. Each
segment contains the same type of functions such as the main function can be
included in one segment and the library functions can be included in the other
segment.
Translation of Logical address into
physical address by segment table
CPU generates a logical address which contains two parts:

1. Segment Number
2. Offset

For Example:

Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset. Then the maximum segment size is 4096 (2^12) words, and the maximum number of segments that can be referred to is 16 (2^4).
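The bit-split arithmetic above can be verified directly (the `split` helper and the sample address are ours, for illustration only):

```python
# 16-bit logical address: 4 bits of segment number, 12 bits of offset.

ADDRESS_BITS, SEGMENT_BITS = 16, 4
OFFSET_BITS = ADDRESS_BITS - SEGMENT_BITS

max_segments = 2 ** SEGMENT_BITS       # 16 segments
max_segment_size = 2 ** OFFSET_BITS    # 4096 words per segment

def split(logical_address):
    """Split a 16-bit logical address into (segment_number, offset)."""
    return (logical_address >> OFFSET_BITS,
            logical_address & (max_segment_size - 1))

print(max_segments, max_segment_size)  # 16 4096
print(split(0x3ABC))                   # segment 3, offset 0xABC (2748)
```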

When a program is loaded into memory, the segmentation system tries to locate space that is large enough to hold the first segment of the process; space information is obtained from the free list maintained by the memory manager. Then it tries to locate space for the other segments. Once adequate space is located for all the segments, it loads them into their respective areas.

The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system
can easily translate a logical address into physical address on execution of a
program.

The segment number is used as an index into the segment table. The offset is compared with the limit of the respective segment. If the offset is less than the limit, the address is valid; otherwise an error is raised, as the address is invalid.

In the case of valid addresses, the base address of the segment is added to the offset
to get the physical address of the actual word in the main memory.

The above figure shows how address translation is done in case of segmentation.
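The translation steps just described (index into the segment table, limit check, then base + offset) can be sketched as follows. The segment-table contents are made-up illustrative values:

```python
# Segment-table translation: each entry stores (base, limit);
# offsets at or beyond the limit are invalid.

def translate(segment_number, offset, segment_table):
    base, limit = segment_table[segment_number]
    if offset >= limit:                       # limit check first
        raise ValueError("invalid address: offset exceeds segment limit")
    return base + offset                      # base + offset = physical

segment_table = {0: (1400, 1000), 1: (6300, 400)}
print(translate(1, 53, segment_table))        # 6300 + 53 = 6353
```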

Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.

Disadvantages
1. It can have external fragmentation.
2. it is difficult to allocate contiguous memory to variable sized partition.
3. Costly memory management algorithms.

What is Thrashing?
In computer science, thrashing is the poor performance of a virtual memory (or paging) system that occurs when the same pages are loaded repeatedly due to a lack of main memory to keep them resident. Depending on the configuration and algorithm, the actual throughput of a system can degrade by multiple orders of magnitude.

Thrashing occurs when a computer's virtual memory resources are overused, leading to a constant state of paging and page faults that inhibits most application-level processing. It causes the performance of the computer to degrade or collapse. The situation can continue indefinitely until the user closes some running applications or the active processes free up additional virtual memory resources.

To know more clearly about thrashing, first, we need to know about page fault and
swapping.

o Page fault: We know every program is divided into pages. A page fault occurs when a program attempts to access data or code in its address space that is not currently located in system RAM.
o Swapping: Whenever a page fault happens, the operating system will try to fetch
that page from secondary memory and try to swap it with one of the pages in RAM.
This process is called swapping.

When page faults and swapping happen very frequently, at a high rate, the operating system has to spend more of its time swapping pages than doing useful work. This state of the operating system is known as thrashing. Because of thrashing, CPU utilization is reduced or becomes negligible.
The basic concept involved is that if a process is allocated too few frames, then there
will be too many and too frequent page faults. As a result, no valuable work would
be done by the CPU, and the CPU utilization would fall drastically.

The long-term scheduler would then try to improve the CPU utilization by loading
some more processes into the memory, thereby increasing the degree of
multiprogramming. Unfortunately, this would result in a further decrease in the CPU
utilization, triggering a chained reaction of higher page faults followed by an
increase in the degree of multiprogramming, called thrashing.

Algorithms during Thrashing


Whenever thrashing starts, the operating system tries to apply either the Global page
replacement Algorithm or the Local page replacement algorithm.

1. Global Page Replacement

Since global page replacement can bring any page, it tries to bring more pages
whenever thrashing is found. But what actually will happen is that no process gets
enough frames, and as a result, the thrashing will increase more and more. Therefore,
the global page replacement algorithm is not suitable when thrashing happens.

2. Local Page Replacement

Unlike the global page replacement algorithm, local page replacement will select
pages which only belong to that process. So there is a chance to reduce the
thrashing. But it is proven that there are many disadvantages if we use local page
replacement. Therefore, local page replacement is just an alternative to global page
replacement in a thrashing scenario.

Causes of Thrashing
Programs or workloads may cause thrashing, and it results in severe performance
problems, such as:

o If CPU utilization is too low, the operating system increases the degree of multiprogramming by introducing a new process into the system, and a global page replacement algorithm is used. The CPU scheduler, seeing the decreasing CPU utilization, increases the degree of multiprogramming further.
o CPU utilization is plotted against the degree of multiprogramming.
o As the degree of multiprogramming increases, CPU utilization also increases.
o If the degree of multiprogramming is increased further, thrashing sets in, and CPU
utilization drops sharply.
o So, at this point, to increase CPU utilization and to stop thrashing, we must decrease
the degree of multiprogramming.

How to Eliminate Thrashing


Thrashing has some negative impacts on hard drive health and system performance.
Therefore, it is necessary to take some actions to avoid it. To resolve the problem of
thrashing, here are the following methods, such as:

o Adjust the swap file size: If the system swap file is not configured correctly, disk thrashing can occur.
o Increase the amount of RAM: Since insufficient memory can cause disk thrashing, one solution is to add more RAM. With more memory, the computer can handle tasks easily without working excessively. Generally, this is the best long-term solution.
o Decrease the number of applications running on the computer: If too many applications are running in the background, they consume a large share of system resources, and the little that remains can result in thrashing. Closing some applications releases resources and helps avoid thrashing to some extent.
o Replace programs: Replace programs that occupy a lot of memory with equivalents that use less memory.
Techniques to Prevent Thrashing
The Local Page replacement is better than the Global Page replacement, but local
page replacement has many disadvantages, so it is sometimes not helpful. Therefore
below are some other techniques that are used to handle thrashing:

1. Locality Model

A locality is a set of pages that are actively used together. The locality model states
that as a process executes, it moves from one locality to another. Thus, a program is
generally composed of several different localities which may overlap.

For example, when a function is called, it defines a new locality where memory
references are made to the function call instructions, local and global variables, etc.
Similarly, when the function is exited, the process leaves this locality.

2. Working-Set Model

This model is based on the above-stated concept of the Locality Model.

The basic principle states that if we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to some new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.

According to this model, based on parameter A, the working set is defined as the set
of pages in the most recent 'A' page references. Hence, all the actively used pages
would always end up being a part of the working set.

The accuracy of the working set is dependent on the value of parameter A. If A is too
large, then working sets may overlap. On the other hand, for smaller values of A, the
locality might not be covered entirely.

If D is the total demand for frames and WSSi is the working-set size for process i, then

D = Σ WSSi

Now, if 'm' is the number of frames available in the memory, there are two
possibilities:

o D>m, i.e., total demand exceeds the number of frames, then thrashing will occur as
some processes would not get enough frames.
o D<=m, then there would be no thrashing.
If there are enough extra frames, then some more processes can be loaded into the
memory. On the other hand, if the summation of working set sizes exceeds the
frames' availability, some of the processes have to be suspended (swapped out of
memory).

This technique prevents thrashing along with ensuring the highest degree of
multiprogramming possible. Thus, it optimizes CPU utilization.
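The working-set admission check above (compare total demand D with available frames m) amounts to a one-line test. The working-set sizes below are made-up illustrative values:

```python
# Working-set model check: thrashing is predicted when
# D = sum of working-set sizes WSS_i exceeds the available frames m.

def thrashing_expected(working_set_sizes, available_frames):
    demand = sum(working_set_sizes)      # D = sum of WSS_i
    return demand > available_frames     # D > m -> thrashing

print(thrashing_expected([20, 35, 15], 100))  # D = 70 <= 100 -> False
print(thrashing_expected([20, 35, 60], 100))  # D = 115 > 100 -> True
```

When the check returns True, the model says to suspend (swap out) processes until D <= m again.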

3. Page Fault Frequency

A more direct approach to handle thrashing is the one that uses the Page-Fault
Frequency concept.

The problem associated with thrashing is the high page fault rate, and thus, the
concept here is to control the page fault rate.

If the page fault rate is too high, it indicates that the process has too few frames
allocated to it. On the contrary, a low page fault rate indicates that the process has
too many frames.

Upper and lower limits can be established on the desired page fault rate, as shown in
the diagram.

If the page fault rate falls below the lower limit, frames can be removed from the process. Similarly, if the page fault rate exceeds the upper limit, more frames can be allocated to the process.
In other words, the graphical state of the system should be kept limited to the rectangular region formed in the given diagram.

If the page fault rate is high and no free frames are available, some of the processes can be suspended, and the frames allocated to them can be reallocated to other processes. The suspended processes can be restarted later.
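The page-fault-frequency policy above can be sketched as a small controller. The thresholds and fault rates here are made-up illustrative values, not standard constants:

```python
# Page-fault-frequency control: keep each process's fault rate between
# a lower and an upper bound by adjusting its frame allocation.

LOWER, UPPER = 0.05, 0.25   # acceptable page-fault-rate band (illustrative)

def adjust_frames(fault_rate, frames):
    if fault_rate > UPPER:
        return frames + 1              # faulting too often: give it a frame
    if fault_rate < LOWER:
        return max(1, frames - 1)      # plenty of frames: take one away
    return frames                      # inside the band: leave it alone

print(adjust_frames(0.30, 8))  # 9
print(adjust_frames(0.02, 8))  # 7
print(adjust_frames(0.10, 8))  # 8
```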
