UNIT 2:- MEMORY MANAGEMENT IN OPERATING SYSTEM

Memory is an important part of the computer that is used to store data. Its management is critical
to the computer system because the amount of main memory available is very limited, and at any
time many processes compete for it. Moreover, to increase performance, several processes are
executed simultaneously. For this, we must keep several processes in main memory, so it is even
more important to manage them effectively.

Role of Memory management


Following are the important roles of memory management in a computer system:
o The memory manager keeps track of the status of each memory location, whether it is free or allocated.
It addresses primary memory by providing abstractions so that software perceives a large memory
allocated to it.
o The memory manager permits computers with a small amount of main memory to execute programs larger
than the amount of available memory. It does this by moving information back and forth between
primary memory and secondary memory using the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process from being
corrupted by another process. If this is not ensured, the system may exhibit unpredictable
behavior.
o The memory manager should enable sharing of memory space between processes. Thus, two programs
can reside at the same memory location, although at different times.
Memory Management Techniques:
The memory management techniques can be classified into following main categories:
o Contiguous memory management schemes
o Non-Contiguous memory management schemes
Contiguous memory management schemes:
In a Contiguous memory management scheme, each program occupies a single contiguous block of
storage locations, i.e., a set of memory locations with consecutive addresses.
Single contiguous memory management schemes:
The Single contiguous memory management scheme is the simplest memory management scheme
used in the earliest generation of computer systems. In this scheme, the main memory is divided into
two contiguous areas or partitions. The operating systems reside permanently in one partition,
generally at the lower memory, and the user process is loaded into the other partition.
Advantages of Single contiguous memory management schemes:
o Simple to implement.
o Easy to manage and design.
o In a Single contiguous memory management scheme, once a process is loaded, it is given full
processor time, and no other process will interrupt it.
Disadvantages of Single contiguous memory management schemes:
o Wastage of memory space due to unused memory as the process is unlikely to use all the available
memory space.
o The CPU remains idle, waiting for the disk to load the binary image into the main memory.
o A program cannot be executed if it is too large to fit in the available main memory space.
o It does not support multiprogramming, i.e., it cannot handle multiple programs simultaneously.
Multiple Partitioning:
The single contiguous memory management scheme is inefficient as it limits computers to executing
only one program at a time, resulting in wastage of memory space and CPU time. The problem of
inefficient CPU use can be overcome using multiprogramming that allows more than one program to
run concurrently. To switch between two processes, the operating systems need to load both processes
into the main memory. The operating system needs to divide the available main memory into multiple
parts to load multiple processes into the main memory. Thus multiple processes can reside in the main
memory simultaneously.
The multiple partitioning schemes can be of two types:
o Fixed Partitioning
o Dynamic Partitioning
Fixed Partitioning
The main memory is divided into several fixed-sized partitions in a fixed partition memory
management scheme or static partitioning. These partitions can be of the same size or different sizes.
Each partition can hold a single process. The number of partitions determines the degree of
multiprogramming, i.e., the maximum number of processes in memory. These partitions are made at
the time of system generation and remain fixed after that.
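A quick way to see how fixed partitioning behaves is a small simulation. The sketch below is a Python toy, not part of the original notes: the partition sizes and the process size are assumed purely for illustration. It places a process into the first free partition large enough to hold it and reports the unused space left inside that partition, which is the internal fragmentation listed under the disadvantages below.

```python
# A minimal sketch of fixed partitioning. Partition sizes are assumed values
# fixed at "system generation"; a process goes into the first free partition
# that fits, and the leftover space inside it is internal fragmentation.

partitions = [100, 200, 300, 400]        # fixed partition sizes in KB (assumed)
free = [True, True, True, True]

def allocate(process_kb):
    """Place the process in the first free partition that is large enough."""
    for i, size in enumerate(partitions):
        if free[i] and size >= process_kb:
            free[i] = False
            wasted = size - process_kb   # internal fragmentation in this partition
            return i, wasted
    return None, None                    # no partition fits: the process must wait

idx, wasted = allocate(250)
print(f"Process of 250 KB placed in partition {idx}, wasting {wasted} KB internally")
```

With these assumed sizes, a 250 KB process lands in the 300 KB partition and 50 KB of that partition is wasted for as long as the process resides there.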
Advantages of Fixed Partitioning memory management schemes:
o Simple to implement.
o Easy to manage and design.
Disadvantages of Fixed Partitioning memory management schemes:
o This scheme suffers from internal fragmentation.
o The number of partitions is specified at the time of system generation.
Dynamic Partitioning
The dynamic partitioning was designed to overcome the problems of a fixed partitioning scheme. In a
dynamic partitioning scheme, each process occupies only as much memory as it requires when
loaded for processing. Requested processes are allocated memory until the entire physical memory is
exhausted or the remaining space is insufficient to hold the requesting process. In this scheme the
partitions used are of variable size, and the number of partitions is not defined at the system generation
time.
Advantages of Dynamic Partitioning memory management schemes:
o Simple to implement.
o Easy to manage and design.
Disadvantages of Dynamic Partitioning memory management schemes:
o This scheme suffers from external fragmentation, as freeing processes leaves holes of varying sizes
scattered throughout memory.
o Allocation and deallocation are more complex, since the operating system must keep track of a
changing set of variable-sized partitions.
Non-Contiguous memory management schemes:
In a Non-Contiguous memory management scheme, the program is divided into different blocks and
loaded at different portions of the memory that need not necessarily be adjacent to one another. This
scheme can be classified depending upon the size of blocks and whether the blocks reside in the main
memory or not.
Paging
Paging is a technique that eliminates the requirements of contiguous allocation of main memory. In
this, the main memory is divided into fixed-size blocks of physical memory called frames. The size of
a frame should be kept the same as that of a page to maximize the use of main memory and avoid external
fragmentation.
Advantages of paging:
o Pages reduce external fragmentation.
o Simple to implement.
o Memory efficient.
o Due to the equal size of frames, swapping becomes very easy.
o It allows faster access to data.
Segmentation
Segmentation is a technique that eliminates the requirements of contiguous allocation of main memory.
In this, the main memory is divided into variable-size blocks of physical memory called segments. It
is based on the way the programmer follows to structure their programs. With segmented memory
allocation, each job is divided into several segments of different sizes, one for each module. Functions,
subroutines, stack, array, etc., are examples of such modules.
--------------------------------------------------------------------------------------------------------------------------
Topic: Address binding in Operating System
Address binding refers to the mapping of computer instructions and data to physical memory
locations. Both logical and physical addresses are used in computer memory. Address binding assigns
a physical memory region to a logical pointer by mapping a logical address, also known as a virtual
address, to a physical address. It is a component of computer memory management that the OS performs
on behalf of applications that require memory access.
Types of Address Binding in Operating System
There are mainly three types of an address binding in the OS. These are as follows:
1. Compile Time Address Binding
2. Load Time Address Binding
3. Execution Time or Dynamic Address Binding
Compile Time Address Binding
It is the first type of address binding. It occurs when the compiler is responsible for performing the
binding, and the compiler interacts with the operating system to do so. The binding assigns an address
to the beginning of the memory segment where the object code is stored. This memory allocation is
long-term and can only be modified by recompiling the program.
Load Time Address Binding
It is another type of address binding. It is performed after the program has been loaded into memory,
and it is done by the operating system's memory manager, i.e., the loader. If memory addresses were
fixed when the program was compiled, the compiled program could not simply be transferred from one
machine to another, because the memory locations embedded in the executable might already be in use
by another program on the new system. In load time binding, therefore, the logical addresses of the
program are not bound to physical addresses until the program is loaded into memory.
Execution Time or Dynamic Address Binding
Execution time (dynamic) address binding delays the mapping from logical to physical addresses until
run time, so a process can be moved in memory during execution. It is the usual form of binding for
programs that are not compiled ahead of time, because it applies to the variables in the program: when
a variable is encountered while instructions are being processed, the program seeks memory space for
that variable, and that space stays assigned until the program finishes or a specific instruction
releases the memory address bound to the variable.
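As a concrete illustration of execution-time binding, the minimal sketch below simulates relocation with a base (relocation) register and a limit register. The base value, limit, and logical addresses are assumptions chosen for illustration; in a real system this translation is done by the MMU in hardware on every memory access.

```python
# A minimal sketch of execution-time (dynamic) binding using a relocation
# (base) register. Values are assumed; real translation is done by the MMU.

base_register = 14000          # where the OS loaded the process this time
limit_register = 3000          # size of the process's logical address space

def translate(logical_address):
    if logical_address >= limit_register:
        raise MemoryError("trap: address outside the process's space")
    return base_register + logical_address   # binding happens at each access

print(translate(346))          # -> 14346

# If the OS later moves the process, only the base register changes:
base_register = 20000
print(translate(346))          # -> 20346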
------------------------------------------------------------------------------------------------------------------------
Topic : Memory Sharing And Protection
Memory protection is a crucial component of operating systems that prevents one process's memory
from being used by another. Memory protection is vital in contemporary operating systems because it
enables multiple programs to run in parallel without tampering with each other's memory space.
The primary goal of memory protection is to prevent an application from accessing memory without
permission. Whenever a process attempts to use memory that it does not have permission to access,
the operating system stops and terminates the process. This prevents the program from reaching
memory that it should not.
Memory protection is usually implemented with hardware memory management units (MMUs). An MMU is a
hardware component that maps the virtual addresses used by a program to actual locations in physical
memory. The MMU is in charge of translating virtual addresses to physical addresses and guaranteeing
that a program only has access to the memory it has been granted.
In contemporary operating systems, memory protection is usually achieved through virtual memory.
Virtual memory enables every program to operate in a virtual address space of its own, which the MMU
maps to physical memory. This allows several programs to run concurrently, each with a different
virtual address space while sharing the same physical memory.
Different Ways of Memory Protection
Segmentation
Memory is divided into segments, each of which can have a separate set of access rights. An OS kernel
segment, for instance, might be read-only, whereas a user data segment could be designated as
read-write.
Example
As an illustration, User A may be using a text-editing program while User B is using a web browser.
A distinct segment is given to each user's program for its code, data, and stack. The segments for
the text-editing program used by User A are entirely separate from those of the web browser used by
User B.
The text-editing program used by User A can only use or alter data that is located in its designated
segments. A segmentation fault or access violation will occur if the program tries to access memory
outside of its segments, and the OS terminates the program to stop unauthorized access to other
segments.
Paged Virtual Memory
Memory is divided into pages in paged virtual memory, and each page can be saved to its own place
in physical memory. In order to maintain track of where pages are kept, the OS uses a page table. This
gives the operating system the ability to move pages to various parts of physical memory, where they
can be secured against unauthorized access.
Example
The OS sets access permissions on every page to safeguard memory. For instance, the data pages of an
application (a game, say) could be granted read-write permissions so that it can change its internal
state, whereas its code pages might be marked read-only to safeguard against unintentional
alterations. Depending on their needs, the pages of system processes might be granted different
access permissions.
The memory management unit (MMU) uses the page table to convert a virtual address to a physical
address when an application attempts to access a specific memory location. The page table identifies
the exact position of the data in physical memory by mapping virtual page numbers to physical frame
numbers.
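The sketch below illustrates this kind of lookup. It is a simplified model rather than a real MMU: the page size, page-table contents, and permission sets are assumed values, and a missing page or a disallowed access is reported as a fault.

```python
# A minimal sketch (all names and values assumed) of a paged-virtual-memory
# lookup with per-page permissions: the page table maps a virtual page number
# to a frame number and records which kinds of access are allowed.

PAGE_SIZE = 4096

# virtual page number -> (frame number, set of allowed accesses)
page_table = {
    0: (7, {"read", "execute"}),   # code page: read-only, executable
    1: (3, {"read", "write"}),     # data page: read-write
}

def access(virtual_address, mode):
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn not in page_table:
        raise MemoryError("page fault: page not present")
    frame, perms = page_table[vpn]
    if mode not in perms:
        raise PermissionError("protection fault: " + mode + " not allowed")
    return frame * PAGE_SIZE + offset            # physical address

print(access(4100, "write"))   # data page, write allowed -> 12292
```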
Protection keys
Each page of RAM has a set of bits called protection keys associated with it. Access to the page can
be controlled using these bits. A protection key could be used, for instance, to specify whether a
page can be read, written to, or executed.
Example
On the same server, User A operates a database application that holds private client information,
and User B runs a machine learning algorithm. Memory protection between both of these programs is
enforced by the OS using protection keys.
The database application can only access memory tagged with the protection key linked to User A's
data. The protection key makes certain that neither the database application nor other system
processes have access to the memory locations used by User B's machine learning algorithm.
Similarly, User B's machine learning algorithm works within the confines of the protection key that
was given to it. This prevents unauthorized access to User A's information or other system resources
and limits User B's access to just its own memory.
Advantages
Applying memory protection in a system offers multiple benefits.
Listed below are a few of the primary ones −
• Improved Stability − Memory security prevents one program from accessing another procedure's
memory area, which can enhance system stability and prevent the loss of vital information.
• Increased Security − Memory protection helps to prevent the unauthorized access of private
information, as the OS will interrupt and terminate any application attempting to access unauthorized
RAM, preventing security breaches.
• Better Resource Management − Memory shielding allows multiple processes to run concurrently
without affecting each other's memory space, improving the overall efficiency of the system's resource
management.
• More Efficient Memory Usage − Virtual-memory-based protection schemes can optimize the use of
memory while decreasing the amount of RAM the system needs, allowing multiple programs to share the
same physical storage space.
• Facilitates Multitasking − Memory protection enables multiple processes to run simultaneously,
allowing for multitasking and running multiple programs at the same time.
Disadvantages
Alongside these benefits, applying memory protection in a system also has some downsides, which are
considered below −
• Overhead − Guarding memory requires additional software and hardware resources, which can lead
to higher costs and reduced system efficiency.
• Complexity − Memory protection adds complexity to the operating system, making development,
testing, and maintenance more difficult.
• Memory Fragmentation − Virtual memory can cause memory fragmentation, where physical memory is
broken into small, non-contiguous blocks.
• Limitation − Memory protection is not foolproof and can be circumvented in certain situations. For
example, a malicious user might exploit vulnerabilities in the OS to gain access to another process's
memory area.
• Compatibility Issues − Some older software programs may be incompatible with memory protection
features, limiting the operating system's ability to protect memory from unauthorized access.
--------------------------------------------------------------------------------------------------------------------------
Topic : Paging And Segmentation
Paging is a non-contiguous memory allocation technique in which secondary memory and main
memory are divided into equal-size partitions. The partitions of the secondary memory are
called pages, while the partitions of the main memory are called frames. They are divided into
equal-size partitions to achieve maximum utilization of the main memory and avoid external fragmentation.
Example: Consider a process P with a process size of 4 B and a page size of 1 B. There will therefore
be four pages (say, P0, P1, P2, P3), each of size 1 B. When this process goes into the main memory
for execution, then depending upon availability, its pages may be stored in a non-contiguous fashion
in the main memory frames.
Translation of logical Address into physical Address
The CPU always generates a logical address, but we need a physical address for accessing the main
memory. This mapping is done by the MMU (memory management unit) with the help of the page
table. Let's first understand some of the basic terms, then we will see how this translation is done.
Logical Address: The logical address consists of two parts: page number and page offset.
1. Page Number: It tells the exact page of the process which the CPU wants to access.
2. Page Offset: It tells the exact word on that page which the CPU wants to read.
Logical Address = Page Number + Page Offset
Physical Address: The physical address consists of two parts: frame number and page offset.
1. Frame Number: It tells the exact frame where the page is stored in physical memory.
2. Page Offset: It tells the exact word on that page which the CPU wants to read. It requires no
translation, as the page size is the same as the frame size, so the position of the word within the
frame does not change.
Physical Address = Frame Number + Page Offset
• Page table: A page table contains the frame number corresponding to each page number of a
specific process. So, each process will have its own page table. A register called the Page Table
Base Register (PTBR) holds the base address of the page table.
The CPU generates the logical address, which contains the page number and the page offset. The
PTBR register contains the address of the page table. The page table helps in determining
the frame number corresponding to the page number. With the help of the frame number and the
page offset, the physical address is determined and the page is accessed in the main memory, as
sketched below.
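The following sketch mirrors that translation in code. The page size and page-table contents are assumed values chosen only to make the arithmetic visible; real hardware performs the same split-and-lookup using registers and the page table kept in memory.

```python
# A minimal sketch of paging address translation: split the logical address
# into page number and offset, look up the frame number in the page table,
# and recombine. Page size and page-table contents are assumed values.

PAGE_SIZE = 1024                       # bytes per page (and per frame)
page_table = [5, 2, 7, 0]              # page_table[page_number] = frame_number

def logical_to_physical(logical_address):
    page_number, page_offset = divmod(logical_address, PAGE_SIZE)
    frame_number = page_table[page_number]   # one extra memory access in a real system
    return frame_number * PAGE_SIZE + page_offset

print(logical_to_physical(2060))       # page 2, offset 12 -> frame 7 -> 7180
```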
Advantages of Paging
1. There is no external fragmentation as it allows us to store the data in a non-contiguous way.
2. Swapping is easy between equal-sized pages and frames.
Disadvantages of Paging
1. As the size of the frame is fixed, it may suffer from internal fragmentation: the last page of a
process may be too small to occupy an entire frame.
2. The access time increases because of paging, as the main memory now has to be accessed twice.
First, we need to access the page table, which is itself stored in the main memory, and second, we
combine the frame number with the page offset to get the physical address and then access the page
in main memory.
3. For every process, we have an independent page table and maintaining the page table is extra overhead.
Segmentation
In paging, we blindly divide the process into pages of fixed size, but in segmentation, we divide
the process into modules for a better view of the process. Here each segment or module consists
of the same type of functions. For example, the main function is included in one segment, library
functions are kept in another segment, and so on. As the size of segments may vary, the memory is
divided into variable-size parts.
Translation of logical Address into physical Address
The CPU always generates a logical address, but we need a physical address for accessing the main
memory. This mapping is done by the MMU (memory management unit) with the help of the segment
table.
Let's first understand some of the basic terms, then we will see how this translation is done.
• Logical Address: The logical address consists of two parts: segment number and segment offset.
1. Segment Number: It tells the specific segment of the process from which the CPU wants to read
the data.
2. Segment Offset: It tells the exact word in that segment which the CPU wants to read.
Logical Address = Segment Number + Segment Offset
• Physical Address: The physical address is obtained by adding the base address of the segment to the
segment offset.
• Segment table: A segment table stores the base address of each segment in the main memory. It has
two parts, i.e., Base and Limit. Here, base indicates the base address or starting address of the
segment in the main memory, and limit tells the size of that segment. A register called the Segment
Table Base Register (STBR) holds the base address of the segment table. The segment table itself is
also stored in the main memory.
How is the translation done?
The CPU generates the logical address, which contains the segment number and the segment offset.
The STBR register contains the address of the segment table. The segment table helps in determining
the base address of the segment corresponding to the segment number. The segment offset is then
compared with the limit corresponding to that base. If the segment offset is greater than or equal
to the limit, it is an invalid address, because the CPU is trying to access a word beyond the size
of the segment itself, which is not possible. Only if the segment offset is less than the limit is
the request accepted. The physical address is then generated by adding the base address of the
segment to the segment offset, as sketched below.
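The sketch below mirrors this translation, including the limit check. The segment-table entries are assumed values used only for illustration.

```python
# A minimal sketch of segmentation address translation with a limit check.
# Segment table entries (base, limit) are assumed values; a real system keeps
# this table in main memory and points to it with the STBR.

segment_table = [
    (1400, 1000),    # segment 0: base 1400, limit (size) 1000
    (6300,  400),    # segment 1
    (4300, 1100),    # segment 2
]

def translate(segment_number, segment_offset):
    base, limit = segment_table[segment_number]
    if segment_offset >= limit:                  # beyond the segment: trap
        raise MemoryError("trap: invalid segment offset")
    return base + segment_offset                 # physical address

print(translate(2, 53))      # -> 4353
```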

Advantages of Segmentation
1. The size of the segment table is less compared to the size of the page table.
2. There is no internal fragmentation.
Disadvantages of Segmentation
1. When processes are loaded and removed (during swapping) from the main memory, the free memory
space is broken into smaller pieces, and this causes external fragmentation.
2. Here also the time to access the data increases, because due to segmentation the main memory now
has to be accessed twice. First, we need to access the segment table, which is itself stored in the
main memory, and second, we add the base address of the segment to the segment offset to get the
physical address and then access the word in main memory.
Topic: Page Replacement Algorithms in Operating Systems
In an operating system that uses paging for memory management, a page replacement algorithm is
needed to decide which page needs to be replaced when a new page comes in. Page replacement
becomes necessary when a page fault occurs and there are no free page frames in memory. However,
another page fault would arise if the replaced page is referenced again. Hence it is important to
replace a page that is not likely to be referenced in the immediate future. If no page frame is free,
the virtual memory manager performs a page replacement operation to replace one of the pages existing
in memory with the page whose reference caused the page fault. It is performed as follows: the
virtual memory manager uses a page replacement algorithm to select one of the pages currently in
memory for replacement, accesses the page table entry of the selected page to mark it as “not
present” in memory, and initiates a page-out operation for it if the modified bit of its page table
entry indicates that it is a dirty page.
Page Fault: A page fault happens when a running program accesses a memory page that is mapped
into the virtual address space but not loaded in physical memory. Since actual physical memory is
much smaller than virtual memory, page faults happen. In case of a page fault, Operating System might
have to replace one of the existing pages with the newly needed page. Different page replacement
algorithms suggest different ways to decide which page to replace. The target for all algorithms is to
reduce the number of page faults.
Page Replacement Algorithms:
1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this algorithm, the
operating system keeps track of all pages in the memory in a queue, the oldest page is in the front of
the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number
of page faults.

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page
Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults. Then 5 comes; it is not available in
memory, so it replaces the oldest page, i.e., 1 —> 1 Page Fault. 6 comes; it is also not available in
memory, so it replaces the oldest page, i.e., 3 —> 1 Page Fault. Finally, when 3 comes it is not
available, so it replaces 0 —> 1 Page Fault, giving 6 page faults in total.
Belady’s anomaly shows that it is possible to have more page faults when increasing the number of
page frames while using the First In First Out (FIFO) page replacement algorithm. For example, if we
consider the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page
faults, but if we increase the slots to 4, we get 10 page faults, as the sketch below confirms.
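A small simulation makes both the FIFO mechanics and Belady's anomaly easy to check. The sketch below is a plain FIFO queue written for these notes, replaying the reference string quoted above with 3 and then 4 frames.

```python
# A minimal sketch of FIFO page replacement, used here to reproduce the
# Belady's-anomaly figures quoted above (9 faults with 3 frames, 10 with 4).

from collections import deque

def fifo_page_faults(reference_string, frame_count):
    frames = deque()                      # oldest page sits at the left end
    faults = 0
    for page in reference_string:
        if page not in frames:            # page fault
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_page_faults(refs, 3))   # -> 9
print(fifo_page_faults(refs, 4))   # -> 10 (more frames, yet more faults)
```

Running the same function on Example 1's string (1, 3, 0, 3, 5, 6, 3) with 3 frames gives the 6 faults counted above.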
2. Optimal Page Replacement: In this algorithm, the page that will not be used for the longest
duration of time in the future is replaced.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
Page Faults.
0 is already there, so —> 0 Page Fault. When 3 comes it will take the place of 7 because 7 is not
used for the longest duration of time in the future —> 1 Page Fault. 0 is already there, so —> 0
Page Fault. 4 will take the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults, because those pages are already available
in memory. This gives 6 page faults in total.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot know
future requests. The use of Optimal Page replacement is to set up a benchmark so that other
replacement algorithms can be analyzed against it.
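For reference, the Optimal policy can be simulated offline by looking ahead in the reference string. The sketch below is a simple look-ahead scan written for these notes; it reuses the Example-2 string and reproduces the 6 page faults counted above.

```python
# A minimal sketch of Optimal (look-ahead) page replacement: on a fault, evict
# the resident page whose next use lies farthest in the future (or never occurs).

def optimal_page_faults(refs, frame_count):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < frame_count:
            frames.append(page)
        else:
            # distance to the next use of each resident page (infinite if never used again)
            def next_use(p):
                return refs.index(p, i + 1) if p in refs[i + 1:] else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_page_faults(refs, 4))   # -> 6
```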
3. Least Recently Used (LRU): In this algorithm, the page that has been least recently used is replaced.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Find the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
Page Faults.
0 is already there, so —> 0 Page Fault. When 3 comes it will take the place of 7 because 7 is the
least recently used —> 1 Page Fault.
0 is already in memory, so —> 0 Page Fault.
4 will take the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults, because those pages are already available
in memory. This gives 6 page faults in total.
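The LRU policy can be simulated by keeping the resident pages in recency order. The sketch below, written for these notes, uses Python's OrderedDict as that recency stack and reproduces the 6 page faults of Example-3.

```python
# A minimal sketch of LRU: a hit moves the page to the "most recently used"
# end of the ordered dict, a fault evicts the page at the "least recently used" end.

from collections import OrderedDict

def lru_page_faults(refs, frame_count):
    frames = OrderedDict()                 # least recently used page comes first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # hit: now the most recently used
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.popitem(last=False) # evict the least recently used page
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_page_faults(refs, 4))   # -> 6
```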
4. Most Recently Used (MRU): In this algorithm, the page that has been used most recently is replaced.
Belady’s anomaly can occur in this algorithm. Consider the same page reference string 7, 0, 1, 2, 0,
3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
Page Faults.
0 is already there, so —> 0 Page Fault.
When 3 comes it will take the place of 0 because 0 is the most recently used —> 1 Page Fault.
When 0 comes it will take the place of 3 —> 1 Page Fault.
When 4 comes it will take the place of 0 —> 1 Page Fault.
2 is already in memory, so —> 0 Page Fault.
When 3 comes it will take the place of 2 —> 1 Page Fault.
When 0 comes it will take the place of 3 —> 1 Page Fault.
When 3 comes it will take the place of 0 —> 1 Page Fault.
When 2 comes it will take the place of 3 —> 1 Page Fault.
When 3 comes it will take the place of 2 —> 1 Page Fault.
This gives 12 page faults in total.
-------------------------------------------------------------------------------------------------------------------------
Topic: Virtual Memory in Operating System

Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as
though it were part of the main memory. The addresses a program may use to reference memory are
distinguished from the addresses the memory system uses to identify physical storage sites, and
program-generated addresses are translated automatically to the corresponding machine addresses.
Virtual memory relies on a memory hierarchy, consisting of a computer system’s main memory and a
disk, that enables a process to operate with only some portions of its address space in memory. A
virtual memory is what its name indicates: an illusion of a memory that is larger than the real
memory. We refer to the software component of virtual memory as the virtual memory manager. The
basis of virtual memory is the non-contiguous memory allocation model. The virtual memory manager
removes some components from memory to make room for other components.
The size of virtual storage is limited by the addressing scheme of the computer system and the
amount of secondary memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps memory addresses
used by a program, called virtual addresses, into physical addresses in computer memory.
1. All memory references within a process are logical addresses that are dynamically translated
into physical addresses at run time. This means that a process can be swapped in and out of the main
memory such that it occupies different places in the main memory at different times during the course
of execution.
2. A process may be broken into a number of pieces and these pieces need not be contiguously
located in the main memory during execution. The combination of dynamic run-time address
translation and the use of a page or segment table permits this.
If these characteristics are present, then it is not necessary that all the pages or segments of a
process be present in the main memory during execution; the required pages are loaded into
memory whenever they are needed. Virtual memory is implemented using Demand Paging or
Demand Segmentation.
Demand Paging
The process of loading a page into memory on demand (whenever a page fault occurs) is known
as demand paging. The process includes the following steps:
1. If the CPU tries to refer to a page that is currently not available in the main memory, it generates
an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed the OS must
bring the required page into the memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from logical address space to physical address space. The page
replacement algorithms are used for the decision-making of replacing the page in physical address
space.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place the process
back into the ready state.
Hence whenever a page fault occurs these steps are followed by the operating system and the required
page is brought into memory.
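The sketch below ties these steps together in a small, self-contained simulation written for these notes. Everything in it is an assumption made for illustration: pages are loaded only when referenced, a FIFO queue stands in for the page replacement algorithm of step 4, and a print statement stands in for the page-out of a dirty page.

```python
# A minimal sketch of demand paging: a page is brought in only when referenced,
# and when no frame is free a FIFO policy picks a victim, which is "paged out"
# first if it was modified. All structures here are assumed for illustration.

from collections import deque

NUM_FRAMES = 3
page_table = {}                  # page -> frame (only pages currently present)
loaded_order = deque()           # FIFO order of resident pages
dirty = set()                    # pages modified since they were loaded

def reference(page, write=False):
    if page in page_table:                          # no fault: page is resident
        if write:
            dirty.add(page)
        return page_table[page]
    # Page fault: bring the page in on demand (steps 1-6 above, simplified).
    if len(page_table) < NUM_FRAMES:
        frame = len(page_table)                     # a free frame exists
    else:
        victim = loaded_order.popleft()             # replacement policy (FIFO here)
        if victim in dirty:
            print(f"page-out dirty page {victim}")  # would be written to disk
            dirty.discard(victim)
        frame = page_table.pop(victim)
    print(f"page-in page {page} -> frame {frame}")
    page_table[page] = frame
    loaded_order.append(page)
    if write:
        dirty.add(page)
    return frame

for p, w in [(0, False), (1, True), (2, False), (0, False), (3, False)]:
    reference(p, write=w)
```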
Advantages of Virtual Memory
• More processes may be maintained in the main memory: Because we are going to load only
some of the pages of any particular process, there is room for more processes. This leads to more
efficient utilization of the processor because it is more likely that at least one of the more numerous
processes will be in the ready state at any particular time.
• A process may be larger than all of the main memory: One of the most fundamental restrictions
in programming is lifted. A process larger than the main memory can be executed because of demand
paging. The OS itself loads pages of a process in the main memory as required.
• It allows greater multiprogramming levels by using less of the available (primary) memory for
each process.
• The virtual address space can be larger than the available main memory.
• It makes it possible to run more applications at once.
• Users are spared from having to add memory modules when RAM space runs out, and applications
are liberated from shared memory management.
• Speed increases when only a portion of a program is required for execution.
• Memory isolation has increased security.
• It makes it possible for several larger applications to run at once.
• Memory allocation is comparatively cheap.
• It does not suffer from external fragmentation.
• It is efficient to manage logical partition workloads using the CPU.
• Automatic data movement is possible.
Disadvantages of Virtual Memory
• It can slow down the system performance, as data needs to be constantly transferred between the
physical memory and the hard disk.
• It can increase the risk of data loss or corruption, as data can be lost if the hard disk fails or if there
is a power outage while data is being transferred to or from the hard disk.
• It can increase the complexity of the memory management system, as the operating system needs
to manage both physical and virtual memory.
Page Fault Service Time: The time taken to service the page fault is called page fault service time.
The page fault service time includes the time taken to perform all the above six steps.
Let the main memory access time be m, the page fault service time be s, and the page fault rate be p.
Then:
Effective memory access time = p*s + (1-p)*m
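As a quick numeric check of this formula, the sketch below plugs in assumed values for m, s, and p; the numbers are illustrative only.

```python
# Effective memory access time = p*s + (1-p)*m, with assumed example values.

m = 200e-9        # main memory access time: 200 ns (assumed)
s = 8e-3          # page fault service time: 8 ms (assumed)
p = 1 / 1000      # page fault rate: one fault per 1000 references (assumed)

effective_access_time = p * s + (1 - p) * m
print(f"{effective_access_time * 1e6:.3f} microseconds")   # about 8.2 microseconds
```

Even a page fault rate of one in a thousand raises the effective access time from 200 ns to roughly 8.2 µs, which is why keeping the fault rate low matters so much.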

Swapping
Swapping a process out means removing all of its pages from memory, or marking them so that
they will be removed by the normal page replacement process. Suspending a process ensures that it
is not runnable while it is swapped out. At some later time, the system swaps the process back from
secondary storage into main memory. When a process spends most of its time swapping pages in and out
rather than executing, the situation is called thrashing.
