Chapter-3 MSc-cs

Chapter 3 discusses memory management systems, detailing how data is stored in binary format and the impact of memory size on CPU utilization. It covers memory allocation methods such as contiguous memory allocation, swapping, paging, and segmentation, highlighting their advantages and disadvantages. The chapter emphasizes the importance of efficient memory management techniques to optimize system performance and resource utilization.


Chapter-3

Memory Management System


Memory:
Computer memory can be defined as a collection of data represented in binary format. On the basis of its various functions, memory can be classified into several categories.

A computer device that is capable of storing information or data, temporarily or permanently, is called a storage device.

How is data stored in a computer system?

A machine understands only binary language, that is, 0s and 1s. The computer first converts every piece of data into binary and then stores it in memory.

That means if we have a program line written as int i = 10, the computer converts it into binary and then stores it in memory blocks.

The representation of int i = 10 is shown below.

The binary representation of 10 is 1010. Here, we are considering an int of size 2 bytes, i.e. 16 bits. One memory block stores 1 bit. If we are using a signed integer, then the most significant bit in the memory array is always the sign bit.

A sign bit value of 0 represents a positive integer, while 1 represents a negative integer. Here, the range of values that can be stored using this memory array is -32768 to +32767.

We can enlarge this range by using unsigned int. In that case, the bit which was storing the sign also stores a value bit, and therefore the range becomes 0 to 65,535.

Let's consider:

Process size = 4 MB
Main memory size = 4 MB

Only one process can reside in the main memory at any time. If the fraction of time for which the process does I/O is P, then

CPU utilization = (1 - P)

Let's say P = 70%. Then CPU utilization = 30%.

Now, increase the memory size to, say, 8 MB, keeping the process size at 4 MB. Two processes can now reside in the main memory at the same time. If the fraction of time for which each process does its I/O is P, then

CPU utilization = (1 - P^2)

Let's say P = 70%. Then CPU utilization = (1 - 0.49) = 0.51 = 51%.

Therefore, we can state that CPU utilization increases as the memory size is increased.
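The argument above generalizes to n resident processes: the CPU is idle only when all of them wait on I/O at once, giving utilization = 1 - P^n (assuming the I/O waits are independent). A quick sketch:

```python
def cpu_utilization(io_fraction: float, num_processes: int) -> float:
    """CPU is idle only when all resident processes wait on I/O
    at the same time, which happens with probability P^n."""
    return 1 - io_fraction ** num_processes

# One 4 MB process in 4 MB of memory:
print(round(cpu_utilization(0.70, 1), 2))  # 0.3
# Two 4 MB processes in 8 MB of memory:
print(round(cpu_utilization(0.70, 2), 2))  # 0.51
```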

Swapping:
Swapping is a memory management scheme in which any process can be temporarily swapped from main memory to secondary memory so that the main memory can be made available for other processes. It is used to improve main memory utilization. In secondary memory, the place where the swapped-out process is stored is called swap space.

The purpose of swapping in an operating system is to access data present on the hard disk and bring it into RAM so that application programs can use it. The thing to remember is that swapping is used only when the data is not present in RAM.

Although the process of swapping affects the performance of the system, it makes it possible to run larger processes, and more of them, at once. This is the reason why swapping is also referred to as memory compaction.

The concept of swapping is divided into two operations: swap-in and swap-out.

o Swap-out is the method of removing a process from RAM and moving it to the hard disk.
o Swap-in is the method of bringing a process from the hard disk back into the main memory (RAM).

Example: Suppose a user process of size 2048 KB is swapped to a standard hard disk with a data transfer rate of 1 Mbps. We can calculate how long it will take to transfer the process from main memory to secondary memory:

User process size = 2048 KB
Data transfer rate = 1 Mbps = 1024 Kbps
Time = process size / transfer rate
     = 2048 / 1024
     = 2 seconds
     = 2000 milliseconds

Taking both swap-out and swap-in into account, the process will take 4000 milliseconds.
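The same calculation can be sketched in a couple of lines (function name is illustrative):

```python
def swap_time_ms(process_kb: float, rate_kbps: float) -> float:
    """Time to move a process one way between RAM and disk, in milliseconds."""
    return process_kb / rate_kbps * 1000

one_way = swap_time_ms(2048, 1024)   # 2000.0 ms
round_trip = 2 * one_way             # swap-out + swap-in
print(one_way, round_trip)           # 2000.0 4000.0
```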
Advantages of Swapping
1. It helps the CPU manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks simultaneously. Therefore, processes do not have to wait very long before they are executed.
4. It improves main memory utilization.
Disadvantages of Swapping
1. If the computer system loses power during substantial swapping activity, the user may lose all information related to the program.
2. If the swapping algorithm is not good, it can increase the number of page faults and decrease overall processing performance.

Contiguous Memory Allocation:

Contiguous memory allocation is one of the operating system's memory allocation methods. What, however, is memory allocation? A program or process requires memory space in order to run, so a process must be given an amount of memory that corresponds to its needs. This procedure is called memory allocation.

Contiguous memory allocation is one such strategy. As the name suggests, with this strategy we allocate contiguous blocks of memory to each process. Therefore, whenever a process asks to enter the main memory, we allot it a continuous segment from the free area, based on its size.

Fixed-size Partitioning Method

In this method of contiguous memory allocation, each process is given a fixed-size contiguous block in the main memory. The entire memory is partitioned into contiguous blocks of fixed size, and each time a process enters the system, it is given one of the available blocks. Each process receives a block of the same size, regardless of the size of the process. Static partitioning is another name for this approach.

In the figure above, three processes in the input queue require memory space. Because we are using the fixed-size partition technique, the memory has fixed-size blocks. The first process, which is 3 MB in size, is given a 5 MB block, as is the 4 MB process. The second process, which is only 1 MB in size, is also given a 5 MB block. So, it doesn't matter how big the process is; the same fixed-size memory block is assigned to each.

It is obvious that under this system, the number of contiguous blocks into which the memory is partitioned is determined by the amount of space each block covers, and this, in turn, determines how many processes can remain in the main memory at once.

The degree of multiprogramming refers to the number of processes that can reside in memory concurrently. Therefore, the number of blocks formed in the RAM determines the system's degree of multiprogramming.

Advantages

A fixed-size partition system has the following benefits:

o This strategy is easy to employ because each block is the same size. Now all
that is left to do is allocate processes to the fixed memory blocks that have been
divided up.
o It is simple to keep track of how many memory blocks are still available, which
determines how many further processes can be allocated memory.
o This approach can be used in a system that requires multiprogramming since
numerous processes can be maintained in memory at once.

Disadvantages

Although the fixed-size partitioning strategy offers numerous benefits, there are a few
drawbacks as well:

o We won't be able to allocate space to a process whose size exceeds the block
since the size of the blocks is fixed.
o The amount of multiprogramming is determined by block size, and only as
many processes can run simultaneously in memory as there are available blocks.
o If the block's size is greater than that of the process, we must still assign the whole block to the process, which leaves a lot of unused space inside the block. This wasted space, called internal fragmentation, could otherwise have been used for another process.

Flexible Partitioning Method

In this style of contiguous memory allocation, no fixed blocks or memory partitions are created. Instead, each process is given a variable-sized block according to its needs. This means that whenever a new process requests memory, it is allocated that amount of RAM if space is available. As a result, each block's size is determined by the needs of the process that uses it.

There are no fixed-size partitions in the diagram above. Instead, the first process is given just 3 MB of RAM because that is all it requires. The remaining three processes are similarly given only the amount of space they need.

This method is also known as dynamic partitioning, because the blocks' sizes are flexible and determined as new processes arrive.

Advantages

A variable-size partitioning system has the following benefits:

o There is no internal fragmentation, because processes are given blocks of space according to their needs. Therefore, this technique does not waste RAM.
o The number of processes that can run simultaneously depends on how many processes are in memory and how much space each takes up. As a result, the degree of multiprogramming is dynamic and varies with the situation.
o Even a large process can be given space, because there are no fixed-size blocks.

Disadvantages

Despite the variable-size partition scheme's many benefits, there are a few drawbacks
as well:
o Because this method is dynamic, a variable-size partition scheme is challenging to implement.
o It is difficult to keep track of the processes and the available memory space.

Techniques for Allocating Memory to Processes in the Input Queue

So far, we've examined two different contiguous memory allocation strategies. But what happens when a new process needs to be given a place in the main memory? How is the block or segment it will receive chosen?

Because continuous blocks of memory are assigned to processes, the main memory tends to stay full. When a process finishes, however, it leaves behind an empty block, termed a hole, and a new process can potentially be placed there. As a result, the main memory consists of processes and holes, and each of these holes may be assigned to a new process that arrives.

First-Fit

This is a fairly straightforward technique: we start at the beginning of memory and assign the first hole that is large enough to meet the needs of the process. The first-fit technique can also be applied so that each search picks up where the previous search left off, rather than always starting at the beginning.

Best-Fit

This greedy method allocates the smallest hole that meets the needs of the process, with the goal of minimising the memory that would otherwise be wasted. Therefore, in order to select the best fit for the process without wasting memory, we must first sort the holes according to their sizes.

Worst-Fit

This strategy is the opposite of Best-Fit. Once the holes are sorted by size, the largest hole is chosen and assigned to the incoming process. The theory behind this allocation is that because the process is given a sizeable hole, a large amount of space will be left over, leaving behind a hole big enough to house additional processes.
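The three strategies can be sketched as follows, using a simplified model in which free memory is just a list of hole sizes (the names and structure are illustrative, not from the chapter):

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole large enough, or None."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole, if any hole is large enough."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return max(candidates)[1] if candidates else None

holes = [10, 4, 20, 18, 7]   # hole sizes in MB
print(first_fit(holes, 5))   # 0 -> the 10 MB hole
print(best_fit(holes, 5))    # 4 -> the 7 MB hole
print(worst_fit(holes, 5))   # 2 -> the 20 MB hole
```

Note that in practice the hole list is scanned in address order, not kept sorted; sorting is just the conceptual way to see what best-fit and worst-fit select.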

Contiguous Memory Allocation's Advantages and Disadvantages

Contiguous memory allocation has a range of benefits and drawbacks. The following are a few of them:

Advantages

o It is easy to keep track of the number of memory blocks remaining, which determines how many further processes can be given memory space.
o Contiguous allocation gives good read performance, since an entire file can be read from the disk in a single operation.
o Contiguous allocation works well and is easy to set up.

Disadvantages

o It suffers from fragmentation: as processes and files are created and removed, the free space is broken into holes that may individually be too small to use.
o In order to choose a hole of the proper size when creating a new file, its final size needs to be known in advance.
o Once the disk is full, the space trapped in the holes needs to be compacted or reused.

Paging:
Paging is a storage mechanism used to retrieve processes from secondary storage into the main memory in the form of pages.

The main idea behind paging is to divide each process into pages. The main memory is likewise divided into frames.

One page of the process is stored in one of the frames of memory. The pages can be stored at different, non-contiguous locations of memory; a page simply occupies whichever free frame it is given.

Pages of a process are brought into the main memory only when they are required; otherwise they reside in secondary storage.

Different operating systems define different frame sizes, but within a system all frames must be of equal size. Because pages are mapped one-to-one onto frames in paging, the page size needs to be the same as the frame size.
Structure of the Page Table:
The page table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical addresses.

Logical addresses are generated by the CPU for the pages of a process; they are the addresses the process itself uses.

Physical addresses are the actual frame addresses of memory. They are generally used by the hardware, or more specifically by the RAM subsystem.

Let:

Physical address space = M words
Logical address space = L words
Page size = P words

Then:

Physical address = log2 M = m bits
Logical address = log2 L = l bits
Page offset = log2 P = p bits

The CPU always accesses processes through their logical addresses. However, the main memory recognizes physical addresses only.

In this situation, a unit called the Memory Management Unit (MMU) comes into the picture. It converts the page number of the logical address into the frame number of the physical address. The offset remains the same in both addresses.

To perform this task, the Memory Management Unit needs a special kind of mapping, which is provided by the page table. The page table stores the frame number corresponding to each page number.

In other words, the page table maps a page number to its actual location (frame number) in memory.

The image below shows how the required word of the frame is accessed with the help of the offset.
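The MMU's translation step can be sketched like this (a toy model with an illustrative page size and page table, not values from the chapter):

```python
PAGE_SIZE = 1024  # P words per page, so the offset needs log2(1024) = 10 bits

# Page table for one process: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address: int) -> int:
    """Split the logical address into (page, offset), look the page up
    in the page table, and rebuild the physical address. The offset is
    carried over unchanged."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

# Word 100 of page 1 lives at word 100 of frame 2:
print(translate(1 * PAGE_SIZE + 100))  # 2148
```

Real hardware does the split with bit masks rather than division, but the arithmetic is the same.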

Segmentation:
In operating systems, segmentation is a memory management technique in which the memory is divided into variable-sized parts. Each part is known as a segment, which can be allocated to a process.

The details about each segment are stored in a table called the segment table. The segment table is itself stored in one (or more) of the segments.

The segment table contains mainly two pieces of information about each segment:

1. Base: the base address of the segment.
2. Limit: the length of the segment.

Why is Segmentation Required?

Until now, we were using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all processes into pages regardless of the fact that a process may have related parts, such as functions, which ought to be loaded into the same page.

The operating system doesn't care about the user's view of the process. It may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time, which decreases the efficiency of the system.

It is better to have segmentation, which divides the process into segments. Each segment contains the same type of content; for example, the main function can be included in one segment and the library functions in another.

Translation of a Logical Address into a Physical Address Using the Segment Table

The CPU generates a logical address which contains two parts:

1. Segment number
2. Offset

For example: suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset. Then the maximum segment size is 4096 and the maximum number of segments that can be referred to is 16.

When a program is loaded into memory, the segmentation system tries to locate space that is large enough to hold the first segment of the process; space information is obtained from the free list maintained by the memory manager. Then it tries to locate space for the other segments. Once adequate space is located for all the segments, it loads them into their respective areas.

The operating system also generates a segment map table for each program.

With the help of segment map tables and hardware assistance, the operating system
can easily translate a logical address into physical address on execution of a program.

The segment number is used as an index into the segment table. The limit of the respective segment is compared with the offset. If the offset is less than the limit, the address is valid; otherwise an error is raised because the address is invalid.

For a valid address, the base address of the segment is added to the offset to get the physical address of the actual word in the main memory.

The figure above shows how address translation is done in the case of segmentation.
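The base/limit check just described can be sketched as follows (the segment table values are illustrative):

```python
# Segment table: segment number -> (base address, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment: int, offset: int) -> int:
    """Validate the offset against the segment's limit, then add the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        # In hardware this raises a trap to the operating system.
        raise ValueError("invalid address: offset exceeds segment limit")
    return base + offset

print(translate(1, 53))   # 6353
print(translate(2, 100))  # 4400
```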

Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is smaller than the page table in paging.

Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. It requires costly memory management algorithms.

Virtual Memory Management:

A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is a section of the hard disk that is set up to emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by using disk. Second, it allows us to have memory protection, because each virtual address is translated to a physical address.

Advantages of Virtual Memory

1. The degree of multiprogramming is increased.
2. Users can run large applications with less physical RAM.
3. There is no need to buy more RAM.

Disadvantages of Virtual Memory

1. The system becomes slower, since swapping takes time.
2. It takes more time to switch between applications.
3. The user has less hard disk space available for other use.

Demand Paging

A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to disk or any of the new program's pages into the main memory. Instead, it just begins executing the new program after loading its first page, and fetches that program's pages as they are referenced.

While executing a program, if the program references a page which is not available in the main memory because it was swapped out a little while ago, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system, which demands the page back into memory.

Advantages

Following are the advantages of demand paging −

• Large virtual memory.
• More efficient use of memory.
• There is no limit on the degree of multiprogramming.

Disadvantages

• The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
Page Replacement Algorithms

Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out and write to disk when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and no free page can be used for the allocation, either because no pages are available or because the number of free pages is lower than required.

When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This determines the quality of a page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.

A page replacement algorithm looks at the limited information about page accesses provided by the hardware, and tries to select which pages should be replaced so as to minimize the total number of page misses, while balancing this against the costs in primary storage and processor time of the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Reference String

The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, about which we note two things:

• For a given page size, we need to consider only the page number, not the entire address.
• If we have a reference to a page p, then any immediately following references to page p will never cause a page fault. Page p will be in memory after the first reference, so the immediately following references will not fault.
• For example, consider the following sequence of addresses: 123, 215, 600, 1234, 76, 96. If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0.
First In First Out (FIFO) Algorithm
• The oldest page in main memory is the one selected for replacement.
• Easy to implement: keep a list, replace pages from the tail, and add new pages at the head.
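A minimal FIFO page-fault simulator, reusing the reference string derived above (the frame count of 3 is an illustrative choice):

```python
from collections import deque

def to_reference_string(addresses, page_size):
    """Drop the offset: keep only the page number of each address."""
    return [addr // page_size for addr in addresses]

def fifo_page_faults(reference_string, num_frames):
    """Count page faults when the oldest resident page is evicted first."""
    frames = deque()          # left end = oldest resident page
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

refs = to_reference_string([123, 215, 600, 1234, 76, 96], page_size=100)
print(refs)                        # [1, 2, 6, 12, 0, 0]
print(fifo_page_faults(refs, 3))   # 5: only the repeated reference to page 0 hits
```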

Optimal Page Algorithm

• An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. Such an algorithm exists and has been called OPT or MIN.
• Replace the page that will not be used for the longest period of time, using knowledge of when each page is next to be used.

Least Recently Used (LRU) Algorithm

• The page which has not been used for the longest time in main memory is the one selected for replacement.
• Easy to implement: keep a list, and replace pages by looking back in time.
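An LRU counterpart to the FIFO sketch: an OrderedDict keeps the resident pages in recency order, oldest first (the structure is illustrative, not from the chapter):

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Count page faults when the least recently used page is evicted."""
    frames = OrderedDict()    # insertion order = recency order (oldest first)
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

print(lru_page_faults([1, 2, 6, 12, 0, 0], 3))  # 5, same as FIFO here
print(lru_page_faults([1, 2, 1, 3, 2, 4], 3))   # 4: the repeated 1 and 2 hit
```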

Page Buffering Algorithm

• To get a process started quickly, keep a pool of free frames.
• On a page fault, select a page to be replaced.
• Write the new page into a frame from the free pool, mark the page table, and restart the process.
• Then write the dirty page out to disk and place the frame holding the replaced page into the free pool.

Least Frequently Used (LFU) Algorithm
• The page with the smallest reference count is the one selected for replacement.
• This algorithm suffers in the situation in which a page is used heavily during the initial phase of a process but is then never used again.

Most Frequently Used (MFU) Algorithm
• This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
Copy-On-Write (COW):

Copy-On-Write (COW) memory management is a memory optimization technique employed by operating systems to reduce overhead when creating new processes. It allows multiple processes to share the same memory pages until one process modifies them. Upon modification, the operating system creates a duplicate copy of the original page, which is exclusively granted to the modifying process, while the other processes continue to share the original page. This technique is especially advantageous when creating new processes, as it enables the new process to share the memory pages of the parent process until it needs to modify them. By significantly saving memory and reducing the time needed to create a new process, copy-on-write memory management has become a standard feature in modern operating systems such as Linux, macOS, and Windows. Its effectiveness is particularly prominent in scenarios where many processes are created and need to share memory, such as in virtualized environments or cloud computing.

The Copy on Write Mechanism

The Copy on Write mechanism is a memory management technique used by


modern operating systems to optimize memory usage and reduce overhead
when creating new processes. Copy on Write works by allowing multiple
processes to share the same memory pages until one of the processes modifies
the page. When a modification occurs, the operating system creates a copy of
the original page and gives it exclusively to the modifying process, while the
other processes continue to share the original page.

To understand how the Copy on Write mechanism works, it is important to


understand how memory is shared among processes. In modern operating
systems, memory is allocated to processes in the form of virtual memory
pages. Each page is typically 4 KB in size and is mapped to a physical memory
location by the operating system's memory manager.

When a process is created, the operating system allocates a set of virtual


memory pages to the process. These pages are initially marked as read-only,
and are shared among all processes that have access to them. When a process
tries to modify a read-only page, the operating system triggers the Copy on
Write mechanism.

When a process modifies a read-only page, the operating system creates a


copy of the original page and gives it exclusively to the modifying process. The
original page remains read-only and is shared among all other processes that
have access to it. The operating system updates the virtual memory mapping
of the modifying process to point to the new copy of the page, and the process
can now write to the page without affecting any other processes.
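The mechanism can be illustrated with a toy user-space model. Real COW happens inside the kernel via page-table permission bits and fault handling; this sketch only mimics the bookkeeping (all class and method names are invented for illustration):

```python
class CowPage:
    """A shareable page with a reference count."""
    def __init__(self, data):
        self.data = bytearray(data)
        self.refs = 1

class Process:
    def __init__(self, pages):
        self.pages = pages            # list index = virtual page number

    def fork(self):
        """Child shares every page with the parent; nothing is copied yet."""
        for page in self.pages:
            page.refs += 1
        return Process(list(self.pages))

    def write(self, page_no, offset, value):
        """On a write to a shared page, copy it first (the 'COW fault')."""
        page = self.pages[page_no]
        if page.refs > 1:
            page.refs -= 1
            page = CowPage(page.data)   # private copy for the writer
            self.pages[page_no] = page
        page.data[offset] = value

parent = Process([CowPage(b"hello"), CowPage(b"world")])
child = parent.fork()
child.write(0, 0, ord("H"))               # triggers a copy of page 0 only
print(bytes(parent.pages[0].data))        # b'hello' (unchanged)
print(bytes(child.pages[0].data))         # b'Hello' (private copy)
print(parent.pages[1] is child.pages[1])  # True: page 1 is still shared
```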

The Copy on Write mechanism is particularly effective in scenarios where a


large number of processes are created and need to share memory. By allowing
processes to share memory pages, the operating system can reduce the
amount of physical memory needed to support multiple processes. This can
save a significant amount of memory and reduce the time needed to create
new processes.

However, the Copy on Write mechanism is not without its drawbacks. One
potential issue is the overhead associated with creating a new copy of a page.
This overhead can become significant when many processes are modifying the
same page frequently. Additionally, the increased memory usage associated
with creating multiple copies of a page can be a concern in some scenarios.

Advantages of Copy on Write

The Copy on Write mechanism offers several advantages in modern operating


systems, including −
1. Reduced memory usage

By allowing processes to share memory pages, the operating system can


reduce the amount of physical memory needed to support multiple processes.
This can be particularly important in scenarios where there are many processes
running concurrently, as it can help reduce overall memory usage and improve
system performance.

2. Faster process creation time

Because the Copy on Write mechanism allows new processes to share memory
pages with existing processes, the time needed to create a new process is
reduced. This can be particularly beneficial in scenarios where many processes
need to be created and destroyed frequently, such as in web servers or cloud
computing environments.

3. Improved performance in virtualized environments

The Copy on Write mechanism is particularly useful in virtualized


environments, where multiple virtual machines may be running on a single
physical server. By allowing virtual machines to share memory pages, the
operating system can reduce the amount of memory needed to support each
virtual machine, which can improve overall system performance.

Overall, the Copy on Write mechanism offers significant advantages in terms


of memory usage, performance, and system scalability. It is a widely used
technique in modern operating systems and has become an important part of
the memory management strategies used by operating system developers.

Drawbacks of Copy on Write

Despite its benefits, the Copy on Write mechanism has some drawbacks. The major ones are as follows −

1. Overhead associated with creating a new copy of a page


When a process modifies a read-only page, the Copy on Write mechanism
creates a new copy of the page and gives it exclusively to the modifying
process. This process involves overhead in terms of memory allocation and
page copying, which can slow down system performance in some cases.

2. Increased memory usage when multiple processes


modify the same page frequently

If multiple processes modify the same page frequently, the Copy on Write
mechanism may create multiple copies of the page, which can lead to
increased memory usage. This can become a concern in scenarios where
memory usage is limited or where many processes are frequently modifying
the same pages.

3. Complexity of implementation

The Copy on Write mechanism is a complex technique that requires careful


implementation to ensure that it functions correctly. This can make it more
difficult to develop and maintain operating systems that use this technique.

4. Potential for security vulnerabilities

Because the Copy on Write mechanism involves sharing memory pages


between processes, there is a potential for security vulnerabilities to arise. For
example, a process could intentionally modify a shared memory page to cause
a security exploit in another process that shares the same page.

Despite these potential drawbacks, the Copy on Write mechanism remains a


widely used technique in modern operating systems due to its many
advantages. Operating system developers must carefully consider the benefits
and drawbacks of the Copy on Write mechanism when designing and
implementing their memory management strategies.

Allocation of Frames:
The main memory of the operating system is divided into frames. A process is stored in these frames, and once the process is loaded into frames, the CPU may run it. As a result, the operating system must set aside enough frames for each process, and it uses various algorithms to assign them.

There are mainly five frame allocation algorithms in the OS. These are as follows:
1. Equal Frame Allocation
2. Proportional Frame Allocation
3. Priority Frame Allocation
4. Global Replacement Allocation
5. Local Replacement Allocation

Equal Frame Allocation

In equal frame allocation, the available frames are divided equally among the processes in the OS. For example, if the system has 30 frames and 7 processes, each process will get 4 frames. The 2 frames that are not assigned to any process may be used as a free-frame buffer pool in the system.

Disadvantage

In a system with processes of varying sizes, assigning an equal number of frames to each process makes little sense. If many frames are assigned to a small task, the allotted frames will sit empty and be wasted.

Proportional Frame Allocation

The proportional frame allocation technique assigns frames based on the size each process needs for execution and the total number of frames in memory.

The number of frames allocated to a process pi of size si is ai = (si / S) * m, in which S is the sum of all process sizes and m is the number of frames in the system.
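A small sketch of this formula (fractions are truncated to whole frames; distributing the leftover frames is omitted for simplicity):

```python
def proportional_allocation(process_sizes, total_frames):
    """ai = (si / S) * m, truncated to whole frames."""
    total_size = sum(process_sizes)
    return [size * total_frames // total_size for size in process_sizes]

# 62 frames shared by a 10-page process and a 127-page process:
print(proportional_allocation([10, 127], 62))  # [4, 57]
```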

Disadvantage

The only drawback of this algorithm is that it doesn't allocate frames based on priority.
Priority frame allocation solves this problem.

Priority Frame Allocation

Priority frame allocation assigns frames based on the priority of the processes as well as the number of frames they require. If a process has a high priority and requires more frames, many frames will be allocated to it. Lower-priority processes are allocated frames after that.

Global Replacement Allocation

When a process requires a page that isn't currently in memory, it may bring it in and select a frame from the set of all frames, even if that frame is already allocated to another process. In other words, one process may take a frame from another.

Advantages

Process performance is not hampered, resulting in higher system throughput.

Disadvantages

A process cannot solely control its own page fault ratio. The paging behavior of other processes also influences the number of pages in memory available to a process.

Local Replacement Allocation

When a process requires a page that isn't already in memory, it brings it in and assigns it a frame from its own set of allocated frames.

Advantages

The paging behavior of a specific process has an effect on the pages in memory and
the page fault ratio.

Disadvantages

A low priority process may obstruct a high priority process by refusing to share its
frames.

Global vs. Local Replacement Allocation

With a local replacement strategy, the number of frames assigned to a process does not change. With global replacement, on the other hand, a process can also take frames granted to other processes and thereby increase the number of frames allocated to it.
