
OS Unit-IV

Unit IV of the Operating Systems document covers Memory Management, detailing techniques such as Swapping, Contiguous Memory Allocation, Paging, and Segmentation. It explains the importance of memory management in optimizing system performance, the various memory allocation methods, and the advantages and disadvantages of paging and segmentation. Additionally, it discusses the translation of logical addresses to physical addresses and the role of the Translation Lookaside Buffer (TLB) in improving memory access efficiency.


Operating Systems-UNIT-IV

OPERATING SYSTEMS
UNIT-IV:
Memory Management: Swapping, Contiguous memory allocation, Paging, Segmentation.
Virtual memory management - Demand paging, copy-on-write, page-replacement, Thrashing.

-----------------------------------------------------------------------------------------------------------------------

What is Memory Management?


Memory Management is the process of controlling and coordinating computer memory, assigning
portions known as blocks to various running programs to optimize the overall performance of the
system.

It is one of the most important functions of an operating system: managing primary memory. It moves processes back and forth between main memory and the disk during execution, and it keeps track of every memory location, whether allocated to some process or free.
Why Use Memory Management?
Here are the reasons for using memory management:

 It decides how much memory to allocate to each process, and which process gets memory at what time.
 It tracks when memory is freed or deallocated and updates the status accordingly.
 It allocates space to application routines.
 It ensures that applications do not interfere with one another.
 It helps protect processes from each other.
 It places programs in memory so that memory is utilized to its full extent.

Memory Management Techniques


Here are some of the most crucial memory management techniques:

Single Contiguous Allocation


It is the simplest memory management technique. In this method, all of the computer's memory, except a small portion reserved for the OS, is available to a single application. For example, the MS-DOS operating system allocates memory in this way, as do many embedded systems that run a single application.

Partitioned Allocation
It divides primary memory into several memory partitions, which are usually contiguous areas of memory. Every partition stores all the information for a specific task or job. This method consists of allocating a partition to a job when it starts and freeing it when it ends.

Paged Memory Management


This method divides the computer's main memory into fixed-size units known as page frames. A hardware memory management unit maps pages into frames; memory is allocated on a per-page basis.

Segmented Memory Management


Segmented memory is the only memory management method that does not provide the user’s
program with a linear and contiguous address space.

Segments require hardware support in the form of a segment table, which contains each segment's physical address in memory, its size, and other data such as access-protection and status bits.

What is Swapping?
Swapping is a method in which a process is temporarily swapped out of main memory to a backing store and later brought back into memory to continue execution.

The backing store is a hard disk or other secondary storage device that must be large enough to hold copies of all memory images for all users, and it must provide direct access to these memory images.

Benefits of Swapping
Here are the major benefits of swapping:

 It offers a higher degree of multiprogramming.

 It allows dynamic relocation. For example, if execution-time address binding is used, a process can be swapped back into a different memory location; with compile-time or load-time binding, it must be swapped back into the same location.

 It helps achieve better utilization of memory.

 It wastes little CPU time and can readily be applied to a priority-based scheduling method to improve its performance: a lower-priority process can be swapped out so that a higher-priority process can run.


What is Memory allocation?


Memory allocation is a process by which computer programs are assigned memory or space.

Here, main memory is divided into two types of partitions

1. Low Memory – The operating system resides in this part of memory.

2. High Memory – User processes are held in high memory.

Partition Allocation
Memory is divided into different blocks or partitions, and each process is allocated a partition according to its requirement. Variable-size partition allocation avoids internal fragmentation, though it can suffer from external fragmentation.

Below are the various partition allocation schemes :

 First Fit: the partition allocated is the first block from the beginning of main memory that is large enough.
 Best Fit: the process is allocated to the smallest free partition that is large enough.
 Worst Fit: the process is allocated to the largest sufficient free partition in main memory.
 Next Fit: similar to First Fit, but the search for a sufficient partition starts from the point of the last allocation rather than from the beginning.
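As a sketch, the four schemes above can be compared on a list of free block sizes (the helper names and sample sizes here are illustrative, not from the text):

```python
def first_fit(blocks, size):
    """Index of the first free block large enough, scanning from the start."""
    for i, b in enumerate(blocks):
        if b >= size:
            return i
    return None

def best_fit(blocks, size):
    """Index of the smallest free block that is still large enough."""
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(blocks, size):
    """Index of the largest free block that is large enough."""
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(candidates)[1] if candidates else None

def next_fit(blocks, size, start):
    """Like first fit, but scan circularly from the last allocation point."""
    n = len(blocks)
    for k in range(n):
        i = (start + k) % n
        if blocks[i] >= size:
            return i
    return None
```

For free blocks of sizes [100, 500, 200, 300, 600] and a 212-unit request, First Fit picks the 500 block, Best Fit the 300 block, and Worst Fit the 600 block.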

Paging in Operating Systems

Paging permits the physical address space of a process to be non-contiguous. It is a fixed-size partitioning scheme: in the paging technique, both secondary memory and main memory are divided into equal fixed-size partitions.

Paging solves the problem, suffered by many memory management schemes, of fitting memory chunks of varying sizes onto the backing store.

Paging helps to avoid external fragmentation and the need for compaction.

Basic Method of Paging

The paging technique divides physical memory (main memory) into fixed-size blocks known as Frames, and divides logical memory (the process's address space) into blocks of the same size known as Pages.

This technique keeps the track of all the free frames.


The Frame has the same size as that of a Page. A frame is basically a place where a (logical) page can
be (physically) placed.

Each process is mainly divided into parts where the size of each part is the same as the page size.

There is a possibility that the size of the last part may be less than the page size.

 Pages of a process are brought into the main memory only when there is a requirement
otherwise they reside in the secondary storage.

 One page of a process is stored in one of the frames of memory. The pages of a process can be stored at different locations in memory: the frames that hold a process's pages need not be contiguous.
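The division described above can be checked numerically. The helper below (an illustration, not part of the text) computes how many frames a process needs and how much of the last, possibly partial, page is wasted:

```python
import math

def pages_needed(process_size, page_size):
    """Number of pages a process occupies, plus the internal
    fragmentation left in its last (possibly partial) page."""
    n = math.ceil(process_size / page_size)      # at least n frames are required
    internal_frag = n * page_size - process_size  # unused space in the last frame
    return n, internal_frag
```

For example, a 10,000-byte process with 4 KB pages needs 3 frames and wastes 2,288 bytes of the last one.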

Let us now cover the concept of translating a logical address into the physical address:

Translation of Logical Address into Physical Address

Before moving on further there are some important points to note:

 The CPU always generates a logical address.

 In order to access main memory, a physical address is always needed.

The logical address generated by the CPU always consists of two parts:

1. Page Number(p)

2. Page Offset (d)



where,

The page number specifies the page of the process from which the CPU wants to read data, and it is also used as an index into the page table.

The page offset specifies the particular word on that page that the CPU wants to read.

Now let us understand what a page table is.

Page Table in OS

The Page table mainly contains the base address of each page in the Physical memory. The base
address is then combined with the page offset in order to define the physical memory address which
is then sent to the memory unit.

Thus the page table provides the corresponding frame number (the base address of the frame) where that page is stored in main memory.

As noted above, the frame number is combined with the page offset to form the required physical address.

So, the physical address consists of two parts:

1. Page offset(d)

2. Frame Number(f)

where the frame number indicates the specific frame in which the required page is stored, and the page offset indicates the specific word to be read from that page.

The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture.

If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m-n bits of a logical address designate the page number and the n low-order bits designate the page offset.

The logical address is as follows:

where p indicates the index into the page table, and d indicates the displacement within the page.
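The bit manipulation described above can be sketched as follows (the values of n and the sample numbers are illustrative):

```python
def split_logical_address(addr, n):
    """Split a logical address into (page number p, offset d) for a
    page size of 2**n addressing units: p is the high-order bits,
    d the n low-order bits."""
    d = addr & ((1 << n) - 1)
    p = addr >> n
    return p, d

def physical_address(frame, d, n):
    """Recombine a frame number with the page offset."""
    return (frame << n) | d
```

With 1 KB pages (n = 10), logical address 4660 splits into page 4, offset 564; if page 4 is stored in frame 7, the physical address is 7*1024 + 564 = 7732.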


The diagram above shows the translation of a logical address into a physical address. The PTBR in the diagram is the page-table base register; it holds the base address of the page table for the current process.

The PTBR is mainly a processor register and is managed by the operating system. Commonly, each
process running on a processor needs its own logical address space.

But there is a problem with this approach: the time required to access a user memory location. To access location i, we must first index into the page table, using the value in the PTBR offset by the page number for i; this requires one memory access. It gives us the frame number, which is combined with the page offset to produce the actual address, after which we can access the desired location in memory.

With this scheme, two memory accesses are needed to access a byte (one for the page-table entry and one for the byte itself), so memory access is slowed by a factor of two.

Translation Look-aside Buffer (TLB)

The standard solution to this problem is to use a special, small, fast-lookup hardware cache commonly known as the Translation Look-aside Buffer (TLB).

 The TLB is associative, high-speed memory.

 Each entry in the TLB consists of two parts: a key (the tag) and a value.

 When the associative memory is presented with an item, the item is compared with all keys simultaneously; if the item is found, the corresponding value is returned.

 The search with a TLB is fast, though the hardware is expensive.

 The number of entries in a TLB is small, generally between 64 and 1,024.

TLB is used with Page Tables in the following ways:


The TLB contains only a few of the page-table entries. Whenever a logical address is generated by the CPU, its page number is presented to the TLB.

 If the page number is found, its frame number is immediately available and is used to access memory. The whole task may take less than 10 percent longer than it would if an unmapped memory reference were used.

 If the page number is not in the TLB (a TLB miss), a memory reference to the page table must be made.

 When the frame number is obtained, it can be used to access memory. Additionally, the page number and frame number are added to the TLB so that they will be found quickly on the next reference.

 If the TLB is already full of entries, the operating system must select one for replacement.

 The TLB allows some entries to be wired down, meaning they cannot be removed from the TLB. Typically, TLB entries for kernel code are wired down.
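The lookup sequence above can be sketched in software. This is a toy model with FIFO replacement; real TLBs are associative hardware, and the capacity and sample numbers here are illustrative:

```python
class TLB:
    """Tiny software model of a TLB with FIFO replacement."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}   # page number -> frame number
        self.order = []     # FIFO queue of inserted page numbers

    def lookup(self, page):
        return self.entries.get(page)     # None signals a TLB miss

    def insert(self, page, frame):
        if page in self.entries:
            return
        if len(self.entries) == self.capacity:
            victim = self.order.pop(0)    # full TLB: select an entry to replace
            del self.entries[victim]
        self.entries[page] = frame
        self.order.append(page)

def translate(page_table, tlb, page, offset, n):
    """Translate (page, offset) with page size 2**n, consulting the TLB first."""
    frame = tlb.lookup(page)
    if frame is None:                     # TLB miss: extra page-table access
        frame = page_table[page]
        tlb.insert(page, frame)
    return (frame << n) | offset
```

On a hit only one memory access is needed; on a miss the page table is consulted and the entry is cached for the next reference.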

Paging Hardware With TLB

Advantages of Paging

Given below are some advantages of the Paging technique in the operating system:

 Paging allows parts of a single process to be stored in a non-contiguous fashion.

 With the help of Paging, the problem of external fragmentation is solved.

 Paging is one of the simplest algorithms for memory management.


Disadvantages of the Paging technique are as follows:

 In paging, the page table can itself consume a significant amount of memory.

 Internal fragmentation is caused by this technique.

 The time taken to fetch an instruction increases, since two memory accesses are now required.

Paging Hardware

Every address generated by the CPU consists of two parts:

1. Page Number(p)

2. Page Offset (d)

where,

The page number is used as an index into the page table, which contains the base address of each page in physical memory.

The page offset is combined with this base address to define the physical memory address that is sent to the memory unit.

If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m-n bits of a logical address designate the page number and the n low-order bits designate the page offset.

The logical address is as follows:

where p indicates the index into the page table, and d indicates the displacement within the page.
The page size is usually defined by the hardware and is typically a power of 2, varying between 512 bytes and 16 MB per page.


Now it is time to cover an example of paging:

Paging Example

One of the simplest ways to implement paging is to implement the page table as a set of registers. However, since the number of registers is limited and the page table is usually large, the page table is instead kept in main memory.

This scheme causes no external fragmentation: any free frame can be allocated to any process that needs it. Internal fragmentation, however, remains.

 If any process requires n pages then at least n frames are required.

 The first page of the process is loaded into the first frame that is listed on the free-frame list,
and then the frame number is put into the page table.


The frame table is a data structure that records which frames are allocated and which are available, with one entry for each physical page frame.

The operating system maintains a copy of the page table for each process, just as it maintains copies of the instruction counter and register contents. This copy is used to translate logical addresses to physical addresses whenever the operating system must perform such a mapping manually.

This copy is also used by the CPU dispatcher in order to define the hardware page table whenever a
process is to be allocated to the CPU.

Segmentation in Operating Systems


Segmentation is another way of dividing the addressable memory. It is another scheme of memory
management and it generally supports the user view of memory. The Logical address space is
basically the collection of segments. Each segment has a name and a length.

Basically, a process is divided into segments. Like paging, segmentation divides memory, but with a difference: paging divides memory into fixed-size units, whereas segmentation divides it into variable-size segments, which are then loaded into the logical memory space.

A Program is basically a collection of segments. And a segment is a logical unit such as:

 main program

 procedure

 function


 method

 object

 local and global variables

 symbol table

 common block

 stack

 arrays

Types of Segmentation: Given below are the types of segmentation:

 Virtual Memory Segmentation


With this type, each process is divided into n segments, but they are not all loaded into memory at once; segments are brought in during execution as needed.

 Simple Segmentation
With this type, each process is divided into n segments that are all loaded at run time, though they may be placed non-contiguously (that is, they may be scattered in memory).

Characteristics of Segmentation

Some characteristics of the segmentation technique are as follows:

 The Segmentation partitioning scheme is variable-size.

 Partitions of the secondary memory are commonly known as segments.

 Partition size mainly depends upon the length of modules.

 Thus with the help of this technique, secondary memory and main memory are divided into
unequal-sized partitions.

Need of Segmentation

One important drawback of memory management in the operating system is the separation between the user's view of memory and the actual physical memory; paging is one technique that creates this separation. The user's view is mapped onto physical storage, and this mapping allows physical and logical memory to be distinguished.


The operating system may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time; the operating system takes no account of the user's view of the process, which can reduce the system's efficiency.

Segmentation is a better technique in this respect because it divides the process into logical segments.

User's View of a Program

The figure below shows the user's view of segmentation: the logical address space as a collection of segments.

Basic Method

A computer system using segmentation has a logical address space that can be viewed as a collection of segments. Segments are variable in size: they may grow or shrink. As noted earlier, each segment has a name and a length, and an address specifies both the segment name and the displacement within the segment.

Therefore, the user specifies each address with two quantities: a segment name and an offset.

For simplicity of implementation, segments are numbered, so we refer to a segment number rather than a segment name.

Thus the logical address is a two-tuple:

<segment-number, offset>

where,

Segment Number (s):
The segment number identifies the segment; the number of bits allotted to s determines the maximum number of segments a process may have.


Offset (d)
The segment offset is the displacement within the segment; the number of bits allotted to d determines the maximum size of a segment.

Segmentation Architecture

Segment Table

A Table that is used to store the information of all segments of the process is commonly known as
Segment Table. Generally, there is no simple relationship between logical addresses and physical
addresses in this scheme.

 The mapping of a two-dimensional Logical address into a one-dimensional Physical address is


done using the segment table.

 This table is mainly stored as a separate segment in the main memory.

 The register that stores the base address of the segment table is known as the segment-table base register (STBR).

In the segment table each entry has :

1. Segment Base/base address:


The segment base mainly contains the starting physical address where the segments reside in
the memory.

2. Segment Limit:
The segment limit is mainly used to specify the length of the segment.

Segment Table Base Register(STBR)


The STBR points to the segment table's location in memory.

Segment Table Length Register(STLR)


This register indicates the number of segments used by a program. The segment number s is legal if s < STLR.

Segmentation Hardware


The figure below shows the segmentation hardware.

The logical address generated by the CPU consists of two parts:

Segment Number (s): used as an index into the segment table.

Offset (d): must lie between 0 and the segment limit; if the offset exceeds the segment limit, a trap is generated.

Thus: segment base + offset = address in physical memory.

The segment table is basically an array of base-limit register pairs.
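The check-and-add performed by the hardware above can be sketched as follows (the table layout and sample values are illustrative):

```python
def segment_translate(segment_table, s, d):
    """Translate <segment-number, offset> using a base-limit segment table.
    segment_table is a list of (base, limit) pairs, i.e. an array of
    base-limit registers."""
    if s >= len(segment_table):      # segment number must satisfy s < STLR
        raise IndexError("trap: illegal segment number")
    base, limit = segment_table[s]
    if d >= limit:                   # offset must lie within the segment
        raise IndexError("trap: offset exceeds segment limit")
    return base + d                  # address in physical memory
```

Any reference with an out-of-range segment number or offset raises the trap instead of producing an address.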

Advantages of Segmentation

The Advantages of the Segmentation technique are as follows:

 In the segmentation technique, the segment table keeps the record of segments, and it generally occupies less space than a page table.

 There is no Internal Fragmentation.

 Segmentation generally allows us to divide the program into modules that provide better
visualization.

 Segments are of variable size.

Disadvantages of Segmentation

Some disadvantages of this technique are as follows:

 Maintaining a segment table for each process leads to overhead

 This technique is expensive.


 The time taken to fetch an instruction increases, since two memory accesses are now required.

 Segments are of unequal size, which makes them less suitable for swapping.

 This technique leads to external fragmentation: as processes are loaded into and removed from main memory, the free space is broken into smaller pieces, resulting in considerable wasted memory.

Example of Segmentation

Given below is an example of segmentation. There are five segments, numbered 0 to 4, stored in physical memory as shown. The segment table has a separate entry for each segment, containing the starting address of the segment in physical memory (the base) and the length of the segment (the limit).

Segment 2 is 400 bytes long and begins at location 4300. A reference to byte 53 of segment 2 is therefore mapped onto location 4300 + 53 = 4353. A reference to byte 852 of segment 3 is mapped to 3200 (the base of segment 3) + 852 = 4052.

A reference to byte 1222 of segment 0 would result in a trap to the OS, as this segment is only 1000 bytes long.
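The arithmetic in this example can be checked directly. The bases and limits below follow the standard textbook figure; segment 0's base (1400) and segment 3's limit (1100) are assumed, since the text only gives its length and the other segments' values:

```python
# base and limit for the segments used in the example
segments = {0: (1400, 1000), 2: (4300, 400), 3: (3200, 1100)}

def reference(s, d):
    base, limit = segments[s]
    if d >= limit:
        return "trap"        # reference beyond the segment's length
    return base + d
```

reference(2, 53) gives 4300 + 53 = 4353; reference(3, 852) gives 3200 + 852 = 4052; reference(0, 1222) traps because segment 0 is only 1000 bytes long.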

What is Segmentation?

Segmentation works much like paging; the only difference is that segments are of variable length, whereas pages are always of fixed size.

A program segment includes the program's main function, data structures, utility functions, and so on. The OS maintains a segment map table for every process, together with a list of free memory blocks, their sizes, segment numbers, and memory locations in main or virtual memory.

What is Dynamic Loading?


With dynamic loading, a routine of a program is not loaded until the program calls it. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed; other routines are loaded on demand. Dynamic loading provides better memory-space utilization.

What is Dynamic Linking?


Linking is the method by which the OS collects and merges various modules of code and data into a single executable file, which can then be loaded into memory and executed. The OS can link system-level libraries into a program at load time. With dynamic linking, libraries are instead linked at execution time, so the program's code size can remain small.

Difference Between Static and Dynamic Loading


Static Loading: the entire program is linked and compiled at compile time, with no external module or program dependency; at load time, the whole program is loaded into memory and execution starts.

Dynamic Loading: references are provided at compile time and loading is done at execution time; routines of a library are loaded into memory only when the program requires them.

Difference Between Static and Dynamic Linking


Here are the main differences between static and dynamic linking:

Static Linking: static linking combines all the modules required by a program into a single executable, which helps the OS avoid any runtime dependency.

Dynamic Linking: the actual module or library is not linked with the program; instead, a reference to the dynamic module, provided at compile and link time, is resolved at execution time.


In computers, an address identifies a location in memory. Operating systems use two types of addresses: logical addresses and physical addresses. A logical address is the virtual address generated by the CPU, and a user can view the logical address of a program. A physical address, on the other hand, represents a location in the computer's memory, and a user cannot view the physical address of a program.
What is a Logical Address?
The logical address is a virtual address created by the CPU; it is generated while the program is running. A group of logical addresses is referred to as the logical address space. The logical address is used as a reference to access physical memory locations.

A hardware device called the memory management unit (MMU) maps each logical address to its corresponding physical address. The logical address of a program is visible to the computer user.
What is a Physical Address?
The physical address of a computer program represents a location in the memory unit of the computer. The physical address is not visible to the user; the MMU generates the physical address for the corresponding logical address.

A physical address is reached through its corresponding logical address, because a user cannot access physical addresses directly. Running a program requires physical memory space, so the logical address must be mapped to a physical address before execution.
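A minimal model of the MMU's mapping is a relocation (base) register plus a limit register, one common hardware scheme; the numbers below are illustrative:

```python
class MMU:
    """Simplest MMU model: relocation register + limit register."""
    def __init__(self, base, limit):
        self.base = base      # relocation register: start of the process in RAM
        self.limit = limit    # size of the process's logical address space

    def to_physical(self, logical):
        if logical >= self.limit:
            raise MemoryError("trap: logical address out of range")
        return self.base + logical   # the user program only ever sees `logical`
```

With base 14000 and limit 3000, logical address 346 maps to physical address 14346, while logical address 4000 traps.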

Difference between Logical and Physical Address in Operating System


The following points highlight the major differences between logical and physical addresses in an operating system:

1. A logical address is generated by the CPU; a physical address is a location in the memory unit.

2. The logical address space is the set of all logical addresses; the physical address space is the set of all physical addresses that are mapped to the corresponding logical addresses.

3. Logical addresses are generated by the CPU with reference to a specific program; physical addresses are computed by the memory management unit (MMU).

4. The user can view the logical address of a program; the user cannot directly view the physical address.

5. The user can use the logical address to access the physical address; the physical address itself can only be accessed indirectly.

Virtual memory management - Demand paging, copy-on-write, page-replacement, Thrashing:

Demand Paging in Operating Systems


Every process in virtual memory contains many pages, and in some cases it is not efficient to swap in all of a process's pages at once, because the program may need only certain pages to run. For example, a 500 MB application might need as little as 100 MB of pages swapped in, so there is no need to swap in all of its pages at once.

The demand paging system is similar to a paging system with swapping, where processes reside in secondary memory (usually a hard disk). Demand paging solves the above problem by swapping pages in only on demand. This is also known as lazy swapping: a page is never swapped into memory unless it is needed.

A swapper that deals with the individual pages of a process is referred to as a pager.

Demand Paging is a technique in which a page is usually brought into the main
memory only when it is needed or demanded by the CPU. Initially, only those
pages are loaded that are required by the process immediately. Those pages that
are never accessed are thus never loaded into the physical memory.

Figure: Transfer of a Paged Memory to the contiguous disk space.

Whenever a page is needed, a reference is made to it:

 If the reference is invalid, abort the process.

 If the page is not in memory, bring it into memory.



Valid-Invalid Bit
Some form of hardware support is used to distinguish between the pages that are
in the memory and the pages that are on the disk. Thus for this purpose Valid-
Invalid scheme is used:

 With each page-table entry, a valid-invalid bit is associated (1 indicates the page is in memory, 0 that it is not).

 Initially, the valid-invalid bit is set to 0 for all table entries.

1. If the bit is set to "valid", the associated page is both legal and in memory.

2. If the bit is set to "invalid", the page is either not valid (not in the process's logical address space), or valid but currently on the disk rather than in memory.

 For pages that are brought into memory, the page table is set as usual.

 For pages that are not currently in memory, the page-table entry is either simply marked invalid or contains the address of the page on disk.

During address translation, if the valid-invalid bit in the page-table entry is 0, a page fault occurs.

The figure above shows the page table when some pages are not in main memory.

How Demand Paging Works


The components involved in the demand-paging process are as follows:

 Main Memory

 CPU

 Secondary Memory

 Interrupt

 Physical Address space

 Logical Address space

 Operating System

 Page Table

1. If a page needed by a running process is not available in main memory, a request for that page is made; for this purpose, an interrupt (a page-fault trap) is generated.

2. The operating system moves the process to the blocked state while the interrupt is handled.

3. The operating system then locates the required page in secondary storage.

4. Finally, with the help of a page-replacement algorithm, the page is brought into a frame in physical memory, and the page tables are updated simultaneously.

5. The CPU is informed of the update and asked to continue execution, and the process returns to the ready state.

When the process requires any of the pages that are not loaded into the memory,
a page fault trap is triggered and the following steps are followed,

1. The memory address which is requested by the process is first checked, to


verify the request made by the process.

2. If it is found to be invalid, the process is terminated.

3. In case the request by the process is valid, a free frame is located, possibly
from a free-frame list, where the required page will be moved.

4. A disk operation is scheduled to move the necessary page from the disk to
the specified memory location. (This will usually block the process on an
I/O wait, allowing some other process to use the CPU in the meantime.)

5. When the I/O operation is complete, the process's page table is updated
with the new frame number, and the invalid bit is changed to valid.

6. The instruction that caused the page fault must now be restarted from the
beginning.
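The six steps above can be sketched as a small handler function. This is a hedged illustration, not kernel code; the names `free_frames`, `disk`, and `memory`, and the process/page-table dictionaries, are invented for the example:

```python
def handle_page_fault(process, page_no, free_frames, disk, memory):
    entry = process["page_table"].get(page_no)
    # Steps 1-2: validate the reference; terminate the process if illegal.
    if entry is None:
        process["state"] = "terminated"
        return None
    # Step 3: locate a free frame (a real OS may run page replacement here).
    frame = free_frames.pop()
    # Step 4: read the page from disk into the frame (would block on I/O).
    memory[frame] = disk[page_no]
    # Step 5: update the page table and mark the entry valid.
    entry["frame"], entry["valid"] = frame, 1
    # Step 6: the faulting instruction is then restarted by the hardware.
    return frame


process = {"page_table": {3: {"valid": 0, "frame": None}}, "state": "ready"}
memory = {}
handle_page_fault(process, 3, free_frames=[7], disk={3: "page data"}, memory=memory)
print(process["page_table"][3])   # {'valid': 1, 'frame': 7}
```

Note that step 6 (restarting the instruction) is done by the hardware after the handler returns, which is why the sketch only returns the frame number.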

Advantages of Demand Paging


The benefits of using the Demand Paging technique are as follows:

 With the help of Demand Paging, memory is utilized efficiently.

 Demand paging avoids External Fragmentation.

 Less Input/Output is needed for Demand Paging.

 This process is not constrained by the size of physical memory.

 With Demand Paging it becomes easier to share the pages.

 With this technique, portions of the process that are never called are never
loaded.

 No compaction is required in demand Paging.

Disadvantages of Demand paging


Drawbacks of Demand Paging are as follows:

 There is an increase in overheads due to interrupts and page tables.

 Memory access time in demand paging is longer.
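The longer memory access time can be quantified with the standard effective-access-time formula, EAT = (1 − p) × memory-access time + p × page-fault service time, where p is the page-fault rate. The numbers below (200 ns memory access, 8 ms fault service) are illustrative assumptions:

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time (in ns)."""
    return (1 - p) * mem_ns + p * fault_ns


print(effective_access_time(0.0))      # 200.0 ns: no faults at all
print(effective_access_time(0.001))    # 8199.8 ns: one fault per 1000 accesses
```

Even a fault rate of 1 in 1000 slows effective access by roughly a factor of 40 in this example, which is why keeping the page-fault rate low matters so much.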

Pure Demand Paging


When no pages are loaded into memory initially, and pages are loaded only
when the process demands them by generating page faults, the scheme is
referred to as Pure Demand Paging.

 In pure demand paging, not even a single page is loaded into memory
initially; the very first memory reference therefore causes a page fault.

 When execution of a process starts with no pages in memory, the
operating system sets the instruction pointer to the first instruction of
the process, which is on a non-memory-resident page, so the process
immediately faults for that page.

 After that page is brought into memory, the process continues executing,
faulting as necessary until every page that it needs is in memory.

 At that point, it can execute with no more faults.

 This scheme is referred to as pure demand paging: never bring a page
into memory until it is required.

Advantages:
 Reduces the memory requirement.
 Swap time is also reduced.
 Increases the degree of multiprogramming (CPU utilization improves).

Disadvantages:
 The page-fault rate increases for bigger programs.
 If the swap file is large, it becomes difficult to manage.

Copy-On-Write:

Copy-on-write (COW) memory management is a memory optimization technique employed by
operating systems to reduce overhead when creating new processes. It allows multiple
processes to share the same memory pages until one process modifies them. Upon modification, the
operating system creates a duplicate copy of the original page, which is exclusively granted to the
modifying process, while the other processes continue to share the original page. This technique is
especially advantageous when creating new processes, as it enables the new process to share the
memory pages of the parent process until it needs to modify them. By significantly saving memory
and reducing the time needed to create a new process, copy-on-write has become a standard
feature in modern operating systems such as Linux, macOS, and Windows. Its effectiveness is
particularly prominent in scenarios where multiple processes are created and need to share
memory, such as virtualized environments or cloud computing.

Copy-on-Write in Operating System


Copy-on-Write(CoW) is mainly a resource management technique that allows
the parent and child process to share the same pages of the memory initially.
If any process either parent or child modifies the shared page, only then the page
is copied.


CoW is basically a technique for efficiently copying data resources in a
computer system. If a unit of data is copied but not modified, the "copy" can
exist merely as a reference to the original data.

Only when the copied data is modified is an actual copy created (new bytes
are actually written), as the name of the technique suggests.

The main use of this technique is in the implementation of the fork system call in
which it shares the virtual memory/pages of the Operating system.

Recall in the UNIX(OS), the fork() system call is used to create a duplicate
process of the parent process which is known as the child process.

 The CoW technique is used by several Operating systems like Linux,


Solaris, and Windows XP.

 The CoW technique is an efficient process creation technique as only the


pages that are modified are copied.

Free pages in this technique are allocated from a pool of zeroed-out pages.

The Copy on Write(CoW) Technique


The main intention behind the CoW technique is that whenever a parent process
creates a child process both parent and child process initially will share the same
pages in the memory.

 These shared pages between the parent and child process are marked as
copy-on-write, which means that if either the parent or the child process
attempts to modify a shared page, a copy of that page is created and the
modification is made only on the copy, so it does not affect the other
process.

Now it is time to take a look at a basic example of this technique:

Let us take an example where Process A creates a new process that is Process B,
initially both these processes will share the same pages of the memory.


Figure: Parent and child process sharing the same pages

Now, let us assume that process A wants to modify a page in the memory. When
the Copy-on-write(CoW) technique is used, only those pages that are modified
by either process are copied; all the unmodified pages can be easily shared by
the parent and child process.

Figure: After Page Z is modified by Process A

Whenever it is determined that a page is going to be duplicated using copy-
on-write, it is important to note where the free page will be allocated from.
Many operating systems provide a pool of free pages for such requests.

And these free pages are allocated typically when the stack/heap for a process
must expand or when there are copy-on-write pages to manage.

These pages are typically allocated using a technique known as zero-fill-
on-demand: the pages are zeroed out before being allocated, erasing their
previous contents.

Advantages of Copy on Write


The Copy on Write mechanism offers several advantages in modern operating systems, including:

1. Reduced memory usage


By allowing processes to share memory pages, the operating system can reduce the amount of
physical memory needed to support multiple processes. This can be particularly important in
scenarios where there are many processes running concurrently, as it can help reduce overall
memory usage and improve system performance.

2. Faster process creation time


Because the Copy on Write mechanism allows new processes to share memory pages with existing
processes, the time needed to create a new process is reduced. This can be particularly beneficial in
scenarios where many processes need to be created and destroyed frequently, such as in web
servers or cloud computing environments.

3. Improved performance in virtualized environments


The Copy on Write mechanism is particularly useful in virtualized environments, where multiple
virtual machines may be running on a single physical server. By allowing virtual machines to share
memory pages, the operating system can reduce the amount of memory needed to support each
virtual machine, which can improve overall system performance.

Overall, the Copy on Write mechanism offers significant advantages in terms of memory usage,
performance, and system scalability. It is a widely used technique in modern operating systems and
has become an important part of the memory management strategies used by operating system
developers.

Drawbacks of Copy on Write


The Copy on Write mechanism is not without its drawbacks. One potential issue is the overhead
associated with creating a new copy of a page. This overhead can become significant when many
processes are modifying the same page frequently. Additionally, the increased memory usage
associated with creating multiple copies of a page can be a concern in some scenarios. The major
drawbacks of this mechanism are as follows:

1. Overhead associated with creating a new copy of a page


When a process modifies a read-only page, the Copy on Write mechanism creates a new copy of the
page and gives it exclusively to the modifying process. This process involves overhead in terms of
memory allocation and page copying, which can slow down system performance in some cases.

2. Increased memory usage when multiple processes modify the same page frequently
If multiple processes modify the same page frequently, the Copy on Write mechanism may create
multiple copies of the page, which can lead to increased memory usage. This can become a concern
in scenarios where memory usage is limited or where many processes are frequently modifying the
same pages.

3. Complexity of implementation
The Copy on Write mechanism is a complex technique that requires careful implementation to
ensure that it functions correctly. This can make it more difficult to develop and maintain operating
systems that use this technique.

4. Potential for security vulnerabilities


Because the Copy on Write mechanism involves sharing memory pages between processes, there is
a potential for security vulnerabilities to arise. For example, a process could intentionally modify a
shared memory page to cause a security exploit in another process that shares the same page.

Despite these potential drawbacks, the Copy on Write mechanism remains a widely used technique
in modern operating systems due to its many advantages. Operating system developers must
carefully consider the benefits and drawbacks of the Copy on Write mechanism when designing and
implementing their memory management strategies.

Thrashing:
Look at any process that does not have “enough” frames. If the process does not have the number of
frames it needs to support pages in active use, it will quickly page-fault. At this point, it must replace

some page. However, since all its pages are in active use, it must replace a page that will be needed
again right away. Consequently, it quickly faults again, and again, and again, replacing pages that it
must bring back in immediately. This high paging activity is called thrashing. A process is thrashing if it
is spending more time paging than executing.
Cause of Thrashing :
Thrashing results in severe performance problems.
The operating system monitors CPU utilization. If CPU utilization is too low, we increase the degree of
multiprogramming by introducing a new process to the system. A global page-replacement algorithm
is used; it replaces pages without regard to the process to which they belong. Now suppose that a
process enters a new phase in its execution and needs more frames. It starts faulting and taking
frames away from other processes. These processes need those pages, however, and so they also
fault, taking frames from other processes. These faulting processes must use the paging device to
swap pages in and out. As they queue up for the paging device, the ready queue empties. As
processes wait for the paging device, CPU utilization decreases. The CPU scheduler sees the
decreasing CPU utilization and increases the degree of multiprogramming as a result. The new
process tries to get started by taking frames from running processes, causing more page faults and a
longer queue for the paging device. As a result, CPU utilization drops even further, and the CPU
scheduler tries to increase the degree of multiprogramming even more. Thrashing has occurred, and
system throughput plunges. The page-fault rate increases tremendously. As a result, the effective
memory-access time increases. No work is getting done, because the processes are spending all their
time paging.

Thrashing in Operating System


If page faults and swapping happen very frequently, the operating system
has to spend most of its time swapping pages. This state is termed
thrashing, and because of it CPU utilization is greatly reduced.

Let's understand by an example, if any process does not have the number of
frames that it needs to support pages in active use then it will quickly page fault.
And at this point, the process must replace some pages. As all the pages of the
process are actively in use, it must replace a page that will be needed again right
away. Consequently, the process will quickly fault again, and again, and again,
replacing pages that it must bring back in immediately. This high paging activity
by a process is called thrashing.

During thrashing, the CPU spends less time on actual productive work and
more time swapping.


Figure: Thrashing

Causes of Thrashing
Thrashing affects the performance of execution in the Operating system. Also,
thrashing results in severe performance problems in the Operating system.

When CPU utilization is low, the process-scheduling mechanism tries to load
many processes into memory at the same time, increasing the degree of
multiprogramming. In this situation there are more processes in memory
than available frames, so only a limited number of frames can be allocated
to each process.

Whenever a process with high priority arrives in memory and no frame is
freely available, a frame occupied by another process is reclaimed: its page
is moved to secondary storage, and the freed frame is allocated to the
higher-priority process.

We can also say that as soon as memory fills up, processes start spending a
lot of time waiting for the required pages to be swapped in. Again the
utilization of the CPU becomes low because most of the processes are
waiting for pages.

Thus a high degree of multiprogramming and lack of frames are two main causes
of thrashing in the Operating system.


This phenomenon is illustrated in Figure, in which CPU utilization is plotted against the degree of
multiprogramming. As the degree of multiprogramming increases, CPU utilization also increases,
although more slowly, until a maximum is reached. If the degree of multiprogramming is increased
even further, thrashing sets in, and CPU utilization drops sharply. At this point, to increase CPU
utilization and stop thrashing, we must decrease the degree of multiprogramming.
Figure: Thrashing
We can limit the effects of thrashing by using a local replacement algorithm (or priority replacement
algorithm). With local replacement, if one process starts thrashing, it cannot steal frames from
another process and cause the latter to thrash as well. However, the problem is not entirely solved. If
processes are thrashing, they will be in the queue for the paging device most of the time. The average
service time for a page fault will increase because of the longer average queue for the paging device.
Thus, the effective access time will increase even for a process that is not thrashing. To prevent
thrashing, we must provide a process with as many frames as it needs.
We can determine how many frames a process "needs" by several techniques.
The working-set strategy starts by looking at how many frames a process is
actually using. This approach defines the locality model of
process execution. The locality model states that, as a process executes, it moves from locality to
locality. A locality is a set of pages that are actively used together. A program is generally composed
of several different localities, which may overlap. For example, when a function is called, it defines a
new locality. In this locality, memory references are made to the instructions of the function call, its
local variables, and a subset of the global variables. When we exit the function, the process leaves
this locality, since the local variables and instructions of the function are no longer in active use. We
may return to this locality later. we see that localities are defined by the program structure and its
data structures. The locality model states that all programs will exhibit this basic memory reference
structure.
We allocate enough frames to a process to accommodate its current locality. It will fault for the pages
in its locality until all these pages are in memory; then, it will not fault again until it changes localities.
If we do not allocate enough frames to accommodate the size of the current locality, the process will
thrash, since it cannot keep in memory all the pages that it is actively using.
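The working-set idea above can be stated directly: WS(t, Δ) is the set of distinct pages referenced in the most recent Δ references ending at time t. A minimal Python sketch (the reference string and the window Δ = 4 are chosen only for illustration):

```python
def working_set(reference_string, t, delta):
    """Distinct pages referenced in the window of the last `delta` references ending at t."""
    start = max(0, t - delta + 1)
    return set(reference_string[start:t + 1])


refs = [1, 2, 1, 3, 4, 4, 4, 5, 5, 5]
print(working_set(refs, 4, 4))   # window refs[1..4] -> {1, 2, 3, 4}
print(working_set(refs, 9, 4))   # window refs[6..9] -> {4, 5}
```

The size of the working set shrinks from four pages to two as the process settles into a new locality, which is exactly the information a working-set allocator uses to decide how many frames the process needs.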
Page-Fault Frequency:
The working-set model is successful, and knowledge of the working set can be useful for prepaging ,
but it seems a clumsy way to control thrashing. A strategy that uses the page-fault frequency (PFF)
takes a more direct approach. The specific problem is how to prevent thrashing. Thrashing has a high
page-fault rate. Thus, we want to control the page-fault rate. When it is too high, we know that the
process needs more frames. Conversely, if the page-fault rate is too low, then the process may have
too many frames. We can establish upper and lower bounds on the desired page-fault rate. If the
actual page-fault rate exceeds the upper limit, we allocate the process another frame. If the page-
fault rate falls below the lower limit, we remove a frame from the process. Thus, we can directly
measure and control the page-fault rate to prevent thrashing.
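The page-fault-frequency policy above can be sketched as a small control function: grant a frame when the fault rate exceeds the upper bound, reclaim one when it falls below the lower bound. The bounds and frame counts below are assumed values, not from the original notes:

```python
def pff_adjust(frames, fault_rate, lower=0.02, upper=0.10, free_frames=4):
    """Return the adjusted (frames, free_frames) for one PFF control step."""
    if fault_rate > upper and free_frames > 0:
        return frames + 1, free_frames - 1     # faulting too much: add a frame
    if fault_rate < lower and frames > 1:
        return frames - 1, free_frames + 1     # too few faults: release a frame
    return frames, free_frames                 # within the desired band


print(pff_adjust(8, 0.25))   # (9, 3): too many faults, grant a frame
print(pff_adjust(8, 0.01))   # (7, 5): too few faults, reclaim a frame
print(pff_adjust(8, 0.05))   # (8, 4): fault rate acceptable
```

When the fault rate is high but no free frame exists, a real system would instead suspend (swap out) some process, freeing its frames, rather than let thrashing continue.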

This lesson will introduce you to the concept of page replacement, which is used in memory
management. You will understand the definition, need and various algorithms related to page
replacement.


A computer system has a limited amount of memory. Adding more memory physically is very
costly. Therefore most modern computers use a combination of both hardware and software to
allow the computer to address more memory than the amount physically present on the system.
This extra memory is actually called Virtual Memory.

Virtual Memory is a storage allocation scheme used by the Memory Management Unit(MMU) to
compensate for the shortage of physical memory by transferring data from RAM to disk storage. It
addresses secondary memory as though it is a part of the main memory. Virtual Memory makes the
memory appear larger than actually present which helps in the execution of programs that are
larger than the physical memory.

Virtual Memory can be implemented using two methods :

 Paging

 Segmentation


Paging
Paging is a process of reading data from, and writing data to, the secondary storage. It is a memory
management scheme that is used to retrieve processes from the secondary memory in the form of
pages and store them in the primary memory. The main objective of paging is to divide each
process in the form of pages of fixed size. These pages are stored in the main memory in frames.
Pages of a process are only brought from the secondary memory to the main memory when they
are needed.

When an executing process refers to a page, it is first searched in the main memory. If it is not
present in the main memory, a page fault occurs.

** Page Fault is the condition in which a running process refers to a page that is not loaded in the
main memory.

In such a case, the OS has to bring the page from the secondary storage into the main memory. This
may cause some pages in the main memory to be replaced due to limited storage. A Page
Replacement Algorithm is required to decide which page needs to be replaced.

Page Replacement Algorithm


Page Replacement Algorithm decides which page to remove, also called swap out when a new page
needs to be loaded into the main memory. Page Replacement happens when a requested page is

not present in the main memory and the available space is not sufficient for allocation to the
requested page.

When the page that was selected for replacement is paged out and then
referenced again, it has to be read in from disk, which requires waiting for
I/O completion. This wait determines the quality of the page replacement
algorithm: the less time spent waiting for page-ins, the better the algorithm.

A page replacement algorithm tries to select which pages should be replaced
so as to minimize the total number of page misses. There are many different
page replacement algorithms. These algorithms are evaluated by running
them on a particular string of memory references and computing the number
of page faults. The fewer the page faults, the better the algorithm for that
situation.

** If a process requests a page and that page is found in main memory, it is
called a page hit; otherwise it is a page miss or page fault.

Some Page Replacement Algorithms :

 First In First Out (FIFO)

 Least Recently Used (LRU)

 Optimal Page Replacement

First In First Out (FIFO)


This is the simplest page replacement algorithm. In this algorithm, the OS maintains a queue that
keeps track of all the pages in memory, with the oldest page at the front and the most recent page
at the back.

When there is a need for page replacement, the FIFO algorithm swaps out the
page at the front of the queue, that is, the page which has been in memory
for the longest time.

For Example:

Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 with
4 frames (i.e., a maximum of 4 pages in memory at a time).


Total Page Fault = 9

Initially, all 4 frames are empty, so when 1, 2, 3, 4 come they are allocated to
the empty frames in order of their arrival. Each of these is a page fault, as 1,
2, 3, 4 are not yet in memory.

When 5 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 1.

When 1 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 2.

When 3 and 1 come, they are already in memory (page hits), so no replacement occurs.

When 6 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 3.

When 3 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 4.

When 2 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 5.

When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.

Page Fault ratio = 9/12 (total misses / total references)
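The FIFO walkthrough above can be reproduced with a short simulation (illustrative Python, not part of the original notes; the queue front always holds the oldest resident page):

```python
from collections import deque


def fifo_faults(reference_string, n_frames):
    """Count page faults under FIFO replacement with `n_frames` frames."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:                 # page fault
            faults += 1
            if len(frames) == n_frames:        # memory full: evict oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults


refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
print(fifo_faults(refs, 4))   # 9, matching the walkthrough
```

Running the same function on the classic string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 gives 9 faults with 3 frames but 10 faults with 4 frames, which demonstrates Belady's Anomaly mentioned below.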



Advantages

 Simple and easy to implement.

 Low overhead.

Disadvantages

 Poor performance.

 Doesn’t consider the frequency of use or last used time, simply replaces the oldest page.

 Suffers from Belady’s Anomaly(i.e. more page faults when we increase the number of page
frames).

Least Recently Used (LRU)


Least Recently Used page replacement algorithm keeps track of page usage over a short period of
time. It works on the idea that the pages that have been most heavily used in the past are most
likely to be used heavily in the future too.

In LRU, whenever page replacement happens, the page which has not been used for the longest
amount of time is replaced.

For Example

Total Page Fault = 8

Initially, all 4 frames are empty, so when 1, 2, 3, 4 come they are allocated to
the empty frames in order of their arrival. Each of these is a page fault, as 1,
2, 3, 4 are not yet in memory.

When 5 comes, it is not available in memory so page fault occurs and it replaces 1 which is the least
recently used page.

When 1 comes, it is not available in memory so page fault occurs and it replaces 2.

When 3 and 1 come, they are already in memory (page hits), so no replacement occurs.

When 6 comes, it is not available in memory so page fault occurs and it replaces 4.

When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.

When 2 comes, it is not available in memory so page fault occurs and it replaces 5.

When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.

Page Fault ratio = 8/12
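The LRU walkthrough can be reproduced the same way (illustrative Python; an OrderedDict keeps resident pages ordered from least to most recently used):

```python
from collections import OrderedDict


def lru_faults(reference_string, n_frames):
    """Count page faults under LRU replacement with `n_frames` frames."""
    frames, faults = OrderedDict(), 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)           # hit: mark most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)     # evict least recently used page
            frames[page] = True
    return faults


refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
print(lru_faults(refs, 4))   # 8, matching the walkthrough
```

Real hardware cannot afford to reorder a structure on every memory reference, which is why practical LRU needs hardware support (reference bits or counters) and is listed below as expensive to implement.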

Advantages

 Efficient.

 Doesn't suffer from Belady’s Anomaly.

Disadvantages

 Complex Implementation.

 Expensive.

 Requires hardware support.

Optimal Page Replacement


Optimal Page Replacement algorithm is the best page replacement algorithm as it gives the least
number of page faults. It is also known as OPT, clairvoyant replacement algorithm, or Belady’s
optimal page replacement policy.

In this algorithm, pages are replaced which would not be used for the longest duration of time in
the future, i.e., the pages in the memory which are going to be referred farthest in the future are
replaced.


This algorithm cannot be implemented in practice because it requires future
knowledge of the program's behaviour. However, it is possible to approximate
optimal page replacement on a second run by using the page-reference
information collected on the first run.

For Example

Total Page Fault = 6

Initially, all 4 frames are empty, so when 1, 2, 3, 4 come they are allocated to
the empty frames in order of their arrival. Each of these is a page fault, as 1,
2, 3, 4 are not yet in memory.

When 5 comes, it is not available in memory so a page fault occurs, and it
replaces 4, which among 1, 2, 3, 4 is used farthest in the future (in fact,
never again).

When 1, 3, and 1 come, they are already in memory (page hits), so no replacement occurs.

When 6 comes, it is not available in memory so page fault occurs and it replaces 1.

When 3, 2, and 3 come, they are already in memory (page hits), so no replacement occurs.

Page Fault ratio = 6/12
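The OPT walkthrough can be reproduced as well. The sketch below (illustrative Python) evicts the resident page whose next use lies farthest in the future, treating "never used again" as infinitely far:

```python
def opt_faults(reference_string, n_frames):
    """Count page faults under optimal (OPT) replacement with `n_frames` frames."""
    frames, faults = set(), 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                            # page hit
        faults += 1
        if len(frames) == n_frames:
            future = reference_string[i + 1:]
            # Evict the page whose next use is farthest away (inf if never used).
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else float("inf"))
            frames.remove(victim)
        frames.add(page)
    return faults


refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
print(opt_faults(refs, 4))   # 6, matching the walkthrough
```

Note that the function scans the rest of the reference string on every fault, which is only possible in a simulation; this is exactly the "future knowledge" that makes OPT unimplementable as a real replacement policy and useful mainly as a lower bound for comparison.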

Advantages

 Gives the least possible number of page faults, so it serves as a
benchmark for evaluating other page replacement algorithms.

 Highly efficient in terms of page faults.

Disadvantages

 Requires future knowledge of the program.

 Time-consuming.

