
MODULE 4 – MEMORY MANAGEMENT

MEMORY

Memory can be defined as a collection of data in a specific format.


It is used to store instructions and process data.
Memory comprises a large array or group of words or bytes, each with its
own location.
The primary motive of a computer system is to execute programs. These
programs, along with the information they access, must be in main
memory during execution.
The CPU fetches instructions from memory according to the value of the
program counter.
The memory hierarchy is as follows:-

MAIN MEMORY

Main memory is where programs and data are kept when the processor is actively
using them.
When programs and data become active, they are copied from secondary memory
into main memory where the processor can interact with them.
Main memory is closely associated with the processor, so moving instructions and
data into and out of the processor is extremely fast.
Main memory is also known as RAM (Random Access Memory). This memory is
volatile: RAM loses its data when a power interruption occurs.

MEMORY MANAGEMENT

In a multiprogramming computer, the operating system resides in a part of
memory and the rest is used by multiple processes.
The task of subdividing the memory among different processes is called memory
management.
Memory management is a method in the operating system to manage operations
between main memory and disk during process execution.
The main aim of memory management is to achieve efficient utilization of
memory.
Why memory management is required:
o To allocate and de-allocate memory before and after process execution.
o To keep track of the memory space used by processes.
o To minimize fragmentation issues.
o To ensure proper utilization of main memory.
o To maintain data integrity while executing a process.

PROCESS ADDRESS SPACE

The operating system takes care of mapping the logical addresses to physical
addresses at the time of memory allocation to the program.
There are three types of addresses used in a program before and after memory is
allocated –
Virtual (or logical) and physical addresses are the same in compile-time and
load-time address-binding schemes.
Virtual (or logical) and physical addresses differ in execution-time address-
binding scheme.
1. Logical Address
A logical address is an address that is generated by the CPU during program
execution.
The logical address is a virtual address as it does not exist physically, and
therefore, it is also known as a Virtual Address.
This address is used as a reference to access the physical memory location by
CPU.
The set of all logical addresses generated by a program is referred to as a
Logical Address Space or Virtual Address Space.
2. Physical Address
The physical address identifies the physical location of required data in
memory.
The user never directly deals with the physical address but can access it by its
corresponding logical address.
The user program generates logical addresses and runs in terms of them,
but the program needs physical memory for its execution.
Therefore, logical addresses must be mapped to physical addresses by the
MMU before they are used.
The set of all physical addresses corresponding to these logical addresses is
referred to as a Physical Address Space.
Difference Between Logical & Physical Address

Mapping Virtual Addresses To Physical Addresses


Address binding is the process of mapping from one address space to another
address space.
Logical addresses are generated by the CPU during execution, whereas a
physical address refers to a location in the physical memory unit.
Note that users deal only with logical addresses. The MMU translates each
logical address; the output of this process is the physical address
of the data in RAM.
An address binding can be done in three different ways:
1) Compile Time: An absolute address can be generated if it is known
at compile time where the process will reside in memory. That is, a
physical address is generated in the program executable during
compilation. Loading such an executable into memory is very fast. But
if another process occupies the generated address space, the
program crashes, and it becomes necessary to recompile the program
to use virtual addresses.
2) Load Time: If it is not known at compile time where the process
will reside, then relocatable addresses are generated. The loader
translates a relocatable address to an absolute address: the base
address of the process in main memory is added to all logical
addresses by the loader to generate absolute addresses. If the base
address of the process changes, then the process must be reloaded.
3) Execution Time or Dynamic Address Binding: The instructions are
already loaded into memory and are processed by the CPU. Additional
memory may be allocated or reallocated at this time. This process is
used if the process can be moved from one memory to another during
execution (dynamic linking done during load or run time). e.g., -
Compaction.
Multistep processing of a user program

Memory Management Unit (MMU)

 The run-time mapping between the virtual and physical addresses is done by a
hardware device known as MMU.
o The base register used in this mapping is termed the relocation register.
o The relocation register is a special register in the CPU used to map
logical addresses generated by a program to physical addresses in the
system's main memory.
o The value in the relocation register is added to every address that is
generated by the user process at the time when the address is sent to the
memory.
o Suppose the base is at 14000, then an attempt by the user to address
location 0 is relocated dynamically to 14000; thus access to
location 356 is mapped to 14356.
o The user program always deals with logical addresses. The Memory
Management Unit converts the logical addresses into physical
addresses.
o There are two types of addresses that we have:
1) Logical addresses (lies in the range 0 to max).
2) Physical addresses(lies in the range R+0 to R+max for the base
value R)
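The relocation-register mapping above can be sketched as a minimal Python snippet; the base value 14000 is taken from the example, and the function name is illustrative:

```python
# Sketch of dynamic relocation: the relocation (base) register value is
# added to every logical address the process generates.
RELOCATION_REGISTER = 14000  # base value from the example above

def relocate(logical_address):
    """Map a logical address to a physical address by adding the base."""
    return RELOCATION_REGISTER + logical_address

print(relocate(0))    # location 0 maps to 14000
print(relocate(356))  # location 356 maps to 14356
```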
 What is Loading?
o Bringing a program from secondary memory into main memory is called
loading.
o It is performed by a loader.
o The loader is a special program that takes executable files from the linker
as input, loads them into main memory, and prepares the code for execution
by the computer.
o There are two types of loading in the operating system:
1) Static loading:- The entire program is loaded into a fixed address. It
requires more memory space.
2) Dynamic loading:- Without dynamic loading, the entire program and all
data of a process must be in physical memory for the process to execute,
so the size of a process is limited to the size of physical memory. To
obtain better memory utilization, dynamic loading is used: a routine is
not loaded until it is called, and all routines reside on disk in a
relocatable load format. One advantage of dynamic loading is that an
unused routine is never loaded. This is useful when large amounts of code
are needed to handle infrequently occurring cases.
 What is Linking?
o Establishing the linking between all the modules or all the functions of the
program in order to continue the program execution is called linking.
o It is done by a linker.
o Linking is a process of collecting and maintaining pieces of code and data into
a single file.
o It takes object modules from the assembler as input and forms an executable
file as output which is the input for the loader.
o There are two types of linking in the operating system:
1) Static linking:- In static linking, the linker combines all necessary
program modules into a single executable program. So there is no runtime
dependency. Some operating systems support only static linking, in which
system language libraries are treated like any other object module.
2) Dynamic linking:- The basic concept of dynamic linking is similar to
dynamic loading. In dynamic linking, “Stub” is included for each
appropriate library routine reference. A stub is a small piece of code.
When the stub is executed, it checks whether the needed routine is already
in memory or not. If not available then the program loads the routine into
memory.
 Static Library & Shared Library
On the basis of how code is executed and stored, libraries are classified into
two types, i.e. static libraries and shared libraries.
Static Library
A static library is one in which all the code needed to execute the file is
in one executable file; this code gets copied into the target application by
a compiler, linker, or binder, producing an object file and a stand-alone
executable.
Shared Library
A shared library is one in which only the address of the library is mentioned
in the target program, while all the functions and code of the library are in
a special place in memory, and every program can access them without
having multiple copies of them.
 Overlays
o The concept of overlays is that a running process does not use the
complete program at the same time; it uses only some part of it.
o The overlay concept says: load only the part you currently require, and
once that part is done, unload it (pull it back) and load the next
required part to run.
o Advantages:-
i. Reduce memory requirement
ii. Reduce time requirement
o Disadvantages:-
i. The overlay map must be specified by the programmer
ii. The programmer must know the memory requirement
iii. Overlapped modules must be completely disjoint
iv. Programming design of the overlay structure is complex and not possible in
all cases
o Example:-
The best example of overlays is an assembler. Consider an assembler with 2
passes; at any time it performs only one of them, either the 1st pass or
the 2nd pass, i.e., it finishes the 1st pass first and then the 2nd pass.
Assume that the available main memory size is 150KB and the total code size
is 200KB.

Pass 1.......................70KB
Pass 2.......................80KB
Symbol table.................30KB
Common routine...............20KB
Thus, we can define two overlays:-

1) Overlay A=Pass1+Symbol Table+Common Routine (70+30+20=120KB)


2) Overlay B=Pass 2+Symbol Table+Common Routine (80+30+20=130KB)
SWAPPING

It is a memory management scheme that temporarily swaps an idle or blocked
process out of main memory to secondary memory (the backing store), which
ensures proper memory utilization and memory availability for those processes
that are ready to be executed.

There are two important concepts in the process of swapping, which are as follows:
i. Swap In:- The method of bringing a process from secondary memory (hard
drive) back into main memory (RAM) for execution is known as swap in.
ii. Swap Out:- The method of taking a process out of main memory (RAM) and
sending it to secondary memory (hard drive), so that processes with
higher priority or greater memory demands can be executed, is known as
swap out.
Note:- Swap in and swap out are performed by the Medium-Term Scheduler (MTS).

Swapping allows more processes to be run than can fit into memory at one
time. Thus it increases the degree of multiprogramming.
Swapping is also known as roll out, roll in: if a higher-priority process
arrives and wants service, the memory manager can swap out a lower-priority
process and then load and execute the higher-priority process. After the
higher-priority work finishes, the lower-priority process is swapped back
into memory and continues execution.
Whenever the CPU scheduler decides to execute a process, it calls the dispatcher.
The dispatcher checks to see whether the next process in the ready queue is in
memory. If it is not, and if there is no free memory region, the dispatcher swaps
out a process currently in memory and swaps in the desired process. It then
reloads registers and transfers control to the selected process.

MEMORY MANAGEMENT TECHNIQUES


1) CONTIGUOUS MEMORY ALLOCATION
In a Contiguous memory management scheme, each program occupies a
contiguous block of storage locations, i.e., a set of memory locations with
consecutive addresses. This scheme contains two different allocation methods:-
Single-partition and Multiple-partition allocation.
 Single-Partition Allocation
o In this method, the memory is usually divided into two partitions: one for the
resident operating system, and one for the user processes. We may place the
operating system in either low memory or high memory. With this approach
each process is contained in a single contiguous section of memory (only one
process residing in main memory at a time).

[Diagram: operating system in low memory, user process in high memory]


o Relocation registers are used to protect user processes from each other, and
from changes to operating-system code and data.
o Relocation/Base register contains value of smallest physical address.
o Limit register contains range of logical addresses.
o Each logical address must be less than the limit register.
o MMU maps logical address dynamically.
Suppose that Limit Register = 500 and Base Register = 14000.
1) If the logical address = 346, the comparison 346 < 500 is performed.
Since the result is true, the addition 14000 + 346 = 14346 is
performed, so 14346 is the physical address.
2) If the logical address = 556, the comparison 556 < 500 is performed.
Since the result is false, an error (trap) is produced and the
operation stops.
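The limit/base check described above can be sketched like this, assuming the same Limit = 500 and Base = 14000 values:

```python
# Sketch of hardware address protection with a limit and base register:
# the logical address is checked against the limit before relocation.
LIMIT, BASE = 500, 14000

def translate(logical):
    if logical < LIMIT:          # address within the process's range?
        return BASE + logical    # yes: relocate to a physical address
    raise MemoryError("addressing error: trap to operating system")

print(translate(346))  # 14000 + 346 = 14346
```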

o Advantages:-
1) Simple to implement.
2) Easy to manage and design.
3) In a single contiguous memory management scheme, once a
process is loaded, it is given the processor's full time, and no
other process will interrupt it.
o Disadvantages:-
1) Wastage of memory space due to unused memory, as the process
is unlikely to use all the available memory space.
2) A program cannot be executed if it is too large to fit in the
available main memory space.
3) It does not support multiprogramming, i.e., it cannot handle
multiple programs simultaneously.
 Multiple-Partition Allocation
o The single Contiguous memory management scheme is inefficient as it
limits computers to execute only one program at a time resulting in
wastage in memory space and CPU time.
o The problem of inefficient CPU use can be overcome using
multiprogramming that allows more than one program to run
concurrently. To switch between two processes, the operating systems
need to load both processes into the main memory.
o The operating system needs to divide the available main memory into
multiple parts to load multiple processes into the main memory. Thus
multiple processes can reside in the main memory simultaneously. This
method is known as Multiple-partition allocation.
o The multiple partitioning schemes can be of two types:-
1) Fixed/Static Partitioning
2) Variable/Dynamic Partitioning
 Fixed/Static Partitioning
 The main memory is divided into several fixed-sized partitions. The
partitions can be of equal or unequal sizes.
 Here partitions are made before execution or during system configuration.
 Each partition can hold a single process. The number of partitions
determines the degree of multiprogramming, i.e., the maximum
number of processes in memory.
 Whenever we have to allocate a process into memory then a free
partition that is big enough to hold the process is found. Then the
memory is allocated to the process. If there is no free space available
then the process waits in the queue to be allocated memory.
 In fixed partitioning,
1. The partitions cannot overlap.
2. A process must be contiguously present in a partition for the
execution.
 Advantages:-
1. Simple to implement.
2. Easy to manage and design.
 Disadvantages:-
1. Internal Fragmentation:- If the size of the process is less than
the total size of the partition, then part of the partition is
wasted and remains unused. This wastage of memory is called
internal fragmentation. For example, if a 4 MB partition is used
to load only a 3 MB process, the remaining 1 MB is wasted.
2. External Fragmentation:- The total unused space of various
partitions cannot be used to load a process even though space is
available, because it is not contiguous. For example, the
remaining 1 MB of each partition cannot be used as a unit to
store a 4 MB process. Even though sufficient space is available
in total, the process will not be loaded.
3. Limitation on the size of the process:- If the process size is
larger than the largest partition, that process cannot be loaded
into memory. A limitation is therefore imposed on process size:
it cannot be larger than the size of the largest partition.
4. Degree of multiprogramming is less:- By degree of
multiprogramming, we simply mean the maximum number of
processes that can be loaded into memory at the same time.
In fixed partitioning, the degree of multiprogramming is fixed
and quite low, because the size of a partition cannot be varied
according to the sizes of processes.
 Variable/Dynamic Partitioning
 Dynamic partitioning tries to overcome the problems caused by fixed
partitioning. In this technique, the partition size is not declared initially.
It is declared at the time of process loading.
 The first partition is reserved for the operating system. The remaining
space is divided into parts.
 The size of each partition will be equal to the size of the process. The
partition size varies according to the need of the process so that the
internal fragmentation can be avoided.

 Advantages:-
1. No internal fragmentation:- Since the partitions in dynamic
partitioning are created according to the need of the process,
there will not be any internal fragmentation, because no unused
space remains inside a partition.
2. No limitation on the size of the process:- In fixed partitioning,
a process larger than the largest partition could not be executed
due to the lack of sufficient contiguous memory. In dynamic
partitioning, the process size is not restricted, since the
partition size is decided according to the process size.
3. Degree of multiprogramming is dynamic:- Due to the absence of
internal fragmentation, there is no unused space inside
partitions, hence more processes can be loaded into memory at
the same time.
 Disadvantages:-
1. External Fragmentation:- The absence of internal fragmentation
doesn't mean there will be no external fragmentation. Consider
three processes P1 (5 MB), P3 (3 MB), and P5 (8 MB). P1 and P3
are loaded into partitions of main memory; after some time they
complete and their assigned space is freed. Now there are two
unused partitions (5 MB and 3 MB) in main memory, but they
cannot be used to load the 8 MB process P5, since they are not
contiguously located.

2. Complex memory allocation:- In fixed partitioning, the list of
partitions is made once and never changes, but in dynamic
partitioning, allocation and deallocation are more complex,
since the partition size varies every time a partition is
assigned to a new process, and the OS has to keep track of all
the partitions. Because allocation and deallocation are done
very frequently in dynamic memory allocation and the partition
sizes change each time, it is difficult for the OS to manage
everything.
Compaction

o Compaction is a technique to minimize the probability of external
fragmentation.
o In compaction, all the free partitions are made contiguous and all the loaded
partitions are brought together.
o By applying this technique, we can store bigger processes in memory. The
free partitions are merged and can then be allocated according to the needs
of new processes. This technique is also called defragmentation.
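A minimal sketch of compaction, assuming a hypothetical partition layout: loaded partitions are slid together so that the free space becomes one contiguous hole.

```python
# Sketch of compaction: place all loaded partitions back to back, so the
# remaining free space forms a single contiguous hole at the end.
def compact(partitions, total_size):
    """partitions: list of (name, size) for loaded partitions, in order."""
    compacted, address = [], 0
    for name, size in partitions:
        compacted.append((name, address, size))  # new contiguous placement
        address += size
    free = total_size - address                  # one merged free hole
    return compacted, free

# Hypothetical layout: P1 (5 MB) and P5 (8 MB) loaded in 16 MB of memory.
layout, free = compact([("P1", 5), ("P5", 8)], total_size=16)
print(layout)  # each process gets a new base address, packed together
print(free)    # a single 3 MB free hole remains
```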

STATIC MEMORY ALLOCATION vs DYNAMIC MEMORY ALLOCATION

1. Static: Memory is allocated before the execution of the program begins
   (during compilation).
   Dynamic: Memory is allocated during the execution of the program.
2. Static: Variables remain permanently allocated.
   Dynamic: Memory is allocated only when the program unit is active.
3. Static: Memory cannot be resized after the initial allocation.
   Dynamic: Memory can be dynamically expanded and shrunk as necessary.
4. Static: Implemented using stacks.
   Dynamic: Implemented using the heap.
5. Static: Faster execution than dynamic.
   Dynamic: Slower execution than static.
6. Static: Less efficient than the dynamic allocation strategy.
   Dynamic: More efficient than the static allocation strategy.
7. Static: Implementation is simple.
   Dynamic: Implementation is complicated.
8. Static: Memory cannot be reused when it is no longer needed.
   Dynamic: Memory can be freed when it is no longer needed and reused or
   reallocated during execution.
Partition Selection Algorithm

 First fit: Allocate the first hole that is big enough. Searching can start either at
the beginning of the set of holes or where the previous first-fit search ended.
We can stop searching as soon as we find a free hole that is large enough.

 Best fit: Allocate the smallest hole that is big enough. We must search the
entire list, unless the list is ordered by size. This strategy produces the smallest
leftover hole.

 Worst fit: Allocate the largest hole. Again, we must search the entire list,
unless it is sorted by size. This strategy produces the largest leftover hole,
which may be more useful than the smaller leftover hole from a best-fit
approach.
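The three partition-selection strategies can be sketched over a list of hole sizes; the hole values below are made up for illustration:

```python
# Sketch of the first-fit, best-fit, and worst-fit strategies.
# Each function returns the index of the chosen hole, or None if none fits.
def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:          # stop at the first hole that is big enough
            return i
    return None

def best_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None   # smallest that fits

def worst_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [10, 4, 20, 18, 7]          # hypothetical hole sizes
print(first_fit(holes, 6))  # 0 (first hole >= 6 is the 10)
print(best_fit(holes, 6))   # 4 (the 7 is the smallest that fits)
print(worst_fit(holes, 6))  # 2 (the 20 is the largest hole)
```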

2) NON-CONTIGUOUS MEMORY ALLOCATION

o In a non-contiguous memory management scheme, the program is divided
into different blocks and loaded at different portions of memory that need
not necessarily be adjacent to one another.
o It allows a process to obtain multiple memory blocks at various locations in
memory according to its requirements. Non-contiguous memory allocation
also reduces the memory wastage caused by internal and external
fragmentation, because it can use the memory holes created by
fragmentation.
o The two methods for making a process's physical address space non-
contiguous are paging and segmentation. Non-contiguous memory allocation
divides the process into blocks (pages or segments) that are allocated to
different areas of memory space based on memory availability.

PAGING

o Paging is a fixed-size partitioning scheme.
o In paging, secondary memory and main memory are divided into equal
fixed-size partitions.
o The partitions of secondary memory are called pages.
o The partitions of main memory are called frames.
o Each process is divided into parts, where the size of each part is the same
as the page size.
o The pages of a process are stored in the frames of main memory depending
upon their availability.

o Example:- Consider a process divided into 4 pages P0, P1, P2 and P3.
The CPU always generates a logical address; a physical address is needed to
access main memory.
 CPU generates a logical address consisting of two parts-
i. Page Number:- It specifies the specific page of the process
from which CPU wants to read the data.
ii. Offset:- It specifies the specific word on the page that CPU
wants to read.
 For the page number generated by the CPU, Page Table provides
the corresponding frame number (base address of the frame) where
that page is stored in the main memory.
 The frame number combined with the offset forms the required
physical address.
i. Frame number:- It specifies the specific frame where the
required page is stored.
ii. Offset:- It specifies the specific word that has to be read
from that page.
 A page table base register (PTBR) holds the base address for the
page table of the current process.
 The following diagram illustrates the translation of logical address
into physical address-
Formula for Address Calculation:-

Logical address space = 2^m bytes


Page size = 2^n bytes
Page number (p) = (m - n) bits
Page offset (d) = n bits
The (m - n) bits at the high end represent the page number;
the n bits at the low end represent the page offset.

For example, consider the following (using decimal digits for illustration):


Logical address = 10517
Logical address space = 10^5 addresses
Page size = 10^3
Therefore,
the 5 - 3 = 2 digits at the high end (10) represent the page number, and
the 3 digits at the low end (517) represent the page offset.
It means that the referenced word is at offset 517 within page number 10.
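With a power-of-two page size, the split into page number and offset is just a shift and a mask; the page size used below (2^4 = 16 bytes) is an illustrative assumption:

```python
# Sketch of splitting a logical address into page number and offset for a
# page size of 2^n bytes: the high (m - n) bits are the page number, the
# low n bits are the offset.
def split_address(logical, n):
    page_number = logical >> n           # drop the low n offset bits
    offset = logical & ((1 << n) - 1)    # keep only the low n bits
    return page_number, offset

# Example: 16-byte pages (n = 4). Address 53 = 0b110101:
# page number 0b11 = 3, offset 0b0101 = 5.
print(split_address(53, 4))  # (3, 5)
```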

 Frame Table: It stores the details about the availability and allocation of frames.
 Disadvantages of Paging:- It increases the effective access time due to increased
number of memory accesses- One memory access is required to get the frame
number from the page table. Another memory access is required to get the word
from the page.
 Translation Lookaside Buffer (TLB)
o The Translation Lookaside Buffer (TLB) is a solution that tries to
reduce the effective access time.
o Being hardware, the access time of the TLB is much less than that of
main memory.
o Translation Lookaside Buffer (TLB) consists of two columns-
1) Page Number
2) Frame Number

o In a paging scheme using a TLB, the logical address generated by the
CPU is translated into the physical address using the following steps-
1) CPU generates a logical address consisting of two parts- Page
Number and Page Offset
2) TLB is checked to see if it contains an entry for the referenced
page number. The referenced page number is compared with the
TLB entries all at once. Now, two cases are possible-
i. Case-01: If there is a TLB hit- If TLB contains an entry for
the referenced page number, a TLB hit occurs. In this case,
TLB entry is used to get the corresponding frame number for
the referenced page number.
ii. Case-02: If there is a TLB miss- If TLB does not contain an
entry for the referenced page number, a TLB miss occurs. In
this case, page table is used to get the corresponding frame
number for the referenced page number. Then, TLB is
updated with the page number and frame number for future
references.
3) After the frame number is obtained, it is combined with the page
offset to generate the physical address. Then, physical address is
used to read the required word from the main memory.
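The TLB lookup steps above can be sketched as a small simulation; the page-table contents are hypothetical:

```python
# Sketch of TLB lookup: consult the TLB first; on a miss, fall back to the
# page table and update the TLB for future references.
tlb = {}                                  # page number -> frame number
page_table = {0: 5, 1: 9, 2: 1, 3: 7}    # hypothetical page table

def lookup(page_number):
    if page_number in tlb:                # Case-01: TLB hit
        return tlb[page_number], "hit"
    frame = page_table[page_number]       # Case-02: TLB miss, use page table
    tlb[page_number] = frame              # update TLB for future references
    return frame, "miss"

print(lookup(2))  # first reference to page 2 misses the TLB
print(lookup(2))  # second reference hits the TLB
```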
o Flowchart

o Note:-
 Initially, the TLB is empty, so TLB misses are frequent. With every
access that goes through the page table, the TLB is updated; after
some time, TLB hits increase and TLB misses reduce.
 On a context switch, the TLB is updated for the currently running
process.
 The time taken to update the TLB after getting the frame number from
the page table is negligible; the TLB can also be updated in parallel
while fetching the word from main memory.
o Advantages:-
1. TLB reduces the effective access time.
2. Only one memory access is required when TLB hit occurs.
o Disadvantages:-
1. TLB can hold the data of only one process at a time.
2. When context switches occur frequently, the performance of TLB
degrades due to low hit ratio.
3. As it is a special hardware, it involves additional cost.

 Protection in Paging
o A bit or bits can be added to the page table to classify a page as read-
write, read-only, read-write-execute, or some combination of these sorts
of things. Then each memory reference can be checked to ensure it is
accessing the memory in the appropriate mode.
o Valid-invalid bit: Each page table entry has a valid-invalid bit
(v -> in memory, i -> not in memory). Initially it is set to 'i' for all entries.
o During address translation, if the valid-invalid bit is 'i', then it could be:
i. Illegal reference (Outside process address space) -> aborts the
process.
ii. Legal reference but not in memory (Page fault) -> brings the page.
o Many processes do not use all of the page table available to them,
particularly in modern systems with very large potential page tables.
Rather than waste memory by creating a full-size page table for every
process, some systems use a page-table length register, PTLR, to specify
the length of the page table.

 Sharing in Paging
o An advantage of paging is the possibility of sharing common code. This
is especially important in a time-sharing environment. If the code is
reentrant (or pure) code, it can be shared.
o Reentrant code is non-self-modifying code; it never changes during
execution. Thus, two or more processes can execute the same code at the
same time. Each process has its own copy of registers and data storage to
hold the data for the process's execution. The data for different processes
will be different.
o Only one copy of the editor needs to be kept in physical memory. Each
user's page table maps onto the same physical copy of the editor, but data
pages are mapped onto different frames. Other heavily used programs
such as compilers, window systems, run-time libraries, and database
systems can also be shared. To be sharable, the code must be reentrant.
o The sharing of memory among processes on a system is similar to the
sharing of the address space of a task by threads. Shared memory can
also be described as a method of inter-process communication. Some
OSs implement shared memory using shared pages.

SEGMENTATION

o Like paging, segmentation is another non-contiguous memory allocation
technique.
o In segmentation, the process is not divided blindly into fixed-size pages.
o Rather, the process is divided into modules for better visualization.
o Segmentation is a variable-size partitioning scheme.
o Why Segmentation is required?
 Paging is closer to the operating system than to the user. It
divides all processes into pages regardless of the fact that a
process may have related parts or functions that need to be loaded
on the same page.
 Operating system doesn't care about the User's view of the process.
It may divide the same function into different pages and those
pages may or may not be loaded at the same time into the memory.
It decreases the efficiency of the system.
 It is better to have segmentation which divides the process into the
segments. Each segment contains the same type of functions such
as the main function can be included in one segment and the
library functions can be included in the other segment.
o In segmentation, secondary memory and main memory are divided into
partitions of unequal size. The size of a partition depends on the length
of the module.
o The partitions of secondary memory are called segments.
o Consider a program is divided into 5 segments as-

 Segment Table
o Segment table is a table that stores the information about each segment
of the process.
o Segment table is stored as a separate segment in the main memory.
o Segment table has two columns:
i. Limit indicates the length or size of the segment.
ii. Base indicates the base address or starting address of the segment
in the main memory.
o Segment Table Base Register (STBR) stores the base address of the
segment table.
o Segment Table Length Register (STLR) indicates the number of
segments used by a program.
 Translating Logical Address into Physical Address-
o CPU always generates a logical address. A physical address is needed to
access the main memory.
o Following steps are followed to translate logical address into physical
address-
1) CPU generates a logical address consisting of two parts- Segment
number and Segment offset; Segment Number specifies the
specific segment of the process from which CPU wants to read
the data. Segment Offset specifies the specific word in the
segment that CPU wants to read.
2) For the generated segment number, corresponding entry is located
in the segment table. Then, segment offset is compared with the
limit (size) of the segment. Now, two cases are possible-
i. Case-01: Segment Offset >= Limit: If segment offset is found
to be greater than or equal to the limit, a trap is generated.
ii. Case-02: Segment Offset < Limit: If segment offset is found
to be smaller than the limit, then request is treated as a valid
request. The segment offset must always lie in the range [0,
limit-1]. Then, segment offset is added with the base address
of the segment. The result obtained after addition is the
address of the memory location storing the required word.
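The two translation steps above can be sketched in Python. The segment table values below are hypothetical, chosen only to illustrate the limit check and the base-plus-offset addition:

```python
# Sketch of segmentation address translation.
# Each entry is (base, limit): the segment's starting address in main
# memory and its size. The values here are hypothetical examples.
segment_table = [
    (1400, 300),   # segment 0
    (6300, 400),   # segment 1
    (4300, 100),   # segment 2
]

def translate(segment_number, offset):
    """Translate a (segment number, segment offset) logical address."""
    base, limit = segment_table[segment_number]
    if offset >= limit:          # Case-01: offset out of range -> trap
        raise MemoryError("trap: segment offset exceeds segment limit")
    return base + offset         # Case-02: valid -> base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```

A real MMU performs this check in hardware using the STBR/STLR registers; the sketch only mirrors the logic.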
o Advantages:-
1. It allows dividing the program into modules, which provides
better visualization.
2. The segment table consumes less space than the page table in
paging.
3. It solves the problem of internal fragmentation.
o Disadvantages:-
1. There is an overhead of maintaining a segment table for each
process.
2. The time taken to fetch an instruction increases, since two
memory accesses are now required.
3. Segments of unequal size are not suited for swapping.
4. It suffers from external fragmentation as the free space gets
broken down into smaller pieces with the processes being loaded
and removed from the main memory.
 Protection in Segmentation
o With each entry in the segment table, associate a validation bit: 0 for an
illegal segment and 1 for a legal segment.
o Protection bits associated with segments grant read/write/execute
privileges.
 Sharing in Segmentation
o An advantage of segmentation is the possibility of sharing common
code.
o Segment-table-entries of two different processes point to the same
physical locations.
VIRTUAL MEMORY
o Virtual Memory is a storage scheme that gives the user the illusion of
having a very large main memory.
o In this scheme, secondary memory can be addressed as though it were part
of the main memory, and only the required pages or portions of processes
are loaded into the main memory.
o The main visible advantage of this scheme is that programs can be larger
than physical memory.
o A page is not loaded until it is accessed.
o Memory utilization is better because unused routines are never loaded.
o Benefits of having Virtual Memory
1. Large programs can be written, as the virtual address space
available is huge compared to physical memory.
2. Virtual memory allows address spaces to be shared by several
processes.
3. More physical memory is effectively available, since only the
required parts of programs reside in it. Therefore, the logical
address space can be much larger than the physical address space.
o Virtual memory can be implemented by using the following techniques:-
i. Demand Paging
ii. Demand Segmentation
 Demand Paging
o A demand paging system is quite similar to a paging system with swapping
where processes reside in secondary memory and pages are loaded only on
demand, not in advance.
o In demand paging, the pager (the swapper that deals with the individual
pages of a process) brings only the necessary pages into memory instead of
swapping in a whole process. Hence, it is also called a lazy swapper, because
a page is swapped in only when the CPU requires it.

o Whenever a page is needed, a reference is made to it:
i. If the reference is invalid, the process is aborted.
ii. If the page is not in memory, it is brought into memory.
 Valid-Invalid Bit
 With each page table entry, a valid-invalid bit is associated( where 1
indicates in the memory and 0 indicates not in the memory)
 Initially, the valid-invalid bit is set to 0 for all table entries.
 If the bit is set to "valid", the associated page is both legal and in
memory.
 If the bit is set to "invalid", the page is either not valid, or it is valid
but currently resides on the disk rather than in memory.
 Page Fault
While executing a program, if the program references a page that is not
available in the main memory because it was swapped out earlier, the
processor treats this invalid memory reference as a page fault and
transfers control from the program to the operating system, which demands
the page back into memory.
 How does Demand Paging Work?
o The demand paging system depends on the page table implementation
because the page table helps map logical memory to physical memory.
o Valid-invalid bits in the page table indicate whether each page is valid
or invalid.
o All valid pages exist in primary memory, and invalid pages exist
in secondary memory.
o When a process accesses a page, the following things happen:
1. Check an internal table for this process to determine whether the
reference was a valid or an invalid memory access.
2. If the page is valid, processing continues as normal. If the page
is invalid, a page fault arises.
3. We find a free frame.
4. We schedule a disk operation to read the desired page into the
newly allocated frame.
5. When the disk read is complete, we modify the internal table kept
with the process and the page table to indicate that the page is now
in memory.
6. We restart the instruction that was interrupted by the illegal
address trap. The process can now access the page as though it had
always been in memory.
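The six steps above can be condensed into a small Python sketch. All names and table contents here are hypothetical; the point is only the valid-invalid bit test and the lazy loading on a fault:

```python
# Minimal sketch of demand paging with a valid-invalid bit.
# page_table[p] = (frame, valid_bit); invalid pages live only in
# "secondary storage". All values here are hypothetical examples.
secondary_storage = {0: "page0-data", 1: "page1-data", 2: "page2-data"}
page_table = {0: (5, 1), 1: (None, 0), 2: (None, 0)}   # only page 0 resident
memory = {5: "page0-data"}                              # frame -> contents
free_frames = [7, 9]
page_faults = 0

def access(page):
    """Return the frame holding `page`, loading it on demand (lazy swapper)."""
    global page_faults
    frame, valid = page_table[page]
    if valid:                      # step 1-2: bit = 1, legal and in memory
        return frame
    page_faults += 1               # step 2: bit = 0 -> page fault
    frame = free_frames.pop()      # step 3: find a free frame
    memory[frame] = secondary_storage[page]   # step 4: "disk read"
    page_table[page] = (frame, 1)  # step 5: update the page table
    return frame                   # step 6: restart the access

access(0)   # hit: page 0 is already resident
access(2)   # miss: page fault, page loaded on demand
print(page_faults)   # 1
```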
 Advantages of Demand Paging
1. Large virtual memory.
2. More efficient use of memory.
3. There is no limit on degree of multiprogramming.
4. Demand paging avoids External Fragmentation.
5. No compaction is required in demand Paging.
 Disadvantages of Demand Paging
1. There is an increase in overheads due to interrupts and page tables.
2. Memory access time in demand paging is longer.

PAGE REPLACEMENT

o When an executing process refers to a page, it is first searched for in the
main memory. If it is not present in the main memory, a page fault occurs.
A page fault is the condition in which a running process refers to a page that
is not loaded in the main memory. In such a case, the OS has to bring the
page from secondary storage into the main memory using the following
steps:
i. Find the location of the desired page on the secondary memory.
ii. Find a free frame in main memory:-
 If there is a free frame, use it
 If there is no free frame, use page replacement algorithm to
select a victim frame to be replaced.
iii. Read the desired page into the (newly) free frame. Update the page
and frame tables.
iv. Restart the process.
Page replacement is the process of swapping out an existing page from a
frame of main memory and replacing it with the required page. Page
replacement is required when-
 All the frames of main memory are already occupied.
 Thus, a page has to be replaced to create room for the required
page.
 Page Replacement Algorithms-
Page replacement algorithms decide which page must be swapped out
from the main memory to create room for the incoming page. A good
page replacement algorithm is one that minimizes the number of page faults.
 Reference String
o The string of memory references is called reference string. Reference
strings are generated artificially or by tracing a given system and
recording the address of each memory reference.
o For a given page size, we need to consider only the page number, not
the entire address.
o For example, consider the following sequence of addresses −
123,215,600,1234,76,96
o If page size is 100, then the reference string is 1,2,6,12,0,0

1. First-In First-Out Page Replacement Algorithm (FIFO)

o As the name suggests, this algorithm works on the principle of "First In,
First Out".
o FIFO is the simplest page replacement algorithm. It maintains a queue to
keep track of all the pages: a new page is always added at the rear of the
queue. When the queue is full and a page fault occurs, the page at the front
of the queue is removed and the new page is added at the rear.
o In this way, the page that was loaded into memory first is removed from
memory first.
o Example:- Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1, 3,
1, 6, 3, 2, 3 with frame size 4 (i.e. a maximum of 4 pages in memory).
Total Page Faults = 9

Page Fault ratio = Total Misses/Total References = 9/12
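The FIFO policy can be simulated in a few lines of Python (the helper name `fifo_faults` is assumed for illustration); on the reference string above with 4 frames it reproduces the 9 faults counted in the example:

```python
from collections import deque

def fifo_faults(reference_string, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()                    # resident pages, oldest at the left
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                    # hit: nothing to do
        faults += 1                     # miss: page fault
        if len(frames) == frame_count:
            frames.popleft()            # evict the page loaded first
        frames.append(page)             # load the new page at the rear
    return faults

print(fifo_faults([1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3], 4))   # 9
```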

 Belady's Anomaly
 It states that for some page reference strings, the page fault rate
may increase as the number of allocated frames increases.
 For example, consider the page reference string: 1, 2, 3, 4, 1, 2,
5, 1, 2, 3, 4, 5.
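A self-contained FIFO simulation (hypothetical helper `fifo_faults`) makes the anomaly visible for this string: with 3 frames there are 9 page faults, but with 4 frames there are 10:

```python
from collections import deque

def fifo_faults(reference_string, frame_count):
    """Count page faults under FIFO replacement."""
    frames, faults = deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1                 # page fault
            if len(frames) == frame_count:
                frames.popleft()        # evict the oldest page
            frames.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))   # 9 faults with 3 frames
print(fifo_faults(ref, 4))   # 10 faults with 4 frames: more frames, more faults
```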

 Advantages:-
1) Simple to understand and implement
2) Does not cause more overhead
 Disadvantages:-
1) Poor performance
2) Does not consider how recently or how often a page was used; it
simply replaces the oldest page.
3) Suffers from Belady's anomaly.
2. Least Recently Used Page Replacement (LRU)
o It keeps track of the usage of pages over a period of time.
o This algorithm works on the basis of the principle of locality of a
reference (Caching Technique) which states that a program has a
tendency to access the same set of memory locations repetitively over a
short period of time.
o So pages that have been used heavily in the past are most likely to be
used heavily in the future also.
o In this algorithm, when a page fault occurs, then the page that has not
been used for the longest duration of time (that is least recently used one)
is replaced by the newly requested page.
o Example:- Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1,
3, 1, 6, 3, 2, 3 with frame size 4 (i.e. a maximum of 4 pages in memory).

Total Page Faults = 8

Page Fault ratio = Total Misses/Total References = 8/12
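The LRU policy can be sketched by keeping resident pages ordered by recency (the helper name `lru_faults` is assumed); on the reference string above with 4 frames it reproduces the 8 faults counted in the example:

```python
def lru_faults(reference_string, frame_count):
    """Count page faults under LRU replacement."""
    frames = []                         # least recently used page first
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)         # hit: refresh the page's recency
        else:
            faults += 1                 # miss: page fault
            if len(frames) == frame_count:
                frames.pop(0)           # evict the least recently used page
        frames.append(page)             # most recently used at the end
    return faults

print(lru_faults([1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3], 4))   # 8
```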

 Advantages:-

1) Efficient
2) Doesn't suffer from Belady's anomaly.
 Disadvantages:-

1) Complex Implementation.
2) Expensive.
3) Requires hardware support.

3. Optimal Page Replacement


o This algorithm says that if we know which pages will be used in the
future, we can optimize the page replacement decision.
o Optimal page replacement is the best page replacement algorithm, as it
results in the least number of page faults.
o In this algorithm, the page replaced is the one that will not be
used for the longest duration of time in the future.

Total Page Faults = 9

Page Fault ratio = Total Misses/Total References = 9/20
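The 9/20 ratio above refers to a 20-reference example not reproduced here. As an illustration, the sketch below (helper name assumed) applies the optimal policy to the same 12-reference string used in the FIFO and LRU examples; it yields 6 faults, fewer than FIFO's 9 or LRU's 8, as expected of the benchmark algorithm:

```python
def optimal_faults(reference_string, frame_count):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                      # hit
        faults += 1                       # miss: page fault
        if len(frames) == frame_count:
            future = reference_string[i + 1:]
            # Evict the resident page whose next use is farthest away,
            # or that is never used again.
            def next_use(p):
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal_faults([1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3], 4))   # 6
```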

 Advantages:-
1) Excellent efficiency
2) Used as the benchmark for other algorithms
 Disadvantages:-
1) More time consuming
2) Requires future knowledge of the page references, which is not
possible every time

 Thrashing
To know more clearly about thrashing, first, we need to know about page
fault and swapping.
 Page fault: We know every program is divided into pages. A
page fault occurs when a program attempts to access data or code
that is in its address space but is not currently located in system RAM.
 Swapping: Whenever a page fault happens, the operating system
will try to fetch that page from secondary memory and try to swap
it with one of the pages in RAM. This process is called swapping.
Thrashing occurs when page faults and swapping happen so frequently
that the operating system spends more time swapping pages than
executing the process. Because of thrashing, CPU utilization is greatly
reduced or becomes negligible.