Unit-4 OS Notes

The document discusses memory management in operating systems, detailing physical and logical address spaces, and the concepts of bare machines and resident monitors. It explains fixed and variable partitioning techniques for multiprogramming, highlighting their advantages and disadvantages, as well as the importance of memory protection and security measures. Additionally, it covers paging as a method for managing memory allocation for processes, illustrating the concept with examples.


Unit-4

Memory Management
Physical Address Space
Physical address space in a system can be defined as the size of the main memory. It is really important to
compare the process size with the physical address space. The process size must be less than the physical address
space.
Calculation: -
Physical Address Space = Size of the Main Memory
If, physical address space = 64 KB = 2^6 KB = 2^6 * 2^10 Bytes = 2^16 Bytes
Let us consider, word size = 8 Bytes = 2^3 Bytes
Hence, Physical address space (in words) = (2^16) / (2^3) = 2^13 Words (= N) (i.e. log2 N = log2 2^13 = 13)
Therefore, Physical Address = 13 bits
In General, if Physical Address Space = N Words
then, Physical Address = log2 N bits
Logical Address Space
Logical address space can be defined as the size of the process. The process must be small enough to reside in the main memory.
Calculation: -
Logical Address Space = Size of the process
If, logical address space = 128 MB = 2^7 * 2^20 Bytes = 2^27 Bytes
Let us consider, word size = 4 Bytes = 2^2 Bytes
Hence, logical address space (in words) = (2^27) / (2^2) = 2^25 Words (= L) (i.e. log2 L = log2 2^25 = 25)
Therefore, Logical Address = 25 bits
In General, if Logical Address Space = L Words
then, Logical Address = log2 L bits
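The two calculations above can be checked with a short Python helper (the function name is my own; it simply converts a byte-addressed space into address bits for a given word size):

```python
import math

def address_bits(space_bytes: int, word_bytes: int) -> int:
    """Number of address bits needed for a space of `space_bytes`
    bytes when the addressable unit is a word of `word_bytes` bytes."""
    words = space_bytes // word_bytes
    return int(math.log2(words))

# Physical: 64 KB memory, 8-byte words -> 13-bit physical address
print(address_bits(64 * 1024, 8))           # 13

# Logical: 128 MB process, 4-byte words -> 25-bit logical address
print(address_bits(128 * 1024 * 1024, 4))   # 25
```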
What is a Word?
A word is the smallest addressable unit of the memory and is a collection of bytes. Every system defines its own word size based on the n-bit address that is fed to the decoder and the 2^n memory locations that the decoder can select.
Basic bare machine
A Bare Machine is the hardware logic in a computer system that can execute programs on the processor without using an Operating System. Normally, we cannot execute any process on the processor without an Operating System, but with a Bare Machine it is possible.
The drawback is that a Bare Machine accepts programs and instructions only in machine language. Because of this, only trained people, qualified in the computer field and able to understand and instruct the computer in machine language, could operate such a computer.
Resident Monitor
 The Resident Monitor is code that runs on a Bare Machine.
 It acts like an operating system, controlling everything inside the processor and performing all the functions.
 The Resident Monitor is therefore also known as the Job Sequencer because, like an Operating System, it sequences the jobs and sends them to the processor for execution.
 After the jobs are scheduled, the Resident Monitor loads the programs one by one into the main memory according to their sequence.
Parts of Resident Monitor: - The Resident Monitors are divided into 4 parts: -
1. Control Language Interpreter
2. Loader
3. Device Driver
4. Interrupt Processing

1. Control Language Interpreter

The job of the Control Language Interpreter is to read the job-control instructions and carry them out, line by line.
2. Loader: - The Loader is the main part of the Resident Monitor. As the name suggests, it Loads all the required
system and application programs into the main memory.
3. Device Driver: - The Device Driver takes care of all the input-output devices connected with the system, so all the communication between the user and the system is handled by it. It acts as an intermediary between the requests that the user makes to the system and the responses that the system produces to fulfil those requests.
4. Interrupt Processing: - It handles all the interrupts that occur in the system.
Multiprogramming with fixed partitions: -

Multi-programming with fixed partitioning is a contiguous memory management technique in which the main
memory is divided into fixed sized partitions which can be of equal or unequal size. Whenever we have to
allocate a process memory then a free partition that is big enough to hold the process is found. Then the
memory is allocated to the process. If there is no free space available then the process waits in the queue to be
allocated memory. It is one of the oldest memory management techniques which is easy to implement.
In fixed partitioning,
1. The partitions cannot overlap.
2. A process must be contiguously present in a partition for the execution.
Advantages of fixed partitioning: -
 The main memory is divided into partitions of equal or different sizes.
 The operating system always resides in the first partition while the other partitions can be used to store
user processes.
 The memory is assigned to the processes in a contiguous way.
Disadvantages of fixed partitioning: -
1. Internal Fragmentation: - If the size of the process is smaller than the total size of its partition, part of the partition is wasted and remains unused. This wasted memory is called internal fragmentation. For example, if a 4 MB partition is used to load only a 3 MB process, the remaining 1 MB is wasted.
2. External Fragmentation: - The total unused space spread across various partitions cannot be used to load a process, because the free space is not contiguous. For example, the remaining 1 MB in each of several partitions cannot be combined to store a 4 MB process; even though sufficient space is available in total, the process cannot be loaded.
3. Limitation on the size of the process: - If the process size is larger than the maximum-sized partition, that process cannot be loaded into the memory. Therefore, a limitation is imposed on the process size: it cannot be larger than the largest partition.
4. Degree of multiprogramming is less: - By degree of multiprogramming, we mean the maximum number of processes that can be loaded into the memory at the same time. In fixed partitioning, the degree of multiprogramming is fixed and quite low, because the size of a partition cannot be varied according to the size of the processes.
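The allocation behaviour described above can be sketched as a tiny first-fit simulation (partition sizes and the function name are illustrative assumptions, not from the notes):

```python
def allocate_fixed(partitions, process_size):
    """First-fit allocation into fixed partitions.
    Returns (partition_index, internal_fragmentation) or None."""
    for i, (size, free) in enumerate(partitions):
        if free and process_size <= size:
            partitions[i] = (size, False)        # mark the partition used
            return i, size - process_size        # wasted space inside it
    return None                                  # process must wait in the queue

parts = [(4, True), (8, True), (8, True), (16, True)]  # sizes in MB
print(allocate_fixed(parts, 3))    # (0, 1): 1 MB of internal fragmentation
print(allocate_fixed(parts, 20))   # None: larger than every partition
```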

Multiprogramming with Variable partitions: -


Multi-programming with variable partitioning is a contiguous memory management technique in which the main memory is not divided into partitions in advance; instead, each process is allocated a chunk of free memory just big enough for it to fit. The space that is left over is treated as free space that can be used by other processes. This technique also provides the concept of compaction: the free spaces that are not allocated to any process are combined into a single large memory space.
 In this technique, the partition size is not declared initially. It is declared at the time of process loading.
 The first partition is reserved for the operating system. The remaining space is divided into parts, and the size of each partition equals the size of the process it holds. The partition size varies according to the need of the process, so internal fragmentation is avoided.

Advantages of Dynamic Partitioning over fixed partitioning


1. No Internal Fragmentation: - It is clear that there will not be any internal fragmentation because there will
not be any unused remaining space in the partition.
2. No Limitation on the size of the process: - In fixed partitioning, a process larger than the largest partition could not be executed due to the lack of sufficient contiguous memory. In dynamic partitioning, the process size is not restricted in this way, since the partition size is decided according to the process size.
3. Degree of multiprogramming is dynamic: - Due to the absence of internal fragmentation, there will not be
any unused space in the partition hence more processes can be loaded in the memory at the same time.
Disadvantages of dynamic partitioning
1. External Fragmentation: - Absence of internal fragmentation doesn't mean that there will not be external
fragmentation.
Let's consider three processes P1 (1 MB), P2 (3 MB) and P3 (1 MB) loaded into their respective partitions of the main memory.
After some time, P1 and P3 complete and their assigned space is freed. Now there are two unused partitions (1 MB and 1 MB) available in the main memory, but they cannot be used to load a 2 MB process, since they are not contiguously located.
The rule says that a process must be contiguously present in the main memory to get executed. We need to change this rule to avoid external fragmentation.
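The scenario above, and the role of compaction, can be sketched in a few lines (the sizes follow the example; the helper functions are my own):

```python
# Free holes left after P1 (1 MB) and P3 (1 MB) finish; sizes in MB.
holes = [1, 1]          # non-contiguous free regions

def can_load(holes, size):
    """A process fits only if one contiguous hole is big enough."""
    return any(h >= size for h in holes)

print(can_load(holes, 2))   # False: 2 MB free in total, but scattered

def compact(holes):
    """Compaction slides allocated blocks together, merging all
    free space into one contiguous hole."""
    return [sum(holes)]

print(can_load(compact(holes), 2))   # True after compaction
```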
Memory protection scheme in O.S.: -
A mechanism that controls the access of programs, processes, or users to the resources defined by a computer
system is referred to as protection. You may utilize protection as a tool for multi-programming operating
systems, allowing multiple users to safely share a common logical namespace, including a directory or files.
Computer resources such as the software, memory, and processor need protection. Protective measures support a multiprogramming OS so that multiple users may safely use a common logical namespace like a directory or files. Protection is achieved by maintaining confidentiality, integrity, and availability in the OS. It is critical to secure the system from unauthorized access, viruses, worms, and other malware.
Need of Protection in Operating System: - Various needs of protection in the operating system are as follows:
1. There may be security risks like unauthorized reading, writing, modification, or preventing the system
from working effectively for authorized users.
2. It helps to ensure data security, process security, and program security against unauthorized user access
or program access.
3. It is important to ensure no access rights' breaches, no viruses, no unauthorized access to the existing
data.
4. Its purpose is to ensure that only the systems' policies access programs, resources, and data.
Goals of Protection in Operating System: - Various goals of protection in the operating system are as follows:
1. The policies define how processes access the computer system's resources, such as the CPU, memory,
software, and even the operating system.
2. It is the responsibility of both the operating system designer and the application programmer. However, these policies can be modified at any time.
3. Protection is a technique for protecting data and processes from harmful or intentional infiltration.
4. It contains protection policies either established by itself, set by management or imposed individually by
programmers to ensure that their programs are protected to the greatest extent possible.
5. It also provides a multiprogramming OS with the security that its users expect when sharing common
space such as files or directories.
Role of Protection in Operating System: -
1. Its main role is to provide a mechanism for implementing policies that define the use of resources in a
computer system.
2. Some rules are set during the system's design, while others are defined by system administrators to secure
their files and programs.
3. Every program has distinct policies for using resources, and these policies may change over time.
4. Therefore, system security is not the responsibility of the system's designer alone; the programmer must also design protection techniques to guard the system against infiltration.
Domain of Protection: - Various domains of protection in operating system are as follows:
1. The protection policies restrict each process's access to its resource handling. A process is obligated to
use only the resources necessary to fulfil its task within the time constraints and in the mode in which it is
required. It is a process's protected domain.
2. Processes and objects are abstract data types in a computer system, and these objects have operations that
are unique to them. A domain component is defined as <object, {set of operations on object}>.
3. Each domain comprises a collection of objects and the operations that may be implemented on them. A
domain could be made up of only one process, procedure, or user. If a domain is linked with a procedure,
changing the domain would mean changing the procedure ID. Objects may share one or more common
operations.
Security measures of Operating System: - There are various security measures of the operating system that the
users may take. Some of them are as follows:
1. Memory Protection using Keys: The keys are special codes used to verify that a running program is entitled to use a particular array of memory cells. This key method gives the system a way to impose page-based protections without any modification of the page tables.
2. Memory Protection using Rings: In computer science, the ordered protection domains are called protection rings. This method helps improve fault tolerance and provides security. The rings are arranged in a hierarchy from most privileged to least privileged. In a single-level sharing OS, every segment has a protection ring for the read, write, and execute operations of the process.
3. Capability-based addressing: It is a method of memory protection that is generally not used in modern commercial computers. Here, pointers (objects containing a memory address) are replaced by capability objects, which can be created only with protected instructions and may be used only by a kernel, or by another process that is authorized to use them.
4. Memory Protection using masks: The masks are used in the protection of memory during the organization
of paging. In this method, before the implementation, the page numbers are indicated to each program and are
reserved for the placement of its directives.
5. Memory Protection using Segmentation: It is a method of dividing the system memory into different
segments. The data structures of x86 architecture of OS like local descriptor table and global descriptor table
are used in the protection of memory.
6. Memory Protection using Simulated segmentation: With this technique, a simulator monitors the program by interpreting the machine-code instructions of the system architecture. In this way, the simulator can protect memory with a segmentation-like scheme, validating the target address of every instruction in real time.
7. Memory Protection using Dynamic tainting: Dynamic tainting is a technique that consists of marking and
tracking certain data in a program at runtime as it protects the process from illegal memory accesses. In
tainting technique, we taint a program to mark two kinds of data i.e., memory in the data space and the
pointers.
System Authentication: - One-time passwords, encrypted passwords, and cryptography are used to
create a strong password and a formidable authentication source.
1. One-time Password: - A one-time password is valid for only a single login or session. The system creates a random number, and the user must supply the matching one: an algorithm generates the number for both the system and the user, and the outputs are matched using a common function.
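The matching-code idea described above can be sketched with the standard library's `hmac` module. This is a simplified, HOTP-style illustration (the key value and 6-digit format are assumptions for the example; real systems follow RFC 4226/6238):

```python
import hashlib
import hmac

def one_time_code(shared_key: bytes, counter: int) -> str:
    """Both sides derive the same short code from a shared secret and a
    moving counter; the server compares its code with the one the user
    submits. Simplified HOTP-style sketch, not a production scheme."""
    msg = counter.to_bytes(8, "big")
    digest = hmac.new(shared_key, msg, hashlib.sha1).digest()
    # Reduce the 20-byte digest to a 6-digit code.
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

key = b"shared-secret"   # hypothetical pre-shared key
assert one_time_code(key, 1) == one_time_code(key, 1)   # both sides agree
print(one_time_code(key, 2))   # the code changes with the counter
```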
2. Encrypted Passwords: - It is also a very effective technique of authenticating access. Encrypted data is
passed via the network, which transfers and checks passwords, allowing data to pass without interruption or
interception.
3. Cryptography: - It's another way to ensure that unauthorized users can't access data transferred over a network.
 It aids in secure data transmission.
 It introduces the concept of a key to protect the data.
 The key is crucial in this situation.
 When a user sends data, he encodes it using the key, and the receiver must decode the data with the same key.
 As a result, even if the data is stolen in transit, there is a good chance the unauthorized user won't be able to read it.
Paging in OS
 Paging is a storage mechanism used to retrieve processes from the secondary storage into the main
memory in the form of pages.
 The main idea behind the paging is to divide each process in the form of pages. The main memory will
also be divided in the form of frames.
 One page of the process is to be stored in one of the frames of the memory.
 The pages can be stored at the different locations of the memory but the priority is always to find the
contiguous frames or holes.
 Pages of the process are brought into the main memory only when they are required otherwise, they
reside in the secondary storage.
 Different operating systems define different frame sizes.
 All frames must be of equal size. Since the pages are mapped onto the frames in paging, the page size needs to be the same as the frame size.

Example-1
Let us consider the main memory size 16 Kb and Frame size is 1 KB therefore the main memory will be divided
into the collection of 16 frames of 1 KB each.
There are 4 processes in the system, namely P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each so that one page can be stored in one frame.
Initially, all the frames are empty, so the pages of the processes are stored in a contiguous way, one page per frame.
Example-2
Let us consider that, P2 and P4 are moved to waiting state after some time. Now, 8 frames become empty and
therefore other pages can be loaded in that empty place. The process P5 of size 8 KB (8 pages) is waiting inside
the ready queue.
Since we have 8 non-contiguous frames available in the memory, and paging provides the flexibility of storing a process at different places, we can load the pages of process P5 into the frames freed by P2 and P4.
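Examples 1 and 2 above can be sketched as a small simulation (the frame count and page sizes follow the examples; the function name is my own):

```python
def load_process(frames, pid, num_pages):
    """Place each page of `pid` into any free frame; frames need not be
    contiguous, which is exactly the flexibility paging provides."""
    free = [i for i, f in enumerate(frames) if f is None]
    if len(free) < num_pages:
        return False                        # not enough frames anywhere
    for page, frame in enumerate(free[:num_pages]):
        frames[frame] = (pid, page)         # frame -> (process, page)
    return True

frames = [None] * 16                        # 16 KB memory, 1 KB frames
for pid in ("P1", "P2", "P3", "P4"):
    load_process(frames, pid, 4)            # four 4 KB processes

# P2 and P4 finish: free their 8 frames (non-contiguous).
frames = [None if f and f[0] in ("P2", "P4") else f for f in frames]
print(load_process(frames, "P5", 8))        # True: P5's 8 pages still fit
```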
Page Table in OS: - Page Table is a data structure used by the virtual memory system to store the mapping
between logical addresses and physical addresses.
 Logical addresses are generated by the CPU for the pages of the processes therefore they are generally
used by the processes.
 Physical addresses are the actual frame address of the memory. They are generally used by the hardware
or more specifically by RAM subsystems.
Consider the following notation-
Physical Address Space = M words; Physical Address = log2 M = m bits
Logical Address Space = L words; Logical Address = log2 L = l bits
Page Size = P words; Page Offset = log2 P = p bits
Size of the page table
However, the part of the process which is being executed by the CPU must be present in the main memory
during that time period. The page table must also be present in the main memory all the time because it has the
entry for all the pages.
The size of the page table depends upon the number of entries in the table and the bytes stored in one entry.
Let's consider,
Logical Address, l = 24 bits
Logical Address Space, L = 2^l = 2^24 bytes (assuming byte-addressable memory)
Let's say, Page size, P = 4 KB = 2^2 * 2^10 Bytes = 2^12 bytes
Page offset, p = log2 P = log2 2^12 = 12 bits
Number of bits for the page number = Logical Address bits - Page Offset bits = (l - p) = (24 - 12) = 12 bits
Number of pages = 2^(l-p) = 2^(24-12) = 2^12 = 4 K pages
Let's say, Page table entry = 1 Byte
Therefore, the size of the page table = 4 K entries x 1 Byte = 4 KB
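The derivation above generalizes to a small helper (the name and parameters are my own):

```python
import math

def page_table_size(logical_bits, page_bytes, entry_bytes):
    """Size of a single-level page table:
    (number of pages) x (bytes per entry)."""
    offset_bits = int(math.log2(page_bytes))
    num_pages = 2 ** (logical_bits - offset_bits)
    return num_pages * entry_bytes

# 24-bit logical address, 4 KB pages, 1-byte entries -> 4 KB table
print(page_table_size(24, 4 * 1024, 1))   # 4096 bytes = 4 KB
```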
 The CPU always accesses processes through their logical addresses. However, the main memory recognizes only physical addresses.
 The Memory Management Unit converts the page number of the logical address into the frame number of the physical address. The offset remains the same in both addresses.
 To perform this task, the Memory Management Unit needs a special kind of mapping, which is done by the page table. The page table stores the frame number corresponding to each page number of the process.

Mapping from page table to main memory


In operating systems, there is always a requirement of mapping from logical address to the physical address.
However, this process involves various steps which are defined as follows.
1. Generation of logical address: - CPU generates logical address for each page of the process. This contains
two parts: page number and offset.
2. Scaling: - To determine the actual location of the page entry, the CPU stores the page table base address in a special register. Each time an address is generated, the page number is added to the page table base to get the actual location of the page's entry in the table. This process is called scaling.
3. Generation of physical Address: - The frame number of the desired page is determined from its entry in the page table. A physical address is generated, which also contains two parts: frame number and offset. The offset is identical to the offset of the logical address, so it is simply copied from the logical address.
4. Getting Actual Frame Number: - The frame number and the offset from the physical address is mapped to
the main memory in order to get the actual word address.
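The four steps above amount to the following translation sketch (the page-table contents here are hypothetical):

```python
def translate(logical_addr, page_table, page_size):
    """Split the logical address into (page number, offset), look up the
    frame in the page table, and rebuild the physical address."""
    page_number, offset = divmod(logical_addr, page_size)
    frame_number = page_table[page_number]      # the MMU's table lookup
    return frame_number * page_size + offset    # offset is copied as-is

page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page -> frame mapping
# Page 1, offset 100, with 1 KB pages -> frame 2, same offset
print(translate(1 * 1024 + 100, page_table, 1024))   # 2148
```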
Segmentation in OS :-
Segmentation is a memory management technique in which the memory is divided into the variable size parts.
Each part is known as a segment which can be allocated to a process.
The details about each segment are stored in a table called a segment table. The segment table is stored in one (or more) of the segments. A segment table entry mainly contains two pieces of information about a segment:
1. Base: It is the base address of the segment
2. Limit: It is the length of the segment.
Why Segmentation is required?
 Till now, we were using Paging as our main memory management technique.
 Paging is closer to the Operating System than to the user.
 The Operating System doesn't care about the user's view of the process.
 It may divide the same function across different pages, and those pages may or may not be loaded into the memory at the same time.
 This decreases the efficiency of the system.
It is better to have segmentation which divides the process into the segments. Each segment contains the same
type of functions such as the main function can be included in one segment and the library functions can be
included in the other segment.
Translation of Logical address into physical address by segment table
CPU generates a logical address which contains two parts:
1. Segment Number 2. Offset
The operating system also generates a segment map table for each program

The Segment number is mapped to the segment table. The limit of the respective segment is compared
with the offset. If the offset is less than the limit then the address is valid otherwise it throws an error as the
address is invalid. In the case of valid addresses, the base address of the segment is added to the offset to get the
physical address of the actual word in the main memory.
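The check-and-add logic above can be sketched as follows (the segment table values are hypothetical):

```python
def translate_segment(seg_num, offset, segment_table):
    """Validate the offset against the segment limit, then add the
    base address; an out-of-range offset is an invalid address."""
    base, limit = segment_table[seg_num]
    if offset >= limit:
        raise ValueError("invalid address: offset exceeds segment limit")
    return base + offset

# Hypothetical segment table: segment number -> (base, limit)
segs = {0: (1400, 1000), 1: (6300, 400)}
print(translate_segment(1, 53, segs))   # 6353
# translate_segment(1, 500, segs) would raise: 500 >= limit 400
```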
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.
Disadvantages of Segmentation
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.
Difference between Paging and Segmentation

S.No.  Paging                                             Segmentation

1      Non-contiguous memory allocation.                  Non-contiguous memory allocation.

2      Divides the program into fixed-size pages.         Divides the program into variable-size segments.

3      The OS is responsible for the division.            The compiler is responsible for the division.

4      Paging is faster than segmentation.                Segmentation is slower than paging.

5      Paging is closer to the Operating System.          Segmentation is closer to the user.

6      It suffers from internal fragmentation.            It suffers from external fragmentation.

7      There is no external fragmentation.                There is no internal fragmentation.

8      Logical address is divided into page number        Logical address is divided into segment number
       and page offset.                                   and segment offset.

9      Page table maintains the page information.         Segment table maintains the segment information.

10     Page table entry has the frame number and some     Segment table entry has the base address of the
       flag bits representing details about the page.     segment and some protection bits for the segment.

Segmented Paging
Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.
In Segmented Paging, the main memory is divided into variable size segments which are further divided into
fixed size pages.

1. Pages are smaller than segments.


2. Each Segment has a page table which means every program has multiple page tables.
3. The logical address is represented as Segment Number (base address), Page number and page offset.
Segment Number → It points to the appropriate Segment Number.
Page Number → It Points to the exact page within the segment
Page Offset → Used as an offset within the page frame
Each page table contains information about every page of its segment, while the segment table contains information about every segment. Each segment table entry points to the base of a page table, and every page table entry is mapped to one of the pages within the segment.
Translation of logical address to physical address
 The CPU generates a logical address which is divided into two parts: Segment Number and Segment
Offset.
 The Segment Offset must be less than the segment limit. Offset is further divided into Page number and
Page Offset.
 To locate the exact entry in the page table, the page number is added to the page table base.
 The actual frame number with the page offset is mapped to the main memory to get the desired word in
the page of the certain segment of the process.
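The translation steps above can be sketched as follows (the tables, sizes, and names are hypothetical):

```python
def translate_seg_paged(seg, page, offset, seg_table, page_size):
    """Segmented paging: the segment table gives the segment's own
    page table; the page table gives the frame; the offset is appended."""
    limit, page_table = seg_table[seg]          # per-segment page table
    if page * page_size + offset >= limit:
        raise ValueError("offset exceeds segment limit")
    frame = page_table[page]
    return frame * page_size + offset

# Hypothetical tables: one page table per segment, 1 KB pages.
seg_table = {0: (2048, {0: 9, 1: 4})}           # segment 0: 2 KB, 2 pages
print(translate_seg_paged(0, 1, 100, seg_table, 1024))   # 4*1024+100 = 4196
```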
Advantages of Segmented Paging
1. It reduces memory usage.
2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.
Disadvantages of Segmented Paging
1. Internal Fragmentation will be there.
2. The complexity level will be much higher as compared to paging.
3. Page Tables need to be contiguously stored in the memory.
Virtual Memory in OS: -
 Virtual Memory is a storage scheme that gives the user an illusion of having a very big main memory. This is done by treating a part of secondary memory as if it were main memory.
 In this scheme, the user can load processes bigger than the available main memory, under the illusion that enough memory is available to load the whole process.
 Instead of loading one big process into the main memory, the Operating System loads different parts of more than one process into the main memory.
 By doing this, the degree of multiprogramming is increased, and therefore the CPU utilization also increases.

How Virtual Memory Works?


In this scheme, whenever some pages need to be loaded into the main memory for execution and memory is not available for that many pages, then instead of blocking those pages from entering the main memory, the OS searches for the RAM areas that have been least recently used or are not currently referenced, and copies them to the secondary memory to make space for the new pages in the main memory. Since this whole procedure happens automatically, it makes the computer feel like it has unlimited RAM.

Snapshot of a virtual memory management system


Let us assume two processes, P1 and P2, of 4 pages each. Each page is 1 KB. The main memory contains 8 frames of 1 KB each. The OS resides in the first two partitions. In the third partition, the 1st page of P1 is stored, and the other frames are likewise filled with different pages of the processes.
The page tables of both processes are 1 KB each, and therefore each can fit in one frame. The page tables of both processes contain various information about the pages.
The CPU contains a register which holds the base address of the page table: 5 in the case of P1 and 7 in the case of P2. This page table base address is added to the page number of the logical address to access the corresponding page table entry.
Advantages of Virtual Memory
1. The degree of Multiprogramming will be increased.
2. Users can run large applications with less physical RAM.
3. There is no need to buy additional RAM.
Disadvantages of Virtual Memory
1. The system becomes slower since swapping takes time.
2. It takes more time in switching between applications.
3. The user will have the lesser hard disk space for its use.
Drawbacks of Paging
1. Size of Page table can be very big and therefore it wastes main memory.
2. CPU will take more time to read a single word from the main memory.
How to decrease the page table size
1. The page table size can be decreased by increasing the page size but it will cause internal fragmentation
and there will also be page wastage.
2. Other way is to use multilevel paging but that increases the effective access time therefore this is not a
practical approach.
Demand Paging: - Demand paging is a popular method of virtual memory management. In demand paging, the pages of a process that are least used are kept in the secondary memory.
A page is copied into the main memory only when it is demanded, i.e., when a page fault occurs. Various page replacement algorithms are used to determine which pages will be replaced.
Advantages of Demand Paging
 Efficient use of physical memory: Demand paging allows more efficient use of memory, because only the necessary pages are loaded into memory at any given time.
 Support for larger programs: Programs can be larger than the physical memory available on the system, because only the necessary pages are loaded into memory.
 Faster program start: Because only part of a program is initially loaded into memory, programs can start faster than if the entire program were loaded at once.
 Reduced memory usage: Demand paging can reduce the amount of memory a program needs, which can improve system performance by reducing the amount of disk I/O required.
Disadvantages of Demand Paging
 Page Fault Overload: The process of swapping pages between memory and disk can cause a
performance overhead, especially if the program frequently accesses pages that are not currently in
memory.
 Degraded performance: If a program frequently accesses pages that are not currently in memory, the
system spends a lot of time swapping out pages, which degrades performance.
 Fragmentation: Demand paging can cause physical memory fragmentation, degrading system performance
over time.
 Complexity: Implementing demand paging in an operating system can be complex, requiring sophisticated
algorithms and data structures to manage page tables and swap space.
What is a Page Fault?
If the referred page is not present in the main memory, then there will be a miss; this is called a page miss or
page fault. The CPU has to access the missed page from the secondary memory. If the number of page faults is
very high, then the effective access time of the system will become very high.
Performance of demand paging: - The performance of demand paging depends on various factors-
1. Page Size 2. Page Replacement Algorithm 3. Page Table Size 4. Page Table Organization
5. Translation look aside buffer (TLB)
 A Translation look aside buffer can be defined as a memory cache which can be used to reduce the time
taken to access the page table again and again.
 It is a memory cache which is closer to the CPU and the time taken by CPU to access TLB is lesser than
that taken to access main memory.
 In other words, we can say that TLB is faster and smaller than the main memory but cheaper and bigger
than the register.
 TLB follows the concept of locality of reference which means that it contains only the entries of those
many pages that are frequently accessed by the CPU.
In translation look aside buffers, there are tags and keys with the help of which, the mapping is done.
 TLB hit is the condition where the desired entry is found in the translation look aside buffer. If this happens,
the CPU simply accesses the actual location in the main memory.
 However, if the entry is not found in the TLB (TLB miss), the CPU has to access the page table in the main
memory and then access the actual frame in the main memory.
 Therefore, in the case of a TLB hit, the effective access time is less than in the case of a TLB
miss.
 If the probability of TLB hit is P% (TLB hit rate) then the probability of TLB miss (TLB miss rate) will
be (1-P) %.
Therefore, the effective access time can be defined as; EAT = p (t + m) + (1 - p) (t + k * m + m)
Where, p → TLB hit rate, t → time taken to access TLB, m → time taken to access main memory, k = 1, if the
single level paging has been implemented.
By the formula, we come to know that
1. Effective access time will be decreased if the TLB hit rate is increased.
2. Effective access time will be increased in the case of multilevel paging.
Example- Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the
physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory.
If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________.
Ans- Given, TLB hit ratio(p)= 0.6
Therefore, TLB miss ratio = 0.4 (i.e. 1-.6=.4)
Time taken to access TLB (t) = 10 ms
Time taken to access main memory (m) = 80 ms and k=1because single level paging has been implemented. So,
Effective Access Time (EAT) = 0.6 (10 + 80) + 0.4 (10 + 1*80 + 80)
= 90 * 0.6 + 0.4 *170 = 54+68 =122 ms
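The EAT formula can be checked with a short Python sketch (the function name `effective_access_time` is our own; the values are those of the worked example above):

```python
def effective_access_time(p, t, m, k=1):
    """EAT = p*(t + m) + (1 - p)*(t + k*m + m)
    p: TLB hit rate, t: TLB access time, m: main-memory access time,
    k: number of page-table levels (1 for single-level paging)."""
    return p * (t + m) + (1 - p) * (t + k * m + m)

# Worked example: p = 0.6, t = 10 ms, m = 80 ms, single-level paging
print(effective_access_time(0.6, 10, 80))  # 122.0
```

Raising `p` lowers the result, and raising `k` (multilevel paging) raises it, which is exactly the two conclusions drawn from the formula above.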
6. Page fault rate (0 ≤ p ≤ 1)
 if p = 0, no page faults occur.
 if p = 1, every reference results in a page fault.
Inverted Page Table
 Inverted Page Table is the global page table which is maintained by the Operating System for all the
processes. In inverted page table, the number of entries is equal to the number of frames in the main
memory. It can be used to overcome the drawbacks of page table.
 There is always a space reserved for the page regardless of the fact that whether it is present in the main
memory or not. However, this is simply the wastage of the memory if the page is not present.
We can save this wastage by inverting the page table, saving details only for the pages which are present in the
main memory. Frames are the indices, and the information saved inside each entry is the process ID and the
page number.
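The frame-indexed layout can be sketched as a toy Python model (the names are our own; a real inverted page table is hashed on (process ID, page number) rather than scanned linearly):

```python
NUM_FRAMES = 4
# One entry per physical frame: (process_id, page_number), or None if the frame is free.
inverted_table = [None] * NUM_FRAMES

def translate(pid, page):
    """Return the frame holding (pid, page), or None on a page fault."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame
    return None

inverted_table[2] = (7, 3)   # process 7's page 3 resides in frame 2
print(translate(7, 3))       # 2
print(translate(7, 9))       # None (page fault)
```

The table size here depends only on `NUM_FRAMES`, not on how many processes exist, which is the space saving the text describes.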
What is Thrashing?
 If the number of page faults equals the number of referenced pages, or the page-fault rate is so
high that the CPU stays busy just reading pages from the secondary memory, then the
effective access time approaches the time taken by the CPU to read one word from the secondary memory,
which is very high. This condition is called thrashing.
 If the page fault rate is PF %, the time taken in getting a page from the secondary memory and again
restarting is S (service time) and the memory access time is ma then-
The effective access time can be given as;
EAT = PF * S + (1 - PF) * (ma)
Where, PF= page fault rate, S= service time, ma= memory access time
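The same formula as a sketch, with made-up illustrative numbers (nanosecond-scale memory access versus millisecond-scale disk service; the function name is our own):

```python
def eat_page_faults(pf, s, ma):
    """EAT = PF*S + (1 - PF)*ma
    pf: page fault rate, s: fault service time, ma: memory access time."""
    return pf * s + (1 - pf) * ma

# Even a 1% fault rate dominates: 100 ns memory access, 8 ms (8,000,000 ns) service time
print(eat_page_faults(0.01, 8_000_000, 100))  # ~80,099 ns per access
```

With no faults at all the cost falls back to the 100 ns memory access, which shows why a high fault rate makes the system thrash.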
Comparison between Paging and Swapping
1. Definition: Paging is a memory management method that enables systems to store and retrieve data from
secondary storage for use in RAM. Swapping temporarily transfers a process from primary to secondary memory.
2. Basic: Paging permits a process's memory address space to be non-contiguous. Swapping allows multiple
programs in the operating system to run concurrently.
3. Flexibility: Paging is more flexible, as only pages of a process are moved. Swapping is less flexible, because
it moves the entire process back and forth between RAM and the backing store.
4. Main functionality: During paging, equal-size memory chunks (pages) travel between the primary and
secondary memory. Swapping involves whole processes moving between main memory and secondary memory.
5. Multiprogramming: Paging enables more processes to run in the main memory; compared to paging,
swapping enables fewer programs to run in the main memory.
6. Workloads: Paging is appropriate for light to medium workloads; swapping is appropriate for heavy
workloads.
7. Usage: Paging allows virtual memory to be implemented. Swapping allows the CPU to access processes
more quickly.
8. Processes: There are many processes in the main memory during paging, and fewer processes in the main
memory during swapping.
Cache memory organization
 The data or contents of the main memory that are used frequently by CPU are stored in the cache memory
so that the processor can easily access that data in a shorter time.
 Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in
the cache memory, the CPU then accesses the main memory.
 The cache is the fastest component in the memory hierarchy and approaches the speed of CPU components.
 Cache memory is organized as distinct set of blocks where each set contains a small fixed number of blocks.
Cache memory is placed between the CPU and the main memory. The block diagram for a cache memory can be
represented as:
Characteristics of Cache Memory
 Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU.
 Cache Memory holds frequently requested data and instructions so that they are immediately
available to the CPU when needed.
 Cache memory is costlier than main memory or disk memory but more economical than CPU
registers.
 Cache Memory is used to speed up and synchronize with a high-speed CPU.
The basic operation of a cache memory is as follows:
 When the CPU needs to access memory, the cache is examined. If the word is found in the cache, it is
read from the fast memory.
 If the word addressed by the CPU is not found in the cache, the main memory is accessed to read the
word.
 A block of words containing the one just accessed is then transferred from main memory to cache memory. The block
size may vary from one word (the one just accessed) to about 16 words adjacent to the one just accessed.
 The performance of the cache memory is frequently measured in terms of a quantity called hit ratio.
 When the CPU refers to memory and finds the word in cache, it is said to produce a hit.
 If the word is not found in the cache, it is in main memory and it counts as a miss.
 The ratio of the number of hits divided by the total CPU references to memory (hits plus misses) is the hit
ratio.
Levels of memory:
Level 1 (Registers): memory locations inside the CPU in which data is stored and acted on immediately. The most
commonly used registers are the accumulator, program counter, address register, etc.
Level 2 (Cache memory): the fastest memory after the registers, where data is temporarily stored for faster access.
Level 3 (Main memory): the memory on which the computer currently works. It is small in size, and once power is
off the data no longer stays in this memory.
Level 4 (Secondary memory): external memory, which is not as fast as main memory, but in which data stays permanently.
Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a corresponding
entry in the cache.
 If the processor finds that the memory location is in the cache, a Cache Hit has occurred and data is
read from the cache.
 If the processor does not find the memory location in the cache, a cache miss has occurred. For a
cache miss, the cache allocates a new entry and copies in data from the main memory, then the
request is fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.
Hit Ratio(H) = hit / (hit + miss) = no. of hits/total accesses
Miss Ratio = miss / (hit + miss) = no. of miss/total accesses = 1 - hit ratio(H)
Cache Mapping:
There are three different types of mapping used for the purpose of cache memory which are as follows:
 Direct mapping,
 Associative mapping
 Set-Associative mapping
1. Direct Mapping -
 In direct mapping, the cache consists of normal high-speed random-access memory.
 Each location in the cache holds the data, at a specific address in the cache.
 This address is given by the lower significant bits of the main memory address.
 This enables the block to be selected directly from the lower significant bit of the memory address.
 The remaining higher significant bits of the address are stored in the cache with the data to complete the
identification of the cached data.
 The simplest technique, known as direct mapping, maps each block of main memory into only one
possible cache line. or In Direct mapping, assign each memory block to a specific line in the cache.
 If a line is previously taken up by a memory block when a new block needs to be loaded, the old block
is trashed.
 The address is split into two parts: an index field and a tag field. The tag field is stored in the cache
along with the data, while the index selects the cache location.
 Direct mapping's performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
As shown in the above figure, the address from the processor is divided into two fields: a tag and an index.
The tag consists of the higher significant bits of the address, and these bits are stored with the data in the cache.
The index consists of the lower significant bits of the address. Whenever the memory is referenced, the following
sequence of events occurs:
1. The index is first used to access a word in the cache.
2. The tag stored in the accessed word is read.
3. This tag is then compared with the tag in the address.
4. If the two tags are the same, this indicates a cache hit, and the required data is read from the cache word.
5. If the two tags are not the same, this indicates a cache miss. The reference is then made to the main memory
to find the word.
For a memory read operation, the word is then transferred into the cache. It is possible to pass the information to
the cache and the processor simultaneously.
Direct mapped cache with a multi-word block
The index part of the address is used to access the cache, and the stored tag is compared with the tag of the required
address.
For a read operation, if the tags are the same, the word within the block is selected for transfer to the processor. If the
tags are not the same, the block containing the required word is first transferred to the cache. In direct mapping,
blocks with the same index in the main memory map into the same block in the cache, and
hence only blocks with different indices can be in the cache at the same time.
2. Associative Mapping
 In this type of mapping, associative memory is used to store the content and addresses of the memory
word. Any block can go into any line of the cache. This means that the word id bits are used to identify
which word in the block is needed, but the tag becomes all of the remaining bits. This enables the
placement of any word at any place in the cache memory.
 It is considered to be the fastest and most flexible mapping form. In associative mapping, the index bits
are zero.
 An associative memory can be considered as a memory unit whose stored data can be identified for
access by the content of the data itself rather than by an address or memory location.
 Associative memory is often referred to as Content Addressable Memory (CAM).
 When a write operation is performed on associative memory, no address or memory location is given to
the word.
 The memory itself is capable of finding an empty unused location to store the word.
 On the other hand, when the word is to be read from an associative memory, the content of the word, or
part of the word, is specified. The words which match the specified content are located by the memory
and are marked for reading.
The following diagram shows the block representation of an Associative memory.
 From the block diagram, we can say that an associative memory consists of a memory array and logic for
'm' words with 'n' bits per word.
 The functional registers like the argument register A and key register K each have n bits, one for each bit
of a word. The match register M consists of m bits, one for each memory word.
 The words which are kept in the memory are compared in parallel with the content of the argument
register.
 The key register (K) provides a mask for choosing a particular field or key in the argument word.
 If the key register contains a binary value of all 1's, then the entire argument is compared with each
memory word. Otherwise, only those bits in the argument that have 1's in their corresponding position of
the key register are compared.
 Thus, the key provides a mask for identifying a piece of information which specifies how the reference to
memory is made.
 The cells present inside the memory array are marked by the letter C with two subscripts. The first
subscript gives the word number and the second specifies the bit position in the word. For instance, the
cell Cij is the cell for bit j in word i.
 A bit Aj in the argument register is compared with all the bits in column j of the array, provided that Kj =
1. This process is done for all columns j = 1, 2, ..., n.
 If a match occurs between all the unmasked bits of the argument and the bits in word i, the corresponding
bit Mi in the match register is set to 1. If one or more unmasked bits of the argument and the word do not
match, Mi is cleared to 0.
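The argument/key masking behaviour can be sketched with Python integers standing in for the registers (a behavioural model only, not the hardware; the function name is our own):

```python
def cam_match(words, argument, key):
    """Return the match register: bit i is 1 when every unmasked bit
    (where key has a 1) of words[i] equals the corresponding argument bit."""
    return [1 if (w & key) == (argument & key) else 0 for w in words]

words = [0b1010, 0b1110, 0b0110]          # stored memory words
# Key 1100 restricts the comparison to the two high-order bits:
print(cam_match(words, 0b1011, 0b1100))   # [1, 0, 0]
# A key of all 1's compares the entire argument word:
print(cam_match(words, 0b0110, 0b1111))   # [0, 0, 1]
```

Every stored word is compared in a single list comprehension pass, mirroring the parallel comparison the associative memory performs in hardware.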
Application of Cache Memory
Here are some of the applications of Cache Memory.
1. Primary Cache: A primary cache is always located on the processor chip. This cache is small and
its access time is comparable to that of processor registers.
2. Secondary Cache: Secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on the
processor chip.
3. Spatial Locality of Reference: Spatial locality says that if a word is referenced, there is a high
chance that words in close proximity to it will be referenced soon. For this reason, on a miss the
cache loads a whole block of adjacent words rather than a single word.
4. Temporal Locality of Reference: Temporal locality says that a recently referenced word is likely
to be referenced again soon. Replacement policies such as Least Recently Used (LRU) exploit
this by keeping recently used entries in the cache.
Advantages of Cache Memory
 Cache Memory is faster in comparison to main memory and secondary memory.
 Programs stored by Cache Memory can be executed in less time.
 The data access time of Cache Memory is less than that of the main memory.
 Cache Memory stored data and instructions that are regularly used by the CPU, therefore it
increases the performance of the CPU.
Disadvantages of Cache Memory
 Cache Memory is costlier than primary memory and secondary memory.
 Data is stored on a temporary basis in Cache Memory.
 Whenever the system is turned off, data and instructions stored in cache memory get destroyed.
 The high cost of cache memory increases the price of the Computer System.
3. Set Associative Mapping -
In set associative mapping, a cache is divided into sets of blocks. The number of blocks in a set is known as the
associativity or set size. Each block in each set has a stored tag; this tag, together with the index, completely
identifies the block.
Thus, set associative mapping allows a limited number of blocks, with the same index and different tags.
An example of a four-way set associative cache, having four blocks in each set, is shown in the following figure.
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
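A quick check of the i = j mod v relation, with illustrative sizes of our own choosing (16 cache lines, 4 lines per set):

```python
m, k = 16, 4          # m: lines in the cache, k: lines per set
v = m // k            # v = m / k: number of sets (4 here)

def cache_set(block):
    """i = j mod v: the set that main-memory block j maps into."""
    return block % v

print(cache_set(27))  # 3  (27 mod 4)
print(cache_set(4))   # 0
```

Blocks 4, 8, 12, ... all map to set 0, but unlike direct mapping, up to k = 4 of them can reside in that set at once.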
Fully associative mapping
In a fully associative cache memory, each location in the cache stores both the memory address and the data.
 Whenever data is requested, the incoming memory address is simultaneously compared
with all stored addresses using the internal logic of the associative memory.
 If a match is found, the corresponding data is read out. Otherwise, the main memory is
accessed, since the address is not found in the cache.
 This method is known as the fully associative mapping approach because cached data is
related to the main memory by storing both the memory address and the data in the cache.
Difference between Cache Memory and Virtual Memory
1. Definition: Cache Memory is a high-speed computer memory that reduces the access time of files or
documents from the main memory. Virtual Memory is a logical unit of computer memory that increases the
capacity of main memory by storing or executing programs of larger size than the main memory in the
computer system.
2. Memory Unit: Cache Memory is defined as a memory unit in a computer system. Virtual Memory is not
defined as a memory unit.
3. Size: The size of cache memory is very small compared to virtual memory; the size of virtual memory is
very large compared to cache memory.
4. Speed: Cache memory is a high-speed memory compared to virtual memory; virtual memory is not a
high-speed memory compared to cache memory.
5. Operation: Cache memory generally stores frequently used data to reduce the access time of files. Virtual
memory keeps data or programs that may not completely fit in the main memory.
6. Management: Cache memory is controlled by the hardware of a system, whereas virtual memory is
controlled by the operating system (OS).
7. Mapping: Cache memory does not require a mapping structure to access its contents. Virtual memory
requires a mapping structure to map virtual addresses to physical addresses.
Types of Page Replacement Algorithms
There are three types of Page Replacement Algorithms. They are:
1. First In First Out Page Replacement Algorithm
2. Least Recently Used (LRU) Page Replacement Algorithm
3. Optimal Page Replacement Algorithm
Example:
Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a memory with three frames
and calculate number of page faults by using FIFO (First In First Out) Page replacement algorithms.
Reference String: 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
Number of Page Hits = 8, Number of Page Faults = 12
Calculation: The ratio of page hits to page faults = 8 : 12 = 2 : 3 ≈ 0.66
The Page Hit Percentage = 8 * 100 / 20 = 40%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
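The FIFO count can be reproduced with a short Python simulation (the function name is our own):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                         # page hit
        faults += 1
        if len(frames) == num_frames:
            frames.discard(order.popleft())  # evict the oldest resident page
        frames.add(page)
        order.append(page)
    return faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(fifo_faults(refs, 3))   # 12 faults (8 hits out of 20 references)
```

With enough frames for all six distinct pages, only the six cold misses remain, which is a handy sanity check on the simulator.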
2. Least Recently Used (LRU) Page Replacement Algorithm
Example: Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a
memory with three frames and calculate the number of page faults by using the Least Recently Used
(LRU) page replacement algorithm.
Reference String: 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
Number of Page Hits = 7, Number of Page Faults = 13
The ratio of page hits to page faults = 7 : 13 ≈ 0.538
The Page Hit Percentage = 7 * 100 / 20 = 35%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%
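The same kind of simulation works for LRU; here a list ordered by recency stands in for usage timestamps (a sketch with our own names):

```python
def lru_faults(refs, num_frames):
    """Count page faults under LRU replacement."""
    frames, faults = [], 0          # ordered least- to most-recently used
    for page in refs:
        if page in frames:
            frames.remove(page)     # hit: refresh its recency below
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)         # page is now the most recently used
    return faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(lru_faults(refs, 3))   # 13 faults (7 hits out of 20 references)
```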
3. Optimal Page Replacement Algorithm
The Optimal page replacement algorithm works on a certain principle. The principle is:
replace the page which will not be used for the longest duration of time in the future.
For the same reference string with three frames:
Number of Page Hits = 10, Number of Page Faults = 10
The Page Hit Percentage = 10 * 100 / 20 = 50%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 50 = 50%
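Optimal replacement needs the future of the reference string, so a simulation simply looks ahead (a sketch with our own names; ties among pages never referenced again are broken arbitrarily, which does not change the fault count):

```python
def optimal_faults(refs, num_frames):
    """Count page faults under optimal (Belady) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # page hit
        faults += 1
        if len(frames) == num_frames:
            def next_use(p):              # index of p's next reference, or infinity
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            # Evict the page whose next use lies farthest in the future (or never).
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(optimal_faults(refs, 3))   # 10 faults (10 hits out of 20 references)
```

Because it knows the future, optimal is a lower bound: no realizable policy (FIFO, LRU, ...) can fault fewer times on the same string.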