Unit 4 (With Page Number)
Aktu
Uploaded by Yashi Upadhyay

UNIT-4

LECTURE-31
MEMORY MANAGEMENT
In a uni-programming system, main memory is divided into two parts: one part for the
operating system (resident monitor, kernel) and one part for the user program currently
being executed.
In a multiprogramming system, the “user” part of memory must be further subdivided
to accommodate multiple processes. The task of subdivision is carried out dynamically
by the operating system and is known as memory management.
Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can happen at three different
stages.
1. Compile time: If the memory location is known a priori at compile time, the
compiler can generate absolute code. If the starting location changes later, the
code must be recompiled.
2. Load time: If the memory location is not known at compile time, the compiler
must generate relocatable code; final binding is delayed until the program is
loaded into main memory.
3. Execution time: If the process can be moved during its execution from one
memory segment to another, binding must be delayed until run time. This
requires hardware support for address maps (e.g., base and limit registers).

(Multi step processing of a user program.)

Logical- Versus Physical-Address Space

An address generated by the CPU is commonly referred to as a logical address or a virtual


address whereas an address seen by the main memory unit is commonly referred to as a
physical address.
 The set of all logical addresses generated by a program is a logical-address
space whereas the set of all physical addresses corresponding to these logical
addresses is a physical address space.
 Logical and physical addresses are the same in compile-time and load-time
address binding schemes; logical (virtual) and physical addresses differ in
execution-time address binding scheme.
 The Memory Management Unit is a hardware device that maps virtual to physical
address. In MMU scheme, the value in the relocation register is added to every
address generated by a user process at the time it is sent to memory as follows:

(Dynamic relocation using a relocation register)

Dynamic Loading
 It loads the program and data dynamically into physical memory to obtain better
memory- space utilization.
 With dynamic loading, a routine is not loaded until it is called.
 The advantage of dynamic loading is that an unused routine is never loaded.
 This method is useful when large amounts of code are needed to handle
infrequently occurring cases, such as error routines.
 Dynamic loading does not require special support from the operating system.

Dynamic Linking
 Linking postponed until execution time.
 Small piece of code (stub) used to locate the appropriate memory-resident library
routine.
 Stub replaces itself with the address of the routine and executes the routine.
 Operating-system support may be needed to check whether the routine is in the
process's memory address space.
 Dynamic linking is particularly useful for libraries.

LECTURE 32

Overlays
 Keep in memory only those instructions and data that are needed at any given time.
 Needed when process is larger than amount of memory allocated to it.
 Implemented by the user; no special support is needed from the operating system,
but the programming design of the overlay structure is complex.

Swapping
 A process can be swapped temporarily out of memory to a backing store (a large
disk), and then brought back into memory for continued execution.
 Roll out, roll in: A variant of this swapping policy is used for priority-based
scheduling algorithms. If a higher-priority process arrives and wants service, the
memory manager can swap out the lower-priority process so that it can load and
execute the higher-priority process. When the higher-priority process finishes, the
lower-priority process can be swapped back in and continued. This variant of
swapping is called roll out, roll in.
 The major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped. Modified versions of swapping
are found on many systems (UNIX, Linux, and Windows).

LECTURE-33

MEMORY ALLOCATION

The main memory must accommodate both the operating system and the various user
processes. We need to allocate different parts of the main memory in the most efficient way
possible. The main memory is usually divided into two partitions: one for the resident
operating system, and one for the user processes. We may place the operating system in
either low memory or high memory. The major factor affecting this decision is the location of
the interrupt vector. Since the interrupt vector is often in low memory, programmers usually
place the operating system in low memory as well.

There are two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Noncontiguous memory allocation

1. Contiguous Memory Allocation- Here, all the processes are stored in contiguous
memory locations. To load multiple processes into memory, the Operating System
must divide memory into multiple partitions for those processes.

Hardware Support- The relocation-register scheme is used to protect user processes
from each other, and from changing operating-system code and data. The relocation
register contains the value of the smallest physical address of a partition; the limit
register contains the range of that partition.
Each logical address must be less than the value in the limit register.

(Hardware support for relocation and limit registers)
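The relocation-and-limit check described above can be sketched as follows. This is a minimal illustration; the register values 14000 and 3000 are assumed, not taken from any particular system.

```python
def translate(logical_addr, relocation_reg, limit_reg):
    """Map a logical address to a physical one, trapping on out-of-range access."""
    if logical_addr >= limit_reg:
        # In hardware this raises a trap to the operating system.
        raise MemoryError("trap: addressing error beyond partition limit")
    return relocation_reg + logical_addr

# A partition starting at physical address 14000 with a length of 3000:
print(translate(100, 14000, 3000))   # -> 14100
```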

According to the size of the partitions, multiple-partition schemes are divided into two types:
i. Multiple fixed partitions / multiprogramming with a fixed number of tasks (MFT)
ii. Multiple variable partitions / multiprogramming with a variable number of tasks (MVT)

i. Multiple fixed partitions- Main memory is divided into a number of static
partitions at system generation time. Any process whose size is less than or equal
to the partition size can be loaded into any available partition. If all partitions are
full and no process is in the Ready or Running state, the operating system can
swap a process out of any of the partitions and load in another process, so that
there is some work for the processor.

Advantages:

 Simple to implement and little operating system overhead.

Disadvantage:

 Inefficient use of memory due to internal fragmentation.


 Maximum number of active processes is fixed.

ii. Multiple Variable Partitions- With this partitioning, the partitions are of
variable length and number. When a process is brought into main memory, it is
allocated exactly as much memory as it requires, and no more.

Advantages:
 No internal fragmentation and more efficient use of main memory.

Disadvantages:
 Inefficient use of processor due to the need for compaction to counter external
fragmentation.
Partition Selection Policy- When multiple memory holes (free partitions) are
large enough to contain a process, the operating system must use an algorithm to
select the hole into which the process will be loaded. The partition selection
algorithms are as follows:
 First-fit: The OS scans the sections of free memory and allocates the process
to the first hole found that is large enough.
 Next-fit: The search starts from the hole allocated most recently, and the
process is allocated to the next hole found that is large enough.
 Best-fit: The entire list of holes is searched for the smallest hole that is large
enough.
 Worst-fit: The entire list of holes is searched for the largest hole that is large
enough.
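The selection policies differ only in which qualifying hole they pick. A minimal sketch (hole sizes below are hypothetical; each function returns the index of the chosen hole, or None if nothing fits; next-fit is omitted because it additionally carries state between searches):

```python
def first_fit(holes, size):
    # First hole large enough, scanning from the start.
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # Smallest hole that still fits.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Largest hole available.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free hole sizes in KB (assumed)
print(first_fit(holes, 212))  # -> 1 (the 500 KB hole)
print(best_fit(holes, 212))   # -> 3 (the 300 KB hole, smallest that fits)
print(worst_fit(holes, 212))  # -> 4 (the 600 KB hole, largest)
```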

Fragmentation- The wasting of memory space is called fragmentation. There are two
types of fragmentation as follows:

1. External Fragmentation- Enough total memory space exists to satisfy a request,
but it is not contiguous. This wasted space, not allocated to any partition, is called
external fragmentation. External fragmentation can be reduced by compaction: the
goal is to shuffle the memory contents so as to place all free memory together in
one large block. Compaction is possible only if relocation is dynamic and is done
at execution time.

2. Internal Fragmentation- The allocated memory may be slightly larger than the
requested memory. The wasted space within a partition is called internal
fragmentation. One method to reduce internal fragmentation is to use partitions of
different sizes.

2. Noncontiguous Memory Allocation- In noncontiguous memory allocation, a
process may be stored in noncontiguous memory locations. The different
techniques used to load processes into memory are as follows:

LECTURE-34
1. Paging
2. Segmentation
3. Virtual memory paging (demand paging), etc.
PAGING
Main memory is divided into a number of equal-size blocks called frames. Each process
is divided into equal-size blocks of the same length as the frames, called pages. A
process is loaded by loading all of its pages into available frames (which need not be
contiguous).

(Paging hardware)
Process of Translation from logical to physical addresses
 Every address generated by the CPU is divided into two parts: a page number (p)
and a page offset (d). The page number is used as an index into a page table.
 The page table contains the base address of each page in physical memory. This
base address is combined with the page offset to define the physical memory
address that is sent to the memory unit.
 If the size of the logical-address space is 2^m and the page size is 2^n addressing
units (bytes or words), then the high-order (m – n) bits of a logical address designate
the page number and the n low-order bits designate the page offset. Thus, the
logical address is as follows:

Where p is an index into the page table and d is the displacement within the page.

Example: Consider a page size of 4 bytes and a physical memory of 32 bytes (8
frames). We show how the user's view of memory can be mapped into physical
memory. Logical address 0 is page 0, offset 0. Indexing into the page table, we find
that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20
(= (5 x 4) + 0). Logical address 3 (page 0, offset 3) maps to physical address 23
(= (5 x 4) + 3). Logical address 4 is page 1, offset 0; according to the page table,
page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24
(= (6 x 4) + 0). Logical address 13 (page 3, offset 1) maps to physical address 9
(= (2 x 4) + 1), since page 3 is in frame 2.
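The worked example can be checked with a short sketch. The text gives the frames for pages 0, 1, and 3; the mapping for page 2 (frame 1) is an assumption taken to match the usual figure.

```python
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # page -> frame (page 2 assumed)

def to_physical(logical):
    p, d = divmod(logical, PAGE_SIZE)    # page number, page offset
    return page_table[p] * PAGE_SIZE + d

print(to_physical(0))    # -> 20
print(to_physical(3))    # -> 23
print(to_physical(4))    # -> 24
print(to_physical(13))   # -> 9
```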

Hardware Support for Paging:
Each operating system has its own methods for storing page tables. Most operating
systems allocate a page table for each process. A pointer to the page table is stored with
the other register values (like the instruction counter) in the process control block. When
the dispatcher is told to start a process, it must reload the user registers and define the
correct hardware page table values from the stored user page table.

Implementation of Page Table


 Generally, the page table is kept in main memory. The Page-Table Base Register
(PTBR) points to the page table, and the Page-Table Length Register (PTLR)
indicates the size of the page table.
 In this scheme, every data/instruction access requires two memory accesses: one
for the page table and one for the data/instruction itself.
 The two-memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers
(TLBs).

LECTURE 35
Paging Hardware With TLB

The TLB is an associative and high-speed memory. Each entry in the TLB consists of
two parts: a key (or tag) and a value. The TLB is used with page tables in the
following way.
 The TLB contains only a few of the page-table entries. When the CPU generates
a logical address, its page number is presented to the TLB.
 If the page number is found (a TLB hit), its frame number is immediately
available and is used to access memory. This takes only one memory access.
 If the page number is not in the TLB (a TLB miss), a memory reference to the
page table must be made. When the frame number is obtained, we can use it to
access memory. This takes two memory accesses.
 In addition, the page number and frame number are added to the TLB, so that
they will be found quickly on the next reference.
 If the TLB is already full of entries, the operating system must select one for
replacement using a replacement algorithm.

(Paging hardware with TLB)

The percentage of times that a particular page number is found in the TLB is called the
hit ratio. The effective access time (EAT) is obtained as follows:

EAT= HR x (TLBAT + MAT) + MR x (TLBAT + 2 x MAT)


Where HR: Hit Ratio, TLBAT: TLB access time, MAT: Memory access time, MR: Miss
Ratio.
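With assumed timings (20 ns TLB access, 100 ns memory access, 80% hit ratio), the formula gives:

```python
def eat(hit_ratio, tlb_time, mem_time):
    """Effective access time per the formula above; times in ns."""
    miss_ratio = 1 - hit_ratio
    return (hit_ratio * (tlb_time + mem_time)
            + miss_ratio * (tlb_time + 2 * mem_time))

print(eat(0.80, 20, 100))   # -> ~140 ns
```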
LECTURE-36

Memory protection in Paged Environment:

 Memory protection in a paged environment is accomplished by protection bits


that are associated with each frame. These bits are kept in the page table.
 One bit can define a page to be read-write or read-only. This protection bit can be
checked to verify that no writes are being made to a read-only page. An attempt to
write to a read-only page causes a hardware trap to the operating system (or
memory-protection violation).
 One more bit is attached to each entry in the page table: a valid-invalid bit. When
this bit is set to "valid," the associated page is in the process's logical-address
space and is thus a legal (valid) page. If the bit is set to "invalid," the page is not
in the process's logical-address space.
 Illegal addresses are trapped by using the valid-invalid bit. The operating system
sets this bit for each page to allow or disallow accesses to that page.

(Valid (v) or invalid (i) bit in a page table)

Structure of the Page Table


There are different structures of page table described as follows:
1. Hierarchical Page Table- When the number of pages is very large, the page
table itself takes a large amount of memory. In such cases, we use a multilevel
paging scheme to reduce the size of the page table. A simple technique is a two-level
page table. Since the page table itself is paged, the page number is further divided
into two parts: an outer page number (p1) and an inner page number (p2). Thus, a
logical address is laid out as:

| p1 | p2 | d |

Where p1 is an index into the outer page table, p2 is the displacement within the
page of the outer page table, and d is the page offset.
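The bit-level split can be sketched for an assumed 32-bit address with a 12-bit offset and 10 bits for each page-table level (these widths are illustrative, not mandated by the text):

```python
def split(addr, p1_bits=10, p2_bits=10, offset_bits=12):
    """Split a logical address into (p1, p2, d) for a two-level page table."""
    d  = addr & ((1 << offset_bits) - 1)              # low 12 bits: offset
    p2 = (addr >> offset_bits) & ((1 << p2_bits) - 1) # next 10 bits: inner index
    p1 = addr >> (offset_bits + p2_bits)              # top 10 bits: outer index
    return p1, p2, d

print(split(0x12345678))   # -> (72, 837, 1656)
```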

Two-Level Page-Table Scheme:

Address translation scheme for a two-level paging architecture:

2. Hashed Page Tables- This scheme is applicable for address spaces larger than
32 bits. The virtual page number is hashed into a page table; this page table
contains a chain of elements hashing to the same location. Virtual page numbers
are compared in this chain, searching for a match. If a match is found, the
corresponding physical frame is extracted.

3. Inverted Page Table-
 One entry for each real page of memory.
 Each entry consists of the virtual address of the page stored in that real memory
location, with information about the process that owns that page.
 Decreases memory needed to store each page table, but increases time needed to
search the table when a page reference occurs.

Shared Pages

Shared code
 One copy of read-only (reentrant) code shared among processes (i.e., text
editors, compilers, window systems).
 Shared code must appear in same location in the logical address space of all
processes.
Private code and data
 Each process keeps a separate copy of the code and data.
 The pages for the private code and data can appear anywhere in the logical address
space.

PRACTICE PROBLEMS BASED ON PAGING AND PAGE TABLE-

Problem-01:
Calculate the size of memory if its address consists of 22 bits and the memory is 2-byte addressable.
We have-

● Number of locations possible with 22 bits = 2^22 locations

● It is given that the size of one location = 2 bytes

Thus, Size of memory
= 2^22 x 2 bytes
= 2^23 bytes
= 8 MB

Problem-02:
Calculate the number of bits required in the address for memory having size of 16 GB. Assume the
memory is 4-byte addressable.

Let ‘n’ be the number of bits required. Then, Size of memory = 2^n x 4 bytes. Since the
given memory has a size of 16 GB, we have-
2^n x 4 bytes = 16 GB
2^n x 2^2 = 2^34
2^n = 2^32
∴ n = 32 bits

Problem-03:

Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is
the approximate size of the page table?
Given-

● Size of main memory = 64 MB

● Number of bits in virtual address space = 32 bits

● Page size = 4 KB
We will consider that the memory is byte addressable.

Number of Bits in Physical Address-

Size of main memory
= 64 MB
= 2^26 B
Thus, Number of bits in physical address = 26 bits

Number of Frames in Main Memory-
Number of frames in main memory
= Size of main memory / Frame size
= 64 MB / 4 KB
= 2^26 B / 2^12 B
= 2^14
Thus, Number of bits in frame number = 14 bits

Number of Bits in Page Offset-

We have,
Page size
= 4 KB
= 2^12 B
Thus, Number of bits in page offset = 12 bits
So, the physical address is 26 bits.

Process Size-

Number of bits in virtual address space = 32 bits


Thus,
Process size
= 2^32 B
= 4 GB

Number of Entries in Page Table-

Number of pages the process is divided into
= Process size / Page size
= 4 GB / 4 KB
= 2^20 pages
Thus, Number of entries in page table = 2^20 entries

Page Table Size-


Page table size
= Number of entries in page table x Page-table entry size
= Number of entries in page table x Number of bits in frame number
= 2^20 x 14 bits
≈ 2^20 x 2 bytes
= 2 MB
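The derivation of Problem-03 can be re-checked programmatically (byte-addressable memory assumed, as in the problem statement):

```python
KB, MB, GB = 2**10, 2**20, 2**30

phys_bits   = (64 * MB).bit_length() - 1               # log2(64 MB) = 26 bits
frame_bits  = (64 * MB // (4 * KB)).bit_length() - 1   # log2(2^14 frames) = 14 bits
offset_bits = (4 * KB).bit_length() - 1                # log2(4 KB) = 12 bits
entries     = (2**32) // (4 * KB)                      # 2^20 page-table entries
table_bytes = entries * 2                              # ~2 bytes (14 bits) per entry

print(phys_bits, frame_bits, offset_bits)        # -> 26 14 12
print(entries == 2**20, table_bytes == 2 * MB)   # -> True True
```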

LECTURE 37

SEGMENTATION
Segmentation is a memory-management scheme that supports user view of memory.
A program is a collection of segments. A segment is a logical unit such as: main
program, procedure, function, method, object, local variables, global variables,
common block, stack, symbol table, arrays etc.
A logical-address space is a collection of segments. Each segment has a name and a
length. The user specifies each address by two quantities: a segment name/number
and an offset.
Hence, Logical address consists of a two tuple: <segment-number, offset>
The segment table maps two-dimensional user-defined addresses into one-dimensional
physical addresses. Each entry in the table has a base, which contains the starting
physical address where the segment resides in memory, and a limit, which specifies
the length of the segment. The segment-table base register (STBR) points to the
segment table's location in memory.
Segment-table length register (STLR) indicates number of segments used by a program.

(Diagram of Segmentation Hardware)

The segment number is used as an index into the segment table. The offset d of the
logical address must be between 0 and the segment limit. If it is not, we trap to the
operating system (logical addressing attempt beyond the end of the segment). If the
offset is legal, it is added to the segment base to produce the address in physical
memory of the desired byte. Consider five segments numbered from 0 through 4. The
segments are stored in physical memory as shown in the figure. The segment table has
a separate entry for each segment, giving the start address in physical memory (the
base) and the length of that segment (the limit). For example, segment 2 is 400 bytes
long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped
onto location 4300 + 53 = 4353.
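The base/limit lookup for this example can be sketched as follows (base 4300 and limit 400 are the values given for segment 2):

```python
segment_table = {2: (4300, 400)}   # segment -> (base, limit)

def translate(seg, offset):
    """Segment-table translation with a limit check, as described above."""
    base, limit = segment_table[seg]
    if offset >= limit:
        # In hardware this traps to the operating system.
        raise MemoryError("trap: offset beyond end of segment")
    return base + offset

print(translate(2, 53))   # -> 4353
```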
(Segmentation)

LECTURE 38

VIRTUAL MEMORY

Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. Only part of the program needs to be in memory for
execution. It means that Logical address space can be much larger than physical
address space. Virtual memory allows processes to easily share files and address
spaces, and it provides an efficient mechanism for process creation.
Virtual memory is the separation of user logical memory from physical memory.
This separation allows an extremely large virtual memory to be provided for
programmers when only a smaller physical memory is available. Virtual memory
makes the task of programming much easier, because the programmer no longer
needs to worry about the amount of physical memory available.

(virtual memory that is larger than physical memory)

Virtual memory can be implemented via:

 Demand paging
 Demand segmentation

PRACTICE PROBLEM BASED ON SEGMENTATION-

Given below is an example of segmentation. There are five segments, numbered
from 0 to 4, stored in physical memory as shown. There is a separate entry for each
segment in the segment table, which contains the beginning address of the segment
in physical memory (denoted as the base) and the length of the segment (denoted
as the limit).

SOLUTION:
Segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to
byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
A reference to byte 852 of segment 3 is mapped to 3200 (the base of segment
3) + 852 = 4052.
A reference to byte 1222 of segment 0 would result in a trap to the OS, as this
segment is only 1000 bytes long.

Example of Segmentation

In order to see how segmentation functions, consider an example. Assume there
are five segments, numbered 0 through 4. All of the process segments are initially
stored in the physical memory space before the process is executed. A segment
table is also available: it contains the beginning address of each segment in
physical memory (denoted by base) and the length of each segment (denoted by
limit).
Segment 2 starts at position 4300 and is 400 bytes long. As a result, a reference to
byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A reference to
byte 852 of segment 3 is mapped to 3200 (the segment 3 base) + 852 = 4052.
Segment 0 has a length of 1000 bytes, so referencing byte 1222 of that segment
would trigger an OS trap.

LECTURE-39

DEMAND PAGING

A demand-paging system is similar to a paging system with swapping. Generally,
processes reside on secondary memory (usually a disk). When we want to execute a
process, we swap it into memory. Rather than swapping the entire process into
memory, however, only the required pages are swapped in. This is done by a lazy
swapper.

A lazy swapper never swaps a page into memory unless that page will be needed. A
swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process.

Page transfer Method:

When a process is to be swapped in, the pager guesses which pages will be used
before the process is swapped out again. Instead of swapping in a whole process, the
pager brings only those necessary pages into memory. Thus, it avoids reading into
memory pages that will not be used anyway, decreasing the swap time and the
amount of physical memory needed.

(Transfer of a paged memory to contiguous disk space)

Page Table-
 The valid-invalid bit scheme of the page table can be used to indicate which pages
are currently in memory.
 When this bit is set to "valid", the associated page is both legal and in memory. If
the bit is set to "invalid", the page either is not valid or is valid but is currently on
the disk.
 The page-table entry for a page that is brought into memory is set as usual, but the
page-table entry for a page that is not currently in memory is simply marked
invalid, or contains the address of the page on disk.

(Page table when some pages are not in main memory)

When a process references a page marked invalid, a page fault occurs: the page is
not in main memory. The procedure for handling a page fault is as follows:
1. We check an internal table for this process to determine whether the
reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but the
page has not yet been brought into memory, we page it in as follows.
3. We find a free frame (by taking one from the free-frame list).
4. We schedule a disk operation to read the desired page into the newly allocated
frame.
5. When the disk read is complete, we modify the internal table kept with the
process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The
process can now access the page as though it had always been in memory.

(Steps in handling a page fault)

Note: Pages are copied into memory only when they are required. This scheme,
which never brings a page in until it is referenced, is called pure demand paging.

Performance of Demand Paging

Let p be the probability of a page fault (0 ≤ p ≤ 1). Then the effective access time is:
Effective access time = (1 – p) x memory access time + p x page-fault time
In any case, we are faced with three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
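With assumed values (200 ns memory access, 8 ms page-fault service time), the formula shows how even a tiny fault rate dominates the effective access time:

```python
def eat(p, mem_ns=200, fault_ns=8_000_000):
    """Demand-paging effective access time in ns for fault probability p."""
    return (1 - p) * mem_ns + p * fault_ns

print(eat(0.001))   # -> ~8199.8 ns: one fault per 1000 accesses slows
                    #    memory down by a factor of about 40
```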

LECTURE 40

PAGE REPLACEMENT

Page replacement is a mechanism that selects a page to evict from memory when a
page must be loaded from disk and no free frame is available. Page replacement can
be described as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim
frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.

(Page replacement)

Page Replacement Algorithms:

The page replacement algorithms decide which memory pages to page out (swap
out, write to disk) when a page of memory needs to be allocated. We evaluate an
algorithm by running it on a particular string of memory references and computing
the number of page faults. The string of memory references is called a reference
string. The different page replacement algorithms are described as follows:

First-In-First-Out (FIFO) Algorithm:
This is the simplest page-replacement algorithm. The operating system keeps track
of all pages in memory in a queue, with the oldest page at the front. When a page
needs to be replaced, the page at the front of the queue is selected for removal.

Example-1 Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.

(FIFO page-replacement algorithm)

 Initially, all slots are empty, so when 1, 3, 0 arrive they are allocated to the
empty slots —> 3 page faults.
 When 3 comes, it is already in memory —> 0 page faults.
 Then 5 comes; it is not in memory, so it replaces the oldest page, 1 —> 1 page fault.
 6 comes; it is also not in memory, so it replaces the oldest page, 3 —> 1 page fault.
 Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 page fault.
Total = 6 page faults.
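The walkthrough above can be verified with a short FIFO simulation:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement over a reference string."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()        # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # -> 6
```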

LECTURE-41

1. Optimal Page Replacement algorithm:

In this algorithm, the page that will not be used for the longest duration of time in
the future is replaced.
Example-2: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, with 4
page frame. Find number of page fault.

Initially, all slots are empty, so when 7, 0, 1, 2 arrive they are allocated to the empty
slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes, it takes the place of 7, because 7 is not used for the longest duration
of time in the future —> 1 page fault.
0 is already there —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string there are 0 page faults, because the pages are
already in memory. Total = 6 page faults.
Optimal page replacement is perfect, but not possible in practice as the operating
system cannot know future requests. The use of Optimal Page replacement is to set
up a benchmark so that other replacement algorithms can be analyzed against it.
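The optimal walkthrough can be verified with a sketch that looks ahead in the reference string:

```python
def optimal_faults(refs, nframes):
    """Count faults for optimal (Belady) replacement: evict the resident
    page whose next use lies farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            def next_use(p):
                # Index of p's next use, or infinity if never used again.
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_faults(refs, 4))   # -> 6
```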

2. LRU Page Replacement Algorithm

In this algorithm, the page that is least recently used is replaced.

Example-3 Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2


with 4 page frames. Find number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 arrive they are allocated to the empty
slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes, it takes the place of 7, because 7 is the least recently used —> 1 page fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string there are 0 page faults, because the pages are
already in memory. Total = 6 page faults.
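The LRU walkthrough can likewise be verified by tracking the last-use time of each resident page:

```python
def lru_faults(refs, nframes):
    """Count faults for LRU replacement: evict the resident page whose
    last use is furthest in the past."""
    frames, last_use, faults = [], {}, 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                victim = min(frames, key=lambda p: last_use[p])
                frames.remove(victim)   # evict the least recently used page
            frames.append(page)
        last_use[page] = t              # record this reference's time
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))   # -> 6
```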

LECTURE-42

LRU Approximation Page Replacement algorithm


In this algorithm, Reference bits are associated with each entry in the page table.
Initially, all bits are cleared (to 0) by the operating system. As a user process
executes, the bit associated with each page referenced is set (to 1) by the hardware.
After some time, we can determine which pages have been used and which have not
been used by examining the reference bits. This algorithm can be classified into
different categories as follows:
i.Additional-Reference-Bits Algorithm-

We can keep an 8-bit byte for each page in a table in memory. At regular intervals,
a timer interrupt transfers control to the operating system. The operating system
shifts the reference bit for each page into the high-order bit of its 8-bit byte,
shifting the other bits right by 1 bit position and discarding the low-order bit.
These 8-bit shift registers contain the history of page use for the last eight time
periods. If we interpret these 8-bit bytes as unsigned integers, the page with the
lowest number is the LRU page, and it can be replaced.

ii.Second-Chance Algorithm-

The basic algorithm of second-chance replacement is a FIFO replacement algorithm.


When a page has been selected, we inspect its reference bit. If the value is 0, we
proceed to replace this page. If the reference bit is set to 1, we give that page a
second chance and move on to select the next FIFO page. When a page gets a second
chance, its reference bit is cleared and its arrival time is reset to the current time.
Thus, a page that is given a second chance will not be replaced until all other pages
are replaced.

3. Counting-Based Page Replacement

We could keep a counter of the number of references that have been made to each
page, and develop the following two schemes:
i. LFU page-replacement algorithm: The least frequently used (LFU) page-
replacement algorithm requires that the page with the smallest count be replaced.
The reason for this selection is that an actively used page should have a large
reference count.
ii. MFU page-replacement algorithm: The most frequently used (MFU) page-
replacement algorithm replaces the page with the largest count, based on the
argument that the page with the smallest count was probably just brought in and
has yet to be used.

LECTURE-43

ALLOCATION OF FRAMES

When a page fault occurs, a free frame is needed to hold the incoming page. While
the page swap is taking place, a replacement can be selected in advance and written
to the disk as the user process continues to execute. The operating system also
allocates its own buffer and table space from the free-frame list. The remaining free
frames must be divided among the user processes, for which there are two major
allocation algorithms/schemes:
1. Equal allocation
2. Proportional allocation

1. Equal allocation: The easiest way to split m frames among n processes is to
give each process an equal share, m/n frames. This scheme is called equal
allocation.

2. Proportional allocation: Here, available memory is allocated to each process
according to its size. Let the size of the virtual memory for process pi be si,
and define

S = ∑ si

Then, if the total number of available frames is m, we allocate ai frames to process pi,
where ai is approximately

ai = (si / S) × m
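The formula above can be sketched directly. The process sizes and frame count below are assumptions (the classic 62-frame example with processes of 10 and 127 pages); the fractional share is truncated to an integer, so a frame or two may be left over.

```python
# Sketch of proportional frame allocation (process sizes are assumed).

def proportional_allocation(sizes, m):
    """Allocate approximately (si / S) * m frames to each process of size si,
    truncating to an integer."""
    S = sum(sizes)
    return [int(s / S * m) for s in sizes]

# Two processes of sizes 10 and 127 pages sharing m = 62 free frames.
alloc = proportional_allocation([10, 127], 62)
```

The small process receives about 4 frames and the large one about 57, in proportion to their sizes, rather than 31 each as equal allocation would give.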

Global Versus Local Allocation

We can classify page-replacement algorithms into two broad categories: global


replacement and local replacement.
Global replacement allows a process to select a replacement frame from the set of all
frames, even if that frame is currently allocated to some other process; one process can
take a frame from another. Local replacement requires that each process select from
only its own set of allocated frames.

THRASHING

A process is thrashing if it is spending more time paging than executing. The system
then spends most of its time shuttling pages between main memory and secondary
memory, because page faults and swapping occur at a very high rate; this behavior is
known as thrashing. Thrashing leads to low CPU utilization, and the operating system,
seeing the low utilization, may even try to increase the degree of multiprogramming,
which makes the problem worse. Because of thrashing, CPU utilization becomes very
low or negligible.

(Thrashing)


LECTURE 44

Cache Memory

Cache memory is a special, very high-speed memory. The cache is a smaller, faster
memory that stores copies of the data from frequently used main-memory locations.
A CPU contains several independent caches, which store instructions and data. The
most important use of cache memory is to reduce the average time to access data
from the main memory.

Characteristics of Cache Memory

Cache memory is an extremely fast memory type that acts as a buffer between RAM and
the CPU. Cache Memory holds frequently requested data and instructions so that they are
immediately available to the CPU when needed. Cache memory is costlier than main
memory or disk memory but more economical than CPU registers. Cache Memory is used
to speed up and synchronize with a high-speed CPU.

Levels of Memory

Level 1 or Registers- Registers are memory locations inside the CPU itself that hold
the data and instructions being operated on at that instant. Commonly used registers
include the accumulator, program counter, and address register.
Level 2 or Cache memory- It is the fastest memory outside the registers, with a very
short access time, where data is temporarily stored for faster access.
Level 3 or Main Memory- It is the memory on which the computer currently works. It
is relatively small in size, and once power is off, data no longer stays in this memory
(it is volatile).
Level 4 or Secondary Memory- It is external memory that is not as fast as main
memory, but data stays permanently in this memory.

Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache. If the processor finds that the memory location is in the
cache, a Cache Hit has occurred and data is read from the cache. If the processor does not
find the memory location in the cache, a cache miss has occurred. For a cache miss, the cache
allocates a new entry and copies in data from the main memory, and then the request is
fulfilled from the contents of the cache. The performance of cache memory is frequently
measured in terms of a quantity called the hit ratio:

Hit ratio = (number of cache hits) / (total number of memory accesses)
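The hit ratio can be sketched over a small access trace. The trace, the cache capacity, and the use of LRU eviction in a fully associative cache are assumptions for illustration.

```python
# Sketch: measuring the hit ratio over an assumed access trace with a tiny
# fully associative cache using LRU eviction.

def hit_ratio(trace, capacity):
    cache, hits = [], 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.remove(addr)      # re-append below: most-recently-used at the end
        elif len(cache) == capacity:
            cache.pop(0)            # evict the least recently used entry
        cache.append(addr)
    return hits / len(trace)

ratio = hit_ratio([1, 2, 1, 3, 1, 2], capacity=2)
```

For this trace, accesses to block 1 hit twice out of six accesses, giving a hit ratio of 1/3.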

Cache Mapping
There are three different types of mapping used for the purpose of cache memory, which
are as follows:
Direct Mapping
Associative Mapping
Set-Associative Mapping

1. Direct Mapping

The simplest technique, known as direct mapping, maps each block of main memory into
only one possible cache line. Each memory block is assigned to a specific line in the
cache; if that line is already occupied when a new block needs to be loaded, the old
block is replaced. The memory address is split into two parts: an index field, which
selects the cache line, and a tag field, which is stored in the cache alongside the data
to identify which block currently occupies that line. Direct mapping's performance is
directly proportional to the hit ratio.

2. Associative Mapping

In this type of mapping, associative memory is used to store both the content and the
address of each memory word. Any block can go into any line of the cache. The word-id
bits identify which word in the block is needed, and the tag consists of all of the
remaining address bits. This enables the placement of any block at any place in the
cache memory, and it is considered the fastest and most flexible form of mapping. In
associative mapping there are no index bits (the index field has zero length).

3. Set-Associative Mapping

This form of mapping is an enhanced form of direct mapping in which the drawbacks of
direct mapping are removed. Set-associative mapping addresses the problem of possible
thrashing in the direct-mapping method. It does this by grouping a few lines together
to create a set, instead of having exactly one line that a block can map to in the
cache; a block in memory can then map to any one of the lines of a specific set. As a
result, two or more blocks of main memory can reside in the cache under the same index
address. Set-associative cache mapping combines the best of the direct and associative
cache-mapping techniques. In set-associative mapping the index bits are given by the
set-offset bits; the cache consists of a number of sets, each of which consists of a
number of lines.
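The three mappings can be contrasted with a small numeric sketch. The geometry (8 lines, 2-way sets, hence 4 sets) and the block number are assumptions for the example.

```python
# Sketch: where one main-memory block may be placed under each mapping.
# Assumed geometry: 8 cache lines, 2 lines per set -> 4 sets.

LINES, WAYS = 8, 2
SETS = LINES // WAYS

block_number = 27   # assumed main-memory block number

# Direct mapping: exactly one possible line.
direct_line = block_number % LINES

# Set-associative mapping: any line within one set.
set_index = block_number % SETS
candidate_lines = list(range(set_index * WAYS, set_index * WAYS + WAYS))

# Fully associative mapping: any of the 8 lines.
associative_lines = list(range(LINES))
```

Block 27 must go into line 3 under direct mapping, may go into either line of set 3 (lines 6 or 7) under 2-way set-associative mapping, and may go anywhere under associative mapping.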

Application of Cache Memory

Here are some of the applications of Cache Memory.


Primary Cache: A primary cache is always located on the processor chip. This cache is
small and its access time is comparable to that of processor registers.

Secondary Cache: Secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on
the processor chip.

Spatial Locality of Reference: Spatial locality says that if a memory location is
referenced, locations in close proximity to it are likely to be referenced next.
Caches exploit this by loading a whole block of adjacent words whenever one word is
accessed.

Temporal Locality of Reference: Temporal locality says that a recently referenced
word is likely to be referenced again soon; replacement policies such as least
recently used (LRU) rely on this. On a miss, not only the requested word but the
complete block containing it is loaded, because spatial locality suggests that the
neighbouring words will be referenced next.

Advantages of Cache Memory

Cache Memory is faster in comparison to main memory and secondary memory.


Programs stored by Cache Memory can be executed in less time.
The data access time of Cache Memory is less than that of the main memory.
Cache memory stores data and instructions that are regularly used by the CPU;
therefore it increases the performance of the CPU.

Disadvantages of Cache Memory

Cache Memory is costlier than primary memory and secondary memory.


Data is stored on a temporary basis in Cache Memory.
Whenever the system is turned off, data and instructions stored in cache memory get
destroyed.
The high cost of cache memory increases the price of the Computer System.
Locality of Reference
Locality of reference refers to a phenomenon in which a computer program tends to
access the same set of memory locations over a particular time period. In other words,
locality of reference is the tendency of a program to access instructions whose
addresses are near one another. The property of locality of reference is mainly
exhibited by loops and subroutine calls in a program.

In the case of loops, the central processing unit repeatedly refers to the set of
instructions that constitute the loop.
In the case of subroutine calls, the same set of instructions is fetched from memory
every time the subroutine is invoked.
References to data items also get localized, meaning the same data item is referenced
again and again.
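The loop behavior described above can be sketched minimally; the array contents are illustrative.

```python
# Sketch: a simple loop exhibits both kinds of locality.
data = list(range(8))
total = 0
for i in range(len(data)):   # 'total' and 'i' are reused every iteration (temporal locality)
    total += data[i]         # data[0], data[1], ... lie at adjacent addresses (spatial locality)
```

Each iteration re-references the loop body's instructions and the variables `total` and `i` (temporal locality) while stepping through consecutive array elements (spatial locality).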

Cache Operation-
It is based on the principle of locality of reference. There are two forms of locality
that govern which data or instructions are fetched from main memory and stored in
cache memory:

Temporal Locality-
Temporal locality means current data or instruction that is being fetched may be needed
soon. So we should store that data or instruction in the cache memory so that we can avoid
again searching in main memory for the same data.

Spatial Locality–
Spatial locality means instruction or data near to the current memory location that is being
fetched, may be needed soon in the near future. This is slightly different from the temporal
locality. Here we are talking about nearly located memory locations while in temporal
locality we were talking about the actual memory location that was being fetched.
IMPORTANT QUESTIONS

Q.1 What are the memory management requirements?

Q.2 Explain static partitioned allocation with partition sizes 300, 150, 100, 200, 20.
Assuming the first-fit method, indicate the memory status after memory requests of
sizes 80, 180, 280, 380, 30.

Q.3 Explain the difference between logical and physical addresses?

Q.4 Explain hierarchical page table and inverted page table.

Q.5 What is segmentation? Explain the basic segmentation method.

Q.6 What is virtual memory? How is it implemented?

Q.7 What is demand paging? Explain it with address translation mechanism used.

Q.8 Consider the following page reference string. 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2

Q.9 How many page faults would occur for the following replacement algorithms,
assuming four and six frames respectively?

1) LRU page replacement.


2) FIFO page replacement.
Q.10 Describe the term page-fault frequency. What is thrashing? How does the OS control it?

Q.11 Explain the difference between internal and external fragmentation in detail.

Q.12 What is swapping? Why does one need to swap areas of memory?

Q.13 Explain how segmented memory management works. Also explain in detail address
translation and relocation in segmented memory management.

Q.14 What is the purpose of a TLB? Explain the TLB lookup with the help of a block
diagram, explaining the hardware required.

Q.15 Compare and contrast paging with segmentation. In particular, describe issues
related to fragmentation.

Q.16 What is the impact of fixed partitioning on fragmentation?

Q.17 Give the relative advantages and disadvantages of load-time dynamic linking and
run-time dynamic linking. Differentiate them from static linking.

Q.18 What is meant by virtual memory? With the help of a block diagram explain the data
structures used.

Q.19 What is a page and what is a frame? How are the two related?

Q.20 Give a description of hardware support for paging.

Q.21 What is a page fault? What action does the OS take when a page fault occurs?

