Unit 4
LECTURE-31
MEMORY MANAGEMENT
In a uni-programming system, main memory is divided into two parts: one part for the
operating system (resident monitor, kernel) and one part for the user program currently
being executed.
In a multiprogramming system, the “user” part of memory must be further subdivided
to accommodate multiple processes. The task of subdivision is carried out dynamically
by the operating system and is known as memory management.
Binding of Instructions and Data to Memory
Address binding of instructions and data to memory addresses can happen at three different
stages.
1. Compile time: Compile time is when the source code is translated into object
code. If the memory location where the process will reside is known a priori,
the compiler can generate absolute code.
2. Load time: Load time is when the related program files are linked and loaded
into main memory. If the memory location is not known at compile time, the
compiler must generate relocatable code, and final binding is delayed until load time.
3. Execution time: Execution time is when the program runs in main memory on the
processor. If the process can be moved during its execution from one memory
segment to another, binding must be delayed until run time. This requires hardware
support for address maps (e.g., base and limit registers).
(Multi step processing of a user program.)
(Dynamic relocation using a relocation register)
Dynamic Loading
It loads the program and data dynamically into physical memory to obtain better
memory-space utilization.
With dynamic loading, a routine is not loaded until it is called.
The advantage of dynamic loading is that an unused routine is never loaded.
This method is useful when large amounts of code are needed to handle
infrequently occurring cases, such as error routines.
Dynamic loading does not require special support from the operating system.
Dynamic Linking
Linking postponed until execution time.
Small piece of code (stub) used to locate the appropriate memory-resident library
routine.
Stub replaces itself with the address of the routine and executes the routine.
The operating system is needed to check whether the routine is in another process's memory address space.
Dynamic linking is particularly useful for libraries.
LECTURE 32
Overlays
Keep in memory only those instructions and data that are needed at any given time.
Needed when process is larger than amount of memory allocated to it.
Implemented by the user; no special support is needed from the operating system, but
the programming design of the overlay structure is complex.
Swapping
A process can be swapped temporarily out of memory to a backing store (a large
disk), and then brought back into memory for continued execution.
Roll out, roll in: A variant of this swapping policy is used for priority-based
scheduling algorithms. If a higher-priority process arrives and wants service, the
memory manager can swap out the lower-priority process so that it can load and
execute the higher-priority process. When the higher-priority process finishes, the
lower-priority process can be swapped back in and continued. This variant of
swapping is called roll out, roll in.
Major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped. Modified versions of
swapping are found on many systems (UNIX, Linux, and Windows).
LECTURE-33
MEMORY ALLOCATION
The main memory must accommodate both the operating system and the various user
processes. We need to allocate different parts of the main memory in the most efficient way
possible. The main memory is usually divided into two partitions: one for the resident
operating system, and one for the user processes. We may place the operating system in
either low memory or high memory. The major factor affecting this decision is the location of
the interrupt vector. Since the interrupt vector is often in low memory, programmers usually
place the operating system in low memory as well.
There are following two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Noncontiguous memory allocation
1. Contiguous Memory Allocation- Here, all the processes are stored in contiguous
memory locations. To load multiple processes into memory, the Operating System
must divide memory into multiple partitions for those processes.
According to size of partitions, the multiple partition schemes are divided into two types:
i. Multiple fixed partition/ multiprogramming with fixed task(MFT)
ii. Multiple variable partition/ multiprogramming with variable task(MVT)
i. Multiple Fixed Partitions- Main memory is divided into a number of static
partitions at system generation time. A process whose size is less than
or equal to the partition size can be loaded into any available partition. If all
partitions are full and no process is in the Ready or Running state, the operating
system can swap a process out of any of the partitions and load in another process, so
that there is some work for the processor.
Advantages: Simple to implement; little operating-system overhead.
Disadvantage: Inefficient use of memory due to internal fragmentation; the maximum
number of active processes is fixed.
ii. Multiple Variable Partitions- With this partitioning, the partitions are of
variable length and number. When a process is brought into main memory, it is
allocated exactly as much memory, as it requires and no more.
Advantages:
No internal fragmentation and more efficient use of main memory.
Disadvantages:
Inefficient use of processor due to the need for compaction to counter external
fragmentation.
Partition Selection Policy- When multiple memory holes (partitions) are
large enough to contain a process, the operating system must use an algorithm to
select the hole into which the process will be loaded. The partition selection algorithms are
as follows:
First-fit: The OS scans the list of free memory holes and allocates the
process to the first hole that is large enough.
Next-fit: The search starts from the hole where the previous allocation
ended, and the process is allocated to the next hole that is large enough.
Best-fit: The OS searches the entire list of holes and allocates the process
to the smallest hole that is large enough.
Worst-fit: The OS searches the entire list of holes and allocates the process
to the largest hole that is large enough.
Fragmentation- The wasting of memory space is called fragmentation. There are two
types of fragmentation as follows:
1. Internal Fragmentation- Memory allocated to a process may be slightly larger
than the requested memory. The unused space internal to an allocated partition is
called internal fragmentation.
2. External Fragmentation- The total memory space exists to satisfy a request, but
it is not contiguous. This wasted space not allocated to any partition is called
external fragmentation. External fragmentation can be reduced by compaction.
The goal is to shuffle the memory contents to place all free memory together in
one large block. Compaction is possible only if relocation is dynamic and is
done at execution time.
LECTURE-34
The noncontiguous memory allocation schemes are: 1. Paging, 2. Segmentation,
3. Virtual memory paging (demand paging), etc.
PAGING
Main memory is divided into a number of equal-size blocks called frames. Each process
is divided into a number of equal-size blocks of the same length as the frames, called pages. A
process is loaded by loading all of its pages into available frames (which need not be contiguous).
(Paging hardware)
Process of Translation from logical to physical addresses
Every address generated by the CPU is divided into two parts: a page number (p)
and a page offset (d). The page number is used as an index into a page table.
The page table contains the base address of each page in physical memory. This
base address is combined with the page offset to define the physical memory
address that is sent to the memory unit.
If the size of the logical-address space is 2^m and the page size is 2^n addressing units
(bytes or words), then the high-order (m – n) bits of a logical address designate
the page number and the n low-order bits designate the page offset. Thus, the
logical address is divided as: page number p (m – n bits) | page offset d (n bits),
where p is an index into the page table and d is the displacement within the page.
Example: Consider a page size of 4 bytes and a physical memory of 32 bytes (8 pages), we
show how the user's view of memory can be mapped into physical memory. Logical address
0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus,
logical address 0 maps to physical address 20 (= (5 x 4) + 0). Logical address 3 (page 0,
offset 3) maps to physical address 23 (= (5 x 4) + 3). Logical address 4 is page 1, offset 0;
according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to
physical address 24 (= (6 x 4) + 0). Logical address 13 (page 3, offset 1) maps to physical
address 9 (= (2 x 4) + 1).
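The worked example above can be checked with a short Python sketch. The page table used below is the one implied by the example (page 0 → frame 5, page 1 → frame 6, page 2 → frame 1, page 3 → frame 2):

```python
PAGE_SIZE = 4                             # bytes, as in the example
page_table = {0: 5, 1: 6, 2: 1, 3: 2}     # page number -> frame number

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)     # page number, page offset
    frame = page_table[p]
    return frame * PAGE_SIZE + d          # physical address

print(translate(0))    # 20
print(translate(3))    # 23
print(translate(4))    # 24
print(translate(13))   # 9
```

In real hardware the same split is done with bit operations (shift and mask), since the page size is a power of two.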
Hardware Support for Paging:
Each operating system has its own methods for storing page tables. Most operating
systems allocate a page table for each process. A pointer to the page table is stored with
the other register values (like the instruction counter) in the process control block. When
the dispatcher is told to start a process, it must reload the user registers and define the
correct hardware page table values from the stored user page table.
LECTURE 35
Paging Hardware With TLB
The TLB is an associative and high-speed memory. Each entry in the TLB consists of
two parts: a key (or tag) and a value. The TLB is used with page tables in the
following way.
The TLB contains only a few of the page-table entries. When the CPU generates
a logical address, its page number is presented to the TLB.
If the page number is found (known as a TLB Hit), its frame number is
immediately available and is used to access memory. It takes only one memory
access.
If the page number is not in the TLB (known as a TLB miss), a memory
reference to the page table must be made. When the frame number is obtained,
we can use it to access memory. It takes two memory accesses.
In addition, the page number and frame number are added to the TLB, so that they
will be found quickly on the next reference.
If the TLB is already full of entries, the operating system must select one entry for
replacement using a replacement algorithm.
The percentage of times that a particular page number is found in the TLB is called the
hit ratio. The effective access time (EAT) is obtained as follows:
EAT = hit ratio x (TLB access time + memory access time)
+ (1 – hit ratio) x (TLB access time + 2 x memory access time)
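The EAT calculation can be sketched as follows; the 20 ns TLB time and 100 ns memory time are illustrative figures, not values from this text:

```python
def effective_access_time(hit_ratio, tlb_time, mem_time):
    hit  = tlb_time + mem_time         # TLB hit: one memory access
    miss = tlb_time + 2 * mem_time     # TLB miss: page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

# Illustrative numbers: 20 ns TLB lookup, 100 ns memory access
print(effective_access_time(0.80, 20, 100))   # ≈ 140 ns
print(effective_access_time(0.98, 20, 100))   # ≈ 122 ns
```

Raising the hit ratio from 80% to 98% cuts the effective access time noticeably, which is why TLB reach matters so much.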
Memory Protection in a Paged Environment:
Memory protection in a paged environment is accomplished by protection bits
associated with each frame, kept in the page table. One bit can define a page to be
read-write or read-only; in addition, a valid-invalid bit indicates whether the page
is in the process's logical address space.
Structure of the Page Table-
1. Hierarchical Page Tables- The page table itself is paged. With a two-level
scheme, the logical address is divided into an outer page number p1, an inner page
number p2, and an offset d, where p1 is an index into the outer page table and p2 is
the displacement within the page of the outer page table.
2. Hashed Page Tables- This scheme is applicable for address space larger than
32bits. In this scheme, the virtual page number is hashed into a page table. This
page table contains a chain of elements hashing to the same location. Virtual page
numbers are compared in this chain searching for a match. If a match is found,
the corresponding physical frame is extracted.
3. Inverted Page Table-
One entry for each real page of memory.
Entry consists of the virtual address of the page stored in that real memory
location, together with information about the process that owns that page.
Decreases memory needed to store each page table, but increases time needed to
search the table when a page reference occurs.
Shared Pages
Shared code
One copy of read-only (reentrant) code shared among processes (e.g., text
editors, compilers, window systems).
Shared code must appear in same location in the logical address space of all
processes.
Private code and data
Each process keeps a separate copy of the code and data.
The pages for the private code and data can appear anywhere in the logical address
space.
PRACTICE PROBLEMS BASED ON PAGING AND PAGE TABLE-
Problem-01:
Calculate the size of memory if its address consists of 22 bits and the memory is 2-byte addressable.
We have-
Number of addressable locations with 22 bits = 2^22
Size of memory = 2^22 x 2 bytes = 2^23 bytes = 8 MB
Problem-02:
Calculate the number of bits required in the address for memory having size of 16 GB. Assume the
memory is 4-byte addressable.
Let ‘n’ be the number of bits required. Then, size of memory = 2^n x 4 bytes. Since the given memory has a size of 16
GB, we have-
2^n x 4 bytes = 16 GB
2^n x 4 = 16 G
2^n x 2^2 = 2^34
2^n = 2^32
∴ n = 32 bits
Problem-03:
Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is
the approximate size of the page table?
Given-
● Virtual address space = 32 bits
● Physical memory = 64 MB
● Page size = 4 KB
We will consider that the memory is byte addressable.
Number of Bits in Physical Address-
Size of main memory = 64 MB = 2^26 B
Thus, number of bits in physical address = 26 bits
Number of Frames in Main Memory-
Number of frames in main memory
= Size of main memory / Frame size
= 64 MB / 4 KB
= 2^26 B / 2^12 B
= 2^14
Thus, number of bits in frame number = 14 bits
Page Offset-
Page size = 4 KB = 2^12 B
Thus, number of bits in page offset = 12 bits
So, physical address = 14 + 12 = 26 bits
Page Table Size-
Number of pages = 2^32 B / 2^12 B = 2^20 pages
Page table entry size ≈ 2 bytes (enough to hold a 14-bit frame number)
Page table size = 2^20 x 2 bytes = 2 MB
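The arithmetic of Problem-03 can be reproduced in a few lines of Python; the 2-byte entry size is the assumption from the solution (the smallest whole number of bytes holding a 14-bit frame number):

```python
KB, MB = 2**10, 2**20

virtual_bits = 32          # 32-bit virtual address space
page_size    = 4 * KB
phys_mem     = 64 * MB

pages      = 2**virtual_bits // page_size   # 2^20 page-table entries
frames     = phys_mem // page_size          # 2^14 frames
frame_bits = frames.bit_length() - 1        # 14 bits of frame number
entry_size = 2                              # bytes, assumed: holds 14 bits
table_size = pages * entry_size

print(frame_bits, table_size // MB)         # 14 2  -> page table ≈ 2 MB
```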
LECTURE 37
SEGMENTATION
Segmentation is a memory-management scheme that supports user view of memory.
A program is a collection of segments. A segment is a logical unit such as: main
program, procedure, function, method, object, local variables, global variables,
common block, stack, symbol table, arrays etc.
A logical-address space is a collection of segments. Each segment has a name and a
length. The user specifies each address by two quantities: a segment name/number
and an offset.
Hence, Logical address consists of a two tuple: <segment-number, offset>
The segment table maps the two-dimensional user-defined addresses into one-dimensional
physical addresses. Each entry in the table has: base – contains the starting physical
address where the segment resides in memory; limit – specifies the length of the segment.
The segment-table base register (STBR) points to the segment table's location in memory.
Segment-table length register (STLR) indicates number of segments used by a program.
The segment number is used as an index into the segment table. The offset d of the
logical address must be between 0 and the segment limit. If it is not, we trap to the
operating system (logical addressing attempt beyond end of segment). If the offset
is legal, it is added to the segment base to produce the address in physical memory of
the desired byte. Consider we have five segments numbered from 0 through 4. The
segments are stored in physical memory as shown in figure. The segment table has a
separate entry for each segment, giving start address in physical memory (or base)
and the length of that segment (or limit). For example, segment 2 is 400 bytes long
and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped
onto location 4300 + 53 = 4353.
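The segment-table lookup can be sketched as below. Segments 0, 2, and 3 use the base/limit values stated in the text; the entries for segments 1 and 4 are assumed values for illustration:

```python
# segment number -> (base, limit); segments 1 and 4 are assumed
segment_table = {0: (1400, 1000), 1: (6300, 400),
                 2: (4300, 400),  3: (3200, 1100), 4: (4700, 1000)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:
        # trap to the operating system: addressing beyond end of segment
        raise MemoryError("trap: addressing beyond end of segment")
    return base + offset

print(translate(2, 53))    # 4353
print(translate(3, 852))   # 4052
```

A reference such as `translate(0, 1222)` raises the trap, since segment 0 is only 1000 bytes long.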
(Segmentation)
LECTURE 38
VIRTUAL MEMORY
Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. Only part of the program needs to be in memory for
execution. It means that Logical address space can be much larger than physical
address space. Virtual memory allows processes to easily share files and address
spaces, and it provides an efficient mechanism for process creation.
Virtual memory is the separation of user logical memory from physical memory.
This separation allows an extremely large virtual memory to be provided for
programmers when only a smaller physical memory is available. Virtual memory
makes the task of programming much easier, because the programmer no longer
needs to worry about the amount of physical memory available.
(virtual memory that is larger than physical memory)
Virtual memory can be implemented via:
Demand paging
Demand segmentation
Given below is the example of the segmentation, There are five segments numbered
from 0 to 4. These segments will be stored in Physical memory as shown. There is a
separate entry for each segment in the segment table which contains the beginning
entry address of the segment in the physical memory( denoted as the base) and also
contains the length of the segment(denoted as limit).
SOLUTION:
Segment 2 is 400 bytes long and begins at location 4300. Thus in this case a
reference to byte 53 of segment 2 is mapped onto the location 4300 (4300+53=4353).
A reference to segment 3, byte 852 is mapped to 3200 (the base of segment
3) + 852 = 4052.
A reference to byte 1222 of segment 0 would result in the trap to the OS, as the
length of this segment is 1000 bytes.
Example of Segmentation
LECTURE-39
DEMAND PAGING
With demand paging, pages are loaded into memory only when they are demanded
during program execution. A lazy swapper never swaps a page into memory unless
that page will be needed. A swapper manipulates entire processes, whereas a pager
is concerned with the individual pages of a process.
When a process is to be swapped in, the pager guesses which pages will be used
before the process is swapped out again. Instead of swapping in a whole process, the
pager brings only those necessary pages into memory. Thus, it avoids reading into
memory pages that will not be used anyway, decreasing the swap time and the
amount of physical memory needed.
Page Table-
The valid-invalid bit scheme of Page table can be used for indicating which pages
are currently in memory.
When this bit is set to "valid", the associated page is both legal and in memory.
If the bit is set to "invalid", the page either is not valid (not in the logical
address space of the process) or is valid but is currently on the disk.
The page-table entry for a page that is brought into memory is set as usual, but
the page table entry for a page that is not currently in memory is simply marked
invalid, or contains the address of the page on disk.
When a process references an invalid page, a page fault occurs: the page is not in
main memory. The procedure for handling a page fault is as follows:
1. We check an internal table for this process, to determine whether the
reference was a valid or invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but we
have not yet brought that page into memory, we page it in.
3. We find a free frame (by taking one from the free-frame list).
4. We schedule a disk operation to read the desired page into the newly allocated
frame.
5. When the disk read is complete, we modify the internal table kept with the
process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The
process can now access the page as though it had always been in memory.
(Steps in handling a page fault)
Note: The pages are copied into memory, only when they are required. This mechanism
is called Pure Demand Paging.
Let p be the probability of a page fault (0 ≤ p ≤ 1). Then the effective access time is
Effective access time = (1 – p) x memory access time + p x page fault time
In any case, we are faced with three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
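The effective-access-time formula above can be evaluated with a short sketch; the 200 ns memory access time and 8 ms page-fault service time are assumed figures for illustration:

```python
def effective_access_time(p, mem_ns, fault_ns):
    """EAT = (1 - p) x memory access time + p x page-fault time."""
    return (1 - p) * mem_ns + p * fault_ns

# Assumed figures: 200 ns memory access, 8 ms page-fault service time
print(effective_access_time(0.001, 200, 8_000_000))   # ≈ 8199.8 ns
```

Even a fault rate of one in a thousand slows the effective access by a factor of about forty, which shows why the page-fault rate must be kept very low.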
LECTURE 40
PAGE REPLACEMENT
Page replacement is the mechanism that selects a page in memory to evict (writing it
back to disk if necessary) when a free frame is needed to bring in a new page. Page
replacement can be described as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim
frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.
(Page replacement)
The page replacement algorithms decide which memory pages to page out (swap
out, write to disk) when a page of memory needs to be allocated. We evaluate an
algorithm by running it on a particular string of memory references and computing
the number of page faults. The string of memory references is called a reference
string. The different page replacement algorithms are described as follows:
First-In-First-Out (FIFO) Algorithm:
This is the simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in memory in a queue; the oldest page is at the front of the queue.
When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example-1 Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.
Initially, all slots are empty, so when 1, 3, 0 come they are allocated
to the empty slots —> 3 page faults.
When 3 comes, it is already in memory so —> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest
page, i.e. 1 —> 1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page, i.e.
3 —> 1 page fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 page fault.
Total = 6 page faults.
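A minimal FIFO simulation, reproducing the 6 faults of the example above (a sketch, not an OS implementation):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for reference string refs under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                       # hit: nothing to do
        faults += 1
        if len(frames) == nframes:         # memory full: evict the oldest page
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # 6 page faults
```

FIFO can also exhibit Belady's anomaly: for the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 it produces 9 faults with 3 frames but 10 faults with 4 frames.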
LECTURE-41
Optimal Page Replacement Algorithm:
In this algorithm, the page that will not be used for the longest duration of
time in the future is replaced.
Example-2: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, with 4
page frame. Find number of page fault.
Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —>4 Page
faults
0 is already there so —>0 Page fault.
when 3 came it will take the place of 7 because it is not used for the longest duration of
time in the future.—>1 Page fault.
0 is already there so —> 0 page fault.
4 will take the place of 1 —> 1 page fault.
Now for the further page reference string —>0 Page fault because they are already
available in the memory.
Optimal page replacement is perfect, but not possible in practice as the operating
system cannot know future requests. The use of Optimal Page replacement is to set
up a benchmark so that other replacement algorithms can be analyzed against it.
2. LRU Page Replacement Algorithm
Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —>4
Page faults
0 is already there so —>0 Page fault.
when 3 came it will take the place of 7 because it is least recently used —>1 Page fault
0 is already in memory so —> 0 page fault.
4 will take the place of 1 —> 1 page fault.
Now for the further page reference string —>0 Page fault because they are
already available in the memory.
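A minimal LRU simulation, reproducing the 6 faults of the example above (an `OrderedDict` keeps the pages in recency order; a sketch, not an OS implementation):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults for reference string refs under LRU replacement."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # mark as most recently used
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)     # evict the least recently used page
        frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))   # 6 page faults
```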
LECTURE-42
LRU APPROXIMATION PAGE REPLACEMENT
i. Additional-Reference-Bits Algorithm-
We can keep an 8-bit byte for each page in a table in memory. At regular
intervals, a timer interrupt transfers control to the operating system. The operating
system shifts the reference bit for each page into the high-order bit of its 8-bit byte,
shifting the other bits right by 1 bit position and discarding the low-order bit. These
8-bit shift registers contain the history of page use for the last eight time periods.
If we interpret these 8-bits as unsigned integers, the page with the lowest number
is the LRU page, and it can be replaced.
ii. Second-Chance Algorithm-
The basic algorithm is FIFO, but each page has a reference bit. When a page is
selected for replacement, its reference bit is inspected. If the bit is 0, the page is
replaced; if it is 1, the page is given a second chance: the bit is cleared and the
page is moved to the rear of the queue.
iii. Counting-Based Algorithms-
We could keep a counter of the number of references that have been made to each
page, and develop the following two schemes:
LFU page-replacement algorithm: The least frequently used (LFU) page-
replacement algorithm requires that the page with the smallest count be replaced.
The reason for this selection is that an actively used page should have a large
reference count.
MFU page-replacement algorithm: The most frequently used (MFU) page-
replacement algorithm replaces the page with the largest count, based on the
argument that the page with the smallest count was probably just brought in and
has yet to be used.
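Second-chance victim selection can be sketched as below; the queue contents and reference bits are made-up values for illustration:

```python
from collections import deque

def second_chance_victim(queue, ref_bit):
    """queue: deque of page numbers in FIFO order; ref_bit: page -> 0/1.
    Returns the victim page, giving referenced pages a second chance."""
    while True:
        page = queue.popleft()
        if ref_bit.get(page, 0):
            ref_bit[page] = 0       # clear the bit and move the page to the rear
            queue.append(page)
        else:
            return page             # reference bit is 0: replace this page

q = deque([2, 5, 7, 9])             # hypothetical FIFO order
bits = {2: 1, 5: 0, 7: 1, 9: 0}     # hypothetical reference bits
print(second_chance_victim(q, bits))   # 5  (page 2 got a second chance)
```

If every page's bit is set, the algorithm clears them all and degenerates to plain FIFO, evicting the original front page on the second pass.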
LECTURE-43
ALLOCATION OF FRAMES
When a page fault occurs, a free frame should be available to hold the new page.
While a page swap is taking place, a replacement victim can be selected and written
to the disk as the user process continues to execute. The operating system also
allocates its buffer and table space from the free-frame list.
Two major allocation algorithms/schemes:
1. Equal allocation: With m frames and n processes, each process gets m/n frames.
2. Proportional allocation: Frames are allocated in proportion to process size.
Let s_i be the size of process p_i, and
S = ∑ s_i
Then, if the total number of available frames is m, we allocate a_i frames to process p_i,
where a_i is approximately
a_i = s_i / S x m
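The proportional formula can be sketched in one line of Python; the 62 free frames and process sizes of 10 and 127 pages below are illustrative numbers:

```python
def proportional_allocation(sizes, m):
    """Allocate m frames in proportion to process sizes: a_i = floor(s_i / S * m)."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

# Illustrative: 62 free frames, two processes of 10 and 127 pages
print(proportional_allocation([10, 127], 62))   # [4, 57]
```

Since the floor can leave a few frames unallocated (4 + 57 = 61 of 62 here), a real allocator would hand the remainder out by some tie-breaking rule.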
THRASHING
The system spends most of its time shuttling pages between main memory and secondary
memory due to frequent page faults. This behavior is known as thrashing.
A process is thrashing if it is spending more time paging than executing. This leads to:
low CPU utilization and the operating system thinks that it needs to increase the degree
of multiprogramming.
Thrashing is when the page fault and swapping happens very frequently at a higher
rate, and then the operating system has to spend more time swapping these pages.
This state in the operating system is known as thrashing. Because of thrashing, the
CPU utilization is going to be reduced or negligible.
(Thrashing)
LECTURE 44
Cache Memory
Cache memory is a special, very high-speed memory. The cache is a smaller, faster
memory that stores copies of the data from frequently used main memory locations. There
are several independent caches in a CPU, which store instructions and data. The most
important use of cache memory is to reduce the average time to access data from the
main memory.
Cache memory is an extremely fast memory type that acts as a buffer between RAM and
the CPU. Cache Memory holds frequently requested data and instructions so that they are
immediately available to the CPU when needed. Cache memory is costlier than main
memory or disk memory but more economical than CPU registers. Cache Memory is used
to speed up and synchronize with a high-speed CPU.
Levels of Memory
Level 1 or Registers- Registers hold the data and instructions the CPU is operating on
immediately. Commonly used registers are the accumulator, program counter,
address register, etc.
Level 2 or Cache memory- It is a very fast memory with a short access time, where data
is temporarily stored for faster access.
Level 3 or Main Memory- It is the memory on which the computer works currently. It is
small in size and once power is off data no longer stays in this memory.
Level 4 or Secondary Memory- It is external memory that is not as fast as the main
memory but data stays permanently in this memory.
Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache. If the processor finds that the memory location is in the
cache, a Cache Hit has occurred and data is read from the cache. If the processor does not
find the memory location in the cache, a cache miss has occurred. For a cache miss, the cache
allocates a new entry and copies in data from the main memory, and then the request is
fulfilled from the contents of the cache. The performance of cache memory is frequently
measured in terms of a quantity called Hit ratio.
Cache Mapping
There are three different types of mapping used for the purpose of cache memory, which is as
follows:
Direct Mapping
Associative Mapping
Set-Associative Mapping
1. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory into
only one possible cache line. or In Direct mapping, assign each memory block to a specific
line in the cache. If a line is previously taken up by a memory block when a new block needs
to be loaded, the old block is trashed. The memory address is split into an index
field and a tag field. The index selects the cache line, and the cache stores the tag
along with the data block. Direct mapping's performance is directly proportional to the hit ratio.
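The index/tag split for a direct-mapped cache can be sketched as below; the geometry (16-byte blocks, 64 lines) is an assumed example, not a value from this text:

```python
# Assumed geometry: 16-byte blocks (4 offset bits), 64 lines (6 index bits)
OFFSET_BITS, INDEX_BITS = 4, 6

def split_address(addr):
    """Split a memory address into (tag, cache line index, byte offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index  = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag    = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(0x1234))   # (4, 35, 4): tag 4, line 35, byte 4
```

On a lookup, the index picks exactly one line and the stored tag is compared against the address tag: a match is a cache hit, a mismatch is a miss that evicts the old block.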
2. Associative Mapping
In this type of mapping, associative memory is used to store the content and addresses of
the memory word. Any block can go into any line of the cache. This means that the word id
bits are used to identify which word in the block is needed, but the tag becomes all of the
remaining bits. This enables the placement of any word at any place in the cache memory.
It is considered to be the fastest and most flexible mapping form. In associative mapping,
the index bits are zero
3. Set-Associative Mapping
This form of mapping is an enhanced form of direct mapping where the drawbacks of
direct mapping are removed. Set associative addresses the problem of possible thrashing in
the direct mapping method. It does this by saying that instead of having exactly one line
that a block can map to in the cache,we will group a few lines together creating a set. Then
a block in memory can map to any one of the lines of a specific set. Set-associative
mapping allows each word that is present in the cache can have two or more words in the
main memory for the same index address. Set associative cache mapping combines the best
of direct and associative cache mapping techniques. In set associative mapping the index
bits are given by the set offset bits. In this case, the cache consists of a number of sets, each
of which consists of a number of lines.
Secondary Cache: Secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on
the processor chip.
Spatial Locality of Reference: Spatial locality of reference says that if a memory
location is referenced, there is a high chance that locations in close proximity to it
will be referenced in the near future.
Temporal Locality of Reference: Temporal locality of reference says that a recently
referenced location is likely to be referenced again soon; this is why algorithms such
as least recently used (LRU) work well. When a page fault occurs on a word, not only
that word but the complete page containing it is loaded into main memory, because by
spatial locality the neighboring words are likely to be referenced next.
Locality of Reference- Locality of reference refers to the tendency of a program to access the
same set of memory locations for a particular time period. In other words, Locality of
Reference refers to the tendency of the computer program to access instructions whose
addresses are near one another. The property of locality of reference is mainly shown by
loops and subroutine calls in a program.
In case of loops in program control processing unit repeatedly refers to the set of
instructions that constitute the loop.
In case of subroutine calls, every time the set of instructions are fetched from memory.
References to data items also get localized that means same data item is referenced again
and again.
Cache Operation-
It is based on the principle of locality of reference. There are two ways with which data or
instruction is fetched from main memory and get stored in cache memory. These two ways
are the following:
Temporal Locality-
Temporal locality means current data or instruction that is being fetched may be needed
soon. So we should store that data or instruction in the cache memory so that we can avoid
again searching in main memory for the same data.
Spatial Locality–
Spatial locality means instruction or data near to the current memory location that is being
fetched, may be needed soon in the near future. This is slightly different from the temporal
locality. Here we are talking about nearly located memory locations while in temporal
locality we were talking about the actual memory location that was being fetched.
IMPORTANT QUESTIONS
Q.1 What are the memory management requirements?
Q.2 Explain static partitioned allocation with partition sizes 300, 150, 100, 200, 20. Assuming
the first-fit method, indicate the memory status after memory requests of sizes 80, 180, 280, 380,
30.
Q.7 What is demand paging? Explain it with address translation mechanism used.
Q.9 How many page faults would occur for the following replacement algorithm, assuming
four and six frames respectively?
Q.12 What is swapping? Why does one need to swap areas of memory?
Q.13 Explain how segmented memory management works. Also explain in detail address
translation and relocation in segmented memory management.
Q.14 What is the purpose of a TLB? Explain the TLB lookup with the help of a block
diagram, explaining the hardware required.
Q.15 Compare and contrast the paging with segmentation. In particular, describe issues
related to fragmentation
Q.17 Give the relative advantages and disadvantages of load time dynamic linking and run-
time dynamic linking. Differentiate them from static linking
Q.18 What is meant by virtual memory? With the help of a block diagram explain the data
structures used.
Q.19 What is a page and what is a frame. How are the two related?
Q.20 Give description of hard-ware support to paging
Q.21 What is a page fault? What action does the OS take when a page fault occurs?