Module 4
3.7 Swapping
• A process must be in memory to be executed.
• A process can be
→ swapped temporarily out of memory to a backing-store and
→ then brought back into memory for continued execution.
• The backing-store is a fast disk large enough to accommodate copies of all memory-images for
all users.
• Roll out/Roll in is a swapping variant used for priority-based scheduling algorithms.
A lower-priority process is swapped out so that a higher-priority process can be loaded and
executed.
Once the higher-priority process finishes, the lower-priority process can be swapped back in
and continued (Figure 3.12).
3.8.3 Fragmentation
• Two types of memory fragmentation: 1) Internal fragmentation and
2) External fragmentation
1) Internal Fragmentation
• The general approach is to
→ break the physical-memory into fixed-sized blocks and
→ allocate memory in units based on block size (Figure 3.14).
• The memory allocated to a process may be slightly larger than the requested-memory.
• The difference between requested-memory and allocated-memory is called internal fragmentation, i.e.
unused memory that is internal to a partition.
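For illustration, a short calculation (a sketch with assumed values: 4 KB blocks and a request of 72,766 bytes) shows how rounding up to whole blocks produces internal fragmentation:

# Internal fragmentation: memory is handed out in whole fixed-size blocks,
# so the last block of an allocation is usually only partly used.
BLOCK_SIZE = 4096                                # assumed block size in bytes

def internal_fragmentation(request_bytes: int) -> int:
    """Return the unused bytes inside the last allocated block."""
    blocks = -(-request_bytes // BLOCK_SIZE)     # ceiling division
    return blocks * BLOCK_SIZE - request_bytes

# A request of 72,766 bytes needs 18 blocks (73,728 bytes),
# leaving 962 bytes of internal fragmentation.
print(internal_fragmentation(72766))             # 962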
2) External Fragmentation
• External fragmentation occurs when there is enough total memory-space to satisfy a request but the
available-spaces are not contiguous. (i.e. storage is fragmented into a large number of small holes).
• Both the first-fit and best-fit strategies for memory-allocation suffer from external fragmentation.
• Statistical analysis of first-fit reveals that
→ given N allocated blocks, another 0.5N blocks will be lost to fragmentation,
i.e. one-third of memory may be unusable.
This property is known as the 50-percent rule.
• Two solutions to external fragmentation (Figure 3.15):
1) Compaction
The goal is to shuffle the memory-contents to place all free memory together in one large
hole.
Compaction is possible only if relocation is
→ dynamic and
→ done at execution-time.
2) Permit the logical-address space of the processes to be non-contiguous.
This allows a process to be allocated physical-memory wherever such memory is available.
Two techniques achieve this solution:
1) Paging and
2) Segmentation.
3.9 Paging
• Paging is a memory-management scheme.
• This permits the physical-address space of a process to be non-contiguous.
• This also solves the considerable problem of fitting memory-chunks of varying sizes onto the
backing-store.
• Traditionally: Support for paging has been handled by hardware.
Recent designs: The hardware & OS are closely integrated.
• The page-size (like the frame size) is defined by the hardware (Figure 3.18).
• If the size of the logical-address space is 2^m, and the page-size is 2^n addressing-units (bytes or words),
then the high-order m-n bits of a logical-address designate the page-number, and the n low-order bits
designate the page-offset.
Figure 3.18 Free frames (a) before allocation and (b) after allocation
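As a small sketch of the split described above (assuming m = 32 and n = 12, i.e. 4 KB pages), the page-number and page-offset can be extracted with a shift and a mask:

# Split a logical address into (page-number, page-offset).
# Assumed sizes: 32-bit logical address (m = 32), 4 KB pages (n = 12).
M, N = 32, 12
OFFSET_MASK = (1 << N) - 1                       # low-order n bits

def split_address(logical_address: int):
    page_number = logical_address >> N           # high-order m-n bits
    page_offset = logical_address & OFFSET_MASK  # low-order n bits
    return page_number, page_offset

# Example: address 0x004032A7 -> page 0x403, offset 0x2A7
print([hex(x) for x in split_address(0x004032A7)])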
3.9.3 Protection
• Memory-protection is achieved by protection-bits for each frame.
• The protection-bits are kept in the page-table.
• One protection-bit can define a page to be read-write or read-only.
• Every reference to memory goes through the page-table to find the correct frame-number.
• Firstly, the physical-address is computed. At the same time, the protection-bit is checked to verify
that no writes are being made to a read-only page.
• An attempt to write to a read-only page causes a hardware-trap to the OS (or memory-protection
violation).
Valid-Invalid Bit
• This bit is attached to each entry in the page-table (Figure 3.20).
1) Valid: The page is in the process’ logical-address space.
2) Invalid: The page is not in the process’ logical-address space.
• Illegal addresses are trapped by use of valid-invalid bit.
• The OS sets this bit for each page to allow or disallow access to the page.
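A minimal model of the checks described above (the page-table layout and trap behaviour here are illustrative assumptions, not any particular architecture):

# Model a page-table entry with a frame-number plus protection bits.
# A reference is allowed only if the entry is valid, and a write is allowed
# only if the page is not read-only; otherwise the hardware traps to the OS.
PAGE_SIZE = 4096                                  # assumed page size

class PageTableEntry:
    def __init__(self, frame, valid=True, read_only=False):
        self.frame, self.valid, self.read_only = frame, valid, read_only

def translate(page_table, logical_address, is_write=False):
    page, offset = divmod(logical_address, PAGE_SIZE)
    entry = page_table.get(page)
    if entry is None or not entry.valid:
        raise MemoryError("trap: invalid page reference")        # illegal address
    if is_write and entry.read_only:
        raise PermissionError("trap: write to read-only page")   # protection violation
    return entry.frame * PAGE_SIZE + offset       # physical address

page_table = {0: PageTableEntry(frame=5), 1: PageTableEntry(frame=9, read_only=True)}
print(translate(page_table, 100))                 # page 0 maps to frame 5
try:
    translate(page_table, PAGE_SIZE + 8, is_write=True)
except PermissionError as e:
    print(e)                                      # trap: write to read-only page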
3.10.1 Hierarchical Paging
• Problem: Most computers support a large logical-address space (2^32 to 2^64). In these systems, the
page-table itself becomes excessively large.
Solution: Divide the page-table into smaller pieces.
Two Level Paging Algorithm
• The page-table itself is also paged (Figure 3.22).
• This is also known as a forward-mapped page-table because address translation works from the outer
page-table inwards.
• For example (Figure 3.23):
Consider the system with a 32-bit logical-address space and a page-size of 4 KB.
A logical-address is divided into
→ 20-bit page-number and
→ 12-bit page-offset.
Since the page-table is paged, the page-number is further divided into
→ 10-bit page-number and
→ 10-bit page-offset.
Thus, a logical-address is divided as: p1 (10 bits) | p2 (10 bits) | d (12 bits), where p1 is an index into
the outer page-table, p2 is the displacement within the page of the outer page-table, and d is the
page-offset.
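A short sketch of this division (assuming the 10/10/12 split above):

# Two-level paging: split a 32-bit logical address into
#   p1 (10 bits)  index into the outer page-table
#   p2 (10 bits)  index into the inner page-table selected by p1
#   d  (12 bits)  offset within the final page
def split_two_level(addr: int):
    d  = addr & 0xFFF           # low 12 bits
    p2 = (addr >> 12) & 0x3FF   # next 10 bits
    p1 = (addr >> 22) & 0x3FF   # top 10 bits
    return p1, p2, d

print(split_two_level(0x12345678))   # (72, 837, 1656)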
3.11 SEGMENTATION
Basic method:
Most users do not think of memory as a linear array of bytes; rather, they view memory as a
collection of variable-sized segments, each dedicated to a particular use such as code,
data, stack, heap, etc.
A logical-address space is a collection of segments. Each segment has a name and a length. An
address specifies both the segment name and the offset within the segment.
The user specifies each address by two quantities: a segment name and an offset.
For simplicity, the segments are numbered and referred to by a segment-number. So the
logical address consists of <segment-number, offset>.
Hardware support:
We must define an implementation to map the 2D user-defined addresses into 1D physical
addresses.
This mapping is effected by a segment-table. Each entry in the segment-table has a
segment-base and a segment-limit. The segment-base contains the starting physical address
where the segment resides, and the segment-limit specifies the length of the segment.
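A minimal sketch of the base/limit check performed by the segmentation hardware (the segment numbers, bases, and limits below are made-up example values):

# Segment-table entry: (base, limit). A logical address <segment, offset> is
# legal only if offset < limit; the physical address is then base + offset.
segment_table = {          # hypothetical segments
    0: (1400, 1000),       # code
    1: (6300,  400),       # data
    2: (4300,  400),       # stack
}

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # 4353
print(translate(0, 999))   # 2399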
Advantages:
Eliminates internal fragmentation.
Provides for virtual growth.
Allows dynamic segment growth.
Assists dynamic linking.
Segmentation is visible to the programmer.
4.1 Virtual Memory
• In many cases, the entire program is not needed.
For example:
→ Unusual error-conditions are almost never executed.
→ Arrays & lists are often allocated more memory than needed.
→ Certain options & features of a program may be used rarely.
• Benefits of executing a program that is only partially in memory:
More programs could be run at the same time.
Programmers could write for a large virtual-address space and need no longer use overlays.
Less I/O would be needed to load/swap programs into memory, so each user program would
run faster.
• Virtual Memory is a technique that allows the execution of processes that are not completely in
memory (Figure 4.1).
VM involves the separation of logical-memory as perceived by users from physical-memory.
VM allows files and memory to be shared by several processes through page-sharing.
Logical-address space can therefore be much larger than physical-address space.
• Virtual-memory can be implemented by:
1) Demand paging and
2) Demand segmentation.
• The virtual (or logical) address-space of a process refers to the logical (or virtual) view of how a
process is stored in memory.
• Physical-memory may be organized in page-frames and that the physical page-frames assigned to a
process may not be contiguous.
• It is up to the MMU to map logical-pages to physical page-frames in memory.
Figure 4.1 Diagram showing virtual memory that is larger than physical-memory
• A page-fault occurs when the process tries to access a page that was not brought into memory.
• Procedure for handling the page-fault (Figure 4.4):
1) Check an internal-table to determine whether the reference was a valid or an invalid memory
access.
2) If the reference is invalid, we terminate the process.
If reference is valid, but we have not yet brought in that page, we now page it in.
3) Find a free-frame (by taking one from the free-frame list, for example).
4) Read the desired page into the newly allocated frame.
5) Modify the internal-table and the page-table to indicate that the page is now in memory.
6) Restart the instruction that was interrupted by the trap.
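The procedure above can be sketched in code (purely illustrative; the table structures and the disk read are stand-ins, not a real kernel interface):

# Illustrative demand-paging fault handler following steps 1-6 above.
def handle_page_fault(page, page_table, valid_pages, free_frames, backing_store, memory):
    if page not in valid_pages:                  # 1-2) invalid reference
        raise RuntimeError("terminate process: illegal memory access")
    frame = free_frames.pop()                    # 3) take a frame from the free-frame list
    memory[frame] = backing_store[page]          # 4) read the desired page from disk
    page_table[page] = frame                     # 5) mark the page as now in memory
    return frame                                 # 6) the trapped instruction is then restarted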
• Pure demand paging: Never bring pages into memory until required.
• Some programs may access several new pages of memory with each instruction, causing multiple
page-faults and poor performance.
• Programs tend to have locality of reference, so this results in reasonable performance from demand
paging.
• Hardware support:
1) Page-table
Mark an entry invalid through a valid-invalid bit.
2) Secondary memory
It holds pages that are not present in main-memory.
It is usually a high-speed disk.
It is known as the swap device (and the section of disk used for this purpose is known as
swap space).
4.2.2 Performance
• Demand paging can significantly affect the performance of a computer-system.
• Let p be the probability of a page-fault (0≤p ≤1).
if p = 0, no page-faults.
if p = 1, every reference is a fault.
• Effective Access Time (EAT) = (1 - p) * memory-access-time + p * page-fault-time
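A quick numeric illustration of the formula (the 200 ns memory-access time and 8 ms page-fault service time are assumed example values):

# Effective Access Time: EAT = (1 - p) * memory_access + p * page_fault_time
def eat(p, memory_access_ns=200, page_fault_ns=8_000_000):
    return (1 - p) * memory_access_ns + p * page_fault_ns

print(eat(0.0))      # 200.0 ns   -> no page-faults
print(eat(0.001))    # 8199.8 ns  -> one fault per 1000 accesses slows memory ~40x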
• A page-fault causes the following events to occur:
1) Trap to the OS.
2) Save the user-registers and process-state.
3) Determine that the interrupt was a page-fault.
4) Check that the page-reference was legal and determine the location of the page on the disk.
5) Issue a read from the disk to a free frame:
a. Wait in a queue for this device until the read request is serviced.
b. Wait for the device seek time.
c. Begin the transfer of the page to a free frame.
6) While waiting, allocate the CPU to some other user.
7) Receive an interrupt from the disk I/O subsystem (I/O completed).
8) Save the registers and process-state for the other user (if step 6 is executed).
9) Determine that the interrupt was from the disk.
10) Correct the page-table and other tables to show that the desired page is now in memory.
11) Wait for the CPU to be allocated to this process again.
12) Restore the user-registers, process-state, and new page-table, and then resume the
interrupted instruction.
4.3 Copy-on-Write
• This technique allows the parent and child processes initially to share the same pages.
• If either process writes to a shared-page, a copy of the shared-page is created.
• For example:
Assume that the child process attempts to modify a page containing portions of the stack,
with the pages set to be copy-on-write.
OS will then create a copy of this page, mapping it to the address space of the child process.
Child process will then modify its copied page & not the page belonging to the parent process.
• Problem: If no frames are free, 2 page transfers (1 out & 1 in) are required. This situation
→ doubles the page-fault service-time and
→ increases the EAT accordingly.
Solution: Use a modify-bit (or dirty bit).
• Each page or frame has a modify-bit associated with it in the hardware.
• The modify-bit for a page is set by the hardware whenever any word is written into the page
(indicating that the page has been modified).
• Working:
1) When we select a page for replacement, we examine its modify-bit.
2) If the modify-bit = 1, the page has been modified. So, we must write the page to the disk.
3) If the modify-bit = 0, the page has not been modified. So, we need not write the page to the
disk; it is already there.
• Advantage:
1) Can reduce the time required to service a page-fault.
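A sketch of that write-back decision (the dictionaries below merely model the bookkeeping the hardware and OS maintain):

# Only a modified (dirty) victim page needs to be written back to the disk.
def evict(victim_page, dirty_bits, page_table, memory, backing_store):
    frame = page_table.pop(victim_page)
    if dirty_bits.get(victim_page, False):           # modify-bit = 1
        backing_store[victim_page] = memory[frame]   # write the page out
    # modify-bit = 0: the copy on disk is already up to date, no write needed
    return frame                                     # the frame is now free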
• We must solve 2 major problems to implement demand paging:
1) Develop a Frame-allocation algorithm:
If we have multiple processes in memory, we must decide how many frames to allocate to
each process.
2) Develop a Page-replacement algorithm:
We must select the frames that are to be replaced.
The first three references (7, 0, 1) cause page-faults, and these pages are brought into the
empty frames.
The next reference (2) replaces page 7, because page 7 was brought in first.
Since 0 is the next reference and 0 is already in memory, we have no fault for this reference.
The first reference to 3 results in replacement of page 0, since it is now first in line.
This process continues till the end of the string.
There are fifteen faults altogether.
• Advantage:
1) Easy to understand & program.
• Disadvantages:
1) Performance is not always good (Figure 4.8).
2) A bad replacement choice increases the page-fault rate (Belady's anomaly).
• For some algorithms, the page-fault rate may increase as the number of allocated frames increases.
This is known as Belady's anomaly.
• Example: Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
For this example, the number of faults for four frames (ten) is greater than the number of faults
for three frames (nine)!
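The anomaly is easy to reproduce with a short FIFO simulation (a sketch; the helper below is not taken from any textbook code):

from collections import deque

# Count FIFO page-faults for a reference string and a given number of frames.
def fifo_faults(refs, frames):
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:            # no free frame: evict the oldest page
                resident.discard(queue.popleft())
            queue.append(page)
            resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -> more frames, more faults (Belady's anomaly)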
The first three references cause faults that fill the three empty frames.
The reference to page 2 replaces page 7, because page 7 will not be used until reference 18.
Page 0 will next be used at reference 5, and page 1 at reference 14.
With only nine page-faults, optimal replacement is much better than a FIFO algorithm, which
results in fifteen faults.
• Advantage:
1) Guarantees the lowest possible page-fault rate for a fixed number of frames.
• Disadvantage:
1) Difficult to implement, because it requires future knowledge of the reference string.
The first five faults are the same as those for optimal replacement.
When the reference to page 4 occurs, LRU sees that of the three frames, page 2 was used
least recently. Thus, the LRU replaces page 2.
The LRU algorithm produces twelve faults.
• Two methods of implementing LRU:
1) Counters
Each page-table entry is associated with a time-of-use field.
A counter(or logical clock) is added to the CPU.
The clock is incremented for every memory-reference.
Whenever a reference to a page is made, the contents of the clock register are copied to the
time-of-use field in the page-table entry for that page.
We replace the page with the smallest time value.
2) Stack
Keep a stack of page-numbers (Figure 4.11).
Whenever a page is referenced, the page is removed from the stack and put on the top.
The most recently used page is always at the top of the stack.
The least recently used page is always at the bottom.
The stack is best implemented as a doubly linked-list.
Advantage:
1) Does not suffer from Belady's anomaly.
Disadvantage:
1) Few computer systems provide sufficient h/w support for true LRU page replacement.
Both LRU & OPT are called stack algorithms.
Figure 4.11 Use of a stack to record the most recent page references
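To compare the three policies discussed above, a small simulator can replay a reference string. Assuming the standard 20-entry string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 (which matches the fault counts quoted in these notes) and 3 frames, it reports 15 faults for FIFO, 9 for OPT, and 12 for LRU:

# Generic page-replacement simulator: `victim` picks the page to evict.
def simulate(refs, frames, victim):
    resident, faults = [], 0              # resident pages, kept in load order
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:
            resident.remove(victim(resident, refs, i))
        resident.append(page)
    return faults

def fifo(resident, refs, i):              # oldest loaded page
    return resident[0]

def lru(resident, refs, i):               # least recently used page
    return min(resident, key=lambda p: max(j for j in range(i) if refs[j] == p))

def opt(resident, refs, i):               # page not needed for the longest time
    future = refs[i + 1:]
    return max(resident, key=lambda p: future.index(p) if p in future else len(future) + 1)

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
for name, policy in [("FIFO", fifo), ("OPT", opt), ("LRU", lru)]:
    print(name, simulate(refs, 3, policy))   # FIFO 15, OPT 9, LRU 12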
• A circular queue can be used to implement the second-chance algorithm (Figure 4.12).
A pointer (that is, a hand on the clock) indicates which page is to be replaced next.
When a frame is needed, the pointer advances until it finds a page with a 0 reference bit.
As it advances, it clears the reference bits.
Once a victim page is found, the page is replaced, and the new page is inserted in the circular
queue in that position.
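A compact model of the clock hand described above (a sketch; in a real system the reference bits are set by hardware on each access, and the resident pages below are made-up):

# Second-chance (clock) victim selection: advance the hand, clearing reference
# bits, until a page with reference bit 0 is found; that page is replaced.
def clock_replace(frames, ref_bits, hand, new_page):
    while True:
        page = frames[hand]
        if ref_bits[page]:
            ref_bits[page] = 0                # give the page a second chance
            hand = (hand + 1) % len(frames)
        else:
            ref_bits.pop(page)                # victim found: replace it in place
            frames[hand] = new_page
            ref_bits[new_page] = 1
            return (hand + 1) % len(frames)   # next position of the hand

frames = [3, 7, 5]                            # hypothetical resident pages
ref_bits = {3: 1, 7: 0, 5: 1}
hand = clock_replace(frames, ref_bits, hand=0, new_page=9)
print(frames, hand)                           # [3, 9, 5] 2  (page 7 was the victim)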
4.6 Thrashing
• If a process does not have "enough" pages, the page-fault rate is very high. This leads to:
→ low CPU utilization
→ operating system thinks that it needs to increase the degree of multiprogramming
→ another process added to the system.
• If the number of frames allocated to a low-priority process falls below the minimum number required,
it must be suspended.
• A process is thrashing if it is spending more time paging than executing.
EXERCISE PROBLEMS
Solution:
Conclusion: The optimal page-replacement algorithm is the most efficient of the three algorithms, as it
has the lowest number of page-faults, i.e. 9.
Solution:
3) For the following page-reference string, calculate the page-faults that occur using FIFO and LRU for 3 and 4
page frames respectively:
5, 4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5.
Solution:
Solution:
Belady's Anomaly:
“On increasing the number of page frames, the number of page-faults does not necessarily decrease;
it may even increase.”
• Example: Consider the following reference string when the number of frames used is 3 and 4:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
(i) FIFO with 3 frames:
Reference string:   1  2  3  4  1  2  5  1  2  3  4  5
Frame 1:            1  2  3  4  1  2  5  5  5  3  4  4
Frame 2:               1  2  3  4  1  2  2  2  5  3  3
Frame 3:                  1  2  3  4  1  1  1  2  5  5
Page fault?         √  √  √  √  √  √  √  -  -  √  √  -
No. of page faults = 9