UNIT 2 Memory Management
• Keeps track of primary memory, i.e., which parts of it are in use, by whom, and which parts are free.
• In multiprogramming, the OS decides which process will get memory, when, and how much.
• Allocates memory when a process requests it.
• De-allocates memory when a process no longer needs it or has terminated.
Memory management techniques can be classified into the following main categories:
Static/Fixed Partitioning-
• Static partitioning is a fixed-size partitioning scheme: main memory is divided into a fixed number of partitions whose sizes are declared before execution begins.
It is important to note that these partitions are allocated to processes as they arrive, and the partition allocated to an arriving process depends on the algorithm followed.
If some space is wasted inside a partition, it is termed Internal Fragmentation.
Static partitioning has the following drawbacks-
1. Internal Fragmentation: If the size of the process is less than the size of the partition, part of the partition is wasted and remains unused. This wasted space inside a partition is termed internal fragmentation. For example, if a 70 KB partition is used to load a process of 50 KB, the remaining 20 KB is wasted (a small sketch of this calculation follows this list).
2. Limitation on the size of the process: If the size of a process is greater than that of the largest partition, the process cannot be loaded into memory.
3. External Fragmentation: The total unused space spread across the partitions cannot be used to load a process, because even though enough space is available, it is not contiguous.
4. Degree of multiprogramming is less: In this scheme the size of a partition cannot change according to the size of the process, so the number of processes that can reside in memory (the degree of multiprogramming) is fixed and comparatively low.
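To make drawback 1 concrete, the following minimal C sketch computes the space wasted inside fixed partitions. The 70 KB partition / 50 KB process pair is the example mentioned above; the other sizes are made-up values for illustration only.

```c
#include <stdio.h>

/* Fixed partitioning sketch: each partition holds at most one process.
 * Partition and process sizes are illustrative values (in KB). */
int main(void)
{
    int partition[] = {70, 100, 200};   /* fixed partition sizes          */
    int process[]   = {50, 100, 120};   /* process loaded into each one   */
    int n = 3, total_internal = 0;

    for (int i = 0; i < n; i++) {
        int wasted = partition[i] - process[i];  /* unused space inside the partition */
        total_internal += wasted;
        printf("Partition %d KB, process %d KB -> internal fragmentation %d KB\n",
               partition[i], process[i], wasted);
    }
    printf("Total internal fragmentation = %d KB\n", total_internal);
    return 0;
}
```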
Dynamic/Variable Partitioning-
The size of a partition is not declared initially. Whenever a process arrives, a partition of size equal to the size of the process is created and allocated to it. Thus the size of each partition is equal to the size of the process it holds.
Since the partition size varies according to the need of the process, there is no internal fragmentation in this scheme.
Advantages-
1. No Internal Fragmentation
Since space in main memory is allocated strictly according to the requirement of the process, there is no chance of internal fragmentation and no unused space is left inside a partition.
2. Degree of Multiprogramming is Dynamic
Because there is no internal fragmentation, no memory inside the partitions goes unused, so more processes can be loaded into memory at the same time.
3. No Limitation on the Size of the Process
Since a partition is created for the process dynamically, the size of a process is not restricted by a fixed partition size; the partition size is decided according to the process size.
Disadvantages-
1. External Fragmentation
The absence of internal fragmentation does not mean there will be no external fragmentation. For example, suppose process P1 (3 MB) and process P3 (8 MB) complete their execution, leaving two holes of 3 MB and 8 MB. Now suppose a process P4 of size 15 MB arrives. The empty space cannot be allocated to it, because spanning is not allowed in contiguous allocation: a process must occupy one contiguous block of main memory in order to execute. This results in External Fragmentation (a small sketch of this situation follows this list).
2. Difficult Implementation
This scheme is more difficult to implement than fixed partitioning, because memory is allocated at run time rather than during system configuration. The OS keeps track of all the partitions, but here allocation and de-allocation happen very frequently and the partition sizes change every time, so it is difficult for the operating system to manage everything.
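The external-fragmentation situation described in disadvantage 1 can be sketched in C as follows. The hole sizes and the 15 MB request come from the example above; the check itself is a simplified illustration, not a real allocator.

```c
#include <stdio.h>

/* Dynamic partitioning sketch: free holes left after P1 and P3 finished. */
int main(void)
{
    int hole[] = {3, 8};          /* free holes in MB (from the example)      */
    int n = 2, request = 15;      /* size of the arriving process P4 in MB    */
    int total_free = 0, fits = 0;

    for (int i = 0; i < n; i++) {
        total_free += hole[i];
        if (hole[i] >= request)   /* the process needs ONE contiguous hole    */
            fits = 1;
    }
    printf("Total free = %d MB, request = %d MB: ", total_free, request);
    printf(fits ? "allocated\n" : "cannot be allocated (no single contiguous hole)\n");
    return 0;
}
```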
Internal Fragmentation
• It occurs when space is left unused inside a partition after the partition has been allocated to a process.
• This space is called internally fragmented space.
• It cannot be allocated to any other process, because static partitioning stores only one process in each partition.
• Internal fragmentation occurs only in static partitioning.
External Fragmentation
• It occurs when the total amount of empty space required to store the process is available in main memory.
• But because the space is not contiguous, the process cannot be stored.
Popular algorithms used for allocating the partitions to arriving processes are First Fit, Best Fit, and Worst Fit. (A small C sketch of all three follows the worked allocations below.)
Example: Consider six memory partitions of size 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB.
These partitions need to be allocated to four processes of sizes 357 KB, 210 KB, 468 KB and 491 KB in that
order.
• Process P1 = 357 KB
• Process P2 = 210 KB
• Process P3 = 468 KB
• Process P4 = 491 KB
Allocation Using First Fit Algorithm-
In First Fit Algorithm,
• The algorithm scans the partitions from the beginning.
• It allocates the first partition that is large enough to store the process.
The allocation of partitions to the given processes is shown below-
Step-01: Process P1 (357 KB) is allocated the 400 KB partition.
Step-02: Process P2 (210 KB) is allocated the 600 KB partition.
Step-03: Process P3 (468 KB) is allocated the 500 KB partition.
Step-04: No remaining free partition can hold process P4 (491 KB).
• Process P4 cannot be allocated the memory.
• This is because no free partition of size greater than or equal to the size of process P4 is available.
Allocation Using Best Fit Algorithm-
• Algorithm first scans all the partitions.
• It then allocates the partition of smallest size that can store the process.
The allocation of partitions to the given processes is shown below-
Step-01: Process P1 (357 KB) is allocated the 400 KB partition.
Step-02: Process P2 (210 KB) is allocated the 250 KB partition.
Step-03: Process P3 (468 KB) is allocated the 500 KB partition.
Step-04: Process P4 (491 KB) is allocated the 600 KB partition.
• All four processes are allocated the memory.
Allocation Using Worst Fit Algorithm-
In Worst Fit Algorithm,
• The algorithm first scans all the partitions.
• It then allocates the largest free partition that can store the process.
The allocation of partitions to the given processes is shown below-
Step-01: Process P1 (357 KB) is allocated the 600 KB partition.
Step-02: Process P2 (210 KB) is allocated the 500 KB partition.
Step-03: No remaining free partition can hold process P3 (468 KB).
• Process P3 and Process P4 cannot be allocated the memory.
• This is because no free partition of size greater than or equal to the size of process P3 or process P4 is available.
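The three strategies walked through above can be sketched together in C. The partition and process sizes are the ones from the example; the code is a minimal illustration of the selection rules (first large-enough, smallest large-enough, largest large-enough), not a full memory manager.

```c
#include <stdio.h>

#define P 6   /* number of partitions */
#define R 4   /* number of processes  */

/* Return the index of the partition chosen for a request of `size` KB,
 * or -1 if no free partition is large enough.
 * mode 0 = First Fit, 1 = Best Fit, 2 = Worst Fit. */
static int pick(const int part[], const int used[], int size, int mode)
{
    int chosen = -1;
    for (int i = 0; i < P; i++) {
        if (used[i] || part[i] < size)
            continue;                                   /* occupied or too small */
        if (mode == 0)
            return i;                                   /* First Fit: first match */
        if (chosen == -1 ||
            (mode == 1 && part[i] < part[chosen]) ||    /* Best Fit: smallest fit */
            (mode == 2 && part[i] > part[chosen]))      /* Worst Fit: largest fit */
            chosen = i;
    }
    return chosen;
}

int main(void)
{
    const int partitions[P] = {200, 400, 600, 500, 300, 250};  /* KB */
    const int processes[R]  = {357, 210, 468, 491};            /* KB */
    const char *name[] = {"First Fit", "Best Fit", "Worst Fit"};

    for (int mode = 0; mode < 3; mode++) {
        int used[P] = {0};                      /* no partition allocated yet */
        printf("%s:\n", name[mode]);
        for (int j = 0; j < R; j++) {
            int k = pick(partitions, used, processes[j], mode);
            if (k >= 0) {
                used[k] = 1;
                printf("  P%d (%d KB) -> %d KB partition\n",
                       j + 1, processes[j], partitions[k]);
            } else {
                printf("  P%d (%d KB) -> cannot be allocated\n",
                       j + 1, processes[j]);
            }
        }
    }
    return 0;
}
```

Running this sketch reproduces the allocations shown in the steps above, including the processes that cannot be allocated under First Fit and Worst Fit.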
Paging
• Paging is a non-contiguous memory allocation technique in which main memory is divided into fixed-size frames and each process is divided into pages of the same size.
• The logical address generated by the CPU is divided into two parts:
1. Page Number
2. Page Offset
• Page Number specifies the page of the process from which the CPU wants to read the data.
• Page Offset specifies the word on that page that the CPU wants to read.
• For the page number generated by the CPU, the page table provides the corresponding frame number (the base address of the frame) where that page is stored in main memory.
• The frame number combined with the page offset forms the required physical address.
• Frame number specifies the frame in which the required page is stored.
• Page Offset specifies the word that has to be read from that page.
The page size (like the frame size) is defined by the hardware. The size of a page is a power of 2, varying
between 512 bytes and 1 GB per page, depending on the computer architecture. The selection of a power of 2 as
a page size makes the translation of a logical address into a page number and page offset particularly easy.
If the size of the logical address space is 2^m bytes and the page size is 2^n bytes, then the high-order m − n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, a logical address consists of a page number p (the m − n high-order bits) and a page offset d (the n low-order bits). (A small C sketch of this translation follows the worked example below.)
For example, consider a logical address with n = 2 and m = 4. Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), the user's view of memory can be mapped into physical memory as follows.
• Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5.
Thus, logical address 0 maps to physical address 20 [= (5 × 4) + 0].
• Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) + 3].
• Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus,
logical address 4 maps to physical address 24 [= (6 × 4) + 0].
• Logical address 13 (page 3, offset 1) maps to physical address 9 [= (2 × 4) + 1], since page 3 is mapped to frame 2.
• You may have noticed that paging itself is a form of dynamic relocation. Every logical address is
bound by the paging hardware to some physical address. Using paging is similar to using a table of
base (or relocation) registers, one for each frame of memory.
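The translation arithmetic used in this example can be reproduced with a minimal C sketch. The frames for pages 0, 1 and 3 come from the bullets above; the frame for page 2 is an assumption taken from the accompanying figure in the textbook.

```c
#include <stdio.h>

#define PAGE_SIZE 4                    /* 2^n with n = 2                        */

/* Page table for the example: page -> frame.
 * Frames for pages 0, 1 and 3 are stated in the text; frame 1 for page 2
 * is assumed from the textbook figure. */
static const int page_table[] = {5, 6, 1, 2};

static int translate(int logical)
{
    int page   = logical / PAGE_SIZE;  /* high-order m - n bits                 */
    int offset = logical % PAGE_SIZE;  /* low-order n bits                      */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void)
{
    int addrs[] = {0, 3, 4, 13};
    for (int i = 0; i < 4; i++)
        printf("logical %2d -> physical %2d\n", addrs[i], translate(addrs[i]));
    /* prints 0->20, 3->23, 4->24, 13->9, matching the example above */
    return 0;
}
```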
Segmentation
• Like Paging, Segmentation is another non-contiguous memory allocation technique.
• In segmentation, the process is not divided blindly into fixed-size pages.
• Rather, the process is divided into modules for better visualization.
Characteristics-
• In segmentation, secondary memory and main memory are divided into partitions of unequal size.
• The size of each partition depends on the length of the corresponding module.
• These partitions of secondary memory are called segments.
A logical address consists of two parts: a segment number, s, and an offset into that segment, d.
The segment number is used as an index to the segment table. The offset d of the logical address must be
between 0 and the segment limit. If it is not, we trap to the operating system (logical addressing attempt beyond
end of segment). When an offset is legal, it is added to the segment base to produce the address in physical
memory of the desired byte. The segment table is thus essentially an array of base–limit register pairs.
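A minimal C sketch of this limit check and base addition is shown below. The segment-table values are illustrative assumptions, not taken from this text.

```c
#include <stdio.h>
#include <stdlib.h>

/* One segment-table entry is a base-limit pair (illustrative values). */
struct entry { int base; int limit; };

static const struct entry seg_table[] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300,  400},   /* segment 2 */
};

/* Translate logical address (s, d) into a physical address. */
static int translate(int s, int d)
{
    if (d < 0 || d >= seg_table[s].limit) {           /* offset beyond segment end */
        fprintf(stderr, "trap: offset %d beyond end of segment %d\n", d, s);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + d;                      /* legal: base + offset      */
}

int main(void)
{
    printf("(s=2, d=53)   -> %d\n", translate(2, 53));    /* 4300 + 53 = 4353      */
    printf("(s=0, d=1222) -> %d\n", translate(0, 1222));  /* traps: 1222 >= 1000   */
    return 0;
}
```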
Advantages:
1. No internal fragmentation.
2. The average segment size is larger than the actual page size, so fewer table entries are needed.
3. Less overhead.
4. It is easier to relocate segments than an entire address space.
5. The segment table is smaller than the page table in paging.
Disadvantages:
1. External fragmentation can occur, because segments are of unequal size and must each occupy a contiguous block of memory.
Demand Paging
• The basic idea behind demand paging is that when a process is swapped in, its pages are not all swapped in at once. Rather, they are swapped in only when the process needs them (on demand). This is termed a lazy swapper, although pager is a more accurate term.
Basic Concepts
• The basic idea behind demand paging is that when a process is swapped in, the pager only loads into memory those pages that it expects the process to need right away.
• Pages that are not loaded into memory are marked as invalid in the page table, using the invalid bit. (The rest of the page table entry may either be blank or contain information about where to find the swapped-out page on the hard drive.)
• If the process only ever accesses pages that are loaded in memory (memory-resident pages), then the process runs exactly as if all the pages were loaded into memory.
On the other hand, if a page is needed that was not originally loaded, then a page-fault trap is generated, which must be handled in a series of steps (a small C sketch of these steps follows below):
1. The memory address requested is first checked, to make sure it was a valid memory request.
2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning (as soon as this process gets another turn on the CPU).
• In an extreme case, NO pages are swapped in for a process until they are requested by page faults. This
is known as pure demand paging.
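The page-fault steps listed above can be sketched as a simplified handler in C. Steps 1 and 2 (checking that the reference itself is legal) are omitted, and the free-frame counter and "disk read" are stand-ins; the sketch only illustrates the invalid-bit check, the page-in, and the page-table update.

```c
#include <stdio.h>

#define PAGES 8

struct pte { int frame; int valid; };      /* one page-table entry               */
static struct pte page_table[PAGES];       /* static storage: all start invalid  */
static int next_free_frame = 0;            /* stand-in for a free-frame list     */

/* Stand-in for scheduling a disk read that brings the page into a frame. */
static void read_page_from_disk(int page, int frame)
{
    printf("  disk read: page %d -> frame %d\n", page, frame);
}

/* Return the frame holding `page`, servicing a page fault if needed. */
static int access_page(int page)
{
    if (!page_table[page].valid) {          /* invalid bit set -> page fault      */
        printf("page fault on page %d\n", page);
        int frame = next_free_frame++;      /* step 3: locate a free frame        */
        read_page_from_disk(page, frame);   /* step 4: bring in the page          */
        page_table[page].frame = frame;     /* step 5: update the page table ...  */
        page_table[page].valid = 1;         /*         ... and clear the invalid bit */
        /* step 6: the faulting instruction would now be restarted                */
    }
    return page_table[page].frame;
}

int main(void)
{
    int refs[] = {0, 1, 0, 3, 1};           /* pure demand paging: nothing is     */
    for (int i = 0; i < 5; i++)             /* loaded until it is referenced      */
        printf("page %d is in frame %d\n", refs[i], access_page(refs[i]));
    return 0;
}
```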
• Although FIFO is simple and easy, it is not always optimal, or even efficient.
• An interesting effect that can occur with FIFO is Belady's anomaly, in which increasing the number of frames available can actually increase the number of page faults that occur.
• The discovery of Belady's anomaly led to the search for an optimal page-replacement algorithm, one that yields the lowest possible number of page faults and does not suffer from Belady's anomaly.
• Such an algorithm does exist, and is called OPT or MIN. This algorithm is simply "Replace the page that will not be used for the longest time in the future."
• For example, Figure 9.14 in the text shows that applying OPT to the same reference string used for the FIFO example gives a minimum of 9 page faults. Since 6 of the page faults are unavoidable (the first reference to each new page), FIFO can be shown to require 3 times as many (extra) page faults as the optimal algorithm. (Note: the book claims that only the first three page faults are required by all algorithms, indicating that FIFO is only twice as bad as OPT.)
• The prediction behind LRU, the Least Recently Used algorithm, is that the page that has not been used in the longest time is the one that will not be used again in the near future. (Note the distinction between FIFO and LRU: the former looks at the oldest load time, and the latter looks at the oldest use time.)
• Some view LRU as analogous to OPT, except looking backwards in time instead of forwards. (OPT has the interesting property that for any reference string S and its reverse R, OPT will generate the same number of page faults for S and for R. It turns out that LRU has this same property.)
• Figure 9.15 illustrates LRU for our sample string, yielding 12 page faults, as compared to 15 for FIFO and 9 for OPT. (A small C sketch that counts FIFO and LRU page faults follows below.)
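The difference between FIFO and LRU can be sketched in C by counting page faults over a reference string. The string below is assumed to be the textbook reference string that yields 15 FIFO faults and 12 LRU faults with 3 frames, matching the figures quoted above; OPT is omitted because it needs knowledge of future references.

```c
#include <stdio.h>

#define NFRAMES 3
#define NREFS   20

/* Reference string assumed to be the textbook one, which yields
 * 15 faults under FIFO and 12 under LRU with 3 frames. */
static const int refs[NREFS] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};

/* Count page faults. lru = 0: replace the oldest-loaded page (FIFO).
 * lru = 1: replace the least recently used page (LRU). */
static int count_faults(int lru)
{
    int frame[NFRAMES], stamp[NFRAMES];     /* resident page and its load/use time */
    int faults = 0;
    for (int i = 0; i < NFRAMES; i++) frame[i] = -1;

    for (int t = 0; t < NREFS; t++) {
        int hit = -1;
        for (int i = 0; i < NFRAMES; i++)
            if (frame[i] == refs[t]) hit = i;

        if (hit >= 0) {                      /* page already resident               */
            if (lru) stamp[hit] = t;         /* LRU refreshes the use time          */
            continue;                        /* FIFO keeps the original load time   */
        }

        faults++;                            /* page fault: choose a victim frame   */
        int victim = 0;
        for (int i = 1; i < NFRAMES; i++)
            if (frame[i] == -1 ||
                (frame[victim] != -1 && stamp[i] < stamp[victim]))
                victim = i;

        frame[victim] = refs[t];             /* load the page into the victim frame */
        stamp[victim] = t;
    }
    return faults;
}

int main(void)
{
    printf("FIFO page faults: %d\n", count_faults(0));   /* text reports 15 */
    printf("LRU  page faults: %d\n", count_faults(1));   /* text reports 12 */
    return 0;
}
```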
2.5 Thrashing
Thrashing is a condition or situation in which the system spends a major portion of its time servicing page faults, while the actual processing done is negligible.
Causes of thrashing:
1. High degree of multiprogramming.
2. Lack of frames.
3. Page replacement policy.