
FYBCA SEM 2

UNIT 2 Memory Management

2.1 Memory Management Functions


2.2 Contiguous Memory Allocation
2.3 Non-Contiguous Memory Allocation
2.4 Virtual Memory Management
2.4.1 Demand Paging
2.4.2 Page Replacement
2.5 Thrashing

2.1 Memory Management Functions:


Memory management refers to the management of primary memory (main memory). Main memory is a large array of words or bytes, where each word or byte has its own address. Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory.

An Operating System does the following activities for memory management −

• Keeps track of primary memory, i.e., which parts of it are in use, by whom, and which parts are free.
• In multiprogramming, the OS decides which process will get memory when and how much.
• Allocates the memory when a process requests it to do so.
• De-allocates the memory when a process no longer needs it or has been terminated.

Memory Management Techniques:

The memory management techniques can be classified into the following main categories:

• Contiguous memory allocation


• Non-Contiguous memory allocation

2.2 Contiguous Memory Allocation

• Contiguous memory allocation is a memory allocation technique.
• It allows a process to be stored only in a contiguous fashion.
• Thus, the entire process has to be stored as a single entity at one place inside the memory.

Static/Fixed Partitioning-
• Static partitioning is a fixed size partitioning scheme.


• In this technique, main memory is pre-divided into fixed size partitions.


• The size of each partition is fixed and cannot be changed.
• Each partition is allowed to store only one process.

• These partitions are allocated to the processes as they arrive.


• The partition allocated to the arrived process depends on the algorithm followed.
As an example of the fixed-size partitioning scheme, a memory of 15 KB can be divided into fixed-size partitions.

It is important to note that these partitions are allocated to processes as they arrive, and the partition allocated to an arriving process depends on the algorithm followed.

If there is some wastage inside the partition then it is termed Internal Fragmentation.

Advantage of Fixed-size Partition Scheme

• This scheme is simple and easy to implement.

Disadvantages of Fixed-size Partition Scheme

1. Internal Fragmentation: If the size of the process is smaller than the size of the partition, part of the partition remains unused. This wastage inside the memory is termed internal fragmentation. For example, if a 70 KB partition is used to load a process of 50 KB, the remaining 20 KB is wasted.

2. Limitation on the size of the process: If the size of a process is larger than the largest partition, that process cannot be loaded into memory.

3. External Fragmentation: This is another drawback of the fixed-size partition scheme. The total unused space spread across the various partitions cannot be used to load a process, because even though enough space is available, it is not contiguous.

4. Degree of multiprogramming is less: In this scheme the number of partitions is fixed and the size of a partition cannot change according to the size of the process. Thus the degree of multiprogramming is low and fixed.

Variable size partitioning


This scheme is also known as dynamic partitioning. It came into existence to overcome the internal fragmentation caused by static partitioning. In this scheme, allocation is done dynamically.


The size of the partition is not declared initially. Whenever any process arrives, a partition of size equal to the
size of the process is created and then allocated to the process. Thus the size of each partition is equal to the size
of the process.

Since the partition size varies according to the needs of the process, there is no internal fragmentation in this scheme.

Some advantages of using this partition scheme are as follows:

1. No Internal Fragmentation
   Space in main memory is allocated strictly according to the requirement of the process, so there is no internal fragmentation and no unused space is left inside a partition.
2. Degree of Multiprogramming is Dynamic
   Because there is no internal fragmentation, no memory is wasted inside partitions, so more processes can be loaded into memory at the same time.
3. No Limitation on the Size of Process
   Since a partition is created for the process dynamically, the size of the process is not restricted by a predefined partition size; the partition size is decided according to the process size.

Disadvantages of Variable-size Partition Scheme

1. External Fragmentation
   The absence of internal fragmentation does not mean there will be no external fragmentation. For example, suppose process P1 (3 MB) and process P3 (8 MB) complete their execution, leaving two holes of 3 MB and 8 MB. If a process P4 of size 15 MB now arrives, the empty space cannot be allocated to it, because no spanning is allowed in contiguous allocation: a process must be stored contiguously in main memory in order to be executed. This results in external fragmentation.
2. Difficult Implementation
   The implementation of this scheme is more difficult than fixed partitioning, because memory is allocated at run-time rather than at system configuration. The OS must keep track of all the partitions, and since allocation and deallocation happen very frequently and partition sizes change each time, this bookkeeping is harder for the operating system to manage.

Internal Fragmentation
• It occurs when the space is left inside the partition after allocating the partition to a process.
• This space is called internally fragmented space.
• It cannot be allocated to any other process, because static partitioning allows only one process to be stored in each partition.
• Internal fragmentation occurs only in static partitioning.


External Fragmentation

• It occurs when the total amount of free space required to store a process is available in main memory.
• But because the space is not contiguous, the process cannot be stored.

Popular algorithms used for allocating the partitions to the arriving processes are-

1. First Fit Algorithm


2. Best Fit Algorithm
3. Worst Fit Algorithm

1. First Fit Algorithm-


• This algorithm scans the partitions serially from the beginning.
• When an empty partition that is big enough to store the process is found, it is allocated to the process.
• Obviously, the partition size has to be greater than or at least equal to the process size.
2. Best Fit Algorithm-
• This algorithm first scans all the empty partitions.
• It then allocates the smallest partition that is large enough to store the process.
3. Worst Fit Algorithm-
• This algorithm first scans all the empty partitions.
• It then allocates the largest available partition to the process.

Example: Consider six memory partitions of size 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB.
These partitions need to be allocated to four processes of sizes 357 KB, 210 KB, 468 KB and 491 KB in that
order.

Perform the allocation of processes using-

1. First Fit Algorithm


2. Best Fit Algorithm
3. Worst Fit Algorithm
The main memory has been divided into fixed-size partitions of 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB, in that order.


Let us say the given processes are-

• Process P1 = 357 KB
• Process P2 = 210 KB
• Process P3 = 468 KB
• Process P4 = 491 KB
Allocation Using First Fit Algorithm-
In First Fit Algorithm,

• Algorithm starts scanning the partitions serially.


• When a partition big enough to store the process is found, it allocates that partition to the process.
The allocation of partitions to the given processes is shown below-

Step-01: Process P1 (357 KB) is allocated to the 400 KB partition, the first free partition large enough to hold it.

Step-02: Process P2 (210 KB) is allocated to the 600 KB partition.

Step-03: Process P3 (468 KB) is allocated to the 500 KB partition.

Step-04:
• Process P4 (491 KB) cannot be allocated the memory.
• This is because no free partition of size greater than or equal to the size of process P4 is available.

Allocation Using Best Fit Algorithm-
• Algorithm first scans all the partitions.
• It then allocates the partition of smallest size that can store the process.
The allocation of partitions to the given processes is shown below-

Step-01: Process P1 (357 KB) is allocated to the 400 KB partition, the smallest partition that can hold it.

Step-02: Process P2 (210 KB) is allocated to the 250 KB partition.

Step-03: Process P3 (468 KB) is allocated to the 500 KB partition.


Step-04: Process P4 (491 KB) is allocated to the 600 KB partition.

Allocation Using Worst Fit Algorithm-


• Algorithm first scans all the partitions.
• It then allocates the partition of largest size to the process.
The allocation of partitions to the given processes is shown below-

Step-01: Process P1 (357 KB) is allocated to the 600 KB partition, the largest free partition.

Step-02: Process P2 (210 KB) is allocated to the 500 KB partition, the largest remaining free partition.

Step-03:
• Process P3 and process P4 cannot be allocated memory.
• This is because no remaining free partition of size greater than or equal to the size of process P3 (468 KB) or process P4 (491 KB) is available.
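To make the three placement policies concrete, here is a minimal Python sketch (not part of the original notes; all names are illustrative) that simulates first fit, best fit and worst fit on the partition and process sizes from the example above.

```python
# Minimal simulation of first-fit, best-fit and worst-fit allocation for
# fixed partitions. Partition and process sizes come from the example
# above; function and variable names are illustrative, not from the notes.

PARTITIONS = [200, 400, 600, 500, 300, 250]                        # KB, in memory order
PROCESSES = [("P1", 357), ("P2", 210), ("P3", 468), ("P4", 491)]   # KB, in arrival order

def allocate(partitions, processes, policy):
    free = list(partitions)        # None marks a partition already holding a process
    result = {}
    for name, size in processes:
        # indices of free partitions large enough for this process
        candidates = [i for i, p in enumerate(free) if p is not None and p >= size]
        if not candidates:
            result[name] = None                               # cannot be allocated
            continue
        if policy == "first":
            chosen = candidates[0]                            # first big-enough partition
        elif policy == "best":
            chosen = min(candidates, key=lambda i: free[i])   # smallest big-enough partition
        else:                                                 # "worst"
            chosen = max(candidates, key=lambda i: free[i])   # largest free partition
        result[name] = free[chosen]
        free[chosen] = None                                   # each partition holds one process
    return result

for policy in ("first", "best", "worst"):
    print(policy, allocate(PARTITIONS, PROCESSES, policy))
# first -> P1: 400, P2: 600, P3: 500, P4: not allocated
# best  -> P1: 400, P2: 250, P3: 500, P4: 600
# worst -> P1: 600, P2: 500, P3 and P4: not allocated
```

Running the sketch reproduces the results of the three step-by-step allocations shown above.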

2.3 Non-Contiguous Memory Allocation-

• Non-contiguous memory allocation is a memory allocation technique.


• It allows parts of a single process to be stored in a non-contiguous fashion.
• Thus, different parts of the same process can be stored at different places in main memory.
Example-

• Consider a process that is divided into 4 pages: P0, P1, P2 and P3.
• Depending upon availability, these pages may be stored in the main memory frames in a non-contiguous fashion.


Paging

• Paging is a fixed size partitioning scheme.


• In paging, secondary memory and main memory are divided into equal fixed-size partitions.
• The partitions of the process in secondary memory are called pages.
• The partitions of main memory are called frames.
• Each process is divided into parts where the size of each part is the same as the page size.
• The pages of the process are stored in the frames of main memory depending upon their availability.

CPU generates a logical address consisting of two parts-

1. Page Number
2. Page Offset

• Page Number specifies the particular page of the process from which the CPU wants to read the data.
• Page Offset specifies the particular word on that page that the CPU wants to read.
• For the page number generated by the CPU, a page table is maintained; it provides the corresponding frame number (the base address of the frame) where that page is stored in main memory.
• The frame number combined with the page offset forms the required physical address.

• Frame number specifies the specific frame where the required page is stored.
• Page Offset specifies the specific word that has to be read from that page.


The advantages of paging are-

• It allows parts of a single process to be stored in a non-contiguous fashion.


• It solves the problem of external fragmentation.
Disadvantages-
• It suffers from internal fragmentation.
• There is an overhead of maintaining a page table for each process.
• The time taken to fetch the instruction increases since now two memory accesses are required.

The page size (like the frame size) is defined by the hardware. The size of a page is a power of 2, varying
between 512 bytes and 1 GB per page, depending on the computer architecture. The selection of a power of 2 as
a page size makes the translation of a logical address into a page number and page offset particularly easy.


If the size of the logical address space is 2^m and the page size is 2^n bytes, then the high-order m − n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is laid out as follows:

  | page number | page offset |
  |  m − n bits |   n bits    |

Example: consider a page size of 4 bytes and a physical memory of 32 bytes (8 frames), so n = 2 and m = 4 in the logical address. We show how the user's view of memory can be mapped into physical memory.

• Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5.
Thus, logical address 0 maps to physical address 20 [= (5 × 4) + 0].
• Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) + 3].
• Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus,
logical address 4 maps to physical address 24 [= (6 × 4) + 0].
• Logical address 13 maps to physical address 9.
• You may have noticed that paging itself is a form of dynamic relocation. Every logical address is
bound by the paging hardware to some physical address. Using paging is similar to using a table of
base (or relocation) registers, one for each frame of memory.
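These mappings can be checked with a small Python sketch (not part of the original notes). The frame numbers for pages 0, 1 and 3 are the ones implied by the worked addresses above; page 2 is omitted because its frame is not given in this excerpt.

```python
# Sketch of paging address translation for the example above:
# page size = 4 bytes (n = 2), logical address space = 16 bytes (m = 4).
# The frame numbers for pages 0, 1 and 3 are those implied by the text;
# page 2 is left out because its frame is not given in this excerpt.

PAGE_SIZE = 4
PAGE_TABLE = {0: 5, 1: 6, 3: 2}      # page number -> frame number

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)   # split into page number and offset
    frame = PAGE_TABLE[page]                            # page-table lookup
    return frame * PAGE_SIZE + offset                   # physical address

for la in (0, 3, 4, 13):
    print(f"logical {la:2d} -> physical {translate(la):2d}")
# logical  0 -> physical 20
# logical  3 -> physical 23
# logical  4 -> physical 24
# logical 13 -> physical  9
```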

Segmentation
• Like Paging, Segmentation is another non-contiguous memory allocation technique.
• In segmentation, the process is not divided blindly into fixed-size pages.
• Rather, the process is divided into modules for better visualization.
Characteristics-
• In segmentation, secondary memory and main memory are divided into partitions of unequal size.
• The size of each partition depends on the length of the corresponding module.
• The partitions of secondary memory are called segments.


• Each segment has a name and a length.


• The programmer therefore specifies each address by two quantities: a segment name and an offset. For
simplicity of implementation, segments are numbered and are referred to by a segment number, rather
than by a segment name.


A logical address consists of two parts: a segment number, s, and an offset into that segment, d.

The segment number is used as an index to the segment table. The offset d of the logical address must be
between 0 and the segment limit. If it is not, we trap to the operating system (logical addressing attempt beyond
end of segment). When an offset is legal, it is added to the segment base to produce the address in physical
memory of the desired byte. The segment table is thus essentially an array of base–limit register pairs.
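A minimal sketch of this lookup, assuming an illustrative segment table (the base and limit values below are chosen only for the example, not taken from these notes):

```python
# Segment-table lookup as described above: the segment table is essentially
# an array of (base, limit) pairs. The base and limit values here are
# chosen only for illustration; they are not taken from these notes.

SEGMENT_TABLE = [
    (1400, 1000),   # segment 0: base 1400, limit 1000
    (6300,  400),   # segment 1
    (4300, 1100),   # segment 2
]

def translate(s, d):
    base, limit = SEGMENT_TABLE[s]
    if d < 0 or d >= limit:
        # logical addressing attempt beyond end of segment -> trap to the OS
        raise MemoryError(f"trap: offset {d} beyond limit {limit} of segment {s}")
    return base + d

print(translate(2, 53))     # 4300 + 53 = 4353
try:
    translate(1, 500)       # offset 500 exceeds the 400-byte limit of segment 1
except MemoryError as e:
    print(e)
```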

Advantages:

1. No internal fragmentation.
2. The average segment size is larger than the average page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is smaller than the page table in paging.

Disadvantages

• It can have external fragmentation.


• It is difficult to allocate contiguous memory to variable-sized partitions.
• Costly memory management algorithms
2.4 Virtual Memory Management

Virtual memory is a technique that allows the execution of processes that are not completely in main memory. It is commonly implemented through demand paging.

2.4.1 Demand Paging

• The basic idea behind demand paging is that when a process is swapped in, its pages are not all swapped in at once. Rather, they are swapped in only when the process needs them (on demand). This is termed lazy swapping, although pager is a more accurate term than swapper.


Basic Concepts

• The basic idea behind demand paging is that when a process is swapped in, the pager only loads into memory those pages that it expects the process to need right away.
• Pages that are not loaded into memory are marked as invalid in the page table, using the invalid bit. (The rest of the page table entry may either be blank or contain information about where to find the swapped-out page on the hard drive.)
• If the process only ever accesses pages that are loaded in memory (memory-resident pages), then the process runs exactly as if all the pages were loaded into memory.

On the other hand, if a page is needed that was not originally loaded up, then a page fault trap is generated,
which must be handled in a series of steps:

1. The memory address requested is first checked, to make sure it was a valid memory request.
2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk. ( This will usually
block the process on an I/O wait, allowing some other process to use the CPU in the meantime. )
5. When the I/O operation is complete, the process's page table is updated with the new frame
number, and the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning (as soon as this process gets another turn on the CPU).

Figure 9.6 - Steps in handling a page fault
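The sequence above can be sketched as a toy Python simulation. The page table, backing store and free-frame list below are simple Python structures invented for illustration, not real OS data structures.

```python
# Toy simulation of the page-fault steps above. Pages start out marked
# invalid (not loaded) and are brought in from a "backing store" only when
# a fault occurs. All names and data structures here are illustrative.

backing_store = {0: "page0-data", 1: "page1-data", 2: "page2-data"}
page_table = {p: {"valid": False, "frame": None} for p in backing_store}
frames = {}                  # frame number -> page contents
free_frames = [0, 1, 2, 3]   # free-frame list

def access(page):
    entry = page_table.get(page)
    if entry is None:                                   # steps 1-2: invalid reference
        raise MemoryError("invalid reference: terminate the process")
    if not entry["valid"]:                              # page fault
        frame = free_frames.pop(0)                      # step 3: find a free frame
        frames[frame] = backing_store[page]             # step 4: read the page from disk
        entry["frame"], entry["valid"] = frame, True    # step 5: update the page table
        print(f"page fault: loaded page {page} into frame {frame}")
        # step 6: the faulting instruction would now be restarted
    return frames[entry["frame"]]

print(access(1))   # first access faults, then returns the page contents
print(access(1))   # second access is a hit: no fault
```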

• In an extreme case, NO pages are swapped in for a process until they are requested by page faults. This
is known as pure demand paging.


Page Replacement Algorithms

FIFO Page Replacement

• A simple and obvious page replacement strategy is FIFO, i.e. first-in-first-out.


• As new pages are brought in, they are added to the tail of a queue, and the page at the head of the queue is the next victim. In the textbook example (sketched in code below), 20 page requests result in 15 page faults:

• Although FIFO is simple and easy, it is not always optimal, or even efficient.
• An interesting effect that can occur with FIFO is Belady's anomaly, in which increasing the number of
frames available can actually increase the number of page faults that occur.
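As a rough sketch (not from the notes), the function below counts FIFO faults. The reference strings are assumed to be the usual textbook ones, since the figures are not reproduced here; the first run yields the 15 faults mentioned above, and the second pair of runs demonstrates Belady's anomaly.

```python
from collections import deque

# FIFO page replacement: evict the page that has been in memory longest.
# The reference strings below are assumed (the textbook figures are not
# reproduced in these notes).

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:        # memory full: evict the head of the queue
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

REF = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(REF, 3))       # 15 faults for 20 references

# Belady's anomaly: adding a frame can increase the number of faults.
BELADY = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(BELADY, 3))    # 9 faults with 3 frames
print(fifo_faults(BELADY, 4))    # 10 faults with 4 frames
```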

Optimal Page Replacement

• The discovery of Belady's anomaly led to the search for an optimal page-replacement algorithm, which is the one that yields the lowest possible page-fault rate and does not suffer from Belady's anomaly.
• Such an algorithm does exist, and is called OPT or MIN. This algorithm is simply: "Replace the page that will not be used for the longest time in the future."
• For example, Figure 9.14 shows that by applying OPT to the same reference string used for the FIFO example, the minimum possible number of page faults is 9. Since 6 of the page faults are unavoidable (the first reference to each new page), FIFO can be shown to require 3 times as many (extra) page faults as the optimal algorithm. (Note: the book claims that only the first three page faults are required by all algorithms, indicating that FIFO is only twice as bad as OPT.)
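A similar sketch for OPT, again assuming the same textbook reference string; because OPT needs future knowledge, the victim is chosen by looking ahead in the reference string.

```python
# OPT / MIN page replacement: evict the resident page whose next use lies
# farthest in the future. The reference string is assumed to be the same
# textbook string as in the FIFO sketch above.

def opt_faults(refs, num_frames):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            def next_use(p):
                # distance until page p is referenced again (infinite if never)
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.discard(max(frames, key=next_use))   # evict the farthest-used page
        frames.add(page)
    return faults

REF = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(REF, 3))   # 9 faults, the minimum possible for this string
```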

LRU Page Replacement

• The prediction behind LRU (Least Recently Used) is that the page that has not been used for the longest time is the one that will not be used again in the near future. (Note the distinction between FIFO and LRU: the former looks at the oldest load time, the latter at the oldest use time.)


• Some view LRU as analogous to OPT, except looking backwards in time instead of forwards. (OPT has the interesting property that for any reference string S and its reverse R, OPT generates the same number of page faults for S and for R. It turns out that LRU has this same property.)
• Figure 9.15 illustrates LRU for our sample string, yielding 12 page faults (as compared to 15 for FIFO and 9 for OPT).
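And a corresponding LRU sketch on the same (assumed) reference string, keeping pages in recency order with an OrderedDict.

```python
from collections import OrderedDict

# LRU page replacement: evict the page that has gone unused the longest.
# The reference string is assumed to be the same textbook string used in
# the FIFO and OPT sketches above.

def lru_faults(refs, num_frames):
    frames = OrderedDict()                    # keys ordered from least to most recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # mark as most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)        # evict the least recently used page
        frames[page] = True
    return faults

REF = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(REF, 3))   # 12 faults (compared with 15 for FIFO and 9 for OPT)
```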

2.5 Thrashing

Thrashing is a condition in which the system spends a major portion of its time servicing page faults, while the amount of useful processing actually done is negligible.
Causes of thrashing:
1. High degree of multiprogramming.
2. Lack of frames.
3. Page replacement policy.

