Module 4 Memory Management
• Memory management mainly consists of swapping (transferring to and fro) blocks of data between
secondary storage and main memory.
• Main-memory I/O is slow compared to access to CPU registers, so the OS must cleverly time the
swapping to maximise the CPU’s efficiency.
In a multiprogramming system, the “user” part of memory must be further subdivided to accommodate
multiple processes, so memory management is vital in such a system.
Memory must be allocated so that there is a reasonable supply of ready processes to
consume available processor time.
Relocation
• The programmer does not know where the program will be placed in memory when it is
executed;
– it may be swapped out to disk and returned to main memory at a different location
(relocated).
• Memory references must therefore be translated to actual physical memory addresses for the OS
and the processor to access them.
Processor hardware and OS software must be able to translate the memory references into actual
physical memory addresses, reflecting the current location of the program in main memory.
Protection
• Normally, a user process cannot access any portion of the operating system, neither
program nor data.
• Usually a program in one process cannot branch to an instruction in another process or
access the data area of another process.
• The processor must be able to abort such instructions at the point of execution.
• The memory protection requirement must be satisfied by the processor (hardware) rather
than the operating system (software) because the OS cannot anticipate all of the memory
references that a program will make. It is only possible to assess the permissibility of a
memory reference at the time of execution.
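The relocation and protection requirements above are often met together in hardware with a base register and a limit (bounds) register. A minimal sketch in Python, with illustrative addresses:

```python
# Sketch (illustrative addresses): relocation via a base register and
# protection via a limit register, checked on every memory reference.

def translate(logical_addr, base, limit):
    # `base`  : physical address where the process was loaded (relocation)
    # `limit` : size of the process's region in bytes (protection)
    if not (0 <= logical_addr < limit):
        # The hardware would trap here; the OS then aborts the process.
        raise MemoryError("protection violation: address out of bounds")
    return base + logical_addr

# A process loaded at physical address 0x4000 with a 4 KiB region:
print(hex(translate(0x0123, base=0x4000, limit=0x1000)))  # -> 0x4123
```

Any reference at or beyond `limit` is aborted at the point of execution, as the protection requirement demands.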
Sharing
• Processes that are cooperating with each other on some task may need to share access to
the same data structure. So several processes may be allowed to access the same portion
of memory
• It is better to allow each process to access a single shared copy of the program rather than
give each its own separate copy.
Logical Organization
Physical Organization
The programmer cannot be left with the responsibility of managing memory:
The memory available for a program plus its data may be insufficient.
Overlaying allows various modules to be assigned the same region of memory, but it is
time consuming to program.
In a multiprogramming environment, the programmer does not know how much space will be
available. Because of this, it is clear that the task of moving information between the two
levels of memory should be a responsibility of the OS.
Memory Partitioning
The principal operation of memory management is to bring processes into main memory for
execution by the processor.
Types of Partitioning
• Fixed Partitioning
• Dynamic Partitioning
• Simple Paging
• Simple Segmentation
• Virtual Memory Paging
• Virtual Memory Segmentation
Fixed Partitioning
The number and size of partitions are specified at system generation time.
As in the figure above, the operating system occupies some fixed portion of main memory, and
the rest of main memory is available for use by multiple processes.
Partition Sizes
The figure above shows two alternatives for fixed partitioning. One possibility is to
make use of equal-size partitions. In this case, any process whose size is less than or equal to the
partition size can be loaded into any available partition. If all partitions are full and no process is
in the Ready or Running state, the operating system can swap a process out of any of the
partitions and load in another process, so that there is some work for the processor.
There are two difficulties with the use of equal-size fixed partitions:
• A program may be too big to fit into a partition. In this case, the programmer must design
the program with the use of overlays so that only a portion of the program need be in main
memory at any one time.
• Any program, no matter how small, occupies an entire partition. In the example figure above,
there may be a program whose length is less than 2 Mbytes; yet it occupies an 8-Mbyte
partition whenever it is swapped in. This phenomenon, in which there is wasted space
internal to a partition because the block of data loaded is smaller than the partition,
is called internal fragmentation.
Both of these problems can be lessened, though not solved, by using unequal-size partitions
(Figure b). In this example, programs as large as 16 Mbytes can be accommodated without
overlays. Partitions smaller than 8 Mbytes allow smaller programs to be accommodated with
less internal fragmentation.
Placement Algorithm
• As long as there is an available partition, a process can be loaded into it; with
equal-size partitions it does not matter which partition is used.
• If all partitions are occupied with processes that are not ready to run, then one of these
processes must be swapped out to make room for a new process.
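With unequal-size partitions, one common policy (an assumption here; the notes do not fix one) is to place each process in the smallest available partition that can hold it, minimising internal fragmentation. A sketch:

```python
# Sketch (assumed policy): with unequal-size fixed partitions, place a
# process in the smallest available partition that can hold it.

def place(process_size, partitions):
    # partitions: list of (size, occupied) pairs; returns an index or None
    best = None
    for i, (size, occupied) in enumerate(partitions):
        if occupied or size < process_size:
            continue  # partition is busy or too small
        if best is None or size < partitions[best][0]:
            best = i  # smallest adequate partition seen so far
    return best

parts = [(2, False), (4, True), (6, False), (8, False)]  # sizes in Mbytes
print(place(5, parts))  # -> 2 (the 6-Mbyte partition)
```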
Dynamic Partitioning
To overcome some of the difficulties with fixed partitioning, an approach known as dynamic
partitioning was developed. An important operating system that used this technique was IBM’s
mainframe operating system, OS/MVT (Multiprogramming with a Variable Number of Tasks).
Initially, main memory is empty, except for the operating system. When a process is brought into
main memory, it is allocated exactly as much memory as it requires and no more. Thus the
partitions are of variable length and number. An example, using 64 Mbytes of main memory, is
shown in Figure.
The first three processes are loaded in, starting where the operating system ends and occupying
just enough space for each process. This leaves a “hole” at the end of memory that is too small
for a fourth process. At some point, the operating system swaps out process 2, which leaves
sufficient room to load a new process, process 4. Because process 4 is smaller than process 2,
another small hole is created.
As this example shows, this method starts out well, but eventually it leads to a situation in which
there are a lot of small holes in memory. As time goes on, memory becomes more and more
fragmented, and memory utilization declines. This phenomenon is referred to as external
fragmentation, indicating that the memory that is external to all partitions becomes increasingly
fragmented. This is in contrast to internal fragmentation, referred to earlier.
One technique for overcoming external fragmentation is compaction: From time to time, the
operating system shifts the processes so that they are contiguous and so that all of the free
memory is together in one block. For example, compaction of the processes in the figure below will
result in a block of free memory of length 16M. This may well be sufficient to load in an
additional process.
The difficulty with compaction is that it is a time-consuming procedure and wasteful of processor
time. Compaction implies the need for a dynamic relocation capability; that is, it must be
possible to move a program from one region to another in main memory without invalidating
the memory references in the program.
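The compaction step described above can be sketched as sliding every resident process toward low memory so that the free space coalesces into one block; the process names, sizes, and addresses below are hypothetical:

```python
# Sketch (hypothetical layout): compaction slides each resident process
# toward low memory; afterwards the OS must update each process's base
# (relocation) register to the new address.

def compact(processes, os_size, mem_size):
    # processes: list of (name, base, size); addresses/sizes in Mbytes
    next_free = os_size                        # first address after the OS
    moved = []
    for name, base, size in sorted(processes, key=lambda p: p[1]):
        moved.append((name, next_free, size))  # relocate to next_free
        next_free += size
    free_block = mem_size - next_free          # one contiguous hole at the top
    return moved, free_block

procs = [("P1", 8, 20), ("P3", 42, 8), ("P4", 56, 8)]
print(compact(procs, os_size=8, mem_size=64))
# -> ([('P1', 8, 20), ('P3', 28, 8), ('P4', 36, 8)], 20)
```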
Best-fit algorithm
• Chooses the free block that is closest in size to the request.
• Worst performer overall: since the smallest adequate block is found for each process, the
fragments left over are also the smallest, so memory compaction must be done more often.
First-fit algorithm
• Scans memory from the beginning and chooses the first available block that is large enough.
• Fastest approach.
• May leave many processes loaded in the front end of memory that must be searched over when
trying to find a free block.
Next-fit
• Scans memory from the location of the last placement and chooses the next available block
that is large enough.
• More often allocates a block at the end of memory, where the largest free block is usually
found.
• The largest block of memory is thus quickly broken up into smaller blocks, and compaction
is required to obtain a large block at the end of memory.
Which of these approaches is best will depend on the exact sequence of process
swapping that occurs and the size of those processes.
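The three placement algorithms can be sketched over a list of free blocks; `fit` is a hypothetical helper, with holes given as (start, size) pairs:

```python
# Sketch: first-fit, best-fit, and next-fit over a free-block list.
# `last` is where the previous placement ended (used only by next-fit).

def fit(free_blocks, request, policy="first", last=0):
    if policy == "next":  # resume scanning from the last placement, then wrap
        order = sorted(range(len(free_blocks)),
                       key=lambda i: (free_blocks[i][0] < last, free_blocks[i][0]))
    else:
        order = range(len(free_blocks))
    candidates = [i for i in order if free_blocks[i][1] >= request]
    if not candidates:
        return None
    if policy == "best":  # block closest in size to the request
        return min(candidates, key=lambda i: free_blocks[i][1])
    return candidates[0]  # first-fit / next-fit: first adequate block found

holes = [(100, 10), (250, 4), (400, 20)]
print(fit(holes, 3, "first"))            # -> 0 (the 10-unit hole at 100)
print(fit(holes, 3, "best"))             # -> 1 (the 4-unit hole, closest fit)
print(fit(holes, 3, "next", last=300))   # -> 2 (resumes at the hole at 400)
```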
Paging
Both unequal fixed-size and variable-size partitions are inefficient in the use of memory; the
former results in internal fragmentation, the latter in external fragmentation. Suppose, however,
that main memory is partitioned into equal fixed-size chunks that are relatively small, and that
each process is also divided into small fixed-size chunks of the same size. Then the chunks of a
process, known as pages, could be assigned to available chunks of memory, known as frames,
or page frames.
Then, the wasted space in memory for each process is due to internal fragmentation consisting of
only a fraction of the last page of a process. There is no external fragmentation.
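The page-to-frame mapping can be sketched as follows; the 1 KiB page size and the page-table contents are assumptions for illustration:

```python
# Sketch: a logical address splits into a page number (high bits) and an
# offset (low bits); the per-process page table maps page -> frame.

PAGE_SIZE = 1024  # bytes (assumed page size, for illustration)

def paged_translate(logical_addr, page_table):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]             # frame currently holding this page
    return frame * PAGE_SIZE + offset    # offset is unchanged within the frame

# Pages 0..2 of a process placed in non-contiguous frames 5, 2, 7:
table = {0: 5, 1: 2, 2: 7}
print(paged_translate(1500, table))      # page 1, offset 476 -> 2*1024+476 = 2524
```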
SEGMENTATION
A user program can be subdivided using segmentation, in which the program and its associated
data are divided into a number of segments. It is not required that all segments of all programs be
of the same length, although there is a maximum segment length. As with paging, a logical
address using segmentation consists of two parts, in this case a segment number and an offset.
Because of the use of unequal-size segments, segmentation is similar to dynamic partitioning.
The difference, compared to dynamic partitioning, is that with segmentation a program may
occupy more than one partition, and these partitions need not be contiguous.
Segmentation eliminates internal fragmentation but, like dynamic partitioning, it suffers from
external fragmentation. However, because a process is broken up into a number of smaller
pieces, the external fragmentation should be less.
• As in paging, a segmentation scheme would make use of a segment table for each process
and a list of free blocks of main memory. Each segment table entry would have to give
– the starting address in main memory of the corresponding segment
– the length of the segment, to assure that invalid addresses are not used
• When a process enters the Running state, the address of its segment table is loaded into a
special register used by the memory-management hardware.
In our example, we have the logical address 0001001011110000, which is segment number 1,
offset 752. Suppose that this segment is residing in main memory starting at physical address
0010000000100000. Then the physical address is 0010000000100000 + 001011110000 =
0010001100010000
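The worked example can be checked with a short sketch, assuming 16-bit logical addresses with a 4-bit segment number and 12-bit offset, and an assumed segment length:

```python
# Sketch reproducing the worked example: segment number in the top 4 bits,
# 12-bit offset, base and length taken from the process's segment table.

def seg_translate(logical_addr, segment_table):
    seg = logical_addr >> 12             # top 4 bits of a 16-bit address
    offset = logical_addr & 0xFFF        # low 12 bits
    base, length = segment_table[seg]
    if offset >= length:
        raise MemoryError("invalid address: offset beyond segment length")
    return base + offset

# Segment 1 resides at base 0010000000100000; length 0x1000 is assumed:
table = {1: (0b0010000000100000, 0x1000)}
print(bin(seg_translate(0b0001001011110000, table)))
# -> 0b10001100010000 (i.e. 0010001100010000, matching the example)
```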
To summarize, with simple segmentation, a process is divided into a number of segments that
need not be of equal size. When a process is brought in, all of its segments are loaded into
available regions of memory, and a segment table is set up.
Virtual Memory
All memory references within a process are logical addresses that are dynamically translated into
physical addresses at run time. This means that a process may be swapped in and out of main
memory such that it occupies different regions of main memory at different times during the
course of execution. The pages or segments need not be contiguously located in main memory
during execution.
If the preceding two characteristics are present, then it is not necessary that all of the pages or
all of the segments of a process be in main memory during execution. If the segment or page that
holds the next instruction to be fetched, or the next data location to be accessed, is in
main memory, then execution may proceed, at least for a time.
Thus only PART of the program needs to be in memory for execution. Logical address space can
therefore be much larger than physical address space. But pages need to be swapped in and out.
This storage allocation scheme in which secondary memory can be addressed as though it were
part of main memory is called Virtual Memory.
The size of virtual storage is limited by the addressing scheme of the computer system and by the
amount of secondary memory available and not by the actual number of main storage locations.
In simple paging and segmentation schemes, all the pages/segments of a process
must be in main memory for the process to run.
In virtual memory paging or segmentation, by contrast, not all pages/segments of a process
need be in main memory frames for the process to run. Pages/segments may be read in as
needed. Reading a page/segment into main memory may require writing one or more pages out to
disk.
Virtual memory, based on either paging or paging plus segmentation, has become an essential
component of contemporary operating systems.
In simple paging, each process has its own page table, and each page table entry
contains the frame number of the corresponding page in main memory. In the virtual memory
scheme, a unique page table is likewise needed for each process. In this case, however, the page table
entries become more complex. Since only some of the pages of a process may be in main
memory, a bit is needed in each page table entry to indicate whether the corresponding page is
present (P) in main memory or not. If the bit indicates that the page is in memory, then the entry
also includes the frame number of that page. The page table entry also includes a modify (M)
bit, indicating whether the contents of the corresponding page have been altered since the page
was last loaded into main memory. If there has been no change, then it is not necessary to write
the page out when it comes time to replace the page in the frame that it currently occupies. Other
control bits may also be present.
Page Fault
• With each page table entry a present (P) bit is associated (1 ⇒ in-memory, 0 ⇒ not in
memory).
• Initially, the P bit is set to 0 in all entries.
• During address translation, if the P bit in the page table entry is 0, a page fault occurs.
• The first reference to an unmapped page thus interrupts the OS with a page fault.
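The present-bit check during address lookup can be sketched as follows; the entry field names are illustrative, not a real hardware format:

```python
# Sketch (illustrative entry format): each page table entry carries a
# present (P) bit, a modify (M) bit, and a frame number; referencing a
# non-present page raises a page fault for the OS to service.

class PageFault(Exception):
    pass

def lookup(page_table, page):
    entry = page_table[page]
    if not entry["present"]:           # P bit is 0 -> trap to the OS
        raise PageFault(page)
    return entry["frame"]

table = {0: {"present": True,  "frame": 3,    "modified": False},
         1: {"present": False, "frame": None, "modified": False}}

print(lookup(table, 0))                # -> 3
try:
    lookup(table, 1)                   # first reference to an unmapped page
except PageFault as fault:
    print(f"page fault on page {fault.args[0]}")  # OS would now load page 1
```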
The fetch policy determines when a page should be brought into main memory. The two
common alternatives are demand paging and prepaging. With demand paging, a page is
brought into main memory only when a reference is made to a location on that page.
Prepaging exploits the characteristics of most secondary memory devices, such as disks, which
have seek times and rotational latency. If the pages of a process are stored contiguously in
secondary memory, then it is more efficient to bring in a number of contiguous pages at one time
rather than bringing them in one at a time over an extended period.
Replacement Policy
This deals with the selection of a page in main memory to be replaced when a new page
must be brought in. Several interrelated concepts are involved:
• How many page frames are to be allocated to each active process
• Whether the set of pages to be considered for replacement should be limited to those of
the process that caused the page fault or encompass all the page frames in main memory
• Among the set of pages considered, which particular page should be selected for
replacement
Basic Algorithms: There are certain basic algorithms that are used for the selection of a page to
replace. Replacement algorithms include
• Optimal
• Least recently used (LRU)
• First-in-first-out (FIFO)
• Clock
Performance issue: we need an algorithm that results in the minimum number of page faults and
page replacements.
• Optimal
Replace the page that will not be used again the farthest time into the future
• LRU - Least Recently Used
Replace the page that has not been used for the longest time
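FIFO and LRU can be compared by counting page faults on the same reference string; the reference string and three-frame configuration below are assumptions for illustration:

```python
# Sketch: count page faults for FIFO and LRU on one reference string.
from collections import OrderedDict, deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest arrival
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)                # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)       # evict least recently used
            frames[p] = True
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # -> 9 7
```

On this string LRU incurs fewer faults than FIFO, reflecting its use of recency information.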
Locality