Chapter 5 Memory Management
Basic Memory Management
Background:
• Memory is an important resource of the computer system that must be managed by the operating system. To execute a program, the user needs to keep it in main memory. Main memory is volatile, so the user stores the program on some secondary storage, which is non-volatile.
• Every process needs main memory, since a process's code, stack, heap (dynamically allocated structures), and data (variables) must all reside in memory.
• Management of main memory is required to support multiprogramming.
• Many executable processes exist in main memory at any given time.
• Different processes in main memory have different address spaces. The memory manager is the module of the operating system responsible for managing memory.
• Programs that have completed execution move out of main memory, and processes suspended waiting for I/O completion can be swapped out to secondary storage to free main memory for other processes.
• New processes must be loaded into main memory. If the available main memory is not sufficient to hold all processes, swapping between main memory and secondary memory is done: the memory manager moves processes back and forth between memory and disk during execution.
Mono-programming
• Only one program resides in main memory at a given point of time.
• The operating system resides in one portion of memory, and the other portion is fully devoted to the single executing process.
• This system was simple to design but did not support multiprogramming. In this approach no relocation was needed, as the program could always be loaded at the same location.
Multiprogramming
• Multiprogramming is required to support multiple processes simultaneously. Since multiple processes are resident in memory at the same time, processor utilization increases: even when some processes are I/O bound, the CPU does not remain idle.
• CPU utilization increases because the CPU executes multiple processes on a time-sharing basis. Multiprogramming gives the illusion of running multiple processes at once and provides users with interactive response.
Multiprogramming with Fixed sized Partitions
• In this scheme memory is divided into a number of fixed-sized
partitions.
• Each partition may contain exactly one process.
• The number of partitions and their sizes are defined by the system administrator when the system starts up. The degree of multiprogramming is higher when more partitions are created.
• When a partition is free, a process is selected from the input
queue and is loaded into the free partition. When the process
completes the execution and terminates, the partition becomes
free for another process. The operating system keeps track of
free and occupied partitions by maintaining a table indicating which
parts of memory are free and which are occupied.
• Fixed sized partitions are relatively simple to implement.
• A problem with fixed-size partitions is that if the program size is larger than the partition size, the program cannot be stored in the partition.
• E.g., the figure shows fixed-size partitions of size 8 MB.
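The bookkeeping described above can be sketched in a few lines of Python. This is a toy model for illustration only; the class and method names are invented, not from any real operating system.

```python
class FixedPartitions:
    """Toy model: memory split into equal-sized partitions, each holding
    at most one process (a sketch, not a real allocator)."""

    def __init__(self, num_partitions, partition_size):
        self.size = partition_size
        self.slots = [None] * num_partitions      # None marks a free partition

    def load(self, pid, process_size):
        """Place a process in the first free partition; fail if it is too big."""
        if process_size > self.size:
            return False                          # program larger than partition
        for i, occupant in enumerate(self.slots):
            if occupant is None:
                self.slots[i] = (pid, process_size)
                return True
        return False                              # every partition occupied

    def free(self, pid):
        """A terminating process makes its partition free again."""
        for i, occupant in enumerate(self.slots):
            if occupant is not None and occupant[0] == pid:
                self.slots[i] = None
                return True
        return False

    def internal_fragmentation(self):
        """Space wasted inside occupied partitions."""
        return sum(self.size - occ[1] for occ in self.slots if occ is not None)
```

With three 8 MB partitions, for example, a 5 MB process wastes 3 MB inside its partition (internal fragmentation), and a 9 MB process cannot be loaded at all.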
Multiprogramming with Variable sized Partitions
• In a variable-sized partition, memory is divided into the
partitions of different sizes.
• Processes are loaded into the partition whose size is closest to their need, which makes it easy to ensure a best fit.
• A queue is organized for each partition size, where processes wait for that partition. Partitions are allocated to processes with a best-fit policy: allocate the smallest partition that is big enough for the process to reside in.
• However, such an allocation may reduce performance, as processes can queue up for their best-fit partition even though other partitions are free. For example, many processes may be queued up for a partition of size 8 MB while no processes are queued for the partition of size 16 MB.
• Fig. shows variable size partitions.
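The best-fit search over variable-sized partitions can be sketched as follows. This is a minimal illustration; the `(size, occupied)` partition layout is a hypothetical representation, not the book's notation.

```python
def best_fit(partitions, need):
    """Return the index of the smallest free partition of at least `need`
    units, or None if nothing fits. `partitions` is a list of
    (size, occupied) pairs."""
    best = None
    for i, (size, occupied) in enumerate(partitions):
        # candidate must be free, big enough, and smaller than the best so far
        if not occupied and size >= need and (best is None
                                              or size < partitions[best][0]):
            best = i
    return best
```

For instance, a 10 MB process offered partitions of 8, 16 and 12 MB is placed in the 12 MB partition, wasting the least space.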
Fragmentation
• In a fixed size partitioning of memory, if partition size is larger than program
size, some space of the partition will be unused within that partition. This is
called Internal fragmentation.
• The best fit policy of variable size of partitioning technique minimizes internal
fragmentation.
• In both fixed and variable size partitioning it may happen that a partition remains empty even though processes are waiting in other partitions' queues, leading to external fragmentation.
• Dynamic partition technique can reduce the internal and external
fragmentations.
• This technique is basically variable partitioning with a variable number of partitions determined dynamically. Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
• When a process arrives and needs memory, the system searches for a hole large enough to fit the process. This technique ensures efficient use of main memory; there is no internal fragmentation. However, if many small holes are scattered, they cannot be allocated to one large process.
• Initially 64 MB main memory is available. 8 MB is allocated to operating
system as shown in Fig. (a). For user processes 56 MB memory is available and
considered as one large single hole.
• Processes P1, P2 and P3 are loaded, and sufficient memory space is allocated to them, creating a hole of size 4 MB at the end of memory. At some point of time none of the allocated processes is ready.
• The operating system swaps out P2, and in its place a new process P4 of size 18 MB is swapped in, as in Fig. (e); a hole of 2 MB is created. After some time, none of the processes in main memory is ready, and only P2, in the ready-suspend state, is available.
• To swap P2 in, there is not sufficient space available, so the operating system swaps P1 out and swaps P2 back in, as in Fig. (f).
• The example shows that, there are many scattered small size holes. These holes
cannot be allocated to the process of larger size than these holes. This is called
external fragmentation.
• The solution to this problem is compaction. Compaction is a technique to convert many small scattered holes into one large contiguous hole.
• Compaction is the relocation of the various processes so as to accumulate all the free space in one place. Compaction costs processor time, so it makes less efficient use of the processor. The figure shows the memory after compaction of the scattered small holes.
• If processes P4 and P3 are moved up, one large hole results at the end of memory, as in Fig. (b).
• If processes P2, P4 and P3 are moved downward, one large hole of 8 MB results at the start of the user memory area, as in Fig. (c). However, in this case 48 MB of processes must be moved, compared to 28 MB in Fig. (b); Fig. (d) shows the 28 MB shifting of processes.
• Compaction should ensure the moving of processes having minimum total size.
• Compaction increases the degree of multiprogramming, because processes can be loaded into the memory made available by combining many scattered holes. If relocation is done at compile time (static relocation), compaction is not possible; it is possible only if relocation is dynamic and done at execution time.
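The effect of compaction can be sketched as sliding every process toward low memory and merging all holes at the end. This is a toy model; the block layout and the names are invented for illustration, not taken from the figures.

```python
def compact(blocks):
    """Slide all processes to the start of memory, merging holes at the end.

    `blocks` is an ordered list of (kind, name, size) tuples, where kind is
    "P" (process) or "H" (hole). Returns the compacted list and the total
    size of processes that had to be relocated."""
    new, moved, free = [], 0, 0
    seen_hole = False
    for kind, name, size in blocks:
        if kind == "H":
            free += size
            seen_hole = True
        else:
            if seen_hole:          # any hole before it means this process shifts
                moved += size
            new.append(("P", name, size))
    if free:
        new.append(("H", "hole", free))
    return new, moved
```

Counting `moved` makes the trade-off in the text concrete: compaction should pick the direction that moves the minimum total size of processes.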
Difference Between Internal and External Fragmentation
Free Space Management Techniques
• The OS must manage memory when it is allocated dynamically. There are two approaches to keeping track of memory usage: bitmaps and linked lists of free and allocated memory segments.
Bit Map: In this method, every allocation unit is represented by a 0 or 1 bit. A free allocation unit is represented by bit 0 and an allocated unit by bit 1.
• In this approach, even with an allocation unit as small as 4 bytes, 32 bits of memory require only 1 bit of the map. A memory of 32n bits uses n map bits, so the bitmap takes up only 1/33 of memory. If the allocation unit is chosen larger, the bitmap will be smaller, but appreciable memory may be wasted in the last unit of the process if the process size is not an exact multiple of the allocation unit.
• Advantage: A bitmap provides a simple way to keep track of memory words in a
fixed amount of memory because the size of the bitmap depends only on the size
of memory and the size of the allocation unit.
Disadvantages: When it has been decided to bring a k-unit process into memory, the memory manager must search the bitmap to find a run of k consecutive 0 bits in the map, and searching a bitmap for a run of a given length is a slow operation.
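The slow search this disadvantage refers to is a linear scan for k consecutive 0 bits, which can be sketched as follows (an illustrative sketch, not a real allocator):

```python
def find_run(bitmap, k):
    """Scan a free-space bitmap (0 = free unit, 1 = allocated unit) for a
    run of k consecutive free units; return the start index, or None if
    no such run exists. The whole-map linear scan is exactly why bitmap
    search is slow."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i          # a new run of free units begins here
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0                # an allocated unit breaks the run
    return None
```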
Free Space Management Techniques
• Linked List: Another way of keeping track of memory is to maintain a linked list of
allocated and free memory segments, where a segment either contains a process or is an
empty hole between two processes.
• The memory of Fig (a) is represented in Fig (c) as a linked list of segments. Each entry in
the list specifies a hole (H) or process (P), the address at which it starts, the length, and a
pointer to the next item.
• In this example, the segment list is kept sorted by address. Sorting this way has the
advantage that when a process terminates or is swapped out, updating the list is
straightforward. A terminating process normally has two neighbours (except when it is at
the very top or bottom of memory).
• These may be either processes or holes, leading to the four combinations shown in the figure. In Fig. (a), updating the list requires only replacing a P by an H. In Fig. (b) and Fig. (c), two entries are coalesced into one, and the list becomes one entry shorter. In Fig. (d), three entries are merged and two items are removed from the list.
• When the processes and holes are kept on a list sorted by address, several algorithms can
be used to allocate memory for created or existing processes.
• Advantages: The segment list is kept sorted by address. Sorting this way has the advantage that when a process terminates or is swapped out, updating the list is straightforward.
Disadvantages: This method is less efficient, as the linked list must be maintained and searched on every allocation.
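Hole coalescing on process termination, covering the four neighbour combinations, can be sketched like this. The list-of-lists segment layout is an illustrative stand-in for the book's (H/P, start, length) entries.

```python
def terminate(segments, index):
    """Turn the process entry at `index` into a hole and coalesce it with
    neighbouring holes. `segments` is a list of [kind, start, length]
    entries ("P" or "H"), kept sorted by start address."""
    segments[index][0] = "H"                       # replace P by H
    # merge with a following hole, if any
    if index + 1 < len(segments) and segments[index + 1][0] == "H":
        segments[index][2] += segments[index + 1][2]
        del segments[index + 1]
    # merge with a preceding hole, if any
    if index > 0 and segments[index - 1][0] == "H":
        segments[index - 1][2] += segments[index][2]
        del segments[index]
    return segments
```

When the terminating process has holes on both sides, three entries collapse into one, exactly the Fig. (d) case.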
Virtual Memory
• There are many cases where entire program is not needed in main memory at a time.
• In many cases even though entire program is needed, it may not all be needed at the same
time.
• Application programs get the impression of an available contiguous working address space due to the idea of virtual memory.
• In reality, this working memory can be physically fragmented and may even overflow onto disk storage.
• This technique makes programming of large applications easier and uses real physical memory more efficiently than systems without virtual memory.
• Although an executing process is not entirely present in memory, it can complete its execution thanks to the virtual memory technique.
• Virtual memory permits the execution of larger programs even though only a smaller physical memory is available.
• This means programs larger than the available physical memory can complete their execution.
• The virtual memory concept separates the user's logical memory from the actual physical memory.
• This separation offers a very large virtual memory to programmers even though the actual physical memory is small.
Paging
• Paging is a memory management technique which allows the physical address space of a process to be noncontiguous.
• In some sense, the paging mechanism is similar to reading a book. When we read a book, we only need to see the currently open page.
• All the other pages are not visible to us and remain closed. In the same manner, even when we have a large program, the processor only needs a small set of instructions to execute at any time.
• In fact, all the instructions the processor needs to execute are within a small proximity of each other.
• This is like a page which contains all the statements we are currently reading.
• In this way paging allows us to keep just the parts of a process that we are using in memory, and the rest on the disk.
Basic Operation: The problem of external fragmentation is avoided by using paging.
• Paging is implemented by integrating the paging hardware and operating system.
• In this technique, logical memory is divided into blocks of the same size called pages.
• Physical memory is divided into fixed-sized blocks called frames (the size is a power of 2). The page size and the frame size are equal.
• Initially all pages remain on secondary storage (the backing store). When a process needs to be executed, its pages are loaded into any available memory frames from secondary storage.
Fig.: Paging model of logical and physical memory
Paging Hardware
The following basic operations are performed in paging:
1. CPU generates logical address and it is divided into two parts: a page
number (p) and a page offset (d).
2. The page number is used as an index into a page table.
3. The base address of each page in physical memory is maintained by page
table.
4. The combination of base address with page offset defines the physical
memory address that is sent to the memory unit.
5. The physical location of the data in memory is therefore at offset d in page
frame f.
• Because of paging, the user program views memory as one single contiguous space, giving the illusion that memory contains only this one program.
• But the user program is actually spread throughout main memory (physical memory), and logical addresses are translated into physical addresses.
• Fig. shows the operation of the paging hardware.
Paging Example
• Consider the following example in Fig. 5.2.3. Let the page size be 4 bytes and the available memory 32 bytes (8 frames).
• Page 0 is in frame number 5, page 1 is in frame number 7, and page 2 is in frame number 2.
• Logical address 4 is in page 1 with offset 0. Page 1 is mapped to frame 7, so the physical address is 28 = (7 * 4) + 0.
• Logical address 10 is in page 2 with offset 2. Page 2 is mapped to frame 2, so the physical address is 10 = (2 * 4) + 2.
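The address split and lookup from this example can be reproduced directly. The page-table values below come from the example above; the function itself is just a sketch of what the paging hardware does.

```python
def translate(logical_addr, page_table, page_size):
    """Split a logical address into page number p and offset d, look up
    the frame for p, and recombine: physical = frame * page_size + d."""
    p, d = divmod(logical_addr, page_size)
    return page_table[p] * page_size + d
```

Real hardware uses a power-of-2 page size so the split is a simple bit-field extraction rather than a division.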
Memory Protection and Sharing
• The page table contains the frame number, so on each page reference the page table is consulted to get the correct frame number.
• A protection bit is associated with each page to denote whether the page is read-write or read-only.
• If an attempt is made to write to a read-only page, a hardware trap to the operating system occurs.
• A legal page lies in the logical address space of the process and is marked in the page table with the valid (v) bit. Otherwise the page is illegal and is marked with the invalid (i) bit.
• Paging also provides advantages of sharing of common code.
• Re-entrant code is never modified during execution. If many users share this code, only one copy needs to be kept in main memory.
• The page tables of the different users point to the same physical frames for the shared code. The data used by different users differ, so data pages are mapped to different frames.
• Simply by having page table entries of different processes point to the same page frame, the operating system can create regions of memory that are shared by two or more processes.
Segmentation
• In segmentation, a user program and its associated data can be subdivided
into a number of segments.
• Different segments of the programs can have different length.
• Although there is a maximum segment length, the length depends on purpose
of the segment in the program.
• Segments are arbitrarily-sized contiguous ranges of the address space.
• They are a tool for structuring the application. As such, the programmer or
compiler may decide to arrange the address space using multiple segments.
• For example, this may be used to create some data segments that are shared
with other processes, while other segments are not.
• The compiler can create different segments for global variables, code, thread stacks, libraries, etc.
• In segmentation, a program address space may occupy more than one partition, and these partitions need not be contiguous.
• Internal fragmentation is eliminated in segmentation but, like dynamic
partitioning, it suffers from external fragmentation.
Segmentation Example
In segmentation:
• A logical address is divided into two parts: a segment number (s) and an offset (d).
• Each entry of the segment table contains a segment base and a segment limit.
• The segment base indicates the starting physical address of the segment in main memory, and the segment limit denotes the length of the segment.
• The segment number is used as an index into the segment table.
• The offset (d) must be between 0 and the segment limit; otherwise an error occurs, indicating an attempt to access beyond the end of the segment. The figure shows 4 segments of a program having different sizes; the segment table contains the base and limit of each segment.
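The base/limit check can be sketched as below. The segment-table values used in the test are hypothetical, chosen only to illustrate the calculation.

```python
def seg_translate(s, d, segment_table):
    """Map (segment s, offset d) to a physical address using a segment
    table of (base, limit) pairs; an offset beyond the limit traps."""
    base, limit = segment_table[s]
    if not 0 <= d < limit:
        # in hardware this is a trap: addressing beyond the end of the segment
        raise MemoryError("offset beyond end of segment")
    return base + d
```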
Page Fault Demand Paging
• Demand paging is a paging system with swapping, where pages are brought into main memory from secondary storage only when they are needed. When a process must be executed, the memory manager swaps it into memory.
• Instead of swapping the entire process into memory, demand paging swaps in only those pages which are required for execution at that time.
• Accessing a page fails if it is not presently mapped to a frame. This causes a page fault, which is a special type of interrupt.
• To handle this interrupt, the OS handler initiates a disk operation to read the required page into memory. After this, the mapping of the page to a free frame is carried out. During all this the process remains blocked.
• As soon as the page becomes available, the process blocked during the disk operation is unblocked and placed on the ready queue. When the scheduler schedules this process, it restarts execution with the instruction that caused the page fault.
• Execution continues until all needed instructions are in memory. This approach of fetching pages only as they are needed for execution is called demand paging. Before swapping a process in, the pager estimates which pages will be used before the process is swapped out again.
• Fig. 5.2.5 shows a page table with valid and invalid pages. Pages which are in main memory are marked valid; pages on secondary storage are marked invalid.
• Page 0, Page 2 and Page 4 are in main memory, so they are marked with the valid (v) bit. Page 1, Page 3, Page 5, Page 6 and Page 7 are on secondary storage, so they are marked with the invalid (i) bit.
• Contiguous allocation of memory to the pages is not necessary: any page can be mapped into any available page frame. The page table maintains the mapping of pages to page frames, and this gives the impression of one contiguous block of memory.
• Advantages of demand paging: it provides a large virtual memory; memory is utilized more efficiently; and support for a high degree of multiprogramming is very good.
• Disadvantage of demand paging: page interrupts require more table handling and processor overhead compared to simple paged management techniques.
Page Fault and Instruction Restarts
• If the page is on secondary storage and not mapped to a page frame in main memory, a page fault occurs (it is a trap to the operating system). When a page fault occurs, the operating system is responsible either for killing the process, in the case of an invalid memory reference, or for locating the needed page, loading it into an empty page frame, and then restarting the faulting instruction.
• The operating system handles a page fault with the following procedure:
1. The internal table kept with the PCB of the process is checked to see whether the reference is valid or invalid.
2. For an invalid reference, the process is terminated. For a valid reference to a page not yet in main memory, the page must be brought in from secondary storage.
3. An empty frame is found from the free-frame list for the new page.
4. A disk operation is scheduled to read the page from secondary storage into that frame (if the disk is busy, the operating system has to wait).
5. When the read of the page from secondary storage completes, the internal table and the page table of the process are updated to indicate that the page is now in main memory.
6. Execution of the interrupted instruction is restarted, now using the newly transferred page.
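The steps above can be condensed into a toy handler. All the structures here are illustrative stand-ins: a validity set plays the role of the PCB's internal table, a list plays the free-frame list, and a dict plays the disk.

```python
def handle_page_fault(page, page_table, valid_pages, free_frames, memory, disk):
    """Toy page-fault handler following the numbered steps above."""
    if page not in valid_pages:                       # step 1: check PCB table
        raise RuntimeError("invalid reference: terminate process")   # step 2
    if page in page_table:
        return page_table[page]                       # resident: no fault at all
    frame = free_frames.pop(0)                        # step 3: take a free frame
    memory[frame] = disk[page]                        # step 4: read page from disk
    page_table[page] = frame                          # step 5: update page table
    return frame                                      # step 6: restart instruction
```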
Page Replacement Algorithms
• After page-fault handling, to accommodate the new page it may be necessary to replace an existing page in main memory, since it is not possible for frames to always remain free. For that purpose there are many page replacement algorithms.
• All these algorithms can be evaluated by running them on a particular string of memory references; the number of page faults should be minimum for good performance. Such a string of memory references is called a reference string.
• A reference string can be generated artificially, or by recording the address of each memory reference while tracing the system.
• An important factor in determining the number of page faults is the number of page frames available. The page-fault counts obtained hold for a particular reference string and page replacement algorithm with a given number of available page frames.
• The number of frames available is inversely related to the number of page faults: in general, if more frames are available, fewer faults occur.
FIFO Algorithm
• The simplest page-replacement algorithm works on a first-in, first-out (FIFO) basis: it throws out pages in the order in which they were brought in. The time at which a page was brought into main memory is associated with each page, and the algorithm always chooses the oldest page for replacement.
• Since replacement is FIFO, a queue can be maintained holding all the pages in main memory. This algorithm does not care which pages are accessed frequently and which are not. However, it was used in Windows 2000.
• Consider the reference string 5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3 with 3 available frames.
• Since the three frames are initially free, the first references (5, 0, 2) cause page faults, and the pages are brought into these 3 free frames. The reference to page 3 again causes a fault; to bring page 3 into memory, as per FIFO, page 5 is replaced. The next reference is to page 0, which is already in memory. To bring page 1 into memory, again as per the FIFO policy, page 0 is replaced. Page 3 is already in memory. The reference to page 4 again causes a fault; to bring page 4 into memory, as per the FIFO policy, page 2 is replaced. This process continues as shown in the figure, which shows which pages are in the 3 frames whenever a fault occurs. In total, 11 faults occur.
• This algorithm sometimes suffers from a result called Belady's anomaly: the page-fault rate may increase even though the number of allocated frames increases.
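The trace above, and Belady's anomaly, can both be checked with a small simulator (a sketch; `collections.deque` serves as the FIFO queue of resident pages):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement for a reference string."""
    resident, queue, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue                           # hit: nothing to do
        faults += 1
        if len(resident) == num_frames:
            resident.remove(queue.popleft())   # evict the oldest page
        resident.add(page)
        queue.append(page)
    return faults
```

Running it on the chapter's reference string with 3 frames gives 11 faults, matching the text; the classic string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 exhibits Belady's anomaly (9 faults with 3 frames but 10 with 4).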
LRU Algorithm
• The time of a page's last use is associated with each page. When a page must be replaced, LRU chooses the page that was used farthest back in the past, i.e., it replaces the page whose last use is farthest from the current time.
• LRU is a good approximation to the optimal algorithm: this algorithm looks backward in time, while the optimal replacement algorithm looks forward in time.
• This algorithm can be implemented with some hardware support and is considered a good solution for page replacement. It does not suffer from Belady's anomaly.
• We again consider the same reference string 5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3. The following is the result of applying the LRU page replacement algorithm.
• Since the first three frames are initially free, the references (5, 0, 2) cause page faults, and the pages are brought into these free frames. The reference to page 3 again causes a fault; to bring page 3 into memory, as per the LRU policy, page 5 is replaced because it is the least recently used page. The next reference is to page 0, which is already in memory. To bring page 1 into memory, again as per the LRU policy, page 2 is replaced. The next reference is to page 3, which is already in memory. This process continues as shown in Fig. 5.3.2, causing 11 page faults in total.
• Implementation of LRU using a stack: initially keep all the page numbers on a stack; whenever a page is referenced, remove it from the stack and place it on the top. The least recently used page is then always at the bottom.
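The stack implementation described above maps directly to a Python list whose back is the most recently used page (a sketch for counting faults, not a hardware design):

```python
def lru_faults(refs, num_frames):
    """Count page faults under LRU. `stack` orders resident pages from
    least recently used (front) to most recently used (back)."""
    stack, faults = [], 0
    for page in refs:
        if page in stack:
            stack.remove(page)        # referenced page moves to the top
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.pop(0)          # evict the least recently used page
        stack.append(page)
    return faults
```

On the chapter's reference string with 3 frames this reproduces the 11 faults from the text.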
Optimal Page Replacement Algorithm
• Compared to all other page-replacement algorithms, this algorithm has the lowest page-fault rate. It replaces the page that will not be used for the longest period of time.
• This delays each unavoidable page fault as long as possible, and thus decreases the total number of page faults.
• Since the algorithm replaces a page on the basis of the future use of pages, it is impossible to implement: the optimal algorithm is unrealizable because it is impossible to determine in advance how many instructions will execute before a page is referenced again. This algorithm does not suffer from Belady's anomaly.
• Again consider the same reference string 5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3.
• Since the first three frames are initially free, the references (5, 0, 2) cause page faults, and the pages are brought into these free frames (Fig. 5.3.3: optimal page replacement algorithm). The reference to page 3 again causes a fault; to bring page 3 into memory, as per the optimal replacement policy, page 2 is replaced because it is the page that will not be used for the longest period of time. The next reference is to page 0, which is already in memory. To bring page 1 into memory, again as per the optimal replacement policy, page 0 is replaced. The next reference, to page 3, is already in memory. The reference to page 4 again causes a fault; since page 1 will no longer be referenced, it is replaced. This process continues as shown in Fig. 5.3.3, causing 8 page faults in total.
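The optimal policy can still be simulated offline, when the whole reference string is known in advance — which is exactly why it is unrealizable in a running system. A sketch:

```python
def optimal_faults(refs, num_frames):
    """Count page faults under Belady's optimal policy: on a fault, evict
    the resident page whose next use lies farthest in the future (or
    never occurs again)."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == num_frames:
            future = refs[i + 1:]
            # pages never used again rank past every real future index
            victim = max(resident,
                         key=lambda p: future.index(p) if p in future
                                       else len(future))
            resident.remove(victim)
        resident.add(page)
    return faults
```

On the chapter's reference string with 3 frames this yields the 8 faults claimed in the text, against 11 for both FIFO and LRU.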