Memory Management
Basic Hardware
• A program must be brought (from disk) into memory and placed within a process for it to be run.
• Main memory and registers are the only storage that the CPU can access directly.
• Registers are accessible within one CPU clock cycle.
• Completing a memory access can take many cycles of the CPU clock.
• Cache sits between main-memory and CPU registers to ensure fast memory access.
• Protection of memory is required to ensure correct operation.
• The base register holds the smallest legal physical address, and the limit register specifies the size of the range of addresses. A pair of base and limit registers defines the logical (virtual) address space, as shown in the figure below:
Protection of memory space is accomplished by having the CPU hardware compare every address generated in user mode with the registers. The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction that can be executed only in kernel mode.
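The hardware check described above can be pictured with a small sketch. This is not part of the original notes; the base and limit values are illustrative, and the trap is simulated by printing an error and exiting.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative values for the base and limit registers
   (in a real system these are loaded only by the OS in kernel mode). */
static const unsigned int base_reg  = 300040;
static const unsigned int limit_reg = 120900;

/* Hardware-style check: every address generated in user mode must lie
   in [base, base + limit); otherwise the CPU traps to the OS. */
void check_address(unsigned int addr)
{
    if (addr < base_reg || addr >= base_reg + limit_reg) {
        fprintf(stderr, "trap: addressing error at %u\n", addr);
        exit(EXIT_FAILURE);          /* the OS treats this as a fatal error */
    }
    printf("address %u is legal\n", addr);
}

int main(void)
{
    check_address(300040 + 500);     /* legal   */
    check_address(500);              /* illegal -> trap */
    return 0;
}
```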
Address Binding
Address binding of instructions and data to memory addresses can happen at three different stages (Figure 3.3); it is the mapping from one address space to another.
1) Compile Time
• If memory-location known a priori, absolute code can be generated.
• Must recompile code if starting location changes.
2) Load Time
• Must generate relocatable code if memory-location is not known at compile time. In this case, final
binding is delayed until load time.
3) Execution Time
• Binding delayed until run-time if the process can be moved during its execution from one memory-
segment to another.
Logical Address
An address generated by the CPU is referred to as a logical (or virtual) address.
Physical Address
An address seen by the memory unit, i.e. the one actually loaded into the memory-address register, is referred to as a physical address.
The user program generates logical addresses and believes that it runs in this logical address space, but the program needs physical memory for its execution; therefore logical addresses must be mapped to physical addresses by the MMU before they are used.
Comparison: the user can view the logical address of a program but can never view its physical address; the physical address is reached only through the corresponding logical address.
Dynamic Loading
All programs are loaded into main memory for execution. Sometimes the complete program is loaded into memory, but sometimes a certain part or routine of the program is loaded into main memory only when it is called by the program; this mechanism is called dynamic loading.
The advantage is that a routine is loaded only when it is needed. This method is useful when large amounts of code are required to handle infrequently occurring cases, such as error routines.
Dynamic loading does not require special support from the OS; it is the responsibility of the users to design their programs to take advantage of this method (see the sketch below).
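On POSIX systems this behaviour can be approximated with the dlopen/dlsym interface: the routine's library is loaded only the first time it is actually needed. This is a minimal sketch, not from the notes; the library name errmod.so and the symbol handle_error are hypothetical.

```c
#include <stdio.h>
#include <dlfcn.h>      /* dlopen, dlsym, dlclose (link with -ldl on older glibc) */

typedef void (*err_fn)(int);

/* Load the error-handling routine only when an error actually occurs. */
void report_error(int code)
{
    /* Hypothetical library and symbol names, used for illustration only. */
    void *handle = dlopen("./errmod.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return;
    }
    err_fn handle_error = (err_fn) dlsym(handle, "handle_error");
    if (handle_error)
        handle_error(code);          /* the routine is now in memory */
    dlclose(handle);
}
```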
Dynamic linking
Establishing the links between all the modules or functions of a program so that program execution can proceed is called linking.
Linking takes the object code generated by the assembler (or compiler) and combines it to produce the executable module.
1. The key difference between linking and loading is that linking generates the executable file of a program, whereas loading places the executable file obtained from linking into main memory for execution.
2. Linking takes as input the object modules of a program generated by the assembler, whereas loading takes as input the executable module generated by linking.
3. Linking combines all object modules of a program to generate the executable module, and it also links the library functions referenced in the object modules to the built-in libraries of the high-level programming language. Loading, on the other hand, allocates space to the executable module in main memory.
Shared Libraries
With dynamic linking, library routines are not copied into every executable image. Instead, a small stub is included for each library-routine reference; when the routine is first called, the stub locates the routine in a shared library (loading it into memory if necessary) and replaces itself with the routine's address. In this way a single copy of the library code can be shared by many processes.
Swapping
Swapping is one of several methods of memory management. In swapping, an idle or blocked process in main memory is swapped out to the backing store (disk), and a process on the disk that is ready for execution is swapped into main memory.
In simple terms, swapping is the exchange of data between the hard disk and the RAM.
Working:
• A process must be in main memory before it starts execution, so a process that is ready for execution is brought into main memory. Now suppose a running process gets blocked.
• The memory manager temporarily swaps that blocked process out to the disk. This makes space for another process in main memory.
• The memory manager then swaps in a process that is ready for execution from the disk into main memory.
• The swapped-out process is brought back into main memory when it again becomes ready for execution.
• Ideally, the memory manager swaps processes so fast that main memory always has processes ready for execution.
Whenever a process with higher priority arrives, the memory manager swaps out the process with the lowest priority to the disk and swaps the process with the highest priority into main memory for execution. When the highest-priority process finishes, the lower-priority process is swapped back into memory and continues to execute. This variant of swapping is termed roll out, roll in (or swap out, swap in).
Note: The major part of swap time is transfer time; the total transfer time is directly proportional to the amount of memory swapped.
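A small worked calculation of the point made in the note above; the process size, transfer rate, and latency figures are assumed purely for illustration.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed figures, for illustration only. */
    double process_mb = 100.0;   /* amount of memory to swap (MB)  */
    double rate_mb_s  = 50.0;    /* disk transfer rate (MB/s)      */
    double latency_s  = 0.008;   /* average disk latency (seconds) */

    /* Transfer time grows linearly with the amount of memory swapped. */
    double one_way = latency_s + process_mb / rate_mb_s;
    double total   = 2.0 * one_way;   /* swap out + swap in */

    printf("swap-out time  : %.3f s\n", one_way);
    printf("total swap time: %.3f s\n", total);
    return 0;
}
```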
Advantages of Swapping
• It increases the degree of multiprogramming, since more processes can be kept runnable than can fit in main memory at one time.
• It improves main-memory utilization and allows higher-priority processes to run without waiting for lower-priority processes to finish.
Disadvantages of Swapping.
• In the case of heavy swapping activity, if the computer system loses power, the user might lose all the
information related to the program.
• If the swapping algorithm is not good, the overall method can increase the number of page faults and decrease overall processing performance.
• Inefficiency may arise when a resource or a variable is commonly used by the processes that are participating in swapping.
Contiguous Memory Allocation
In contiguous memory allocation, all the memory space allocated to a process remains together in one place; freely available memory partitions are not scattered here and there across the whole memory space.
In non-contiguous memory allocation, the available free memory is scattered here and there, and all of the free space is not in one place, so allocation is more time-consuming.
Memory Protection
Memory-protection means
→ protecting OS from user-process and
→ protecting user-processes from one another.
• Memory-protection is done using
Relocation-register: contains the value of the smallest physical-address.
Limit-register: contains the range of logical-addresses.
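Each logical address must be less than the value in the limit register; the MMU then maps the address dynamically by adding the value in the relocation register. A minimal sketch of this behaviour follows (not from the notes; the register values are illustrative).

```c
#include <stdio.h>
#include <stdlib.h>

static const unsigned int relocation_reg = 14000;  /* smallest physical address  */
static const unsigned int limit_reg      = 32000;  /* range of logical addresses */

/* Map a logical address to a physical address, trapping on a violation. */
unsigned int mmu_map(unsigned int logical)
{
    if (logical >= limit_reg) {
        fprintf(stderr, "trap: logical address %u out of range\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + relocation_reg;   /* dynamic relocation */
}

int main(void)
{
    printf("logical 346 -> physical %u\n", mmu_map(346));   /* 14346 */
    return 0;
}
```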
Memory Allocation
1) Fixed-sized Partitioning
• The memory is divided into fixed-sized partitions.
• Each partition may contain exactly one process.
• The degree of multiprogramming is bound by the number of partitions.
• When a partition is free, a process is
→ selected from the input queue and
→ loaded into the free partition.
• When the process terminates, the partition becomes available for another process.
2) Variable-sized Partitioning
• The OS keeps a table indicating which parts of memory are available (holes) and which are occupied; an arriving process is allocated exactly as much memory as it requires from a hole that is large enough.
Disadvantage: Management is very difficult, as memory becomes badly fragmented after some time.
Three strategies are used to select a free hole from the set of available holes:
1) First Fit
• Allocate the first hole that is big enough.
• Searching can start either
→ at the beginning of the set of holes or
→ at the location where the previous first-fit search ended.
Advantage: It is the fastest algorithm because it searches as little as possible.
Disadvantage: The remaining unused memory areas left after allocation become wasted if they are too small; thus a request for a larger memory block cannot be accomplished.
2) Best Fit
• Allocate the smallest hole that is big enough; the entire list of holes must be searched, unless it is ordered by size.
Disadvantage: It is slower and may even tend to fill up memory with tiny, useless holes.
3) Worst Fit
• Allocate the largest hole; again, the entire list must be searched, unless it is sorted by size.
Disadvantage: If a process requiring a large amount of memory arrives at a later stage, it cannot be accommodated, as the largest hole has already been split and occupied.
First fit and best fit are better than worst fit in terms of both time and storage utilization, as the sketch below illustrates for first fit.
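A minimal sketch of the first-fit strategy over a list of free holes; it is not from the notes, the hole sizes and the request are illustrative, and the splitting of holes is simplified.

```c
#include <stdio.h>

#define NHOLES 5

/* Free-hole sizes in KB (illustrative). */
int hole[NHOLES] = {100, 500, 200, 300, 600};

/* First fit: return the index of the first hole large enough,
   shrinking it by the request; -1 if no hole fits. */
int first_fit(int request)
{
    for (int i = 0; i < NHOLES; i++) {
        if (hole[i] >= request) {
            hole[i] -= request;       /* the leftover becomes a smaller hole */
            return i;
        }
    }
    return -1;
}

int main(void)
{
    int idx = first_fit(212);
    if (idx >= 0)
        printf("212 KB placed in hole %d, %d KB left over\n", idx, hole[idx]);
    else
        printf("request cannot be satisfied\n");
    return 0;
}
```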
Fragmentation
Fragmentation is a phenomenon in which storage space is used inefficiently, reducing capacity or performance, and often both. Whenever a process is loaded into or removed from a physical memory block, it creates a small hole in the memory space, which is called a fragment.
Both internal and external fragmentation affect the data-access speed of the system. There are two types of memory fragmentation:
1) Internal fragmentation: When a process is assigned a memory block that is larger than the memory it requested, the unused space inside the assigned block is wasted. The difference between the assigned and the requested memory is called internal fragmentation.
2) External fragmentation: As processes are loaded into and removed from memory, the free space gets broken into small, non-adjacent pieces. Enough total memory may exist to satisfy a request, but it is not contiguous; this scattered, unusable free space is called external fragmentation.
(Diagram: internal fragmentation)
The diagram above illustrates internal fragmentation: the difference between the memory allocated to a process and the memory it actually requires is the internal fragmentation.
(Diagram: external fragmentation)
In the diagram above, there is enough free space (55 KB) to run process-07 (which requires 50 KB), but the free memory (fragments) is not contiguous. Compaction, paging, or segmentation can be used to make such free space usable by a process.
Paging
Paging is a memory-management scheme that allows a process to be stored in memory in a non-contiguous manner. Storing a process non-contiguously solves the problem of external fragmentation.
Characteristics:
• Each process is divided into parts where size of each part is same as page size.
• The size of the last part may be less than the page size.
• The pages of process are stored in the frames of main memory depending upon their availability.
Example-
Step-01:
CPU generates a logical address consisting of two parts-
1. Page Number
2. Page Offset
• Page Number specifies the specific page of the process from which CPU wants to read the data.
• Page Offset specifies the specific word on the page that CPU wants to read.
Step-02:
For the page number generated by the CPU,
• Page Table provides the corresponding frame number (base address of the frame) where that page is
stored in the main memory.
Step-03:
• The frame number combined with the page offset forms the required physical address.
• Frame number specifies the specific frame where the required page is stored.
• Page Offset specifies the specific word that has to be read from that page.
Diagram-
The following diagram illustrates the above steps of translating logical address into physical address-
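A minimal sketch of the three steps above; it is not from the notes, and the page size, page-table contents, and logical address are illustrative.

```c
#include <stdio.h>

#define PAGE_SIZE 1024u            /* illustrative: 1 KB pages */

/* Illustrative page table: page number -> frame number. */
unsigned int page_table[] = {5, 2, 7, 0};

unsigned int translate(unsigned int logical)
{
    unsigned int page   = logical / PAGE_SIZE;   /* Step 1: page number   */
    unsigned int offset = logical % PAGE_SIZE;   /*         page offset   */
    unsigned int frame  = page_table[page];      /* Step 2: frame number  */
    return frame * PAGE_SIZE + offset;           /* Step 3: physical addr */
}

int main(void)
{
    unsigned int logical = 2 * PAGE_SIZE + 100;  /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```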
Protection
• Memory-protection is achieved by protection-bits for each frame.
• The protection-bits are kept in the page-table.
• One protection-bit can define a page to be read-write or read-only.
• Every reference to memory goes through the page-table to find the correct frame-number.
• Firstly, the physical-address is computed. At the same time, the protection-bit is checked to verify that no
writes are being made to a read-only page.
• An attempt to write to a read-only page causes a hardware-trap to the OS (or memory-protection violation).
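The per-frame protection check described above can be sketched as follows; this is not from the notes, and the page-table contents and the single read-write bit are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

struct pte { unsigned int frame; unsigned int rw; };  /* rw: 1 = read-write, 0 = read-only */

/* Illustrative page table with a protection bit per entry. */
struct pte page_table[] = { {5, 1}, {2, 0}, {7, 1} };

/* Check the protection bit before allowing a write to the page. */
void write_check(unsigned int page)
{
    if (!page_table[page].rw) {      /* attempted write to a read-only page */
        fprintf(stderr, "trap: memory-protection violation on page %u\n", page);
        exit(EXIT_FAILURE);
    }
    printf("write to page %u allowed (frame %u)\n", page, page_table[page].frame);
}

int main(void)
{
    write_check(0);   /* allowed                       */
    write_check(1);   /* traps: page 1 is read-only    */
    return 0;
}
```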
Translation Lookaside Buffer (TLB)
The TLB is a small, fast-lookup hardware cache that holds recently used page-table entries (page-number and frame-number pairs).
Note: In other words, the TLB is faster and smaller than the main memory but cheaper and bigger than a register.
Working:
When a logical-address is generated by the CPU, its page-number is presented to the TLB.
If the page-number is found (TLB hit), its frame-number is
→ immediately available and
→ used to access memory.
If page-number is not in TLB (TLB miss), a memory-reference to page table must be made.
The obtained frame-number can be used to access memory (Figure 3.19).
In addition, we add the page-number and frame-number to the TLB, so that they will be
found quickly on the next reference.
• If the TLB is already full of entries, the OS must select one for replacement.
• The percentage of times that a particular page number is found in the TLB is called the hit ratio; a worked example of its effect on access time follows.
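A small worked example of the standard effective-access-time calculation using the hit ratio; the timing figures are assumed for illustration only.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed timings (nanoseconds), for illustration only. */
    double tlb_time  = 10.0;    /* TLB lookup             */
    double mem_time  = 100.0;   /* one main-memory access */
    double hit_ratio = 0.90;    /* fraction of TLB hits   */

    /* Hit : TLB lookup + one memory access.
       Miss: TLB lookup + page-table access + memory access. */
    double eat = hit_ratio * (tlb_time + mem_time)
               + (1.0 - hit_ratio) * (tlb_time + 2.0 * mem_time);

    printf("effective access time = %.1f ns\n", eat);   /* 120.0 ns here */
    return 0;
}
```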
Advantage:
• Search operation is fast.
• TLB reduces the effective access time.
• Only one memory access is required when TLB hit occurs.
Disadvantage:
• TLB can hold the data of only one process at a time.
• When context switches occur frequently, the performance of TLB degrades due to low hit ratio.
• As it is a special hardware, it involves additional cost.
Structure of the Page Table
There are three common techniques for structuring the page table:
1) Hierarchical Paging
2) Hashed Page-tables
3) Inverted Page-tables
Hierarchical Paging
• Problem: Most computers support a large logical address space (2^32 to 2^64 addresses). In these systems, the page table itself becomes excessively large.
Solution: Divide the page table into smaller pieces (for example, a two-level scheme, sketched below).
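A minimal sketch, not from the notes, of how a 32-bit logical address can be split for a two-level (hierarchical) page table; the 10 + 10 + 12 bit field widths are a common textbook example, assumed here for illustration.

```c
#include <stdio.h>

/* 32-bit logical address split as: 10-bit outer page number,
   10-bit inner page number, 12-bit offset (4 KB pages). */
void split(unsigned int logical)
{
    unsigned int p1     = (logical >> 22) & 0x3FF;  /* index into the outer table */
    unsigned int p2     = (logical >> 12) & 0x3FF;  /* index into the inner table */
    unsigned int offset =  logical        & 0xFFF;  /* offset within the page     */

    printf("p1=%u p2=%u offset=%u\n", p1, p2, offset);
}

int main(void)
{
    split(0x00ABCDEF);
    return 0;
}
```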
Segmentation
Segmentation is a memory-management scheme that supports the programmer's view of memory: a program is divided into variable-sized segments (such as the main program, functions, the stack, and data), and each logical address identifies a segment and an offset within it.
Types of Segmentation
• Virtual memory segmentation: the segments of a process need not all be present in main memory at the same time.
• Simple segmentation: all segments of a process are loaded into memory at run time, although they need not be contiguous.
Basic Method
A logical address consists of two parts: a segment number (s) and an offset (d) within that segment.
Hardware Support
A segment table maps these two-dimensional logical addresses to physical addresses. Each entry of the segment table has a segment base (the starting physical address of the segment) and a segment limit (the length of the segment). The offset must be less than the segment limit; otherwise the hardware traps to the operating system.
Advantages of Segmentation
• It allows the program to be divided into modules, which provides a better logical view of the program.
• Segment table consumes less space as compared to Page Table in paging.
• It solves the problem of internal fragmentation.
Disadvantage of Segmentation
• There is an overhead of maintaining a segment table for each process.
• The time taken to fetch the instruction increases since now two memory accesses are required.
• Segments of unequal size are not suited for swapping.
• It suffers from external fragmentation as the free space gets broken down into smaller pieces with the
processes being loaded and removed from the main memory.
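A minimal sketch, not from the notes, of the segment-table lookup described above: each logical address is a (segment number, offset) pair, and the offset is checked against the segment's limit before the base is added. The table contents are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned int base, limit; };

/* Illustrative segment table. */
struct segment seg_table[] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300, 1100},   /* segment 2 */
};

/* Translate (segment, offset) to a physical address, trapping on violation. */
unsigned int translate(unsigned int s, unsigned int offset)
{
    if (offset >= seg_table[s].limit) {          /* protection check */
        fprintf(stderr, "trap: offset %u exceeds limit of segment %u\n", offset, s);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + offset;
}

int main(void)
{
    printf("(2, 53) -> physical %u\n", translate(2, 53));   /* 4353 */
    return 0;
}
```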
Comparison of Paging and Segmentation
• Fragmentation: Paging may lead to internal fragmentation; segmentation may lead to external fragmentation.
• Address: In paging, the user-specified address is divided by the hardware into a page number and an offset; in segmentation, the user specifies each address by two quantities, a segment number and an offset.
• Size: In paging, the hardware decides the page size; in segmentation, the segment size is specified by the user.
• Table: Paging involves a page table that contains the base address of each page; segmentation involves a segment table that contains the segment base address and the segment limit (length).