OS Module IV
Memory Management
by:
Dr. Soumya Priyadarsini Panda
Assistant Professor
Dept. of CSE
Silicon Institute of Technology, Bhubaneswar
Background
Memory consists of a large array of bytes, each with its own address.
Main memory and registers are the only storage that the CPU can access
directly.
Cache Memory: fast memory between the CPU and main memory
Basic Hardware
The hardware must ensure both correct operation and reasonable speed
when accessing physical memory.
To determine the range of legal addresses that the process may access:
base and limit registers are used.
The base register holds the smallest legal physical memory address, and
the limit register specifies the size of the range.
Example:
If the base register holds 300040 and the limit register is 120900, then
the program can legally access all addresses from 300040 through
420939 (inclusive).
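The legality check sketched above can be expressed in a few lines of Python, using the base and limit values from the example (the function name is illustrative, not part of any real API):

```python
# Sketch of the hardware base/limit check: an address is legal if
# base <= address < base + limit.
def is_legal(address, base, limit):
    return base <= address < base + limit

BASE, LIMIT = 300040, 120900
print(is_legal(300040, BASE, LIMIT))  # True  (first legal address)
print(is_legal(420939, BASE, LIMIT))  # True  (last legal address: 300040 + 120900 - 1)
print(is_legal(420940, BASE, LIMIT))  # False (one past the end -> trap to the OS)
```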
Cont…
The base and limit registers can be loaded only by the operating
system, which uses a special privileged instruction.
The processes on the disk that are waiting to be brought into memory for
execution form the input queue.
Load time:
If it is not known at compile time where the process will reside in
memory, then the compiler must generate relocatable code.
In this case, final binding is delayed until load time.
If the starting address changes, reloading of the user code is required to
incorporate the changed value.
Cont…
Execution time:
If the process can be moved during its execution from one memory
segment to another, then binding must be delayed until run time.
Special hardware must be available for this scheme to work.
Logical vs. Physical Address
Logical address:
Generated by the CPU
Also referred to as virtual address
Physical address:
Address seen by the memory unit
Logical and physical addresses are the same in compile-time and load-time
address-binding schemes.
The user program deals with logical addresses; it never sees the real
physical addresses.
Dynamic Loading
With dynamic loading, a routine is not loaded until it is called.
The main program is loaded into memory and executed first, and other
routines are loaded whenever they are required.
With dynamic linking, a stub is included in the image for each library
routine reference.
Swapping
Swapping makes it possible for the total physical address space of all
processes to exceed the real physical memory of the system.
This increases the degree of multiprogramming in a system.
Cont…
The system maintains a ready queue consisting of all processes whose
memory images are on the backing store or in memory and are ready to
run.
The transfer of a 100 MB process to or from main memory, assuming a
backing-store transfer rate of 50 MB per second, will take:
100 MB / 50 MB per second = 2 seconds
The total swap time (both swap out and swap in) is about 4,000 milliseconds.
Note:
The major part of the swap time is transfer time.
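The swap-time arithmetic above can be checked directly (the 50 MB/s transfer rate is the assumed backing-store rate from the example):

```python
# Swap-time arithmetic: a 100 MB process, assumed 50 MB/s transfer rate.
process_mb = 100
rate_mb_per_s = 50

one_way_s = process_mb / rate_mb_per_s  # seconds for one transfer
total_ms = 2 * one_way_s * 1000         # swap out + swap in, in milliseconds
print(one_way_s, total_ms)              # 2.0 4000.0
```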
The resident operating system is usually held in low memory, and user
processes are held in high memory.
Variable-partition scheme:
The operating system keeps a table indicating which parts of memory
are available and which are occupied.
Initially, all memory is available for user processes and is considered
one large block of available memory, called a hole.
Dynamic Storage-Allocation Problem
The memory blocks comprise a set of holes of various sizes scattered
throughout memory.
When a process arrives and needs memory, the system searches the set
for a hole that is large enough for this process.
The three most commonly used strategies for selecting a free hole from
the set of available holes are first-fit, best-fit, and worst-fit.
Cont…
First-fit:
Allocate the first hole that is big enough
Best-fit:
Allocate the smallest hole that is big enough
Must search entire list, unless ordered by size
Produces the smallest leftover hole
Worst-fit:
Allocate the largest hole
Must search entire list to find the largest hole
Produces the largest leftover hole
Note: First-fit and best-fit methods are better than worst-fit in terms of
speed and storage utilization
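The three strategies can be sketched as follows. Each function scans a list of free-hole sizes and returns the index of the chosen hole (the hole sizes and the 212-byte request are illustrative values, not from the slides):

```python
# Minimal sketch of the three hole-selection strategies.
# `holes` is a list of free-block sizes; each function returns the index
# of the chosen hole, or None if no hole is large enough.

def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:       # take the first hole that is big enough
            return i
    return None

def best_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None  # smallest adequate hole

def worst_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None  # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 -> the 500-byte hole (first big enough)
print(best_fit(holes, 212))   # 3 -> the 300-byte hole (smallest adequate)
print(worst_fit(holes, 212))  # 4 -> the 600-byte hole (largest)
```

Note how best-fit and worst-fit must examine every hole, while first-fit can stop at the first match.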
Fragmentation
Both the first-fit and best-fit strategies for memory allocation suffer from
external fragmentation.
As processes are loaded and removed from memory, the free memory
space is broken into small pieces.
Internal Fragmentation
Allocated memory may be slightly larger than requested memory
This size difference is memory internal to a partition, but not being used
Statistical analysis of first fit reveals that, even with some optimization,
given N allocated blocks, another 0.5 N blocks will be lost to
fragmentation.
That is, one-third of memory may be unusable (0.5 N of the 1.5 N total blocks).
This property is known as the 50-percent rule.
Solutions to External Fragmentation
1. Compaction
Shuffle memory contents to place all free memory together in one
large block.
Compaction is possible only if relocation is dynamic, and is done at
execution time.
2. Segmentation
Permit the logical address space of a process to be non-contiguous.
Segmentation divides a process's logical address space into variable-sized
segments; a segment table maps each segment to physical memory.
Each entry in the segment table has a segment base and a segment limit.
The segment base contains the starting physical address where the
segment resides in memory.
The offset d of the logical address must be between 0 and the segment
limit.
Example-1:
A reference to byte 53 of segment 2 (segment base 4300) is mapped to
4300 + 53 = 4353.
Example-2
Given the segment table, a reference to byte 852 of segment 3 is
mapped to _____?
Answer:
3200 (Base value in segment table) + 852 = 4052
Example-3
A reference to byte 1222 of segment 0 would result in a trap to the
operating system, since the offset exceeds the limit of segment 0.
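The three examples can be reproduced with a small segment-table lookup. The base values for segments 2 and 3 come from the examples above; the limits (and segment 0's base) are assumed illustrative values:

```python
# Segment-table lookup sketch. Bases for segments 2 and 3 are from the
# examples; the limits and segment 0's base are assumed for illustration.
SEG_TABLE = {0: (1400, 1000), 2: (4300, 400), 3: (3200, 1100)}  # seg: (base, limit)

def translate(segment, offset):
    base, limit = SEG_TABLE[segment]
    if not (0 <= offset < limit):
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4353
print(translate(3, 852))  # 4052
# translate(0, 1222) raises: byte 1222 exceeds segment 0's assumed 1000-byte limit
```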
Disadvantage of Segmentation:
As processes are loaded and removed from the memory, the free memory
space is broken into smaller pieces, causing external fragmentation.
Paging
Paging
Paging is a memory-management scheme that permits the physical
address space of a process to be non-contiguous.
Paging involves-
dividing the physical memory into fixed-sized blocks called frames
and
divide logical memory into blocks of the same size called pages
Paging Hardware
Cont…
Every address generated by the CPU is divided into two parts:
a page number (p)
and a page offset (d).
The page table contains the base address of each page in physical
memory.
This base address is combined with the page offset to define the physical
memory address that is sent to the memory unit.
Paging Model of Logical and Physical
Memory
Cont…
The page size (like the frame size) is defined by the hardware.
If the size of the logical address space is 2^m bytes and the page size is
2^n bytes, then the high-order m − n bits of a logical address designate
the page number and the n low-order bits designate the page offset:

  page number | page offset
       p      |      d
  (m − n bits)|  (n bits)

Here p is an index into the page table and d is the displacement within
the page.
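The split and the subsequent translation can be sketched with bit operations. The sizes m, n and the page table below are hypothetical values chosen for illustration:

```python
# Splitting a logical address into page number p and offset d, assuming
# a 2**m-byte logical address space and 2**n-byte pages.
m, n = 16, 10                    # illustrative: 64 KB address space, 1 KB pages

def split(logical):
    p = logical >> n             # high-order m - n bits
    d = logical & ((1 << n) - 1) # low-order n bits
    return p, d

# Translation: look up the frame number, then re-attach the offset.
page_table = {0: 5, 1: 2, 2: 7}  # hypothetical page -> frame mapping

def to_physical(logical):
    p, d = split(logical)
    return (page_table[p] << n) | d

print(split(2077))        # (2, 29): 2077 = 2*1024 + 29
print(to_physical(2077))  # 7197:    frame 7 -> 7*1024 + 29
```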
With paging, every access to data or an instruction requires two memory
accesses: one for the page-table entry and one for the item itself.
This two-memory-access problem can be solved by the use of a special
fast-lookup hardware cache called associative memory or a translation
look-aside buffer (TLB).
Cont…
The translation look-aside buffer (TLB) is a special, small, fast-lookup
hardware cache built from associative, high-speed memory.
Each entry in the TLB consists of two parts:
a key (or tag) and a value.
The percentage of times that the page number of interest is found in the
TLB is called the hit ratio.
Paging Hardware With TLB
Example-1
An 80-percent hit ratio means that the desired page number is found in
the TLB 80 percent of the time.
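The hit ratio feeds into the effective memory-access time. A common textbook-style calculation, assuming a hypothetical 100 ns memory access with the TLB lookup folded into it:

```python
# Effective access time (EAT) with a TLB; the 100 ns figure is an
# assumed memory-access time for illustration.
memory_access_ns = 100
hit_ratio = 0.80

# TLB hit: one memory access. TLB miss: page-table access + data access.
eat_ns = (hit_ratio * memory_access_ns
          + (1 - hit_ratio) * 2 * memory_access_ns)
print(eat_ns)  # about 120 ns -> a 20 percent slowdown over a single access
```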
Note:
For further details on virtual memory, refer to the textbook for self-study.
Demand Paging
Demand Paging is similar to paging with swapping.
A lazy swapper never swaps a page into memory unless that page is
needed.
Cont…
In demand paging, a valid-invalid bit scheme is used.
The page table entry for a page that is not currently in memory is set as
invalid.
Page Table when some pages are not in
main memory
Cont…
Page fault:
When a process tries to access a page that is not present in physical
memory, a page fault occurs.
Page replacement:
Page replacement is the process of swapping out an existing page from
the main memory and replacing it with the required page.
Page replacement is required when all the frames of main memory are
already occupied and a new page needs to be loaded.
Steps in handling a page fault
Steps in handling a page fault cont…
1. Check an internal table (usually kept with the process control block)
for the process to determine whether the reference was a valid or an
invalid memory access.
2. If the reference was invalid, terminate the process. If it was valid but
the page has not yet been brought in, page it in.
3. Find a free frame (by taking one from the free-frame list).
4. Schedule a disk operation to read the desired page into the newly
allocated frame.
5. When the disk read is complete, modify the internal table kept with the
process and the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the trap. The process can
then access the page as though it had always been in memory.
Page Replacement Algorithms
OPT (Optimal)
Example:
If there are 3 frames and are initially empty.
Reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Belady’s anomaly:
For some page-replacement algorithms, the page-fault rate may increase
as the number of allocated frames increases.
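Belady's anomaly can be demonstrated with a small FIFO simulator. The reference string below is the classic illustration (not from the slides): with FIFO it produces nine faults with three frames but ten faults with four:

```python
from collections import deque

# FIFO page replacement; returns the number of page faults.
def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -> more frames, yet MORE faults
```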
OPT Page Replacement Algorithm
The Optimal (OPT) algorithm has the lowest page-fault rate of all
algorithms.
It never suffers from Belady’s anomaly.
It replaces the page that will not be used for the longest period of time.
Example:
If no. of frames=3
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
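A sketch of OPT on the reference string above: on each fault it evicts the resident page whose next use lies farthest in the future (pages never referenced again are evicted first). With 3 frames this string causes 9 page faults:

```python
# OPT replacement sketch: evict the page whose next use is farthest away.
def opt_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            future = refs[i + 1:]
            # Distance to a page's next use; infinity if it is never used again.
            def next_use(q):
                return future.index(q) if q in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))  # 9
```

OPT is unrealizable in practice because it requires future knowledge of the reference string; it serves as a lower bound for comparing other algorithms.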
Example:
No. of frames=3
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
LFU Page Replacement
The Least Frequently Used (LFU) page-replacement algorithm replaces the
page with the smallest reference count.
Problem:
When a page is used heavily during the initial phase of a process but is
never used again, it retains a large count and remains in memory even
though it is no longer needed.
MFU Page Replacement
The Most Frequently Used (MFU) page-replacement algorithm is based
on the argument that the page with the smallest count was probably just
brought in and has yet to be used.
Assuming demand paging with three frames, how many page faults would
occur for the following replacement algorithms?
LRU replacement
FIFO replacement
Optimal replacement