MODULE 3
MEMORY MANAGEMENT
SYLLABUS:
Memory management - Different address bindings: compile, link and run time bindings - Difference between logical address and physical address - Contiguous memory allocation: fixed partition and variable partition - Allocation strategies: first fit, best fit and worst fit - Fragmentation: internal and external, and its solutions - Paging and paging hardware - Segmentation, and the advantages of segmentation over paging.
Concept of virtual memory - Demand paging - Page faults and how to handle page faults - Page replacement algorithms: FIFO, optimal, LRU, LRU approximation, counting based (LFU and MFU) - Thrashing.
MEMORY MANAGEMENT
Memory management keeps track of the status of each memory location,
whether it is allocated or free.
It allocates the memory dynamically to the programs at their request and frees
it for reuse when it is no longer needed.
ADDRESS BINDINGS
Address binding is the process of mapping from one address space to another. A logical address is the address generated by the CPU during execution, whereas a physical address refers to an actual location in the memory unit (the address that is finally seen by the memory hardware).
Compile Time - If it is known at compile time where a program will reside in
physical memory, then absolute code can be generated by the compiler, containing
actual physical addresses. However if the load address changes at some later time,
then the program will have to be recompiled.
Load time – If the location at which a program will be loaded is not known at
compile time, then the compiler must generate relocatable code, which references
addresses relative to the start of the program. If the starting address changes, then
the program must be reloaded but not recompiled.
Execution Time - If a program can be moved around in memory during the course
of its execution, then binding must be delayed until execution time. This requires
special hardware.
The logical and physical addresses are the same in the compile-time and load-time address binding schemes.
The logical and physical addresses differ in the execution-time address binding scheme.
CONTIGUOUS MEMORY ALLOCATION
FIXED PARTITIONING
This is the oldest and simplest technique used to put more than one process in main memory.
In this partitioning, the number of (non-overlapping) partitions in RAM is fixed, but the size of each partition may or may not be the same.
As it is contiguous allocation, no spanning is allowed: a process must fit entirely within one partition.
Partitions are made before execution, or while the system is being configured.
Disadvantages
1. Internal fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies an entire partition; the unused part of the partition is wasted, which causes internal fragmentation.
2. External fragmentation:
The total unused space of the various partitions cannot be used to load a process, even though enough space is available, because it is not contiguous (spanning is not allowed).
3. Limit on process size:
A process larger than the largest partition in main memory cannot be accommodated, because the partition size cannot be varied according to the size of the incoming process.
4. Limitation on degree of multiprogramming:
Partitions in main memory are made before execution or while the system is being configured, so main memory is divided into a fixed number of partitions and at most that many processes can reside in memory at a time.
VARIABLE PARTITION
Variable (dynamic) partitioning overcomes the problems caused by fixed partitioning.
The OS keeps a table indicating which parts of memory are available and which are occupied.
Initially, all memory available for processes is treated as one large block, called a hole.
In this technique, the partition sizes are not declared initially; a partition is created at the time a process is loaded.
The first partition is reserved for the operating system.
The remaining space is divided into parts as processes arrive.
The size of each partition is equal to the size of the process it holds.
Because the partition size varies according to the need of the process, internal fragmentation is avoided.
ALLOCATION STRATEGIES
For both fixed and dynamic memory allocation schemes, the operating system must keep a list noting which parts of memory are free and which are occupied. Then, as new jobs come into the system, free partitions must be allocated to them. The common allocation strategies are:
First fit
Best fit
Worst fit
These are Contiguous memory allocation techniques.
FIRST FIT
Allocate the first free partition (hole) that is large enough to accommodate the process. The search finishes as soon as the first suitable free partition is found.
Advantages
It is fast: the allocator simply picks the first available partition that fits the job, so allocation completes quickly.
Disadvantages
It can waste a lot of memory: the allocator does not check whether the partition given to the job is much larger than the job actually needs.
BEST FIT
Allocate the smallest free partition that meets the requirement of the requesting process.
This algorithm searches the entire list of free partitions and chooses the smallest hole that is adequate, i.e., the hole closest to the size actually needed by the process.
Advantages
Memory utilization is much better than first fit, because the smallest adequate free partition is allocated.
Disadvantage
It is slower, since the whole list must be searched, and it tends to fill memory with tiny, useless holes.
WORST FIT
The worst fit approach allocates the largest available free partition, so that the portion left over will be big enough to be useful. It is the reverse of best fit.
Advantage
The leftover hole after allocation is large, so it is more likely to be usable by another process.
Disadvantage
The entire list must be searched, and because the largest hole is always broken up, a large process arriving later may not find a big enough partition.
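The three strategies can be compared with a small sketch. The following Python snippet is illustrative only; the hole sizes and the request size are hypothetical, not taken from this module.

def allocate(holes, size, strategy):
    # holes: list of free partition sizes (KB); size: requested size (KB)
    candidates = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None                                      # no hole is large enough
    if strategy == 'first':
        return candidates[0][0]                          # first hole that fits
    if strategy == 'best':
        return min(candidates, key=lambda c: c[1])[0]    # smallest adequate hole
    if strategy == 'worst':
        return max(candidates, key=lambda c: c[1])[0]    # largest hole

holes = [100, 500, 200, 300, 600]        # free hole sizes in KB
print(allocate(holes, 212, 'first'))     # 1 -> the 500 KB hole
print(allocate(holes, 212, 'best'))      # 3 -> the 300 KB hole
print(allocate(holes, 212, 'worst'))     # 4 -> the 600 KB hole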
FRAGMENTATION
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to these memory blocks because the blocks are too small, and the blocks remain unused. This problem is known as fragmentation.
Fragmentation is of two types
Internal fragmentation
External fragmentation
Internal fragmentation
The memory block assigned to a process is bigger than the process requires. Some portion of the block is therefore left unused, and it cannot be used by any other process.
External fragmentation
The total free memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
For example, there may be enough total free space (55 KB) to run a process P7 that requires 50 KB, but the free memory (the fragments) is not contiguous.
External fragmentation can be reduced by compaction: shuffling the memory contents to place all free memory together in one large block. To make compaction feasible, relocation must be dynamic (done at execution time).
PAGING
Paging divides a process's logical memory into fixed-size blocks called pages and divides physical memory into blocks of the same size called frames, so a process no longer needs to occupy contiguous memory.
Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB. The main memory will therefore be divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each.
Each process is divided into pages of 1 KB each, so that one page can be stored in one frame.
Initially, all the frames are empty, so the pages of the processes are stored contiguously.
Let us consider that, P2 and P4 are moved to waiting state after some time.
Now, 8 frames become empty and therefore other pages can be loaded in that
empty place. The process P5 of size 8 KB (8 pages) is waiting inside the ready
queue.
We now have 8 non-contiguous frames available in memory, and paging provides the flexibility of storing a process's pages at different places. Therefore, we can load the pages of process P5 into the frames freed by P2 and P4.
Address Translation
A page address is called a logical address and is represented by a page number and an offset within the page; the corresponding frame address is the physical address, represented by a frame number and the same offset.
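As a minimal sketch of this translation (the page-to-frame mapping below is hypothetical), assuming the 1 KB pages of the example above:

page_size = 1024                      # 1 KB pages, as in the example above
page_table = {0: 5, 1: 9, 2: 3}       # hypothetical page -> frame mapping

def to_physical(logical_addr):
    page = logical_addr // page_size            # page number
    offset = logical_addr % page_size           # offset within the page
    frame = page_table[page]                    # frame that holds this page
    return frame * page_size + offset           # physical address

print(to_physical(2100))   # page 2, offset 52 -> frame 3 -> 3*1024 + 52 = 3124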
PAGING HARDWARE
A data structure called the page table (page map table) keeps track of the mapping between each page of a process and a frame in physical memory.
When the system allocates a frame to a page, it creates an entry in the page table; this entry is used to translate logical addresses into physical addresses throughout the execution of the program.
When a process is to be executed, its corresponding pages are loaded into any
available memory frames.
Suppose you have a program of 8 KB but memory can accommodate only 5 KB at a given point in time; this is where paging comes into the picture.
When a computer runs out of RAM, the operating system (OS) moves idle or unwanted pages of memory to secondary storage to free up RAM for other processes, and brings them back when the program needs them.
This continues throughout the execution of the program: the OS keeps removing idle pages from main memory, writing them to secondary storage, and bringing them back when the program requires them.
TRANSLATION LOOK-ASIDE BUFFER (TLB)
To access a particular page in physical memory, the page table must always be consulted, so page access speed depends on how quickly the page table itself can be accessed.
Some pages are referenced very frequently, so an additional piece of hardware called the translation look-aside buffer (TLB) is used.
A translation look-aside buffer is a memory cache used to reduce the time taken to access the page table again and again.
It is a cache that sits close to the CPU, and the time taken by the CPU to access the TLB is less than the time taken to access main memory.
A TLB hit is the condition where the desired entry is found in the translation look-aside buffer. If this happens, the CPU simply accesses the actual location in main memory.
If the entry is not found in the TLB (a TLB miss), the CPU has to access the page table in main memory and then access the actual frame in main memory.
Therefore, in the case of a TLB hit, the effective access time is less than in the case of a TLB miss.
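The effect on effective access time can be worked out with assumed timings; the figures below are illustrative, not taken from this module.

# Effective access time (EAT) with a TLB, assuming a 10 ns TLB lookup,
# a 100 ns memory access and a 90% TLB hit ratio (all figures assumed).
tlb, mem, hit_ratio = 10, 100, 0.9
eat = hit_ratio * (tlb + mem) + (1 - hit_ratio) * (tlb + 2 * mem)
print(eat)   # 120.0 ns, versus 200 ns for two memory accesses with no TLB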
Advantages
TLB reduces the effective access time.
Only one memory access is required when TLB hit occurs.
Disadvantages
TLB can hold the data of only one process at a time.
When context switches occur frequently, the performance of TLB degrades
due to low hit ratio.
As it is a special hardware, it involves additional cost.
SEGMENTATION
Segmentation is a memory management technique that supports the user's view of memory. In this technique, a computer's primary memory is divided into sections called segments.
In a computer system using segmentation, the logical address space is viewed as a collection of segments.
A segment may grow or shrink, that is, it is of variable length.
During execution, each segment has a name and a length.
An address specifies both the segment name and the displacement within the segment.
The user, therefore, specifies each address by two quantities: a segment name and an offset.
Normally, segments are numbered and referred to by a segment number in place of a segment name. Thus a logical address consists of a two-tuple: <segment-number, offset>.
The details about each segment are stored in a table called the segment table.
Segment table – It maps the two-dimensional logical address into a one-dimensional physical address. Each of its entries has:
Base address: the starting physical address where the segment resides in memory.
Limit: the length of the segment.
The logical address consists of two parts: a segment number (s) and an offset (d) into that segment.
The segment number is used as an index into the segment table.
The offset d of the logical address must be between 0 and the segment limit.
If the offset is beyond the end of the segment, we trap to the operating system (addressing error).
If the offset is within the limit, it is added to the segment base to produce the address in physical memory; hence the segment table is essentially an array of base and limit register pairs.
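A minimal Python sketch of this check; the segment table contents here are hypothetical example values.

def translate(segment_table, s, d):
    # segment_table[s] holds a (base, limit) pair for segment s
    base, limit = segment_table[s]
    if d < 0 or d >= limit:
        # offset beyond the end of the segment: trap to the operating system
        raise Exception("addressing error: trap to OS")
    return base + d                     # physical address = segment base + offset

segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}
print(translate(segment_table, 2, 53))    # 4300 + 53 = 4353
print(translate(segment_table, 0, 999))   # 1400 + 999 = 2399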
Advantages of Segmentation –
No Internal fragmentation.
Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation
During swapping, processes are loaded into and removed from main memory, so the free memory space gets broken into small pieces; this causes external fragmentation.
PAGING vs SEGMENTATION
Paging: the CPU divides the user-specified address into a page number and an offset. Segmentation: the user specifies each address by two quantities, a segment number and an offset (within the segment limit).
Paging: the hardware decides the page size. Segmentation: the segment size is specified by the user.
Paging: involves a page table that contains the base address (frame) of each page. Segmentation: involves a segment table that contains the segment base and limit (segment length).
VIRTUAL MEMORY
A computer can address more memory than the amount physically installed
on the system. This extra memory is actually called virtual memory and it is
a section of a hard disk that's set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than
physical memory.
Virtual memory serves two purposes. First, it allows us to extend the use of
physical memory by using disk. Second, it allows us to have memory
protection, because each virtual address is translated to a physical address.
It allows the execution of processes that are not completely in memory.
DEMAND PAGING
The process of loading the page into memory on demand (whenever page fault
occurs) is known as demand paging.
A demand paging system is quite similar to a paging system with swapping
where processes reside in secondary memory and pages are loaded only on
demand, not in advance.
When a context switch occurs, the operating system does not copy any of the old program's pages out to disk or any of the new program's pages into main memory. Instead, it begins executing the new program after loading its first page and fetches that program's remaining pages as they are referenced.
Advantages
A process can be larger than physical memory, and only the pages that are actually needed are loaded, so memory is used more efficiently and more processes can be kept in memory.
Disadvantages
The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
PAGE FAULT
A page fault occurs when a program attempts to access data or code that is in
its address space, but is not currently located in the system RAM.
HANDLE PAGE FAULT
1. Check an internal table (usually kept with the process control block) to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, terminate the process. If it was valid but the page is not yet in memory, bring it in.
3. Find a free frame (for example, by taking one from the free-frame list).
4. Schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, modify the internal table and the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the page fault.
PAGE REPLACEMENT ALGORITHMS
FIFO
The simplest page replacement algorithm: the page that was brought into memory first (the oldest page) is the one replaced.
Belady's anomaly
Belady's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm.
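A short simulation illustrates this. The sketch below counts FIFO page faults for the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (an assumed example, not from this module): with 3 frames it produces 9 faults, with 4 frames it produces 10.

from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()                 # resident pages in FIFO order
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()     # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 page faults
print(fifo_faults(refs, 4))   # 10 page faults -> Belady's anomaly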
Optimal
In this algorithm, pages are replaced which would not be used for the longest
duration of time in the future.
Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
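A sketch of the optimal policy (victim selection by farthest next use) is given below; on the reference string above with 4 frames it produces 6 page faults.

def optimal_faults(refs, nframes):
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        # evict the resident page whose next use is farthest in the future
        def next_use(p):
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float('inf')          # never used again
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))   # 6 page faults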
LRU (Least Recently Used) – Replace the page that has not been used for the longest period of time, using the recent past as an approximation of the near future.
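A minimal LRU sketch keeps resident pages ordered by recency and evicts from the least recently used end; on the same reference string with 4 frames it also gives 6 faults.

def lru_faults(refs, nframes):
    frames = []                      # most recently used page kept at the end
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)      # refresh this page's recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))   # 6 page faults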
LRU APPROXIMATION
LRU page replacement needs sufficient hardware support.
Some systems do not provide this hardware support, so another page replacement algorithm must be used.
Many systems provide some help in the form of a reference bit.
The reference bit for a page is set by the hardware whenever that page is referenced.
Initially all reference bits are cleared to 0; whenever a page is referenced, its bit is set to 1 by the hardware.
SECOND CHANCE ALGORITHM
It is actually a FIFO replacement algorithm, with a small modification that causes it to approximate LRU.
When a page is selected in FIFO order, we check its reference bit. If the bit is 0, we replace the page; if it is 1, we clear the bit, give the page a second chance, and move on to the next page in FIFO order.
It is also called the clock algorithm.
In the worst case, when all bits are set, the pointer cycles through all the pages, giving each page a second chance and clearing its reference bit.
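A sketch of the victim-selection step of the clock algorithm; the function and variable names here are illustrative only.

def clock_victim(pages, ref_bits, hand):
    # pages: resident pages in frame order; ref_bits: their reference bits (0/1);
    # hand: current position of the clock pointer
    while True:
        if ref_bits[hand] == 0:
            victim = hand                        # reference bit 0: replace this page
            return victim, (hand + 1) % len(pages)
        ref_bits[hand] = 0                       # bit was 1: clear it (second chance)
        hand = (hand + 1) % len(pages)           # advance to the next page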
THRASHING
A process that is spending more time paging than executing is said to be
thrashing.
In other words, the process does not have enough frames to hold all the pages it needs for its execution, so it swaps pages in and out very frequently to keep executing.
Sometimes, pages that will be required in the near future have to be swapped out.
If a process does not have enough frames, its page fault rate is very high, which leads to:
Low CPU utilization.
The OS thinks it needs to increase the degree of multiprogramming.
Another process is added to the system, which makes the situation worse.