Unit 4
Memory
Dr B.Soujanya
Assistant Professor
Department of Computer Science and Technology
GITAM Institute of Technology (GIT)
Visakhapatnam – 530045
Email: [email protected]
● Memory management is one of the most important functions of an operating system: it manages primary memory.
● It moves processes back and forth between main memory and disk during execution.
● Main memory and registers are the only storage the CPU can access directly.
Logical and physical addresses are the same in compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in execution-time
address-binding scheme.
Logical address space is the set of all logical addresses generated by a program
Physical address space is the set of all physical addresses corresponding to these logical addresses
Base and Limit Registers
● A pair of base and limit registers defines the logical address space
● The CPU must check every memory access generated in user mode to be sure it is between base and limit for that user
Hardware Address Protection

Swapping
● Swapping is a technique in which a process is temporarily swapped out of main memory to a backing store.
● It is later brought back into memory for continued execution.
● Backing store – a fast disk (or other secondary storage device) large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
● The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
● Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).
Benefits of Swapping
Here are the major benefits of swapping:
We can place the operating system in either low memory or high memory.
Contiguous Memory Allocation
In contiguous memory allocation, each process is contained in a single contiguous section of memory.
Three components:
1. Memory Mapping and Protection
2. Memory Allocation
3. Fragmentation
● The relocation register contains the value of the smallest physical address;
● the limit register contains the range of logical addresses (for example,
relocation = 100040 and limit = 74600).
● With relocation and limit registers, each logical address must be less than the limit register; the MMU maps the logical address dynamically by adding the value in the relocation register.
● This mapped address is sent to memory.
● When the CPU scheduler selects a process for execution, the dispatcher
loads the relocation and limit registers with the correct values as part of
the context switch.
● The relocation-register scheme provides an effective way to allow the operating-system size to change dynamically. This flexibility is desirable in many situations.
Hardware Support for relocation and limit registers.
2. Memory Allocation
One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions. When a partition is free, a process is selected from the input queue and is loaded into the free partition.
● When the process terminates, the partition becomes available for another process.
This procedure is a particular instance of the general dynamic storage allocation problem,
which concerns how to satisfy a request of size n from a list of free holes. There are
many solutions to this problem. The first-fit, best-fit, and worst-fit strategies are the
ones most commonly used to select a free hole from the set of available holes.
1. First fit. Allocate the first hole that is big enough. Searching can start either at
the beginning of the set of holes or where the previous first-fit search ended.
We can stop searching as soon as we find a free hole that is large enough.
2. Best fit. Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size. This strategy produces the smallest leftover hole.
3. Worst fit. Allocate the largest hole. Again, we must search the entire list, unless
it is sorted by size. This strategy produces the largest leftover hole, which may
be more useful than the smaller leftover hole from a best-fit approach.
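The three strategies can be sketched in a few lines of Python. This is an illustrative sketch, not from the original slides: free holes are represented as hypothetical (start, size) pairs, and each function returns the start address of the chosen hole, or None if no hole is big enough.

```python
def first_fit(holes, n):
    # Scan from the beginning; take the first hole that is big enough.
    for start, size in holes:
        if size >= n:
            return start
    return None

def best_fit(holes, n):
    # Search the entire list; take the smallest hole that is big enough.
    fits = [(size, start) for start, size in holes if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    # Search the entire list; take the largest hole.
    fits = [(size, start) for start, size in holes if size >= n]
    return max(fits)[1] if fits else None

holes = [(0, 100), (200, 500), (800, 250)]  # hypothetical free list
print(first_fit(holes, 212))  # 200 (first hole that fits)
print(best_fit(holes, 212))   # 800 (smallest leftover hole)
print(worst_fit(holes, 212))  # 200 (largest leftover hole)
```

Note how best fit and worst fit must examine every hole, while first fit can stop early, which is why first fit is generally faster in practice.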
3. Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces. Eventually, processes cannot be allocated to these memory blocks because the blocks are too small, and the blocks remain unused. This problem is known as fragmentation.
1. Internal fragmentation
The memory block assigned to a process is bigger than requested. Some portion of memory is left unused, as it cannot be used by another process.
2. External fragmentation
Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

The following diagram shows how fragmentation can cause waste of memory, and how a compaction technique can be used to create more free memory out of fragmented memory:
Addresses bound at compile time or load time have identical logical and
physical addresses.
In this case the logical address is also known as a virtual address, and
the two terms are used interchangeably by our text.
The MMU can take on many forms. One of the simplest is a modification
of the base-register scheme described earlier.
● The problem arises because, when some code fragments or data residing in main memory need to be swapped out, space must be found on the backing store.
Paging
Basic Method:-
The basic method for implementing paging involves breaking physical memory into fixed-
sized blocks called frames and breaking logical memory into blocks of the same size
called pages.
When a process is to be executed, its pages are loaded into any available memory
frames from the backing store.
The backing store is divided into fixed-sized blocks that are of the same size as the
memory frames.
Every address generated by the CPU is divided into two parts: a page number
(p) and a page offset (d).
The page number is used as an index into a page table. The page table
contains the base address of each page in physical memory.
This base address is combined with the page offset to define the physical
memory address that is sent to the memory unit.
Paging Hardware
The page size (like the frame size) is defined by the hardware.
The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per
page, depending on the computer architecture.
The selection of a power of 2 as a page size makes the translation of a logical address
into a page number and page offset particularly easy.
If the size of the logical address space is 2^m and the page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number, and the n low-order bits designate the page offset.
where p is an index into the page table and d is the displacement within the page.
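The split into page number p and offset d, followed by the page-table lookup, can be illustrated with a toy translation routine. This is a minimal sketch: the page table below is a hypothetical page-to-frame mapping, and the 4 KB page size (2^12, so n = 12) is chosen only for the example.

```python
PAGE_SIZE = 4096                    # 2^12 bytes, so n = 12

page_table = {0: 5, 1: 2, 2: 7}     # hypothetical page -> frame mapping

def translate(logical_addr):
    p = logical_addr // PAGE_SIZE   # high-order m - n bits: page number
    d = logical_addr % PAGE_SIZE    # low-order n bits: page offset
    frame = page_table[p]           # page-table lookup gives the frame
    return frame * PAGE_SIZE + d    # frame base combined with the offset

# Logical address 0x1234 lies in page 1 at offset 0x234;
# page 1 maps to frame 2, so the physical address is 0x2234.
print(hex(translate(0x1234)))       # 0x2234
```

Because the page size is a power of 2, the division and modulo above are just a right shift and a bit mask in hardware, which is exactly why power-of-2 page sizes make translation easy.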
Address Translation
Page address is called logical address and represented by page number and the offset.
A data structure called page map table is used to keep track of the relation
between a page of a process to a frame in physical memory.
Paging model of logical and physical memory
Paging-Example
The solution to this problem is to use a very special high-speed memory device called the
translation look-aside buffer, TLB.
The TLB is associative, high-speed memory. Each entry in the TLB consists of two
parts: a key (or tag) and a value. When the associative memory is presented with
an item, the item is compared with all keys simultaneously.
The TLB contains only a few of the page-table entries. When a logical address is
generated by the CPU, its page number is presented to the TLB. If the page
number is found, its frame number is immediately available and is used to access
memory.
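As an illustrative sketch only (real TLB hardware compares all keys in parallel), a small dictionary can stand in for the associative memory: a hit returns the frame number at once, while a miss falls back to the page table and caches the entry, evicting an old entry when the TLB is full. The page table, TLB size, and replacement choice below are all hypothetical.

```python
page_table = {0: 5, 1: 2, 2: 7, 3: 1}   # hypothetical page -> frame map
tlb = {}                                 # stands in for associative memory
TLB_SIZE = 2                             # TLBs hold only a few entries

def lookup(page):
    if page in tlb:                      # TLB hit: frame available at once
        return tlb[page], "hit"
    frame = page_table[page]             # TLB miss: walk the page table
    if len(tlb) >= TLB_SIZE:
        tlb.pop(next(iter(tlb)))         # evict an entry to make room
    tlb[page] = frame                    # cache the translation
    return frame, "miss"

print(lookup(1))                         # (2, 'miss') - first access
print(lookup(1))                         # (2, 'hit')  - now cached
```

The point of the sketch is the two-level cost structure: a hit avoids the page-table walk entirely, which is why even a small TLB dramatically reduces effective memory-access time.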
Paging -Hardware Support
Paging - Protection
● Memory protection in a paged environment is accomplished by protection
bits associated with each frame. Normally, these bits are kept in the page
table. One bit can define a page to be read-write or read-only.
● One additional bit is generally attached to each entry in the page table: a
valid-invalid bit. When this bit is set to "valid," the associated page is in the
process's logical address space and is thus a legal (or valid) page.
● When the bit is set to "invalid," the page is not in the process's logical address space. Illegal addresses are trapped by use of the valid-invalid bit.
● The operating system sets this bit for each page to allow or disallow access to the page.
Structure of Page Table
The structure of the page table simply defines the ways in which a page table can be organized. Paging is a memory management technique in which a large process is divided into pages and placed in physical memory, which is likewise divided into frames. Frame and page sizes are equal. The operating system uses a page table to map the logical address of a page generated by the CPU to its physical address in main memory.
Virtual memory serves two purposes. First, it allows us to extend the use of
physical memory by using disk. Second, it allows us to have memory protection,
because each virtual address is translated to a physical address.
Virtual memory also allows the sharing of files and memory by multiple processes,
with several benefits:
● System libraries can be shared by mapping them into the virtual address
space of more than one process.
● Processes can also share virtual memory by mapping the same block of
memory to more than one process.
● Process pages can be shared during a fork( ) system call, eliminating the
need to copy all of the pages of the original ( parent ) process.
Diagram showing virtual memory that is larger than physical
memory
Demand Paging
An alternative strategy is to load pages only as needed. This technique is known as demand paging and is commonly used in virtual memory systems.
With demand-paged virtual memory, pages are loaded only when they are demanded during program execution; pages that are never accessed are thus never loaded into physical memory.
Rather than swapping the entire process into memory, we use a lazy swapper. A lazy swapper never swaps a page into memory unless that page will be needed.

When memory is over-allocated and no free frames are available, the operating system has several options:
1. Adjust the memory used by I/O buffering, etc., to free up some frames for user processes.
The decision of how to allocate memory for I/O versus user processes is a complex one,
yielding different policies on different systems. ( Some allocate a fixed amount for I/O, and
others let the I/O system contend for memory along with everything else. )
2. Put the process requesting more pages into a wait queue until some free frames become available.
3. Swap some process out of memory completely, freeing up its page frames.
4. Find some page in memory that isn't being used right now, and swap that page only out to
disk, freeing up a frame that can be allocated to the process requesting it. This is known as
page replacement, and is the most common solution.
Basic Page Replacement
Now the page-fault handling must be modified to free up a frame if necessary, as follows:
1. Find the location of the desired page on the disk, either in swap space or in the file system.
2. Find a free frame:
1. If there is a free frame, use it.
2. If there is no free frame, use a page-replacement algorithm to select
an existing frame to be replaced, known as the victim frame.
3. Write the victim frame to disk. Change all related page tables to
indicate that this page is no longer in memory.
3. Read in the desired page and store it in the frame. Adjust all related page and
frame tables to indicate the change.
4. Restart the process that was waiting for this page.
Page Replacement Algorithms
The three replacement algorithms are FIFO, Optimal (OPT), and LRU.
● Although FIFO is simple and easy, it is not always optimal, or even efficient.
● An interesting effect that can occur with FIFO is Belady's anomaly, in which increasing
the number of frames available can actually increase the number of page faults that
occur!
FIFO Algorithm
Page-fault curve for FIFO replacement on a reference string.
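Belady's anomaly can be demonstrated with a short simulation. The sketch below (illustrative Python, not from the slides) counts page faults under FIFO replacement for the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: with 3 frames it incurs 9 faults, but with 4 frames it incurs 10.

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()               # pages in order of load time
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()   # evict the oldest-loaded page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))        # 9 faults
print(fifo_faults(refs, 4))        # 10 faults: Belady's anomaly
```

Adding a frame made FIFO perform worse on this string, which is exactly the anomalous behavior plotted in the page-fault curve above.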
Optimal Page Replacement
● The discovery of Belady's anomaly led to the search for an optimal page-replacement algorithm: the one that yields the lowest possible page-fault rate and does not suffer from Belady's anomaly.
● Such an algorithm does exist, and is called OPT or MIN. This algorithm is simply
"Replace the page that will not be used for the longest time in the future."
● In practice, most page-replacement algorithms try to approximate OPT by predicting (estimating) in one fashion or another which page will not be used for the longest period of time. The basis of FIFO is the prediction that the page that was brought in the longest time ago is the one that will not be needed again for the longest future time, but this is often not the case.
LRU(Least Recently Used)
● The prediction behind LRU, the Least Recently Used, algorithm is that the page that has
not been used in the longest time is the one that will not be used again in the near
future. ( Note the distinction between FIFO and LRU: The former looks at the oldest load
time, and the latter looks at the oldest use time. )
● LRU is considered a good replacement policy, and is often used. The problem is how
exactly to implement it. There are two simple approaches commonly used:
1. Counters. Every memory access increments a counter, and the current value of this
counter is stored in the page table entry for that page. Then finding the LRU page
involves simply searching the table for the page with the smallest counter value.
2. Stack. Another approach is to use a stack, and whenever a page is accessed, pull that
page from the middle of the stack and place it on the top. The LRU page will always
be at the bottom of the stack. Because this requires removing objects from the
middle of the stack, a doubly linked list is the recommended data structure.
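As a sketch of the stack idea above, Python's OrderedDict can play the role of the doubly linked list: an accessed page moves to one end ("top of the stack"), so the least recently used page is always at the other end. This is an illustrative simulation, not production code.

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    frames = OrderedDict()             # keys ordered oldest-use first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)   # pull to the top of the stack
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict LRU page (bottom)
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))             # 10 faults on this string
```

Unlike the FIFO simulation, adding frames here can never increase the fault count, since LRU is a stack algorithm.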
● Neither LRU nor OPT exhibits Belady's anomaly. Both belong to a class of page-replacement algorithms called stack algorithms, which can never exhibit Belady's anomaly.
Allocation of Frames
1 Minimum Number of Frames
2 Allocation Algorithms
● Equal Allocation - If there are m frames available and n processes to share them,
each process gets m / n frames, and the leftovers are kept in a free-frame
buffer pool.
● Proportional Allocation - Allocate the frames proportionally to the size of the
process, relative to the total size of all processes. So if the size of process i is
S_i, and S is the sum of all S_i, then the allocation for process P_i is a_i = m * S_i
/ S.
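The proportional-allocation formula a_i = m * S_i / S can be checked with a small worked example; the frame count and process sizes below are hypothetical.

```python
m = 62                            # total free frames available
sizes = {"P1": 10, "P2": 127}     # hypothetical process sizes, in pages
S = sum(sizes.values())           # S = 137, the total size

# Each process gets a share of m proportional to its size
# (integer division; any leftover frames go to a free pool).
alloc = {p: m * s // S for p, s in sizes.items()}
print(alloc)                      # {'P1': 4, 'P2': 57}
```

Under equal allocation each process would get 62 // 2 = 31 frames; proportional allocation instead gives the 127-page process the lion's share, which better matches its needs.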
3 Global versus Local Allocation
● The above arguments all assume that all memory is equivalent, or at least has
equivalent access times.
● This may not be the case in multiple-processor systems, especially where each
CPU is physically located on a separate circuit board which also holds some
portion of the overall system memory.
Thrashing
● If a process cannot maintain its minimum required number of frames, then it
must be swapped out, freeing up frames for other processes. This is an
intermediate level of CPU scheduling.
● But what about a process that can keep its minimum, but cannot keep all of the
frames that it is currently using on a regular basis? In this case it is forced to
page out pages that it will need again in the very near future, leading to large
numbers of page faults.
● A process that is spending more time paging than executing is said to be thrashing.
1 Cause of Thrashing
2 The Working-Set Model
● The selection of delta is critical to the success of the working set model - If it is too
small then it does not encompass all of the pages of the current locality, and if it is
too large, then it encompasses pages that are no longer being frequently accessed.
● The total demand, D, is the sum of the sizes of the working sets for all processes. If
D exceeds the total number of available frames, then at least one process is
thrashing, because there are not enough frames available to satisfy its minimum
working set. If D is significantly less than the currently available frames, then
additional processes can be launched.
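The total-demand test described above is a one-line computation: D is the sum of the working-set sizes, and thrashing is indicated when D exceeds the available frames. The working-set sizes below are made up purely for illustration.

```python
def thrashing(working_set_sizes, total_frames):
    # D = sum of WSS_i over all processes; thrashing if D > frames
    D = sum(working_set_sizes)
    return D > total_frames

print(thrashing([20, 35, 10], 60))   # True:  D = 65 > 60 frames
print(thrashing([20, 25, 10], 60))   # False: D = 55 <= 60 frames
```

In the first case the operating system would suspend (swap out) one process to bring D back under the frame count; in the second, there is headroom to admit another process.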
3 Page-Fault Frequency
A more direct approach is to recognize that what we really want to control is the page-fault rate, and to allocate frames based on this directly measurable value. If the page-fault rate exceeds a certain upper bound, then that process needs more frames; if it is below a given lower bound, then it can afford to give up some of its frames to other processes.
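The page-fault-frequency policy reduces to a simple control loop; the upper and lower bounds below are hypothetical values chosen only for the sketch.

```python
UPPER = 0.10   # hypothetical upper bound on page-fault rate
LOWER = 0.02   # hypothetical lower bound on page-fault rate

def adjust_frames(fault_rate, frames):
    if fault_rate > UPPER:
        return frames + 1   # faulting too often: grant another frame
    if fault_rate < LOWER:
        return frames - 1   # faulting rarely: release a frame to others
    return frames           # within bounds: leave the allocation alone

print(adjust_frames(0.20, 10))   # 11 - needs more frames
print(adjust_frames(0.01, 10))   # 9  - can give one up
print(adjust_frames(0.05, 10))   # 10 - no change
```

If a process needs more frames and none are free, the system must suspend some process, which is the same remedy the working-set model prescribes when total demand exceeds the frame count.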
Protection