Memory Management

The document provides an overview of computer memory, detailing its types such as RAM, ROM, and various programmable memory types, along with their characteristics. It discusses memory management techniques including memory hierarchy, swapping, and partitioning methods, as well as virtual memory concepts like paging and page replacement algorithms. Additionally, it covers the importance of efficient memory allocation and management to optimize system performance.


What is Memory?
Computer memory is any physical device capable of storing information temporarily or permanently.
Types of Memory
1. Random Access Memory (RAM)
2. Read Only Memory (ROM)
3. Programmable Read-Only Memory (PROM)
4. Erasable Programmable Read-Only Memory (EPROM)
5. Electrically Erasable Programmable Read-Only Memory (EEPROM)
RAM vs ROM
• RAM stands for Random Access Memory; ROM stands for Read Only Memory.
• RAM is a read-write memory; ROM is a read-only memory.
• RAM temporarily stores the data that is currently being processed by the CPU; ROM stores the instructions required during bootstrap of the computer.
• RAM is a volatile memory; ROM is a non-volatile memory.
• Data in RAM can be modified; data in ROM cannot be modified.
• Types of RAM are static RAM and dynamic RAM; types of ROM are PROM, EPROM, and EEPROM.
What is Memory (contd.)
3. Programmable Read-Only Memory (PROM): This is a memory chip on which you can store a program. But once the PROM has been used, you cannot wipe it clean and use it to store something else. Like ROMs, PROMs are non-volatile.
4. Erasable Programmable Read-Only Memory (EPROM): This is a special type of PROM that can be erased by exposing it to ultraviolet light.
5. Electrically Erasable Programmable Read-Only Memory (EEPROM): This is a special type of PROM that can be erased by exposing it to an electrical charge.
What is a memory hierarchy?
• The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy.
Memory Abstraction
• The hardware and the OS memory manager make you see memory as a single contiguous entity.
• How do they do that? Abstraction.
• Is abstraction necessary?
No memory abstraction
• When a program executes an instruction like
MOV REGISTER1, 1000
• If another program executes the same instruction at the same time, the value written by the first program will be overwritten.
• So only one process can be running at a time.
No memory abstraction
• What if we want to run multiple programs?
• The OS saves the entire memory contents on disk.
• The OS brings in the next program.
• The OS runs the next program.
• We can use swapping to run multiple programs concurrently.
• The process of bringing in each process in its entirety into memory, running it for a while, and then putting it back on the disk is called swapping.
Ways to implement a swapping system
• There are two different ways to implement a swapping system:
1. Multiprogramming with fixed partitions
2. Multiprogramming with dynamic partitions
Multiprogramming with fixed partitions
• Here memory is divided into fixed-size partitions.
• The sizes of different partitions can be equal or unequal.
• Generally, unequal partitions are used for better utilization.
• Each partition can accommodate exactly one process, meaning only a single process can be placed in one partition.
• The partition boundaries are not movable.
Multiprogramming with fixed partitions
• Whenever any program needs to be loaded into memory, a free partition big enough to hold the program is found. This partition is allocated to that program or process.
• If no free partition of the required size is available, the process needs to wait. Such a process is put in a queue.
Multiprogramming with fixed partitions
• There are two ways of maintaining the queue (a minimal allocation sketch follows below):
1. Using multiple input queues (one per partition)
2. Using a single input queue
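To make the fixed-partition scheme with a single input queue concrete, here is a minimal Python sketch; the partition sizes, process names, and process sizes are assumed values chosen for illustration and do not come from the slides.

```python
from collections import deque

# Hypothetical unequal fixed partitions (sizes in KB); each holds at most one process.
partitions = [{"size": 100, "proc": None},
              {"size": 300, "proc": None},
              {"size": 600, "proc": None}]

# Single input queue of (name, size-in-KB) processes waiting for memory.
input_queue = deque([("P1", 212), ("P2", 417), ("P3", 112), ("P4", 426)])

def admit_from_queue():
    """Scan the single input queue and place each process in the first
    free partition big enough to hold it; otherwise it keeps waiting."""
    still_waiting = deque()
    while input_queue:
        name, size = input_queue.popleft()
        for part in partitions:
            if part["proc"] is None and part["size"] >= size:
                part["proc"] = (name, size)      # partition now occupied
                break
        else:
            still_waiting.append((name, size))   # no free partition fits: wait
    input_queue.extend(still_waiting)

admit_from_queue()
print(partitions)    # P1 -> 300K partition, P2 -> 600K; P3 and P4 keep waiting
```

With these assumed sizes, P1 and P2 are placed immediately, while P3 and P4 remain queued until a large enough partition is freed.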
Multiprogramming with dynamic partitions
• If enough free memory is not available to fit the process, the process needs to wait until the required memory becomes available.
• Whenever any process terminates, it releases the space it occupied. If the released free space is contiguous to another free partition, the two free partitions are merged into a single free partition.
• This gives better utilization of memory than fixed-size partitions.
Memory compaction
• When swapping creates multiple holes in memory, it is possible to combine them all into one big hole by moving all the processes downward as far as possible. This technique is known as memory compaction.
• It requires a lot of CPU time.
Relocation
Base and limit registers
• An address space is the set of addresses that a process can use to address memory.
• An address space is a range of valid addresses in memory that are available for a program or process.
• Two registers: Base and Limit
• 1. Base register: start address of a program in physical memory.
• 2. Limit register: length of the program.
• For every memory access:
• The base is added to the address.
• The result is compared to the limit.
• Only the OS can modify the base and limit registers.
Dynamic relocation
• Steps in dynamic relocation:
• 1. The hardware adds the relocation (base) register to the virtual address to get a physical address.
• 2. The hardware compares the address with the limit register; the address must be less than or equal to the limit.
• 3. If the test fails, the processor takes an address trap and ignores the physical address.
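As a minimal sketch of the steps above (Python, with assumed base and limit values), the hardware-side check might look like this:

```python
class AddressTrap(Exception):
    """Raised when a program addresses memory outside its own address space."""

def translate(virtual_addr, base, limit):
    """Dynamic relocation: check the address against the limit register,
    then add the base register to form the physical address.
    In hardware, base and limit are registers that only the OS can set."""
    if virtual_addr < 0 or virtual_addr > limit:      # limit check
        raise AddressTrap(f"address {virtual_addr} exceeds limit {limit}")
    return base + virtual_addr                        # relocation

# Hypothetical program loaded at physical address 16384 with length 8192.
BASE, LIMIT = 16384, 8192
print(translate(1000, BASE, LIMIT))    # -> 17384
try:
    translate(9000, BASE, LIMIT)       # outside the program's address space
except AddressTrap as trap:
    print("trap:", trap)
```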
Techniques for Managing Free Memory
Managing free memory
• Two ways to keep track of memory usage (free memory):
1. Memory management with bitmaps
2. Memory management with linked lists
• Another way to keep track of memory is to maintain a linked list of allocated and free memory segments, where a segment either contains a process or is an empty hole between two processes.
• Each entry in the list specifies a hole (H) or a process (P), the address at which it starts, the length, and a pointer to the next entry.
• Sorting the list by address has the advantage that when a process terminates or is swapped out, updating the list is straightforward.
• A terminating process normally has two neighbors (except when it is at the very top or bottom of memory), as the sketch below shows.
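The coalescing described in the last bullet can be sketched in a few lines of Python; the segment list below is an assumed example, with each entry holding kind, start address, and length as the slide describes.

```python
# Each entry: [kind, start, length]; kind is "P" (process) or "H" (hole).
# The list is kept sorted by start address, as the slide describes.
segments = [["P", 0, 5], ["H", 5, 3], ["P", 8, 6], ["H", 14, 4], ["P", 18, 2]]

def free_segment(segs, index):
    """Mark the process entry at `index` as a hole and coalesce it with
    free neighbours, so at most one hole remains in that spot."""
    segs[index][0] = "H"
    # Merge with the hole on the right, if any.
    if index + 1 < len(segs) and segs[index + 1][0] == "H":
        segs[index][2] += segs[index + 1][2]
        del segs[index + 1]
    # Merge with the hole on the left, if any.
    if index > 0 and segs[index - 1][0] == "H":
        segs[index - 1][2] += segs[index][2]
        del segs[index]

free_segment(segments, 2)   # terminate the process that starts at address 8
print(segments)             # [['P', 0, 5], ['H', 5, 13], ['P', 18, 2]]
```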
Memory allocation algorithms
The four memory allocation algorithms are as follows:
1. First fit
2. Next fit
3. Best fit
4. Worst fit
First fit
• The search starts from the starting location of the memory.
• The first available hole that is large enough to hold the process is selected for allocation.
• The hole is then broken up into two pieces, one for the process and another for the unused memory.
• Example: processes of 212K, 417K, 112K, and 426K arrive in order.
• Here the process of size 426K will not get any partition for allocation.
First fit
• It is the fastest algorithm because it searches as little as possible.
• Memory loss is higher, as a very large hole may be selected for a small process (see the sketch below).
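Here is a minimal first-fit sketch in Python. Since the slide's figure is not reproduced, the free-hole sizes below are assumed (100K, 500K, 200K, 300K, 600K, a commonly used textbook example); the processes are the 212K, 417K, 112K, and 426K arrivals from the example.

```python
def first_fit(holes, request):
    """Return the index of the first hole that can hold `request` KB and
    shrink that hole in place; return None if the process must wait."""
    for i, size in enumerate(holes):
        if size >= request:
            holes[i] = size - request     # leftover becomes a smaller hole
            return i
    return None                           # no hole is big enough

# Assumed free holes (KB); the slide's figure is not reproduced here.
holes = [100, 500, 200, 300, 600]
for proc in [212, 417, 112, 426]:
    print(proc, "->", first_fit(holes, proc), "holes now:", holes)
# 212 goes into the 500K hole, 417 into the 600K hole, 112 into the 288K
# leftover of the 500K hole, and 426 finds no hole large enough, so it waits.
```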
Next fit
• It works in the same way as first fit, except that it keeps track of where it is whenever it finds a suitable hole.
• The next time it is called to find a hole, it starts searching the list from the place where it left off last time.
• Processes of 212K, 417K, 112K, and 426K arrive in order.
• Here the process of size 426K will not get any partition for allocation.
Next fit
• Search time is smaller.
• The memory manager must keep track of the last hole allotted to a process.
• It gives slightly worse performance than first fit.
Best fit
• The entire memory is searched here.
• The smallest hole that is large enough to hold the process is selected for allocation.
• Processes of 212K, 417K, 112K, and 426K arrive in order.
• Search time is high, as it searches the entire memory every time.
• Memory loss is less.
Worst fit
• The entire memory is searched here also. The largest hole that is large enough to hold the process is selected for allocation.
• Processes of 212K, 417K, 112K, and 426K arrive in order.
• Here the process of size 426K will not get any partition for allocation.
• Search time is high, as it searches the entire memory every time.
• This algorithm can be used only with dynamic partitioning (a selection sketch follows below).
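For comparison, a short sketch of how best fit and worst fit choose a hole, using the same assumed hole sizes as before; best fit takes the smallest hole that still fits, worst fit the largest.

```python
def pick_hole(holes, request, strategy):
    """Return the index of the hole chosen by 'best' or 'worst' fit,
    or None if the process has to wait."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "best":
        return min(candidates)[1]    # smallest hole that still fits
    return max(candidates)[1]        # worst fit: the largest hole

holes = [100, 500, 200, 300, 600]       # assumed free holes in KB
print(pick_hole(holes, 212, "best"))    # -> 3 (the 300K hole)
print(pick_hole(holes, 212, "worst"))   # -> 4 (the 600K hole)
```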
Virtual Memory
• Our computer has a finite (fixed) amount of RAM.
• When too many programs are running at once, it is possible to run out of memory.
• This is where virtual memory comes into the picture.
Virtual Memory
• Virtual memory increases the available memory of your computer by enlarging the address space.
• It does this by using hard disk space for additional memory allocation.
• However, since the hard drive is much slower than RAM, data stored in virtual memory must be mapped back to real memory in order to be used.
Virtual Memory
• The process of mapping data back and forth between the hard drive
and the RAM takes longer than accessing it directly from the memory.
• This means that the more virtual memory is used, the more it will
slow your computer down.
Virtual Memory
• Each program has its own address space, which is broken up into pages.
• Each page is a contiguous range of addresses.
• These pages are mapped onto the physical memory, but to run the program, not all pages are required to be present in the physical memory.
• The operating system keeps the parts of the program currently in use in main memory, and the rest on the disk.
Virtual Memory
• In a system using virtual memory, the physical memory is divided into page frames and the virtual address space is divided into equally sized partitions called pages.
• Virtual memory works fine in a multiprogramming system, with bits and pieces of many programs in memory at once.
Paging
• A program-generated address is called a logical address; these addresses form the virtual address space.
• An address actually available in memory is called a physical address.
• The virtual address space is divided into fixed-size partitions called pages.
• The corresponding units in the physical memory are called page frames.
• The pages and page frames are always of the same size (page size = frame size).
Paging
• Most virtual memory systems use a technique called paging.
• Paging is a memory management technique by which a computer retrieves data from secondary storage (HDD) and stores it in main memory for use.
• In paging, the operating system retrieves data from secondary storage in the form of pages and places each entire page into a frame.
• The size of the virtual address space is greater than that of main memory, so instead of loading the entire address space into memory to run the process, the MMU copies only the required pages into main memory.
• In order to keep track of pages and page frames, the OS maintains a data structure called a page table.
Conversion of virtual address to physical address
• A complete copy of a program's core image, up to 44 KB, must be
present on the disk.
• Only required pages are loaded in the physical memory.
Conversion of virtual address to physical address
• If the present/absent bit is 0, it is a page fault; a trap to the operating system is caused to bring the required page into main memory.
• If the present/absent bit is 1, the required page is already in main memory, and the page frame number found in the page table is copied to the higher-order bits of the output register along with the offset.
• Together, the page frame number and the offset form the physical address.
• Physical address = (page frame number × page size) + offset of the virtual address.
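The translation just described can be sketched in Python as follows; the page size, page-table contents, and addresses are assumed values chosen for illustration.

```python
PAGE_SIZE = 4096   # assumed 4 KB pages

# Assumed page table: virtual page number -> (present_bit, page_frame_number)
page_table = {0: (1, 2), 1: (0, None), 2: (1, 5)}

def translate(virtual_addr):
    """Split the virtual address into page number and offset, then use the
    page table to build the physical address or signal a page fault."""
    page_number = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    present, frame = page_table[page_number]
    if present == 0:
        raise RuntimeError(f"page fault on virtual page {page_number}")
    # Physical address = frame number * page size + offset.
    return frame * PAGE_SIZE + offset

print(hex(translate(0x0123)))   # page 0 -> frame 2: 0x2123
print(hex(translate(0x2BCD)))   # page 2 -> frame 5: 0x5BCD
# translate(0x1000) would raise a page fault (present bit is 0 for page 1).
```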
Page Table
• A page table is a data structure which translates a virtual address into the equivalent physical address.
• The virtual page number is used as an index into the page table to find the entry for that virtual page, and from the page table the physical page frame number is found.
• Thus the purpose of the page table is to map virtual pages onto page frames.
• Page frame number: This field shows the corresponding physical page frame number for a particular virtual page.
• Present/absent bit: If this bit is 1, the entry is valid and can be used; if it is 0, the virtual page to which the entry belongs is not currently in memory. Accessing a page table entry with this bit set to 0 causes a page fault.
• The protection bits: These tell what kind of access is permitted. In the simplest form, this field contains 1 bit, with 0 for read/write and 1 for read only.
Page Table Structure
• Modified bit: When a page is written to, the hardware sets the modified bit. If the page in memory has been modified, it must be written back to disk. This bit is also called the dirty bit, as it reflects the page's state.
• Referenced bit: The referenced bit is set whenever a page is referenced, either for reading or writing. Its value helps the operating system in page replacement algorithms.
• Caching disabled bit: This feature is important for pages that map onto device registers rather than memory. With this bit, caching can be turned off.
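One way to picture a page table entry is as a single integer whose low-order bits hold the flags described above and whose high-order bits hold the page frame number. The bit positions in this sketch are assumptions for illustration only, not a real hardware layout.

```python
# Assumed, illustrative bit layout for a page table entry (PTE).
PRESENT    = 1 << 0   # present/absent bit
PROTECTION = 1 << 1   # 0 = read/write, 1 = read only (simplest 1-bit form)
MODIFIED   = 1 << 2   # dirty bit: page was written to
REFERENCED = 1 << 3   # set on any read or write of the page
CACHE_OFF  = 1 << 4   # caching disabled (for device registers)
FRAME_SHIFT = 8       # frame number kept in the higher-order bits

def make_pte(frame, flags):
    """Pack a frame number and flag bits into one integer entry."""
    return (frame << FRAME_SHIFT) | flags

def frame_of(pte):
    """Recover the page frame number from an entry."""
    return pte >> FRAME_SHIFT

pte = make_pte(5, PRESENT | REFERENCED)
print(frame_of(pte), bool(pte & MODIFIED))   # -> 5 False
```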
Demand Paging
• In a demand-paging system, processes are started up with none of their pages in memory.
• When the CPU tries to fetch the first instruction, it gets a page fault; other page faults for global variables and the stack usually follow quickly.
• After a while, the process has most of the pages it needs in main memory and it has few page faults.
• This strategy is called demand paging because pages are loaded only on demand, not in advance.
Definitions
• Working set: The set of pages that a process is currently using is known as its working set.
• Thrashing: A program causing page faults every few instructions is said to be thrashing.
• Pre-paging: Many paging systems try to keep track of each process's working set and make sure that it is in memory before letting the process run.
• Loading pages before allowing processes to run is called pre-paging.
Issues in paging
• In any paging system, two major issues must be faced:
• 1. The mapping from virtual address to physical address must be fast.
• 2. If the virtual address space is large, the page table will be large.
Page Replacement Algorithms
The following are different types of page replacement algorithms:
1. Optimal Page Replacement Algorithm
2. FIFO Page Replacement Algorithm
3. The Second Chance Page Replacement Algorithm
4. The Clock Page Replacement Algorithm
5. LRU (Least Recently Used) Page Replacement Algorithm
6. NRU (Not Recently Used) Page Replacement Algorithm
Optimal Page Replacement Algorithm
• Each page can be labeled with the number of instructions that will be executed before that page is first referenced.
• The optimal page algorithm simply says that the page with the highest label should be removed.
• The only problem with this algorithm is that it is unrealizable.
• At the time of the page fault, the operating system has no way of knowing when each of the pages will be referenced next.
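Although the optimal algorithm cannot be implemented online, it can be simulated offline when the whole reference string is known. The sketch below does this for the reference string used in the following slides; with three frames it reports 9 page faults.

```python
def optimal_faults(refs, n_frames):
    """Simulate the optimal (OPT) algorithm: on a fault with full frames,
    evict the resident page whose next use is farthest away (or never)."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                       # hit: nothing to do
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
            continue
        # Distance to each resident page's next reference; infinite if unused again.
        def next_use(p):
            rest = refs[i + 1:]
            return rest.index(p) if p in rest else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(optimal_faults(refs, 3))   # -> 9 page faults with three frames
```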
FIFO Page Replacement Algorithm
• The first-in first-out page replacement algorithm is the simplest page replacement algorithm.
• The operating system maintains a list of all pages currently in memory, with the most recently arrived page at the tail and the least recent at the head.
• On a page fault, the page at the head is removed and the new page is added to the tail.
• When a page replacement is required, the oldest page in memory is replaced.
• The performance of the FIFO algorithm is not always good, because the oldest page may still be frequently referenced.
• Hence removing the oldest page may cause another page fault soon after.
FIFO Page Replacement Algorithm
• Page reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
• Three frames (see the simulation sketch below).
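A minimal FIFO simulation in Python for this reference string; with three frames it reports 15 page faults.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """FIFO page replacement: evict the page that has been resident longest."""
    frames, faults = deque(), 0
    for page in refs:
        if page in frames:
            continue                    # hit: FIFO does not reorder anything
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()            # oldest page leaves from the head
        frames.append(page)             # new page joins at the tail
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))   # -> 15 page faults
```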
Second Chance Page Replacement Algorithm
• It is a modified form of the FIFO page replacement algorithm.
• It looks at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see whether its referenced bit is set.
• If the bit is not set (zero), the page is swapped out.
• Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page), and this process is repeated.
Second Chance Page Replacement Algorithm
• If all the pages have their referenced bit set, then on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared.
• If all the pages have their referenced bit cleared, the second chance algorithm degenerates into pure FIFO.
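A sketch of the second chance eviction step in Python. The referenced bits here are supplied directly for illustration; in a real system the hardware would set them on each access.

```python
from collections import deque

def second_chance_evict(queue):
    """`queue` holds (page, r_bit) pairs in FIFO order.
    Pop from the front; a page with R=1 gets its bit cleared and a second
    chance at the back, while a page with R=0 is the victim."""
    while True:
        page, r_bit = queue.popleft()
        if r_bit == 0:
            return page                  # evict this page
        queue.append((page, 0))          # clear R and move to the back

q = deque([("A", 1), ("B", 0), ("C", 1)])
print(second_chance_evict(q))   # -> "B": A gets a second chance, B has R=0
print(list(q))                  # -> [('C', 1), ('A', 0)]
```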
Clock Page Replacement Algorithm
• In the second chance algorithm, pages are constantly being moved around on the list, so it does not work efficiently.
• A better approach is to keep all the page frames on a circular list in the form of a clock.
Clock Page Replacement Algorithm
• When a page fault occurs, the page being pointed to by the hand is inspected.
• If its R bit is 0, the page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one position.
• If R is 1, it is cleared and the hand is advanced to the next page.
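The same idea expressed with a circular list and a hand; the page names and R bits below are illustrative.

```python
def clock_evict(pages, r_bits, hand):
    """pages and r_bits are parallel lists forming the circular clock;
    `hand` is the current position. Returns (victim_index, new_hand)."""
    while True:
        if r_bits[hand] == 0:
            return hand, (hand + 1) % len(pages)     # evict, advance the hand
        r_bits[hand] = 0                             # give a second chance
        hand = (hand + 1) % len(pages)               # advance to the next page

pages  = ["A", "B", "C", "D"]
r_bits = [1, 1, 0, 1]
victim, hand = clock_evict(pages, r_bits, 0)
print(pages[victim], hand)   # -> C 3 (A and B had their R bits cleared)
```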
LRU (Least Recently Used) Page Replacement Algorithm
• A good approximation to the optimal algorithm is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few instructions.
• When a page fault occurs, throw out the page that has been unused for the longest time. This strategy is called LRU (Least Recently Used) paging.
• To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear.
• The list must be updated on every memory reference.
• Finding a page in the list, deleting it, and then moving it to the front is a very time-consuming operation.
LRU (Least Recently Used) Page Replacement Algorithm
• Page reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
• Three frames (see the simulation sketch below).
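A minimal LRU simulation in Python for this reference string, using a list ordered by recency in place of the linked list; with three frames it reports 12 page faults.

```python
def lru_faults(refs, n_frames):
    """LRU page replacement: on every reference move the page to the
    most-recently-used end; on a fault evict the least recently used page."""
    frames, faults = [], 0            # frames[0] is the least recently used
    for page in refs:
        if page in frames:
            frames.remove(page)       # will be re-appended as most recent
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)         # evict the least recently used page
        frames.append(page)           # page is now the most recently used
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # -> 12 page faults
```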
NRU (Not Recently Used) Page Replacement Algorithm
• NRU makes an approximation to replace a page based on the R (referenced) and M (modified) bits.
• When a process is started up, both page bits for all of its pages are set to 0 by the operating system.
• Periodically, the R bit is cleared, to distinguish pages that have not been referenced recently from those that have been.
NRU (Not Recently Used) Page Replacement Algorithm
• When a page fault occurs, the operating system inspects all the pages and divides them into four classes based on the current values of their R and M bits:
• 1. Class 0: not referenced, not modified
• 2. Class 1: not referenced, modified
• 3. Class 2: referenced, not modified
• 4. Class 3: referenced, modified
• The NRU (Not Recently Used) algorithm removes a page at random from the lowest-numbered nonempty class.
NRU (Not Recently Used) Page Replacement Algorithm
• For example, if:
• 1. Page-0 is of class 2 (referenced, not modified)
• 2. Page-1 is of class 1 (not referenced, modified)
• 3. Page-2 is of class 0 (not referenced, not modified)
• 4. Page-3 is of class 3 (referenced, modified)
• then the lowest-class page, Page-2, is the one NRU replaces (see the sketch below).
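A small Python sketch of exactly this selection: each page's class is computed as 2·R + M, and the victim is chosen at random from the lowest-numbered nonempty class.

```python
import random

def nru_victim(pages):
    """pages maps a page name to its (R, M) bits; the victim is a random
    page from the lowest-numbered nonempty class, where class = 2*R + M."""
    classes = {}
    for name, (r, m) in pages.items():
        classes.setdefault(2 * r + m, []).append(name)
    lowest = min(classes)
    return random.choice(classes[lowest])

# The example above: Page-0 class 2, Page-1 class 1, Page-2 class 0, Page-3 class 3.
pages = {"Page-0": (1, 0), "Page-1": (0, 1), "Page-2": (0, 0), "Page-3": (1, 1)}
print(nru_victim(pages))   # -> "Page-2", the only page in class 0
```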
Segmentation
• Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions.
• Each segment is actually a different logical address space of the program.
• When a process is to be executed, its corresponding segments are loaded into contiguous blocks of available memory.
• Segmentation works very similarly to paging, but segments are of variable length, whereas in paging the pages are of fixed size.
Segmentation
• A program segment contains the program's main function, utility functions, data structures, and so on.
• The operating system maintains a segment map table for every process.
• The segment map table contains a list of free memory blocks along with segment numbers, their sizes, and the corresponding memory locations in main memory.
• For each segment, the table stores the starting address of the segment and the length of the segment.
• A reference to a memory location includes a value that identifies a segment and an offset.
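A minimal sketch of segmented address translation in Python; the segment numbers, base addresses, and lengths are assumed values. A logical address is a (segment, offset) pair, and an offset beyond the segment's length causes a protection fault.

```python
# Assumed segment table: segment number -> (base address, length)
segment_table = {0: (1400, 1000),   # e.g. main function
                 1: (6300, 400),    # e.g. a utility function
                 2: (4300, 1100)}   # e.g. data structures

def translate(segment, offset):
    """Segmented translation: check the offset against the segment length,
    then add the segment's base address."""
    base, length = segment_table[segment]
    if offset >= length:
        raise RuntimeError(f"segmentation fault: offset {offset} >= length {length}")
    return base + offset

print(translate(2, 53))    # -> 4353
print(translate(1, 399))   # -> 6699
# translate(1, 500) would raise a fault: the offset exceeds the segment length.
```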
Paging vs Segmentation
• Paging was invented to get a large address space without having to buy more physical memory; segmentation was invented to allow programs and data to be broken up into logically independent address spaces and to aid sharing and protection.
• The programmer is not aware that paging is used; the programmer is aware that segmentation is used.
• With paging, procedures and data cannot be distinguished and protected separately; with segmentation, procedures and data can be distinguished and protected separately.
• With paging, a change in data or a procedure requires compiling the entire program; with segmentation, only the affected segment needs to be compiled, not the entire program.
• With paging, sharing of different procedures is not available; with segmentation, sharing of different procedures is available.