Unit 5 Memory Organization

The document discusses memory organization, categorizing memory into volatile and non-volatile types, and detailing the memory hierarchy from cache to auxiliary memory. It explains various memory types, including CPU registers, main memory, secondary memory, and cache memory, along with their characteristics and access modes. Additionally, it covers concepts like virtual memory, address translation, paging, and memory allocation strategies, emphasizing the importance of efficient memory management for optimal CPU performance.


Unit 5

Memory Organization
Memory Organization
• A memory unit is a collection of storage devices that stores binary
information in the form of bits.
Generally, memory/storage is classified into two categories:
• Volatile Memory: loses its data when power is switched off.
• Non-Volatile Memory: permanent storage that does not lose its data
when power is switched off.
Memory Organization
• Memory device characteristics:
– The CPU should have rapid, uninterrupted access to the external memories where its
programs and the data they process are stored.
– This allows the CPU to operate at or near its maximum speed.

• The goal of a memory system is to provide adequate storage capacity with an acceptable
level of performance and cost.

• Automatic storage control methods make efficient use of the available memory
capacity.
– These methods should free the user from explicit management of memory space.
Memory Hierarchy
• The memory hierarchy system consists of all storage devices
in a computer system, from slow auxiliary memory to faster
main memory and still faster, smaller cache memory.
• The memory unit that communicates directly with the CPU is
called main memory, i.e. RAM.
• A device that provides backup storage is called auxiliary
memory, e.g. magnetic disks and tapes.
• Auxiliary memory access time is generally about 1000 times that
of main memory, hence it sits at the bottom of the
hierarchy.
• The programs and data currently needed by the CPU reside in main
memory. All other information is stored in auxiliary memory and
transferred to main memory when needed.
• Programs not currently needed in main memory are moved back
to auxiliary memory to provide space in main memory for the
programs that are currently in use.
Memory Types
• Information storage components are placed in 4 groups, as shown in the figure.
1. CPU Registers
• Register memory is a temporary storage area for storing and
transferring data and instructions within the computer.
• It is the smallest and fastest memory in a computer.
• It is the part of computer memory located inside the CPU, in the form of
registers.
• Registers are typically 16, 32 or 64 bits in size.
• They temporarily hold data, instructions and memory addresses
that are used repeatedly, providing the fastest response to the CPU.
Memory Types
• Information storage components are placed in 4 groups, as shown in the figure.
2. Main (Primary) Memory
• Primary memory, also known as the computer system's main memory,
communicates directly with the CPU, the auxiliary memory and the
cache memory.
• Main memory holds the programs and data that the processor is
actively using.
• When a program is to be executed, the processor first loads its
instructions and data from secondary memory into main
memory, and then starts execution.
• Access to main memory is slower than to registers because of its larger
capacity and because it is physically separated from the CPU.
• Typical size is in GB.
Memory Types
• Information storage components are placed in 4 groups, as shown in the figure.
3. Secondary (Auxiliary) Memory
• Secondary memory is permanent storage space for holding a large
amount of data.
• Secondary memory is also known as external memory, covering the
various storage media (hard drives, USB drives, CDs, flash
drives and DVDs) on which computer data and programs can be
saved on a long-term basis.
• It is cheaper but slower than main memory.
• Unlike primary memory, secondary memory cannot be accessed
directly by the CPU.
• It is accessed indirectly via input/output programs that transfer
information between main and secondary memory.
Memory Types
• Information storage components are placed in 4 groups, as shown in the figure.
4. Cache Memory
• Cache is a small chip-based memory that lies between the
CPU and main memory.
• It is a fast, high-performance, temporary memory used to enhance the
performance of the CPU.
• It stores the data and instructions that are most frequently used by the
CPU.
• It thereby reduces the average time needed to access data from main memory.
• It is faster than main memory and is sometimes called CPU
memory because it sits very close to the CPU chip.
Memory Types
• Information storage components are placed in 4 groups, as shown in the figure.
4. Cache Memory
• The following are the levels of cache memory:
– L1 Cache: also known as the onboard, internal, or primary cache. It is
built into the CPU chip itself. Its speed is very high, and its size typically varies
from 8 KB to 128 KB.
– L2 Cache: also known as external or secondary cache, providing fast access
to temporary data. It is traditionally built on a separate chip on the motherboard,
not into the CPU like the L1 level. The size of the L2 cache is typically 128 KB to 1 MB.
– L3 Cache: generally used in high-performance, high-capacity
computers. It is built onto the motherboard. It is the slowest of the three levels, with sizes
of up to about 8 MB.
Memory Types
Memory Systems
• Goal of Memory Systems
• To provide adequate storage capacity with an acceptable level of
performance and cost.
• Performance and cost:
– Performance: the rate at which information can be read from or written into the
memory.
– The price of a memory includes not only the cost of the information storage itself but
also the access circuitry needed to operate the memory.
– e.g. if C = price in $ and S = storage capacity in bits, then the
cost of memory is c = C/S $/bit.
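As a quick worked example of the formula above (the module price and capacity below are made-up numbers for illustration):

```python
def cost_per_bit(price_dollars: float, capacity_bits: int) -> float:
    """c = C / S: total price divided by capacity in bits."""
    return price_dollars / capacity_bits

# Hypothetical example: an 8 GiB memory module priced at $30.
capacity_bits = 8 * 2**30 * 8   # 8 GiB expressed in bits
print(cost_per_bit(30.0, capacity_bits))  # a tiny fraction of a dollar per bit
```

The same formula lets two memory technologies be compared fairly even when their capacities differ.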
Memory Systems
• Performance Measure
– Access time: the average time to read a fixed amount of information
from memory is called the read access time (tA), or simply access time.
– It can range from a few nanoseconds (ns) to microseconds (µs).
– tA depends on the physical nature of the storage medium and the access
mechanism used.
– Low cost and short access time are the desirable memory characteristics.
Memory Hierarchy
– The CPU and other processors can
communicate directly only with M1;
M1 can communicate with M2,
and so on.
– For a memory hierarchy to work
efficiently, the addresses generated
by the CPU should be found in M1 as
often as possible.
– If the desired data cannot be found in
M1, the request must be suspended
until an appropriate reallocation
of storage is made.
Memory Access Modes
• Access mode:
– The sequence or order in which information can be accessed.
– Random Access Memory (RAM)
– Serial Access Memory (SAM)
Memory Access Modes
• RAM
– If locations can be accessed in any order,
and the access time is independent of the
location being accessed, the memory is
termed RAM.
– There is a separate access mechanism, or
read/write head, for every location.
– Memory locations can be accessed
randomly.
– Access time is independent of the location
to be accessed.
– Semiconductor memories are of this type.
– Each storage location can be accessed
independently of the other locations,
which makes these memories costlier.
– e.g. SRAM and DRAM
Memory Retention
• In some technologies, stored information is lost over a period of time
unless corrective action is taken.
• Memory retention refers to the ability of a storage medium to retain
stored information over time.
• Properties by which memories lose information in this way are:
– Destructive read-out
– Dynamic storage
– Volatility
Memory Retention
• Destructive Read-Out
– In computer memory, a destructive read-out (DRO) refers to a process where
reading data from a memory location changes or destroys the stored
information.
– In DRO memories, each read operation must be followed by a write operation
that restores the memory's original state.
• Dynamic Storage
• This refers to dynamic random-access memory (DRAM), a
type of volatile memory commonly used in computers for storing data that
needs to be accessed quickly by the CPU.
• DRAM requires periodic refreshing and constant power to retain data, meaning it
loses its contents when the power is turned off.
Memory Retention
• Volatility
– Volatility in the context of computer memory refers to whether or not data is
retained when power is removed.
– Volatile memory loses its data when power is turned off, while non-volatile
memory retains data even when power is removed.
– Dynamic storage, like DRAM, is an example of volatile memory.
Virtual Memory
• "Virtual" refers to something which appears to be present but actually is not.
• Virtual memory is an area of a computer system's secondary storage space,
such as an HDD, used as if it were main memory.
• Virtual memory is a space where large programs can store themselves, in the form of
pages, during their execution; only the required pages of a program are loaded into the
main memory.
• It can be much larger than physical memory.
• The set of all logical addresses generated by the CPU or a program is known as the logical (virtual)
address space.
– A logical (virtual) address is generated by the CPU during program execution.
• The set of all physical addresses corresponding to these logical addresses is known as the physical
address space.
• Each virtual (logical) address referenced by the CPU is mapped to a physical address in
main memory.
Virtual Memory
• Reasons for using virtual memory (Advantages)
– To free user programs from the need to carry out storage allocation and
permit efficient sharing of available memory space among different users.
– Virtual memory helps in efficient CPU utilization.
– To make programs independent of the configuration and capacity of the
physical memory present for their execution.
– To achieve very low access time and cost per bit
Address Translation
• To execute a program on a particular computer, its virtual addresses
must be mapped to its real address space R, i.e. the memory physically
present in the computer.
• The virtual-address-to-physical (real) address mapping process is called
address translation or address mapping:
f: V → R
• The process of translating virtual addresses into real addresses is
called mapping.
• Specialized software and hardware within the computer automatically
determine the real addresses required for program execution.
Address Translation
• As shown in the figure above, virtual memory is divided into virtual
pages, typically 4 KB in size.
• Physical memory is likewise divided into physical pages (frames) of the same
size (4 KB).
• A virtual page may be located in physical memory (DRAM) or on
disk.
• At any given time, some virtual pages are present in physical memory and some are
located on disk.
• The process of determining the physical address from the virtual
address is called address translation.
Address Translation
Paging
• Paging is a fixed-size partitioning scheme.
• In paging, secondary memory and main memory are divided into equal
fixed-size partitions.
• The partitions of secondary memory are called pages.
• The partitions of main memory are called frames.
• Each process is divided into parts, where the size of each part is the same as the
page size.
• The size of the last part may be less than the page size.
• The pages of a process are stored in the frames of main memory, depending
upon their availability.
Paging
Example
• Consider a process divided into 4 pages P0, P1, P2 and P3.
• Depending upon availability, these pages may be stored in the main
memory frames in a non-contiguous fashion, as shown:

Process (divided into pages): P0, P1, P2, P3
Main memory frames, filled non-contiguously: P2, P1, P3, P0
Paging
• In paging, two types of addresses are used:
– Logical address: the address generated by the CPU for every page.
– Physical address: the actual address of the frame where each page
is stored.
• The CPU generates a logical address consisting of two parts:
1. Page Number: specifies the page of the process from which the CPU
wants to read the data.
2. Page Offset: specifies the particular word on that page that the CPU
wants to read.
• The physical address consists of two parts:
1. Frame Number: specifies the frame where the required page is stored.
2. Frame Offset: specifies the particular word to be read from that frame.
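The address split described above can be sketched in a few lines, assuming 4 KB pages (so the low 12 bits of the address form the offset):

```python
PAGE_SIZE = 4096  # assumed 4 KB pages -> 12 offset bits

def split_logical_address(addr: int) -> tuple[int, int]:
    """Split a logical address into (page number, page offset)."""
    page_number = addr // PAGE_SIZE   # equivalently addr >> 12
    page_offset = addr % PAGE_SIZE    # equivalently addr & 0xFFF
    return page_number, page_offset

page, offset = split_logical_address(0x3A7F)
print(page, offset)  # -> 3 2687
```

Because the page size is a power of two, the split is just a shift and a mask; no arithmetic division is needed in hardware.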
Paging
Page Table
• The page table is a data structure.
• It maps the page number referenced by the CPU to the frame number
where that page is stored.
• The page table is stored in main memory.
• Number of entries in a page table = number of pages into which the process
is divided.
• Each process has its own independent page table.
Paging
Page Table

Logical memory: Page 0, Page 1, Page 2, Page 3

Page table:               Physical memory (frames 0–7):
  Page 0 → Frame 1          Frame 1: Page 0
  Page 1 → Frame 3          Frame 3: Page 1
  Page 2 → Frame 4          Frame 4: Page 2
  Page 3 → Frame 7          Frame 7: Page 3
Paging
Working of Page Table
• The Page Table Base Register (PTBR)
contains the base address of the page
table.
• The base address of the page table is
added to the page number
referenced by the CPU.
• This gives the page-table entry
containing the frame number where
the referenced page is stored.
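The lookup described above can be sketched as follows; the page-table contents here are hypothetical, and indexing a Python list stands in for the PTBR-plus-page-number hardware step:

```python
PAGE_SIZE = 4096

# Hypothetical page table for one process: index = page number, entry = frame number.
page_table = [1, 3, 4, 7]

def translate(logical_addr: int) -> int:
    """Map a logical address to a physical address via the page table."""
    page_number, offset = divmod(logical_addr, PAGE_SIZE)
    frame_number = page_table[page_number]  # (PTBR + page number) selects this entry
    return frame_number * PAGE_SIZE + offset

print(hex(translate(0x1004)))  # page 1 maps to frame 3, giving 0x3004
```

Note that the offset is carried over unchanged: only the page-number part of the address is replaced by the frame number.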
Paging
Hardware Implementation of Paging
• The following diagram illustrates the above steps of translating a logical
address into a physical address.
Paging
Hardware Implementation of Paging
• Example
Memory Allocation
• Levels of the memory system are divided into sets of contiguous locations
called regions, segments or pages, which are blocks of data.
• The placement of blocks of information in a memory system is called
memory allocation.
• The method of selecting the part of M1 in which an incoming block K is to
be placed is determined by the allocation policy.
• Non-preemptive policies assign K to M1 only when an unoccupied or inactive
region of sufficient size is available.
• Successful memory allocation methods result in a high hit ratio and a low
average access time.
Dynamic Memory Allocation Algorithms
Dynamic memory allocation methods:
• Non-preemptive: First fit, Best fit, Worst fit
• Preemptive: Compaction; Replacement policies (FIFO, LRU, Optimal page replacement)

Preemptive scheduling allows a running process to be interrupted by a higher-priority process,
whereas in non-preemptive scheduling, any new process has to wait until the running process
finishes its CPU cycle.
Memory Allocation
• First fit
– Scans the memory map sequentially until an available region R of n or more words
is found.
– n is the size of block K.
– It then allocates block K to region R.
– The first fit approach allocates the first free region large enough to
accommodate the block.
– It finishes after finding the first suitable free region.
– Advantages:
• Requires less time to execute.
• The fastest algorithm, because it searches as little as possible.
– Disadvantage:
• The unused memory areas left over after allocation are wasted if they are too small.
Memory Allocation
• Best fit
• Searches the memory map completely.
• Best fit allocates the smallest free region which meets the
requirement of the requesting process.
• The algorithm first searches the entire list of free regions and considers the smallest
region that is adequate.
• It thus finds a region whose size is as close as possible to the size actually needed.
• Assigns block Ki to a region of size nj ≥ ni such that nj − ni is minimized.
• Advantage:
• Memory utilization is better than first fit, as it allocates the smallest adequate
free region.
• Disadvantage:
• It is slower: it requires more time to execute than the first fit method.
Memory Allocation
• Worst fit:
• The worst fit approach locates the largest available free region, so that the portion left
over after allocation will be big enough to be useful.
• It is the reverse of best fit.
• Advantage:
• Reduces the rate of production of small gaps.
• Disadvantage:
• If a block requiring a larger region of memory arrives at a later stage, it cannot be
accommodated, as the largest region has already been split and occupied.
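The three placement policies above can be sketched side by side; the list of free-region sizes is made up for illustration:

```python
def first_fit(regions, n):
    """Index of the first free region large enough for a block of n words."""
    for i, size in enumerate(regions):
        if size >= n:
            return i
    return None

def best_fit(regions, n):
    """Smallest adequate region: minimizes the leftover size - n."""
    candidates = [(size, i) for i, size in enumerate(regions) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(regions, n):
    """Largest region, so the leftover fragment stays usable."""
    candidates = [(size, i) for i, size in enumerate(regions) if size >= n]
    return max(candidates)[1] if candidates else None

free_regions = [100, 500, 200, 300, 600]  # hypothetical free-region sizes
print(first_fit(free_regions, 212))  # -> 1 (500 is the first that fits)
print(best_fit(free_regions, 212))   # -> 3 (300 leaves the smallest gap)
print(worst_fit(free_regions, 212))  # -> 4 (600 is the largest)
```

The same request lands in three different regions, which is exactly the trade-off the slides describe: first fit is fastest, best fit wastes least, worst fit keeps the leftover fragment large.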
Memory Allocation
• Compaction
• Non-preemptive allocation cannot make efficient use of memory in all
situations (a memory allocation request may be rejected due to insufficient
contiguous space).
• More efficient use of the available memory space is possible if the occupied space
can be reallocated to create a gap large enough for the incoming blocks.
• Relocating the blocks already occupying M1 can be done by a method called
compaction.
• The blocks currently in memory are compressed into a single contiguous group at
one end of memory.
• This creates an available region of maximum size, as shown in the figure.
Memory Allocation
• Preemptive Allocation

Fig. Memory allocation (a) before and (b) after compaction
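Compaction can be sketched as sliding all occupied blocks to one end of memory; the block names and sizes below are hypothetical:

```python
def compact(memory_map):
    """memory_map: list of (name, size) pairs, with name None for a free gap.
    Returns the map after compaction: occupied blocks first, one big gap last."""
    occupied = [(name, size) for name, size in memory_map if name is not None]
    free_total = sum(size for name, size in memory_map if name is None)
    # All occupied blocks are pushed to one end; a single maximal free region remains.
    return occupied + [(None, free_total)]

before = [("A", 4), (None, 2), ("B", 3), (None, 5), ("C", 1)]
print(compact(before))  # [('A', 4), ('B', 3), ('C', 1), (None, 7)]
```

Two small gaps of 2 and 5 words, neither usable for a 7-word request, merge into one 7-word region, which is the point of compaction.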


Memory Allocation
Page Fault
• A page fault behaves much like an error. It occurs when a program
tries to access data or code that is in the address space of the
program, but that data is not currently located in the RAM of the system.
• In short, when the page referenced by the CPU is not found in main
memory, the situation is termed a page fault.
• Whenever a page fault occurs, the required page has to be fetched
from secondary memory into main memory.
• A page hit occurs when a page referenced by the CPU is found in main
memory.
• A page fault (miss) occurs when a page referenced by the CPU is not found in
main memory.
Memory Allocation
Page Replacement Algorithm
• When a page fault occurs, the operating system has to choose a page to
remove from memory to make room for the page that has to be brought in.
This is known as page replacement.
• The page replacement algorithm decides which memory page is to be
replaced. The replacement process is sometimes called swap out or write
to disk.
• A good page replacement algorithm should yield a low page-fault rate.
• Generally, as the number of frames increases, the number of page faults
decreases.
Memory Allocation
Paging
FIFO (First In First Out)
• FIFO is the simplest page replacement algorithm.
• In this algorithm, a queue of pages is maintained.
• The page which was assigned a frame first will be replaced first.
• In other words, the oldest page, at the front of the queue, is
replaced on every page fault.
• The FIFO page replacement algorithm is easy to understand and program.
• Its performance, however, is not always good.
Memory Allocation
Paging
FIFO (First In First Out)
• Example:-

• The number of page faults is 15.


Memory Allocation
FIFO (First In First Out)
• Belady's anomaly:
– Belady's anomaly shows that it is possible to have more page faults
when increasing the number of page frames while using the First In
First Out (FIFO) page replacement algorithm.
– For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1,
0, 4 with 3 frames we get 9 total page faults, but if we increase the
number of frames to 4, we get 10 page faults.
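The anomaly can be verified directly with a small FIFO simulator on the reference string quoted above:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(ref, 3))  # -> 9
print(fifo_faults(ref, 4))  # -> 10: more frames, yet more faults
```

With 4 frames the early pages survive long enough to be evicted just before they are needed again, which is what produces the extra fault.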
Memory Allocation
Paging
Optimal Page Replacement
• The optimal page replacement algorithm has the lowest fault rate of all
algorithms and does not suffer from Belady's anomaly.
• The optimal page replacement algorithm states: replace the page which will
not be used for the longest period of time.

• With only nine page faults, optimal page replacement is much better than the
FIFO algorithm, which had 15 faults.
• Optimal page replacement is difficult to implement, because it requires future
knowledge of the reference string.
Memory Allocation
Paging
Least Recently Used
• If we use the recent past as an approximation of the near future, then we
replace the page which has not been used for the longest period of time.
This is the least recently used (LRU) algorithm.

• Number of page faults = 12
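The fault counts quoted in these slides (15 for FIFO, 9 for optimal, 12 for LRU) match the classic 20-element reference string with 3 frames; assuming that string, all three policies can be compared in one small simulator:

```python
def page_faults(ref, num_frames, policy):
    """Count page faults for policy in {"FIFO", "OPT", "LRU"}.
    frames is ordered: index 0 is the next victim for FIFO/LRU."""
    frames, faults = [], 0
    for i, page in enumerate(ref):
        if page in frames:
            if policy == "LRU":            # refresh recency on a hit
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) == num_frames:
            if policy in ("FIFO", "LRU"):  # evict oldest / least recently used
                frames.pop(0)
            else:                          # "OPT": evict page used farthest in future
                future = {f: ref.index(f, i) if f in ref[i:] else float("inf")
                          for f in frames}
                frames.remove(max(future, key=future.get))
        frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
for policy in ("FIFO", "OPT", "LRU"):
    print(policy, page_faults(ref, 3, policy))
# FIFO 15, OPT 9, LRU 12
```

The ordering OPT < LRU < FIFO seen here is typical: LRU approximates OPT by using past behaviour as a proxy for the future, while FIFO ignores usage entirely.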
