SEN307 Lecture 10
Main Memory Design and Virtual Memory
Learning Objectives:
1. Understand the principles of memory hierarchy and the roles of different types of memory.
2. Comprehend the organization and design of main memory and virtual memory.
Memory systems are designed to provide storage that varies in speed, size, cost, and accessibility, forming a hierarchy that balances these trade-offs to optimize overall system performance.
• Hierarchy and Speed: Different levels of the memory hierarchy offer varying speeds and sizes, with faster, smaller memories (like caches and registers) closer to the CPU and slower, larger memories (like RAM and secondary storage) further away.
• Volatility: Some memory types (like RAM) are volatile and lose their data when powered off, while others (like SSDs and HDDs) are non-volatile and retain data without power.
• Cost Efficiency: Higher-speed memory is more expensive per byte, while slower memory types are cheaper, affecting the overall cost and design of a computer system.
• Data Transfer and Access Time: The design of memory systems impacts data transfer rates and access times, directly influencing the performance of a computer.
Memory Hierarchy Overview
• Registers: Serve as the CPU's internal storage, holding data that is immediately needed for computations.
• Cost:
• Memory cost increases with speed and decreases with size. Registers and caches are the most expensive per byte, while secondary storage is the cheapest.
• Not all memory can be made of the fastest type due to cost constraints and physical space limitations.
• Size:
• Larger memory types (like SSDs and HDDs) are slower but cheaper per gigabyte and provide bulk storage for data that is not frequently accessed.
• Smaller, faster memory types (like registers and cache) are limited in size but provide rapid access.
Comparative Chart for Memory Hierarchy

Memory Type           Speed (Access Time)   Cost per Byte   Size (Capacity)
Registers             Fastest               Highest         Smallest
L1 Cache              Very Fast             High            Small
L2 Cache              Fast                  Moderate        Moderate
L3 Cache              Moderately Fast       Lower           Larger
Main Memory (RAM)     Moderate              Lower           Much Larger
Secondary Storage     Slow                  Lowest          Largest
Cache Memory Design
Cache memory is a small, high-speed, volatile memory located close to the CPU that temporarily stores frequently accessed data and instructions.
• Cache miss: If the requested data is not found in the cache memory, a cache miss is said to occur.
• Cache miss time penalty: In the case of a cache miss, the time required to fetch the required block from main memory (and deliver it to the CPU) is called the cache miss time penalty.
Cache Hit and Miss Concepts:
• Hit Ratio (H) = Number of hits / (Number of hits + Number of misses)
Worked example: a cache has access time tc = 30 and main memory has access time tm = 150 (in the same time units, e.g., ns). The average access time is TA = 42. Find the hit ratio H.

Solution:
TA = H(tc) + (1 - H)(tc + tm)
42 = H(30) + (1 - H)(30 + 150)
42 = 30H + (1 - H)(180)
42 = 30H + 180 - 180H
42 - 180 = 30H - 180H
-138 = -150H
H = 138/150 = 0.92
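To sanity-check the result, here is a minimal C sketch (not from the original slides; the function name hit_ratio is my own) that solves the same average-access-time equation for H in closed form:

    #include <stdio.h>

    /* Solves TA = H*tc + (1 - H)*(tc + tm) for the hit ratio H.
       Expanding and rearranging gives H = (tc + tm - TA) / tm. */
    double hit_ratio(double tc, double tm, double ta) {
        return (tc + tm - ta) / tm;
    }

    int main(void) {
        /* Values from the worked example: tc = 30, tm = 150, TA = 42 */
        printf("H = %.2f\n", hit_ratio(30.0, 150.0, 42.0));   /* prints 0.92 */
        return 0;
    }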
Memory Addressing Modes
• Indirect Addressing:
• Uses pointers or registers to point to a memory location, offering more flexibility.
• Indexed Addressing:
• Combines a base address with an index register to determine the memory location, allowing for efficient array processing and looping.
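As an illustrative sketch (not from the slides), the C program below mimics the two modes in software: a pointer dereference stands in for indirect addressing, and base-plus-index arithmetic stands in for indexed addressing.

    #include <stdio.h>

    int main(void) {
        int data[5] = {10, 20, 30, 40, 50};

        /* Indirect addressing: the operand's address is held in a
           pointer, and the CPU follows the pointer to the data. */
        int *ptr = &data[2];
        printf("indirect: *ptr = %d\n", *ptr);   /* prints 30 */

        /* Indexed addressing: effective address = base + index,
           which is how array elements are swept in a loop. */
        for (int i = 0; i < 5; i++)
            printf("indexed: data[%d] = %d\n", i, *(data + i));
        return 0;
    }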
Memory Access Control and Timing
• Access Control:
• Involves managing read and write operations to ensure data integrity and system stability.
• Utilizes memory controllers to coordinate data flow between the CPU and memory.
• Timing:
• Memory timing parameters (e.g., CAS latency, RAS-to-CAS delay) determine how quickly data can be accessed; the sketch below shows how such a parameter translates into nanoseconds.
• Lower latency and faster memory access improve overall system performance.
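As a hedged illustration of what a timing parameter means in practice, this C sketch converts a CAS latency given in clock cycles into nanoseconds; the module speeds and CL values below are assumed example figures, not from the slides.

    #include <stdio.h>

    /* For DDR memory, data moves on both clock edges, so the clock
       period in ns is 2000 / (transfer rate in MT/s). The CAS latency
       in ns is the CL cycle count times the clock period. */
    double cas_latency_ns(int cl_cycles, int transfer_rate_mts) {
        return cl_cycles * (2000.0 / transfer_rate_mts);
    }

    int main(void) {
        /* Example: DDR4-3200 at CL16 -> 16 * 0.625 ns = 10 ns */
        printf("DDR4-3200 CL16: %.2f ns\n", cas_latency_ns(16, 3200));
        /* Example: DDR4-2400 at CL17 -> about 14.17 ns */
        printf("DDR4-2400 CL17: %.2f ns\n", cas_latency_ns(17, 2400));
        return 0;
    }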
Concepts of Memory Interleaving
• Definition:
• A technique used to increase memory bandwidth by dividing memory into multiple banks and accessing them simultaneously.
• Types of Interleaving (see the sketch after this list):
• Low-order interleaving: Alternates memory accesses between consecutive addresses across multiple banks.
• High-order interleaving: Allocates blocks of addresses to specific banks, reducing contention.
• Benefits:
• Increases data throughput and reduces access latency.
• Allows parallel access to different memory banks, enhancing performance in multi-core systems.
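A minimal C sketch of the two bank-selection schemes; the bank count and bank size are assumed values for illustration, not from the slides.

    #include <stdio.h>

    #define NUM_BANKS 4
    #define WORDS_PER_BANK 1024   /* assumed bank size (high-order scheme) */

    /* Low-order interleaving: consecutive addresses land in different
       banks, so the bank number comes from the low-order address bits. */
    int low_order_bank(unsigned addr)  { return addr % NUM_BANKS; }

    /* High-order interleaving: each bank owns one contiguous block of
       addresses, so the bank number comes from the high-order bits. */
    int high_order_bank(unsigned addr) { return addr / WORDS_PER_BANK; }

    int main(void) {
        for (unsigned addr = 0; addr < 8; addr++)
            printf("addr %u -> low-order bank %d, high-order bank %d\n",
                   addr, low_order_bank(addr), high_order_bank(addr));
        return 0;
    }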
Error Detection and Correction Techniques
• Parity Bit:
• Adds an extra bit to each byte for error detection; it can detect single-bit errors but not correct them (see the sketch below).
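A short C sketch of even parity (my own example, not from the slides); it shows why a single flipped bit is detectable but not locatable.

    #include <stdio.h>

    /* Chooses a parity bit so the total number of 1s (data plus parity)
       is even. One flipped bit changes the count's parity and is thus
       detected, but nothing identifies which bit flipped, so the error
       cannot be corrected. */
    int even_parity(unsigned char byte) {
        int ones = 0;
        for (int i = 0; i < 8; i++)
            ones += (byte >> i) & 1;
        return ones % 2;   /* parity bit to append */
    }

    int main(void) {
        unsigned char data = 0x59;   /* 0101 1001: four 1s -> parity bit 0 */
        printf("parity bit for 0x%02X: %d\n", data, even_parity(data));
        return 0;
    }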
• Segmentation:
• Divides memory into variable-sized segments based on logical divisions of a program (e.g., functions, data structures).
• Each segment is stored in a separate location in physical memory, and a segment table keeps track of each segment's location and length.
• Advantages: Provides a more logical view of memory and supports protection and sharing of memory segments.
• Disadvantages: Can lead to external fragmentation if there isn't enough contiguous space for a new segment.
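A minimal C sketch of segment-table translation (all base and limit values are made-up illustrations, not from the slides): a logical (segment, offset) pair is checked against the segment's length and mapped to a physical address.

    #include <stdio.h>
    #include <stdlib.h>

    /* One segment-table entry: where the segment starts in physical
       memory (base) and how long it is (limit). */
    typedef struct { unsigned base, limit; } SegmentEntry;

    SegmentEntry seg_table[] = {
        {0x4000, 0x0800},   /* segment 0: e.g., code  */
        {0x9000, 0x0400},   /* segment 1: e.g., data  */
        {0xC000, 0x0200},   /* segment 2: e.g., stack */
    };

    /* Maps (segment, offset) to a physical address, rejecting offsets
       beyond the segment's length; this is the protection check
       mentioned in the advantages above. */
    unsigned translate(unsigned seg, unsigned offset) {
        if (offset >= seg_table[seg].limit) {
            fprintf(stderr, "segment violation\n");
            exit(1);
        }
        return seg_table[seg].base + offset;
    }

    int main(void) {
        printf("seg 1, offset 0x10 -> 0x%X\n", translate(1, 0x10));
        return 0;
    }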
Page Replacement Algorithms:
• FIFO (First-In, First-Out):
• Description: Replaces the page that has been in memory the longest (a minimal simulation follows this list).
• Advantages: Simple to implement.
• Disadvantages: Can result in suboptimal performance (Belady's anomaly).
• LRU (Least Recently Used):
• Description: Replaces the page that has not been used for the longest period.
• Advantages: Provides a good approximation of the optimal algorithm in terms of minimizing page faults.
• Disadvantages: More complex to implement and requires additional hardware support.
• Optimal (OPT):
• Description: Replaces the page that will not be used for the longest period in the future.
• Advantages: Provides the lowest possible page fault rate.
• Disadvantages: Requires future knowledge of memory accesses, which is not available in practice, so it serves mainly as a theoretical benchmark.
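As referenced in the FIFO item above, here is a minimal C simulation that counts page faults under FIFO replacement; the reference string and frame count are assumed example values, not from the slides.

    #include <stdio.h>

    #define FRAMES 3

    /* Counts page faults for a reference string under FIFO replacement:
       on a fault, the page loaded earliest is evicted (circular pointer). */
    int fifo_faults(const int *refs, int n) {
        int frames[FRAMES], next = 0, faults = 0;
        for (int i = 0; i < FRAMES; i++) frames[i] = -1;   /* empty frames */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < FRAMES; f++)
                if (frames[f] == refs[i]) { hit = 1; break; }
            if (!hit) {
                frames[next] = refs[i];          /* evict oldest resident page */
                next = (next + 1) % FRAMES;
                faults++;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};   /* example string */
        int n = sizeof refs / sizeof refs[0];
        printf("FIFO page faults with %d frames: %d\n", FRAMES, fifo_faults(refs, n));
        return 0;
    }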
Performance Considerations in Virtual Memory Systems:
• Page Fault Rate:
• The frequency of page faults significantly impacts system performance. A high page fault rate can degrade performance due to frequent disk I/O operations.
• Thrashing:
• Occurs when a system spends more time swapping pages in and out of memory than executing instructions, leading to a severe decline in performance.
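To make the impact of the page fault rate concrete, the C sketch below evaluates the standard effective-access-time formula EAT = (1 - p) * memory access time + p * fault service time; the 100 ns memory access and 8 ms fault service time are assumed illustrative figures, not from the slides.

    #include <stdio.h>

    int main(void) {
        double mem_ns   = 100.0;       /* assumed main memory access time */
        double fault_ns = 8000000.0;   /* assumed 8 ms page-fault service */
        for (int k = 0; k <= 2; k++) {
            double p = k / 1000.0;     /* fault rate: 0.000, 0.001, 0.002 */
            double eat = (1.0 - p) * mem_ns + p * fault_ns;
            printf("fault rate %.3f -> EAT = %.1f ns\n", p, eat);
        }
        return 0;
    }

With these assumed numbers, even one fault per thousand accesses pushes the effective access time from 100 ns to roughly 8,100 ns, which is why a high fault rate and thrashing degrade performance so sharply.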
• Question 1: Explain the role of cache memory in the memory hierarchy and discuss how different levels of cache (L1, L2, L3) are optimized for both speed and cost. What are the trade-offs involved in increasing the size of each cache level?