
Hierarchy of Memory in Computer Organization and Architecture

This presentation delves into the hierarchy of memory, a critical aspect of computer organization and architecture. We'll explore the different levels of memory, their characteristics, and their roles in efficient data management.

by Tushar Kumar
Introduction to Memory Hierarchy

Why Hierarchy?
Different memory levels cater to varying needs: faster but smaller memory (cache) for frequently accessed data; larger, slower memory (main memory) for program execution; vast but slowest memory (secondary storage) for long-term storage.

Key Concepts
Speed-cost trade-off: faster memory is more expensive. Capacity-speed trade-off: larger capacity typically means slower access.
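
To make these trade-offs concrete, here is a minimal C sketch that compares rough, order-of-magnitude access times for the main levels. The numbers are illustrative assumptions, not measurements; real latencies vary widely with the specific CPU, DRAM, and drives involved.

/* Illustrative, order-of-magnitude access latencies for each memory level.
   These are assumed round figures, not measured values. */
#include <stdio.h>

int main(void) {
    const double l1_cache_ns    = 1.0;        /* ~1 ns: small, very fast, costly per byte */
    const double main_memory_ns = 100.0;      /* ~100 ns: larger, slower DRAM             */
    const double ssd_ns         = 100000.0;   /* ~100 us: flash-based secondary storage   */
    const double hdd_ns         = 10000000.0; /* ~10 ms: spinning-platter secondary storage */

    printf("Main memory is roughly %.0fx slower than L1 cache\n",
           main_memory_ns / l1_cache_ns);
    printf("An SSD access is roughly %.0fx slower than main memory\n",
           ssd_ns / main_memory_ns);
    printf("An HDD access is roughly %.0fx slower than main memory\n",
           hdd_ns / main_memory_ns);
    return 0;
}

Even at this coarse granularity, each step down the hierarchy costs orders of magnitude in latency, which is why the faster levels exist at all.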
Cache Memory

Level 1 (L1)
Smallest and fastest, often integrated with the CPU. Holds frequently used data for immediate access.

Level 2 (L2)
Larger than L1, slightly slower. Serves as a buffer between L1 and main memory.

Level 3 (L3)
Largest and slowest, shared among multiple cores. Acts as a larger buffer for frequently used data.
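
As a rough sketch of how a cache level decides whether it already holds a piece of data, the C code below simulates a tiny direct-mapped cache: an address is split into a tag, an index, and a block offset, and a lookup is a hit only when the indexed line is valid and its tag matches. The sizes (64 lines of 64 bytes) and the direct-mapped organization are simplifying assumptions; real L1/L2/L3 caches are larger and typically set-associative.

/* Minimal direct-mapped cache lookup sketch (sizes are arbitrary assumptions). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES   64   /* number of cache lines */
#define OFFSET_BITS 6    /* log2(64-byte block)   */
#define INDEX_BITS  6    /* log2(NUM_LINES)       */

typedef struct {
    bool     valid;
    uint64_t tag;
} CacheLine;

static CacheLine cache[NUM_LINES];

/* Returns true on a hit; on a miss, fills the line as if fetched from the next level. */
static bool access_cache(uint64_t address) {
    uint64_t index = (address >> OFFSET_BITS) & (NUM_LINES - 1);
    uint64_t tag   = address >> (OFFSET_BITS + INDEX_BITS);

    if (cache[index].valid && cache[index].tag == tag)
        return true;            /* hit: the block is already in this level */

    cache[index].valid = true;  /* miss: bring the block in from below     */
    cache[index].tag   = tag;
    return false;
}

int main(void) {
    printf("first access:  %s\n", access_cache(0x1040) ? "hit" : "miss");
    printf("second access: %s\n", access_cache(0x1040) ? "hit" : "miss");
    return 0;
}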
Main Memory (RAM)

Dynamic RAM (DRAM)
Most common type; uses capacitors to store data. Requires periodic refresh to maintain data.

Static RAM (SRAM)
Faster but more expensive; uses latches to store data. Doesn't need refresh, ideal for small caches.
Secondary Storage

Hard Disk Drives (HDDs)
Traditional storage; uses spinning platters with magnetic heads to read and write data. Lower cost but slower access.

Solid State Drives (SSDs)
Modern storage; uses flash memory chips to store data. Faster access but more expensive than HDDs.

Magnetic Tapes
Large capacity but slowest access; primarily used for backups and archiving. Cost-effective for long-term data retention.
Memory Hierarchy Principles

Locality of Reference
Programs tend to access data and instructions in localized regions. This enables efficient caching and data movement.
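
A small C sketch of spatial locality: a 2-D array in C is laid out row by row, so the row-order loop below walks memory sequentially and reuses each fetched cache block, while the column-order loop jumps across rows and tends to miss far more often. The array size is an arbitrary assumption for illustration.

/* Two traversals of the same array; the row-order loop exploits spatial
   locality, the column-order loop defeats it. Size is an arbitrary choice. */
#include <stdio.h>

#define N 1024
static double a[N][N];

int main(void) {
    double sum = 0.0;

    /* Cache-friendly: consecutive iterations touch adjacent memory. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Cache-unfriendly: each iteration jumps N * sizeof(double) bytes ahead. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("sum = %f\n", sum);
    return 0;
}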

Cache Coherence
Ensuring data consistency across multiple caches when multiple processors access shared data. Important for parallel processing.
Performance Considerations

Cache Hit
Data found in the cache, providing fast access. Improves program performance significantly.

Cache Miss
Data not found in the cache, requiring access to slower memory levels. Increases latency and slows down performance.
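
The cost of hits and misses is commonly summarized as the average memory access time, AMAT = hit time + miss rate × miss penalty. The C sketch below plugs in assumed example values (a 2 ns hit time and a 100 ns miss penalty) purely to show how strongly the miss rate drives the average.

/* Average memory access time: AMAT = hit_time + miss_rate * miss_penalty.
   All numbers below are assumed example values, not measurements. */
#include <stdio.h>

static double amat(double hit_time_ns, double miss_rate, double miss_penalty_ns) {
    return hit_time_ns + miss_rate * miss_penalty_ns;
}

int main(void) {
    printf("5%% miss rate:  %.1f ns\n", amat(2.0, 0.05, 100.0)); /* 7.0 ns  */
    printf("20%% miss rate: %.1f ns\n", amat(2.0, 0.20, 100.0)); /* 22.0 ns */
    return 0;
}

In this example, raising the miss rate from 5% to 20% roughly triples the average access time, which is why locality-friendly code and well-sized caches matter.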
Conclusion and Key Takeaways

The memory hierarchy is a fundamental concept in computer systems. Understanding the different levels and their characteristics enables efficient program design, performance optimization, and data management.
