
Computer Architecture (PCC CS-402)
Cache Memory Organization

May 12, 2025


Introduction
■ Extremely fast memory
■ Acts as a buffer between RAM and the CPU
■ Holds frequently requested data and instructions so that the CPU can access them immediately when required
■ Reduces the average time to access data from main memory
■ Three different mapping techniques:
● Direct mapping
● Set associative mapping
● Fully associative mapping



Direct Mapped Cache
■ Direct mapped 2^N byte cache (assuming a 32-bit address):
● The uppermost (32 − N) bits are always the Cache Tag
● The lowest M bits are the Byte Select (Block Size = 2^M)
● The middle (N − M) bits are the Cache Index
■ Example: 1 KB Direct Mapped Cache with 32 B Blocks (see the sketch below)
● Index chooses a potential block
● Tag is checked to verify the block
● Byte select chooses the byte within the block
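
A minimal sketch of this address breakdown, assuming a 32-bit address (so N = 10 and M = 5 for this example); the field widths come from the slide, while the code and sample address are illustrative:

```python
# Decompose a 32-bit address for a 1 KB direct-mapped cache with 32 B blocks.
# 1 KB = 2^10 bytes -> N = 10; 32 B blocks -> M = 5.
# Tag = upper 32 - 10 = 22 bits, Index = 10 - 5 = 5 bits, Byte select = 5 bits.

N, M = 10, 5   # cache size 2^N bytes, block size 2^M bytes

def split_address(addr):
    byte_select = addr & ((1 << M) - 1)          # lowest M bits
    index = (addr >> M) & ((1 << (N - M)) - 1)   # middle N - M bits
    tag = addr >> N                              # uppermost 32 - N bits
    return tag, index, byte_select

tag, index, byte = split_address(0x1234_5678)     # illustrative address
print(f"tag={tag:#x} index={index} byte={byte}")  # tag=0x48d15 index=19 byte=24
```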



Direct Mapping
■ The cache is like a hash table without chaining (one slot per bucket)
● Collisions result in evictions
● Each main memory block always maps to the same cache location



Direct Mapping
■ Each block from memory can be placed in only one location.
■ Given n cache blocks:
● Block i maps to cache block i mod n
● e.g., with n = 8, memory blocks 3, 11, and 19 all compete for cache block 3 (see the simulation below)
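
A minimal simulation sketch of the i mod n placement rule and the evictions it causes; the cache size and block numbers are illustrative, not from the slides:

```python
# Tiny direct-mapped cache model: each slot holds at most one memory block.
n = 8                      # number of cache blocks
cache = [None] * n         # cache[index] = memory block currently resident

def access(block):
    index = block % n      # direct mapping: block i -> cache block i mod n
    hit = cache[index] == block
    if not hit:
        cache[index] = block   # a collision evicts whatever was there
    return "hit" if hit else "miss"

for b in [3, 11, 3, 19, 11]:   # 3, 11, 19 all map to index 3
    print(f"block {b:2d} -> index {b % n}: {access(b)}")
# block  3 -> index 3: miss
# block 11 -> index 3: miss   (evicted block 3)
# block  3 -> index 3: miss   (evicted block 11)
# block 19 -> index 3: miss   (evicted block 3)
# block 11 -> index 3: miss   (evicted block 19)
```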



Set Associative Mapped Cache
■ N-way set associative: N entries per Cache Index
● N direct mapped caches operate in parallel
■ Example: Two-way set associative cache (see the sketch below)
● Cache Index selects a set from the cache
● The two tags in the set are compared to the input tag in parallel
● Data is selected based on the tag comparison result
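
A minimal sketch of the two-way lookup; the set contents and helper names are illustrative:

```python
# Two-way set-associative lookup: the index picks a set, then both ways'
# tags are checked (in hardware, in parallel) against the address tag.

NUM_SETS = 4
# sets[index] is a list of two ways; each way is (tag, data) or None.
sets = [[None, None] for _ in range(NUM_SETS)]

def lookup(tag, index):
    for way in sets[index]:          # hardware compares both ways at once
        if way is not None and way[0] == tag:
            return way[1]            # hit: select data from the matching way
    return None                      # miss

sets[2][0] = (0x7A, "block A")
sets[2][1] = (0x3C, "block B")
print(lookup(0x3C, 2))   # block B
print(lookup(0x55, 2))   # None (miss)
```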



K-way Set Associative Mapping
■ Given S sets, block i of main memory maps to set i mod S
■ Within the set, the block can be placed anywhere
■ Let K = number of cache blocks per set = n/S
● K tag comparisons are required per search
● e.g., n = 8 cache blocks and S = 4 sets give K = 2 blocks per set
■ A block that maps to set i can occupy any cache block of set i



Set Associative Mapping
■ 12-bit address (decomposed in the sketch below):
● 16 bytes per block => 4 LSBs are used to determine the desired byte/word offset within the block
● 2 = 2^1 possible sets => 1 bit determines the cache set (i.e. the hash function uses this 1 bit of the address)
● Tag = upper 7 bits, used to identify the block in the cache
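
A minimal sketch of this 7-bit tag / 1-bit set / 4-bit offset split; the example address is illustrative:

```python
# 12-bit address = 7-bit tag | 1-bit set index | 4-bit byte offset.
def split_12bit(addr):
    offset = addr & 0xF          # lowest 4 bits: byte within the 16 B block
    set_index = (addr >> 4) & 1  # next 1 bit: which of the 2 sets
    tag = addr >> 5              # upper 7 bits: identifies the block
    return tag, set_index, offset

tag, s, off = split_12bit(0b1011001_1_0110)   # illustrative address
print(f"tag={tag:07b} set={s} offset={off}")  # tag=1011001 set=1 offset=6
```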



Set Associative Mapping

[Slides 9–14: figure-only slides stepping through the set associative mapping example; images not included in this export]


Fully Associative Mapped Cache
■ Fully associative: any cache block can hold any memory line
● The address does not include a cache index
● The Cache Tags of all Cache Entries are compared in parallel
■ Example: Block Size = 32 B
● We need N 27-bit comparators (with a 32-bit address, tag = 32 − 5 = 27 bits)
● We still have the byte select to choose a byte within the block
■ Any block from memory can be put in any cache block (i.e. no mapping restriction)
■ Completely flexible (see the sketch below)
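
A minimal sketch of a fully associative lookup: hardware performs the N tag comparisons in parallel, while this software analogue scans sequentially; the entries shown are illustrative:

```python
# Fully associative cache: a list of (tag, data) entries, no index field.
entries = [(0x48D15A2, "block X"), (0x0000001, "block Y")]  # illustrative tags

def lookup(tag):
    # Hardware uses N parallel 27-bit comparators; software just scans.
    for entry_tag, data in entries:
        if entry_tag == tag:
            return data      # hit
    return None              # miss: a new block may be placed in ANY entry

print(lookup(0x0000001))   # block Y
print(lookup(0x7FFFFFF))   # None (miss)
```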



Set Associative Mapping

[Slides 16–22: further figure-only slides titled "Set Associative Mapping"; images not included in this export]


Measure: Cache Performance
■ Hit rate = fraction of memory accesses for which the word is found in the cache.
■ Hit time = time required to access the cache memory.
■ Miss penalty = time required to fetch a block from the lower memory level, including the time to deliver it to the CPU.

Average Memory Access Time (AMAT) = Hit time + Miss rate × Miss penalty (sketched in code below)

■ Cache memory categories:
● Split cache: data & instructions are stored separately (Harvard architecture).
● Unified cache: data & instructions are stored together (von Neumann architecture).
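
A minimal sketch of the AMAT formula as code; the function name and sample numbers are illustrative:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time = Hit time + Miss rate x Miss penalty."""
    return hit_time + miss_rate * miss_penalty

# e.g. 1-cycle hit, 2% miss rate, 50-cycle miss penalty:
print(amat(1, 0.02, 50))   # 2.0 cycles
```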
AMAT calculation
■ AMAT = (% instructions × (instruction hit time + instruction miss rate × instruction miss penalty)) + (% data × (data hit time + data miss rate × data miss penalty))
■ AMAT for the split cache = (75% × (1 + 0.64% × 50)) + (25% × (1 + 6.47% × 50)) = 2.05
■ AMAT for the unified cache = (75% × (1 + 1.99% × 50)) + (25% × (2 + 1.99% × 50)) = 2.24

■ The unified cache has the longer AMAT even though its miss rate is lower. This happens because simultaneous instruction and data accesses to the single cache create structural hazards, which is why the data hit time above is 2 cycles rather than 1 (the arithmetic is verified in the sketch below).
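
A minimal sketch reproducing the slide's arithmetic; the function name is illustrative:

```python
def weighted_amat(frac_instr, instr_hit, instr_mr,
                  frac_data, data_hit, data_mr, penalty=50):
    # Weighted AMAT over instruction and data references.
    return (frac_instr * (instr_hit + instr_mr * penalty)
            + frac_data * (data_hit + data_mr * penalty))

# Split cache: separate I- and D-caches, both with 1-cycle hit time.
split = weighted_amat(0.75, 1, 0.0064, 0.25, 1, 0.0647)
# Unified cache: data accesses pay an extra cycle (structural hazard).
unified = weighted_amat(0.75, 1, 0.0199, 0.25, 2, 0.0199)
print(f"split   = {split:.3f}")    # split   = 2.049  (slide rounds to 2.05)
print(f"unified = {unified:.3f}")  # unified = 2.245  (slide rounds to 2.24)
```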

