Computer Organization: Large and Fast: Exploiting Memory Hierarchy
Chapter 5
Large and Fast: Exploiting Memory Hierarchy
Reading:
Textbook Sections 5.1-5.5, 5.8
Basics of Cache
Cache was the name chosen to represent the level of the
memory hierarchy between the processor and main memory
in the first commercial computer to have this extra level
Small and fast memory
Stores the subset of instructions and data currently being accessed
Used to reduce average access time to memory
Caches exploit temporal locality by keeping recently accessed data
closer to the processor
Caches exploit spatial locality by moving blocks consisting of multiple
contiguous words closer to the processor (see the loop sketch after this list)
The goals are to
Provide memory access at close to the speed of the cache
Keep the cost of the overall memory system balanced
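As an illustration of these two kinds of locality (a sketch in C, not taken from the textbook), the loop below reuses the variables sum and i on every iteration (temporal locality) and reads the array a from consecutive addresses (spatial locality), so a cache that fetches multi-word blocks can serve most of the reads without going to main memory:

    #include <stdio.h>

    #define N 1024

    int main(void) {
        int a[N];
        for (int i = 0; i < N; i++)
            a[i] = i;                /* fill the array */

        long sum = 0;
        for (int i = 0; i < N; i++)
            sum += a[i];             /* a[i], a[i+1], ... are contiguous: spatial locality */
                                     /* sum and i are reused every pass: temporal locality */
        printf("sum = %ld\n", sum);
        return 0;
    }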
Basics of Cache
Example: the processor requests data item Xn
A very simple cache in which each processor request is one word
and each block also consists of a single word
A referenced address is divided into
A tag field, which is compared with the tag stored in the cache block
A cache index, which is used to select the block
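A minimal sketch of this split for a direct-mapped cache with single-word (4-byte) blocks; the index width N_INDEX_BITS, the struct, and the function name are illustrative assumptions, not from the slides:

    #include <stdint.h>
    #include <stdio.h>

    #define N_INDEX_BITS 10          /* assume 2^10 = 1024 blocks */

    typedef struct {
        uint64_t tag;                /* upper bits, compared with the stored tag */
        uint64_t index;              /* selects one of the cache blocks          */
        uint64_t offset;             /* byte within the 4-byte word              */
    } addr_fields;

    addr_fields split_address(uint64_t addr) {
        addr_fields f;
        f.offset = addr & 0x3;                                  /* low 2 bits: byte offset */
        f.index  = (addr >> 2) & ((1ULL << N_INDEX_BITS) - 1);  /* next N_INDEX_BITS bits  */
        f.tag    = addr >> (2 + N_INDEX_BITS);                  /* remaining upper bits    */
        return f;
    }

    int main(void) {
        addr_fields f = split_address(0x12345678ULL);
        printf("tag=%llx index=%llx offset=%llx\n",
               (unsigned long long)f.tag,
               (unsigned long long)f.index,
               (unsigned long long)f.offset);
        return 0;
    }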
Cache Example with 64-bit address
The total number of bits needed for a cache is a
function of the cache size and the address size,
because the cache includes both the storage for the
data and the tags
Three parts of a cache:
Data
Tags
Valid bits
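One entry of such a cache could be sketched as a struct holding these three parts (the layout and names are assumptions for illustration, not the textbook's definition):

    #include <stdbool.h>
    #include <stdint.h>

    /* One block of a direct-mapped cache with single-word (32-bit) data.
       Layout and names are illustrative, not from the textbook.          */
    typedef struct {
        bool     valid;   /* valid bit: does this entry hold real data?      */
        uint64_t tag;     /* upper address bits identifying the cached block */
        uint32_t data;    /* the cached word itself                          */
    } cache_entry;

    /* A hit requires the entry to be valid and its tag to match the request. */
    static inline bool is_hit(const cache_entry *e, uint64_t tag) {
        return e->valid && e->tag == tag;
    }

On a lookup, the index selects one cache_entry and the tag comparison plus the valid bit decide between a hit and a miss.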
Cache Example with 64-bit address
For the following situation:
64-bit addresses
A direct-mapped cache
The cache size is 2^n blocks, so n bits are used for the index
The block size is 2^m words (2^(m+2) bytes), so m bits are used for the
word within the block, and two bits are used for the byte part of the address
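With this split, the tag occupies the remaining 64 - (n + m + 2) address bits, and each block also needs one valid bit. A small sketch (function and parameter names are illustrative) that adds these up for 32-bit words:

    #include <stdint.h>
    #include <stdio.h>

    /* Total bits in a direct-mapped cache with 64-bit addresses,
       2^n blocks, and 2^m 32-bit words per block:
       total = 2^n * (data bits + tag bits + 1 valid bit)          */
    uint64_t cache_total_bits(unsigned n, unsigned m) {
        uint64_t blocks    = 1ULL << n;
        uint64_t data_bits = (1ULL << m) * 32;       /* 2^m words of 32 bits          */
        uint64_t tag_bits  = 64 - n - m - 2;         /* address bits left for the tag */
        return blocks * (data_bits + tag_bits + 1);  /* +1 for the valid bit          */
    }

    int main(void) {
        /* Example: 1024 blocks (n = 10) of one word each (m = 0):
           2^10 * (32 + 52 + 1) = 1024 * 85 = 87,040 bits            */
        printf("%llu bits\n", (unsigned long long)cache_total_bits(10, 0));
        return 0;
    }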