Cache Memory - Indexing: CS223 Computer Architecture & Organization

The document discusses cache memory design choices including block placement, identification, replacement, and write strategies. It provides calculations for cache indexing, explaining how addresses map to cache sets and the benefits of using middle bits for indexing to improve spatial locality. Additionally, it contrasts look-aside and look-through cache architectures in terms of data retrieval processes.


CS223 Computer Architecture & Organization

Cache Memory – Indexing

John Jose
Associate Professor
Department of Computer Science & Engineering
Indian Institute of Technology Guwahati
Four cache memory design choices
❖ Where can a block be placed in the cache?
– Block Placement
❖ How is a block found if it is in the upper level?
– Block Identification
❖ Which block should be replaced on a miss?
– Block Replacement
❖ What happens on a write?
– Write Strategy
Block Placement
Index and Offset Calculations
A cache has 512 KB capacity, a 4 B word, a 64 B block size, and is 8-way set
associative. The system uses 32-bit addresses. Given the address
0xABC89984, which cache set will be searched, and which word of the
selected cache block will be forwarded if it is a hit?
# sets = cache size / (block size × associativity) = 2^19 / (2^6 × 2^3) = 2^10 = 1024 sets
1 word = 4 B, hence a 64-byte block has 16 words
Tag = 16 bits, Index = 10 bits, Offset = 6 bits (4 word-select + 2 byte-select)

0xABC89984 → low 16 bits: 1001 1001 1000 0100 → Set 614, word 1

0x485669AC → low 16 bits: 0110 1001 1010 1100 → Set 422, word 11
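The field extraction above can be checked with a short script; the field widths (6 offset bits, 10 index bits, 4-byte words) come from the worked example, and the function name is illustrative:

```python
def decode(addr, offset_bits=6, index_bits=10, word_bytes=4):
    """Split a 32-bit address into (set index, word-in-block, tag)."""
    offset = addr & ((1 << offset_bits) - 1)          # low 6 bits: byte offset
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)  # next 10 bits: set
    tag = addr >> (offset_bits + index_bits)          # remaining 16 bits: tag
    return index, offset // word_bytes, tag

set_, word, tag = decode(0xABC89984)
print(set_, word)   # 614 1

set_, word, tag = decode(0x485669AC)
print(set_, word)   # 422 11
```

The same function reproduces both answers from the slide, which is a quick sanity check on the hand calculation.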


Mapping Calculations
The address of a word in a byte-addressable 1 MB physical
memory is 0xAB8F2. When brought into the cache, this word is
mapped to set 30. The word length of the processor is 16 bits.
How many words can be accommodated in each cache block?
❖ 0xAB8F2 → 1010 1011 1000 1111 0010
❖ 30 → 11110: this set-index pattern appears in the address as
1010 1011 1000 [11110] 010; the bits to its right form the block offset.
❖ Offset = 010 → 3 bits → 8-byte blocks. One word is 2 bytes (16 bits),
so each block accommodates 4 words.
Cache Indexing
[Figure: an m-bit address divided into tag (t bits), set index (s bits), and block offset (b bits)]
❖ Decoders are used for indexing
❖ Indexing time depends on decoder size (s : 2^s)
❖ Fewer sets mean a smaller decoder and less indexing time
Why Use Middle Bits as Index?
[Figure: 16 memory lines mapped to a 4-line cache; with high-order bit indexing, runs of adjacent memory lines all land in the same cache line]
High-Order Bit Indexing
❖ Adjacent memory lines would map to the same cache entry
❖ Poor use of spatial locality
Why Use Middle Bits as Index?
[Figure: the same 16 memory lines under middle-order bit indexing; consecutive memory lines are spread across all four cache lines]
Middle-Order Bit Indexing
❖ Consecutive memory lines map to different cache lines
❖ Better use of spatial locality without replacement
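The contrast between the two indexing schemes can be demonstrated with a toy 4-line cache over 16 memory lines (4-bit line numbers), matching the slide's figure; the function names are illustrative:

```python
# 4-line direct-mapped cache, so the index is 2 bits of the 4-bit line number.
def high_order_index(line):
    return (line >> 2) & 0b11   # top two bits of the line number

def middle_order_index(line):
    return line & 0b11          # low two bits of the line number -- the
                                # "middle" bits of the full address, just
                                # above the block offset

consecutive = range(4)          # four adjacent memory lines
print([high_order_index(l) for l in consecutive])    # [0, 0, 0, 0]
print([middle_order_index(l) for l in consecutive])  # [0, 1, 2, 3]
```

With high-order indexing the four adjacent lines all collide in cache line 0, so a sequential scan keeps evicting itself; with middle-order indexing they occupy all four cache lines and can be cached simultaneously.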
Look-aside vs Look through caches
❖ Look-aside cache: Request from processor goes to cache and
main memory in parallel
❖ Cache and main memory both see the bus cycle
❖ On a cache hit → the processor is loaded from the cache and the bus cycle
terminates; on a cache miss → the processor and the cache are loaded from
memory in parallel
Look-aside vs Look through caches
❖ Look-through cache: Cache checked first when processor
requests data from memory
❖ On a hit → data is loaded from the cache; on a miss → the cache is loaded
from memory, then the processor is loaded from the cache
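The hit/miss flows of the two organizations can be sketched as a toy model, with the cache and memory as dictionaries (the structure and names are illustrative, not from the slides; the parallelism of a look-aside access is modeled sequentially here):

```python
def look_through(addr, cache, memory):
    if addr in cache:              # hit: memory never sees the request
        return cache[addr]
    cache[addr] = memory[addr]     # miss: load the cache from memory first,
    return cache[addr]             # then load the processor from the cache

def look_aside(addr, cache, memory):
    # Cache and memory both see the bus cycle; on a hit the bus cycle
    # simply terminates before memory responds.
    if addr in cache:
        return cache[addr]         # hit: bus cycle terminates
    cache[addr] = memory[addr]     # miss: processor and cache are loaded
    return cache[addr]             # from memory together

memory = {0x100: 42}
cache = {}
print(look_through(0x100, cache, memory))  # 42 (miss, fills the cache)
print(0x100 in cache)                      # True (next access is a hit)
```

The observable difference is not in the returned data but in who sees the request: in the look-aside design memory starts servicing every access, while in the look-through design memory is only consulted on a miss.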
[email protected]
http://www.iitg.ac.in/johnjose/
