
Rehan Azmat

Lecture 3, 4 and 5
Cache Memory
 Memory System
 Memory Characteristics
◦ Location
◦ Capacity
◦ Unit of transfer
◦ Access method
◦ Performance
◦ Physical type
◦ Physical characteristics
◦ Organisation
 Memory Hierarchy
◦ Locality of Reference
 Cache Memory
 Cache Memory Design
◦ Cache block size (line size)
◦ Total cache size
Cache
 Small amount of fast memory
 Sits between normal main memory and CPU
 May be located on CPU chip or module
 CPU requests contents of memory location
 Check cache for this data
 If present, get from cache (fast)
 If not present, read required block from main
memory to cache
 Then deliver from cache to CPU
 Cache includes tags to identify which block of
main memory is in each cache slot
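A minimal sketch (Python; the class and method names are mine, not from the lecture) of this request flow, where the slot's key plays the role of the tag:

```python
# Toy model of the flow above: check the cache first; on a miss, copy the
# block from main memory into a cache slot, then deliver the word from the cache.
class SimpleCache:
    def __init__(self):
        self.slots = {}                    # block number (acts as the tag) -> block data

    def read(self, memory, addr, block_size=4):
        block_no = addr // block_size
        if block_no not in self.slots:     # miss: read required block from main memory
            start = block_no * block_size
            self.slots[block_no] = memory[start:start + block_size]
        return self.slots[block_no][addr % block_size]   # deliver from cache (fast on a hit)

# Example use:
# memory = list(range(256)); cache = SimpleCache(); print(cache.read(memory, 13))
```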
Cache Design
 Size
 Mapping Function
 Replacement Algorithm
 Write Policy
 Block Size
 Number of Caches
Cache Size
 Cost
◦ More cache is expensive
◦ Small enough that overall cost per bit is close to that of main memory alone
 Speed
◦ More cache is faster (up to a point)
◦ Checking cache for data takes time
◦ Large enough that overall average access time is close to that of the cache alone
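The speed trade-off is usually quantified with the standard two-level average access time relation; here H is the hit ratio, T_c the cache access time and T_m the main memory access time (symbols added for illustration, not from the slides):

$$T_{avg} = H \cdot T_c + (1 - H)(T_c + T_m)$$

For example (numbers chosen only for illustration), with T_c = 10 ns, T_m = 100 ns and H = 0.95, T_avg = 0.95(10) + 0.05(110) = 15 ns, which is why a high hit ratio matters more than sheer cache size.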
Processor       | Type                          | Year of Introduction | L1 cache      | L2 cache       | L3 cache
IBM 360/85      | Mainframe                     | 1968                 | 16 to 32 KB   | —              | —
PDP-11/70       | Minicomputer                  | 1975                 | 1 KB          | —              | —
VAX 11/780      | Minicomputer                  | 1978                 | 16 KB         | —              | —
IBM 3033        | Mainframe                     | 1978                 | 64 KB         | —              | —
IBM 3090        | Mainframe                     | 1985                 | 128 to 256 KB | —              | —
Intel 80486     | PC                            | 1989                 | 8 KB          | —              | —
Pentium         | PC                            | 1993                 | 8 KB/8 KB     | 256 to 512 KB  | —
PowerPC 601     | PC                            | 1993                 | 32 KB         | —              | —
PowerPC 620     | PC                            | 1996                 | 32 KB/32 KB   | —              | —
PowerPC G4      | PC/server                     | 1999                 | 32 KB/32 KB   | 256 KB to 1 MB | 2 MB
IBM S/390 G4    | Mainframe                     | 1997                 | 32 KB         | 256 KB         | 2 MB
IBM S/390 G6    | Mainframe                     | 1999                 | 256 KB        | 8 MB           | —
Pentium 4       | PC/server                     | 2000                 | 8 KB/8 KB     | 256 KB         | —
IBM SP          | High-end server/supercomputer | 2000                 | 64 KB/32 KB   | 8 MB           | —
CRAY MTA        | Supercomputer                 | 2000                 | 8 KB          | 2 MB           | —
Itanium         | PC/server                     | 2001                 | 16 KB/16 KB   | 96 KB          | 4 MB
SGI Origin 2001 | High-end server               | 2001                 | 32 KB/32 KB   | 4 MB           | —
Itanium 2       | PC/server                     | 2002                 | 32 KB         | 256 KB         | 6 MB
IBM POWER5      | High-end server               | 2003                 | 64 KB         | 1.9 MB         | 36 MB
CRAY XD-1       | Supercomputer                 | 2004                 | 64 KB/64 KB   | 1 MB           | —
 Cache Memory
 Cache Memory Design
◦ Cache block size (line size)
◦ Total cache size
◦ Mapping function
 Direct Mapping
 Associative Mapping
 Set Associative Mapping
Mapping Function
 Cache of 64kByte
 Cache block of 4 bytes
◦ i.e. cache holds 16k (2^14) lines of 4 bytes
 16MBytes main memory
 24 bit address
◦ (2^24 = 16 MBytes)
Direct Mapping
 Each block of main memory maps to only one cache line
◦ i.e. if a block is in cache, it must be in one specific
place
 Address is in two parts
 Least Significant w bits identify unique word
 Most Significant s bits specify one memory
block
 The MSBs are split into a cache line field r and
a tag of s-r (most significant)
Tag (s-r): 8 bits | Line or slot (r): 14 bits | Word (w): 2 bits
 24 bit address
 2 bit word identifier (4 byte block)
 22 bit block identifier
◦ 8 bit tag (=22-14)
◦ 14 bit slot or line
 No two blocks in the same line have the same Tag field
 Check contents of cache by finding line and checking Tag
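A minimal sketch (Python; the function and constant names are mine, not from the lecture) of how this 24-bit address splits into tag, line and word fields for the direct-mapped example:

```python
# Direct-mapped example from the slides: 24-bit address,
# 2-bit word field, 14-bit line field, 8-bit tag field.
WORD_BITS = 2    # 4-byte blocks
LINE_BITS = 14   # 16k cache lines

def split_address(addr):
    """Split a 24-bit main memory address into (tag, line, word)."""
    word = addr & ((1 << WORD_BITS) - 1)
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag = addr >> (WORD_BITS + LINE_BITS)          # remaining 8 bits
    return tag, line, word

# Two addresses that fall in the same cache line but carry different tags:
print(split_address(0x00001C))   # (0, 7, 0)
print(split_address(0x40001C))   # (64, 7, 0) -> same line 7, different tag
```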
Cache line | Main memory blocks held
0          | 0, m, 2m, 3m, …, 2^s - m
1          | 1, m+1, 2m+1, …, 2^s - m + 1
…          | …
m-1        | m-1, 2m-1, 3m-1, …, 2^s - 1
 Address length = (s + w) bits
 Number of addressable units = 2^(s+w) words or bytes
 Block size = line size = 2^w words or bytes
 Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
 Number of lines in cache = m = 2^r
 Size of tag = (s – r) bits
 Simple
 Inexpensive
 Fixed location for given block
◦ If a program accesses 2 blocks that map to the
same line repeatedly, cache misses are very high
Victim Cache
 Lower miss penalty
 Remember what was discarded
◦ Already fetched
◦ Use again with little penalty
 Fully associative
 4 to 16 cache lines
 Between direct mapped L1 cache and next
memory level
Associative Mapping
 A main memory block can load into any line of cache
 Memory address is interpreted as tag and
word
 Tag uniquely identifies block of memory
 Every line’s tag is examined for a match
 Cache searching gets expensive
Tag: 22 bits | Word: 2 bits
 22 bit tag stored with each 32 bit (4 byte) block of data
 Compare tag field with tag entry in cache to check
for hit
 Least significant 2 bits of address identify which
word is required from 32 bit or 4 bytes data block
 Address length = (s + w) bits
 Number of addressable units = 2^(s+w) words or bytes
 Block size = line size = 2^w words or bytes
 Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
 Number of lines in cache = undetermined
 Size of tag = s bits
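A minimal sketch (Python; class and field names are mine) of a fully associative lookup, where the requested tag must be compared against every line's tag:

```python
# Fully associative cache: a block may sit in any line, so every
# line's tag has to be examined on each lookup.
WORD_BITS = 2    # 4-byte blocks, as in the running example

class AssociativeCache:
    def __init__(self, num_lines):
        # each entry is (valid, tag, block_data)
        self.lines = [(False, None, None)] * num_lines

    def lookup(self, addr):
        tag = addr >> WORD_BITS                 # 22-bit tag for a 24-bit address
        word = addr & ((1 << WORD_BITS) - 1)
        for valid, line_tag, block in self.lines:
            if valid and line_tag == tag:       # hit
                return block[word]
        return None                             # miss: caller fetches the block
```

The comparison against every line on each lookup is why fully associative search is expensive in hardware (one comparator per line).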
Set Associative Mapping
 Cache is divided into a number of sets
 Each set contains a number of lines
 A given block maps to any line in a given set
◦ e.g. Block B can be in any line of set i
 e.g. 2 lines per set
◦ 2 way associative mapping
◦ A given block can be in one of 2 lines in only one
set
Tag: 9 bits | Set: 13 bits | Word: 2 bits
 Use set field to determine cache set to look in
 Compare tag field to see if we have a hit
 Address length = (s + w) bits
 Number of addressable units = 2^(s+w) words or bytes
 Block size = line size = 2^w words or bytes
 Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
 Number of lines in set = k
 Number of sets = v = 2^d
 Number of lines in cache = kv = k * 2^d
 Size of tag = (s – d) bits
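A minimal sketch (Python; names are mine) of the 2-way set associative lookup for the running 64 kByte example (9-bit tag, 13-bit set, 2-bit word):

```python
# 2-way set associative: 2^13 sets, each holding 2 lines of 4 bytes.
WORD_BITS = 2
SET_BITS = 13
WAYS = 2

def split_address(addr):
    """Split a 24-bit address into (tag, set_index, word) for this example."""
    word = addr & ((1 << WORD_BITS) - 1)
    set_index = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)       # remaining 9 bits
    return tag, set_index, word

def lookup(cache, addr):
    """cache[set_index] is a list of at most WAYS (tag, block) pairs."""
    tag, set_index, word = split_address(addr)
    for line_tag, block in cache[set_index]:   # only the lines of one set are searched
        if line_tag == tag:
            return block[word]                 # hit
    return None                                # miss
```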
 2-way set associative is significantly better than direct mapping up to at least a 64kB cache
 The difference between 2-way and 4-way at 4kB is much less than the gain from growing the cache from 4kB to 8kB
 Cache complexity increases with associativity
 Higher associativity is not justified against simply increasing cache size to 8kB or 16kB
 Above 32kB, greater associativity gives no significant improvement
 (simulation results)
[Figure: hit ratio (0.0 to 1.0) versus cache size (1k to 1M bytes) for direct mapping and 2-way, 4-way, 8-way and 16-way set associative caches]
 Cache Memory
 Cache Memory Design
◦ Cache block size (line size)
◦ Total cache size
◦ Mapping function
◦ Replacement method
◦ Write policy
◦ Number of caches:
 Single, two-level, or three-level cache
 Unified vs. split cache
Replacement Algorithm
 Direct mapping: no choice
◦ Each block only maps to one line
◦ Replace that line
 Associative and set associative: hardware implemented algorithm (for speed)
 Least Recently used (LRU)
 e.g. in 2 way set associative
◦ Which of the 2 blocks is the LRU?
 First in first out (FIFO)
◦ replace block that has been in cache longest
 Least frequently used
◦ replace block which has had fewest hits
 Random
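A minimal sketch (Python; illustrative only, real 2-way hardware typically uses a single USE bit per pair) of LRU replacement within one 2-way set:

```python
# Keep the lines of one set ordered most-recently-used first.
# On a hit the line moves to the front; on a miss the last (LRU) line is evicted.
def access(set_lines, tag, fetch_block):
    """set_lines: list of (tag, block) pairs, at most 2 entries, MRU first."""
    for i, (line_tag, block) in enumerate(set_lines):
        if line_tag == tag:
            set_lines.insert(0, set_lines.pop(i))   # hit: mark most recently used
            return block
    block = fetch_block(tag)                        # miss: read block from main memory
    if len(set_lines) == 2:
        set_lines.pop()                             # evict least recently used line
    set_lines.insert(0, (tag, block))
    return block
```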
Write Policies
 Must not overwrite a cache block unless main
memory is up to date
 Multiple CPUs may have individual caches
 I/O may address main memory directly
Write Through
 All writes go to main memory as well as cache
 Multiple CPUs can monitor main memory
traffic to keep local (to CPU) cache up to date
 Lots of traffic
 Slows down writes
Write Back
 Updates initially made in cache only
 Update bit for cache slot is set when update
occurs
 If block is to be replaced, write to main
memory only if update bit is set
 Other caches get out of sync
 I/O must access main memory through cache
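A minimal sketch (Python; the CacheLine class and function names are mine) contrasting the two policies on a write to a cached block:

```python
from dataclasses import dataclass, field

@dataclass
class CacheLine:
    data: list = field(default_factory=lambda: [0, 0, 0, 0])  # 4-byte block
    dirty: bool = False                # the "update bit" from the slides

# Write through: every write updates both the cache line and main memory.
def write_through(line, memory, addr, value):
    line.data[addr % 4] = value
    memory[addr] = value               # main memory is always up to date

# Write back: writes touch only the cache; the update bit defers the
# memory write until the line is replaced.
def write_back(line, addr, value):
    line.data[addr % 4] = value
    line.dirty = True

def replace(line, memory, block_addr):
    if line.dirty:                     # write to main memory only if update bit is set
        for i, word in enumerate(line.data):
            memory[block_addr + i] = word
    line.dirty = False
```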
Cache Line Size
 Retrieve not only desired word but a number of
adjacent words as well
 Increased block size will increase hit ratio at first
◦ the principle of locality
 Hit ratio will decrease as block becomes even bigger
◦ Probability of using newly fetched information becomes less than probability of reusing the information it replaced
 Larger blocks
◦ Reduce number of blocks that fit in cache
◦ Data overwritten shortly after being fetched
◦ Each additional word is less local so less likely to be needed
 No definitive optimum value has been found
 8 to 64 bytes seems reasonable
Cache Levels and Types
 High logic density enables caches on chip
◦ Faster than bus access
◦ Frees bus for other transfers
 Common to use both on and off chip cache
◦ L1 on chip, L2 off chip in static RAM
◦ L2 access much faster than DRAM or ROM
◦ L2 often uses separate data path
◦ L2 may now be on chip
◦ Resulting in L3 cache
 Bus access or now on chip…
 One cache for data and instructions or two,
one for data and one for instructions
 Advantages of unified cache
◦ Higher hit rate
 Balances load of instruction and data fetch
 Only one cache to design & implement
 Advantages of split cache
◦ Eliminates cache contention between instruction
fetch/decode unit and execution unit
 Important in pipelining
End of the Lecture