Computer Organization & Architecture
Virendra Singh Shekhawat
Department of Computer Science and Information Systems
BITS Pilani, Pilani Campus
Module-4 (Lecture-3)
[Ref. Computer Organization and Architecture, 8th Ed., by William Stallings]
Topics: Cache Performance Measurement, Cache Miss Types, Hit Ratio, Write Policy, Multilevel Caches
Cache Performance
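The hit ratio, average access time, and access efficiency listed in the topics are tied together by the standard two-level memory relations. A minimal sketch in Python, assuming a cache access time t1, a main-memory access time t2, and a miss that pays both (the function names and example numbers are illustrative only):

# Average access time of a cache (t1) backed by main memory (t2),
# assuming a miss costs the cache lookup plus the memory access.
def average_access_time(hit_ratio, t1, t2):
    return hit_ratio * t1 + (1.0 - hit_ratio) * (t1 + t2)

def access_efficiency(hit_ratio, t1, t2):
    # How close the average gets to the speed of the fast level alone.
    return t1 / average_access_time(hit_ratio, t1, t2)

# Example: 95% hit ratio, 1 ns cache, 50 ns main memory.
print(average_access_time(0.95, 1, 50))   # 3.5 ns
print(access_efficiency(0.95, 1, 50))     # ~0.29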
Where do misses come from?
• Compulsory: initially the cache is empty (or holds no valid data), so the first access to a block is always a miss. Also called cold-start or first-reference misses.
• Capacity: if the cache is too small to hold all the blocks a program needs during execution, blocks are evicted and later re-referenced, causing frequent misses. Capacity misses are those that occur regardless of associativity or block size (a small classification sketch follows below).
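A fully associative LRU cache makes the distinction concrete: the first reference to a block is a compulsory miss, and with full associativity every remaining miss is a capacity miss. This is only an illustrative sketch; the trace, cache size, and function name are made up:

from collections import OrderedDict

def classify_misses(trace, num_blocks):
    cache = OrderedDict()          # fully associative LRU cache
    seen = set()                   # blocks referenced at least once
    hits = compulsory = capacity = 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)       # refresh LRU position
            hits += 1
        else:
            if block in seen:
                capacity += 1              # seen before, but evicted for lack of room
            else:
                compulsory += 1            # first (cold-start) reference
            seen.add(block)
            cache[block] = True
            if len(cache) > num_blocks:
                cache.popitem(last=False)  # evict the least recently used block
    return hits, compulsory, capacity

# Example: a 3-block cache cycling through 4 distinct blocks.
print(classify_misses([0, 1, 2, 3, 0, 1, 2, 3], num_blocks=3))
# -> (0, 4, 4): four cold-start misses followed by four capacity misses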
Operation of a Two-Level Memory System (2)
• Average cost per bit of the combined memory:
Cs = (C1S1 + C2S2) / (S1 + S2)
where C1, S1 are the per-bit cost and size of the faster level and C2, S2 those of the slower level
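A quick worked check of the formula, with made-up per-bit costs and sizes:

# Average cost per bit of a two-level memory: Cs = (C1*S1 + C2*S2) / (S1 + S2)
def average_cost(c1, s1, c2, s2):
    return (c1 * s1 + c2 * s2) / (s1 + s2)

# Example: 64 Kbit of fast memory at 0.01 per bit plus 64 Mbit at 0.0001 per bit.
print(average_cost(0.01, 64 * 1024, 0.0001, 64 * 1024 * 1024))
# ~0.00011 per bit: the blended cost stays near the cheap level because S2 >> S1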
Write Through Policy
Write Back Policy
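A minimal sketch of the two policies (the class and method names are invented for illustration): with write-through, every store also updates main memory immediately; with write-back, only the cache line is updated and a dirty bit defers the memory write until the line is evicted.

class WriteThroughCache:
    def __init__(self, memory):
        self.memory, self.lines = memory, {}
    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value      # main memory updated on every write

class WriteBackCache:
    def __init__(self, memory):
        self.memory, self.lines, self.dirty = memory, {}, set()
    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)           # defer the memory update
    def evict(self, addr):
        if addr in self.dirty:         # write back only if the line was modified
            self.memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        self.lines.pop(addr, None)

Write-through keeps memory consistent at all times at the cost of extra bus traffic; write-back reduces traffic but leaves memory stale until the line is written back.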
Hit Ratio vs. Cache Line Size
• On each miss, not only the desired word but also a number of adjacent words are retrieved
• Increased block size will increase hit ratio at first
– Due to the principle of locality
• Hit ratio will decrease as the block becomes even bigger
– Probability of using newly fetched information becomes less than the probability of reusing the data it replaced
• Larger blocks
– Reduce number of blocks that fit in cache
– Data overwritten shortly after being fetched
– Each additional word is less local, so it is less likely to be needed
• No definitive optimum value has been found
• 8 to 64 bytes seems reasonable
Multilevel Caches
• An on-chip cache improves performance. Why?
• Will more than one level of cache improve performance further?
• The simplest organization is two levels: an on-chip cache (L1) and an external cache (L2)
• In many designs the off-chip (L2) cache uses a separate bus to transfer data
• Nowadays the L2 cache is also placed on chip, thanks to shrinking processor feature sizes
• So we now have one more level of cache, i.e. an off-chip L3 cache (a worked access-time sketch follows below)
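One common way to quantify the benefit is to chain the local hit ratios of the levels; the sketch below is illustrative (the latencies and hit ratios are made-up numbers, not figures from the lecture):

# Average access time with an L1 and L2 cache in front of main memory,
# using local hit ratios: T = t_l1 + (1 - h1) * (t_l2 + (1 - h2) * t_mem)
def avg_access_time(h1, t_l1, h2, t_l2, t_mem):
    return t_l1 + (1 - h1) * (t_l2 + (1 - h2) * t_mem)

# Example: L1 hits 90% at 1 ns, L2 catches 80% of L1 misses at 5 ns, DRAM 60 ns.
print(avg_access_time(0.90, 1, 0.80, 5, 60))   # 2.7 ns
print(avg_access_time(0.90, 1, 0.00, 0, 60))   # 7.0 ns with no L2 at all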
Multilevel Cache Performance
Unified Vs. Split Caches
• Earlier designs used the same cache for data as well as instructions, i.e. a unified cache
• Now there are separate caches for data and instructions, i.e. a split cache
• Advantages of a unified cache
– It automatically balances the load between instruction and data fetches
• Advantages of a split cache
– Useful for parallel instruction execution
– Eliminates contention for the cache between the instruction fetch/decode unit and the execution unit
– E.g. superscalar machines such as the Pentium and PowerPC
Caches and External Connections in the P-3 Processor
[Figure: processing units and L1 caches connect to the L2 cache over a dedicated cache bus; main memory and input/output sit on the system bus. The L2 cache is 512 KB, 2-way set associative.]
Cache Memory Evolution
Processor        Type                            Year  L1 cache       L2 cache        L3 cache
IBM 360/85       Mainframe                       1968  16 to 32 KB    —               —
PDP-11/70        Minicomputer                    1975  1 KB           —               —
VAX 11/780       Minicomputer                    1978  16 KB          —               —
IBM 3033         Mainframe                       1978  64 KB          —               —
IBM 3090         Mainframe                       1985  128 to 256 KB  —               —
Intel 80486      PC                              1989  8 KB           —               —
Pentium          PC                              1993  8 KB/8 KB      256 to 512 KB   —
PowerPC 601      PC                              1993  32 KB          —               —
PowerPC 620      PC                              1996  32 KB/32 KB    —               —
PowerPC G4       PC/server                       1999  32 KB/32 KB    256 KB to 1 MB  2 MB
IBM S/390 G4     Mainframe                       1997  32 KB          256 KB          2 MB
IBM S/390 G6     Mainframe                       1999  256 KB         8 MB            —
Pentium 4        PC/server                       2000  8 KB/8 KB      256 KB          —
IBM SP           High-end server/supercomputer   2000  64 KB/32 KB    8 MB            —
CRAY MTA         Supercomputer                   2000  8 KB           2 MB            —
Itanium          PC/server                       2001  16 KB/16 KB    96 KB           4 MB
SGI Origin 2001  High-end server                 2001  32 KB/32 KB    4 MB            —
Itanium 2        PC/server                       2002  32 KB          256 KB          6 MB
IBM POWER5       High-end server                 2003  64 KB          1.9 MB          36 MB
CRAY XD-1        Supercomputer                   2004  64 KB/64 KB    1 MB            —
Summary
• Cache Performance
– Hit Ratio
– Average Access Time
– Access Efficiency
– Write Policy
– Line Size
– Multiple levels of cache
– Unified Cache and Split Cache
Thank You!