
Cache Memories

Today
- Cache memory organization and operation
- Performance impact of caches
  - The memory mountain
  - Rearranging loops to improve spatial locality
  - Using blocking to improve temporal locality
Cache Memories
- Cache memories are small, fast SRAM-based memories managed automatically in hardware.
  - Hold frequently accessed blocks of main memory
- CPU looks first for data in caches (e.g., L1, L2, and L3), then in main memory.
- Typical system structure:

[Figure: the CPU chip contains the register file, ALU, and cache memories; a bus interface connects over the system bus to an I/O bridge, which connects over the memory bus to main memory.]
General Cache Organization (S, E, B)
- S = 2^s sets
- E = 2^e lines per set
- B = 2^b bytes per cache block (the data)
- Each line holds a valid bit, a tag, and data bytes 0 .. B-1
- Cache size: C = S x E x B data bytes
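As a concrete sketch, this organization maps naturally onto C structs. The parameters s, e, b below are illustrative choices, not values from the slides:

#include <stdint.h>

/* Illustrative parameters: s = 2, e = 1, b = 3. */
#define S (1u << 2)  /* S = 2^s sets */
#define E (1u << 1)  /* E = 2^e lines per set */
#define B (1u << 3)  /* B = 2^b bytes per block */

typedef struct {
    int      valid;    /* valid bit */
    uint64_t tag;      /* tag bits of the cached address */
    uint8_t  data[B];  /* the block itself (bytes 0 .. B-1) */
} cache_line_t;

typedef struct { cache_line_t lines[E]; } cache_set_t;
typedef struct { cache_set_t  sets[S];  } cache_t;
/* Total data capacity: C = S * E * B bytes. */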
Cache Read
- Locate set
- Check if any line in the set has a matching tag
- Yes + line valid: hit
- Locate data starting at the block offset

Address of word: [ t bits (tag) | s bits (set index) | b bits (block offset) ]

The set index selects one of the S = 2^s sets; the tag is compared against each line's tag; the data begins at the block offset within the line's B = 2^b byte block.
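A small sketch of how the three address fields are extracted. The bit widths here are hypothetical (real hardware fixes s and b by design):

#include <stdint.h>

#define B_BITS 3  /* b: block offset bits (B = 8 bytes), illustrative */
#define S_BITS 2  /* s: set index bits (S = 4 sets), illustrative */

static inline uint64_t block_offset(uint64_t addr) {
    return addr & ((1ull << B_BITS) - 1);             /* low b bits */
}
static inline uint64_t set_index(uint64_t addr) {
    return (addr >> B_BITS) & ((1ull << S_BITS) - 1); /* next s bits */
}
static inline uint64_t tag_of(uint64_t addr) {
    return addr >> (B_BITS + S_BITS);                 /* remaining t bits */
}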
Example: Direct Mapped Cache (E = 1)
Direct mapped: one line per set
Assume: cache block size 8 bytes

Address of int: [ t bits | set index 0...01 | block offset 100 ]

- The set index bits (0...01) select one of the S = 2^s sets; each set holds a single line (valid bit, tag, bytes 0-7).
- valid? + tag match: assume yes = hit
- The block offset (100) locates the data: the int (4 bytes) starts at byte 4 of the block.
- No match: the old line is evicted and replaced.
Direct-Mapped Cache Simulation

t=1 s=2 b=1: M=16 byte addresses, B=2 bytes/block, S=4 sets, E=1 block/set
Address bits: [ x | xx | x ] = [ tag | set index | block offset ]

Address trace (reads, one byte per read):
  0 [0000₂]  miss
  1 [0001₂]  hit
  7 [0111₂]  miss
  8 [1000₂]  miss
  0 [0000₂]  miss

Final cache state:
        v   Tag   Block
Set 0   1   0     M[0-1]
Set 1   0   -     -
Set 2   0   -     -
Set 3   1   0     M[6-7]
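A minimal sketch of this simulation in C, using the t=1, s=2, b=1 split above; it reproduces the miss/hit pattern of the trace:

#include <stdio.h>

/* t=1, s=2, b=1: S=4 sets, E=1 line/set, B=2 bytes/block */
enum { NSETS = 4 };

int main(void) {
    int valid[NSETS] = {0}, tag[NSETS] = {0};
    int trace[] = {0, 1, 7, 8, 0};

    for (int i = 0; i < 5; i++) {
        int addr = trace[i];
        int set = (addr >> 1) & 0x3;  /* skip b=1 offset bit, keep s=2 bits */
        int t   = addr >> 3;          /* remaining t=1 tag bit */
        int hit = valid[set] && tag[set] == t;
        printf("addr %2d -> set %d: %s\n", addr, set, hit ? "hit" : "miss");
        valid[set] = 1;  /* on a miss, load the block (evicting the old line) */
        tag[set]   = t;
    }
    return 0;
}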
A Higher Level Example
Ignore the variables sum, i, j. Assume a cold (empty) cache; a[0][0] maps to the first block, and a block holds 32 B = 4 doubles.

double sum_array_rows(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (i = 0; i < 16; i++)
        for (j = 0; j < 16; j++)
            sum += a[i][j];
    return sum;
}

double sum_array_cols(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (j = 0; j < 16; j++)
        for (i = 0; i < 16; i++)
            sum += a[i][j];
    return sum;
}

(worked on the blackboard)
E-way Set Associative Cache (Here: E = 2)
E = 2: two lines per set
Assume: cache block size 8 bytes

Address of short int: [ t bits | set index 0...01 | block offset 100 ]

- The set index selects the set (find set); the tag is compared against both lines.
- valid? + match: yes = hit
- The block offset locates the data: the short int (2 bytes) starts at byte 4 of the block.

No match:
- One line in the set is selected for eviction and replacement
- Replacement policies: random, least recently used (LRU), ...
2-Way Set Associative Cache Simulation

t=2 s=1 b=1: M=16 byte addresses, B=2 bytes/block, S=2 sets, E=2 blocks/set
Address bits: [ xx | x | x ] = [ tag | set index | block offset ]

Address trace (reads, one byte per read):
  0 [0000₂]  miss
  1 [0001₂]  hit
  7 [0111₂]  miss
  8 [1000₂]  miss
  0 [0000₂]  hit

Final cache state:
        v   Tag   Block
Set 0   1   00    M[0-1]
        1   10    M[8-9]
Set 1   1   01    M[6-7]
        0   -     -
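A minimal sketch of this simulation with LRU replacement; the set/tag arithmetic follows the t=2, s=1, b=1 split above, and the aging counters are just one simple way to implement LRU:

#include <stdio.h>

/* t=2, s=1, b=1: S=2 sets, E=2 lines/set, B=2 bytes/block */
enum { NSETS = 2, NLINES = 2 };

int main(void) {
    int valid[NSETS][NLINES] = {0}, tag[NSETS][NLINES] = {0};
    int age[NSETS][NLINES] = {0};  /* higher = less recently used */
    int trace[] = {0, 1, 7, 8, 0};

    for (int i = 0; i < 5; i++) {
        int addr = trace[i];
        int set = (addr >> 1) & 0x1;  /* s=1 set-index bit */
        int t   = addr >> 2;          /* t=2 tag bits */
        int hit = -1, victim = 0;
        for (int e = 0; e < NLINES; e++) {
            if (valid[set][e] && tag[set][e] == t) hit = e;
            if (age[set][e] > age[set][victim]) victim = e;
        }
        int line = (hit >= 0) ? hit : victim;  /* on a miss, evict LRU line */
        printf("addr %2d -> set %d: %s\n", addr, set, hit >= 0 ? "hit" : "miss");
        valid[set][line] = 1;
        tag[set][line] = t;
        for (int e = 0; e < NLINES; e++) age[set][e]++;  /* all lines age */
        age[set][line] = 0;                              /* just touched */
    }
    return 0;
}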
A Higher Level Example (revisited)
Ignore the variables sum, i, j. Assume a cold (empty) cache; a[0][0] maps to the first block, and a block holds 32 B = 4 doubles.

Rework the same sum_array_rows and sum_array_cols functions from the earlier example, now against the 2-way set associative cache. (worked on the blackboard)
Spectrum of Associativity
- For a cache with 8 entries:
  - direct mapped (1-way): 8 sets of 1 line
  - 2-way: 4 sets of 2 lines
  - 4-way: 2 sets of 4 lines
  - fully associative (8-way): 1 set of 8 lines
What about writes?
- Multiple copies of data exist:
  - L1, L2, main memory, disk
- What to do on a write-hit?
  - Write-through (write immediately to memory)
  - Write-back (defer write to memory until replacement of line)
    - Needs a dirty bit (is the line different from memory or not?)
- What to do on a write-miss?
  - Write-allocate (load into cache, update line in cache)
    - Good if more writes to the location follow
  - No-write-allocate (write immediately to memory)
- Typical combinations:
  - Write-through + No-write-allocate
  - Write-back + Write-allocate
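A rough sketch of the two write-hit policies in C; the cache_line_t fields here are illustrative, not any real cache's layout:

#include <stdint.h>

typedef struct {
    int      valid, dirty;  /* dirty: line differs from memory (write-back) */
    uint64_t tag;
    uint8_t  data[64];
} cache_line_t;

/* Write-back: update only the cached copy and mark it dirty;
   memory is brought up to date later, when the line is evicted. */
void write_hit_writeback(cache_line_t *line, int offset, uint8_t byte) {
    line->data[offset] = byte;
    line->dirty = 1;
}

/* Write-through: update the cache and memory together,
   so no dirty bit is needed. */
void write_hit_writethrough(cache_line_t *line, int offset,
                            uint8_t byte, uint8_t *mem_byte) {
    line->data[offset] = byte;
    *mem_byte = byte;
}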
Intel Core i7 Cache Hierarchy

[Figure: processor package with Cores 0-3; each core has its own registers, L1 d-cache, L1 i-cache, and unified L2 cache; one L3 unified cache is shared by all cores and sits in front of main memory.]

- L1 i-cache and d-cache: 32 KB, 8-way, access: 4 cycles
- L2 unified cache: 256 KB, 8-way, access: 11 cycles
- L3 unified cache: 8 MB, 16-way, access: 30-40 cycles
- Block size: 64 bytes for all caches
Cache Performance Metrics
- Miss Rate
  - Fraction of memory references not found in cache (misses / accesses) = 1 - hit rate
  - Typical numbers (in percentages):
    - 3-10% for L1
    - can be quite small (e.g., < 1%) for L2, depending on size, etc.
- Hit Time
  - Time to deliver a line in the cache to the processor
    - includes time to determine whether the line is in the cache
  - Typical numbers:
    - 1-2 clock cycles for L1
    - 5-20 clock cycles for L2
- Miss Penalty
  - Additional time required because of a miss
    - typically 50-200 cycles for main memory (Trend: increasing!)
Let's think about those numbers
- Huge difference between a hit and a miss
  - Could be 100x, if just L1 and main memory

- Would you believe 99% hits is twice as good as 97%?
  - Consider:
      cache hit time of 1 cycle
      miss penalty of 100 cycles
  - Average access time = hit time + miss rate * miss penalty
      97% hits: 1 cycle + 0.03 * 100 cycles = 4 cycles
      99% hits: 1 cycle + 0.01 * 100 cycles = 2 cycles

- This is why "miss rate" is used instead of "hit rate"
Average memory access time (AMAT)
- AMAT = L1_hit + P(L1 miss) * (L2_hit + P(L2 miss) * Memory)
  - Each access costs the L1 hit latency
  - If L1 misses (with probability P(L1 miss)), add the time to access L2, and so on
  - Possible to add more cache levels
  - Can be specific to instructions or data

- Compute AMAT for:
  - 16 KB L1 with 95% hit rate, 2 cycle access time
  - 1 MB L2 with 80% hit rate, 20 cycle access time
  - Main memory with 200 cycle access time
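Working the exercise through the formula above (a quick check, not from the slides):

AMAT = 2 + 0.05 * (20 + 0.20 * 200)
     = 2 + 0.05 * 60
     = 5 cycles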
Writing Cache Friendly Code
- Make the common case go fast
  - Focus on the inner loops of the core functions
- Minimize the misses in the inner loops
  - Repeated references to variables are good (temporal locality)
  - Stride-1 reference patterns are good (spatial locality)

Key idea: Our qualitative notion of locality is quantified through our understanding of cache memories.
Today
- Cache organization and operation
- Performance impact of caches
  - The memory mountain
  - Rearranging loops to improve spatial locality
  - Using blocking to improve temporal locality
The Memory Mountain
- Read throughput (read bandwidth)
  - Number of bytes read from memory per second (MB/s)
- Memory mountain: measured read throughput as a function of spatial and temporal locality.
  - Compact way to characterize memory system performance.
Memory Mountain Test Function

/* The test function; data[] is a global int array defined elsewhere */
void test(int elems, int stride) {
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result; /* So compiler doesn't optimize away the loop */
}

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                     /* warm up the cache */
    cycles = fcyc2(test, elems, stride, 0);  /* measure cycles for test(elems, stride) */
    return (size / stride) / (cycles / Mhz); /* convert cycles to MB/s */
}
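A hedged sketch of the driver that sweeps working-set size and stride to produce the mountain; MINBYTES, MAXBYTES, MAXSTRIDE, and the clock rate are illustrative choices, not values from the slides:

#include <stdio.h>

#define MINBYTES  (1 << 12)  /* working sets from 4 KB ... */
#define MAXBYTES  (1 << 26)  /* ... up to 64 MB */
#define MAXSTRIDE 32         /* strides of 1..32 array elements */

double run(int size, int stride, double Mhz);  /* defined above */

int main(void) {
    double Mhz = 2400.0;  /* assumed clock rate; measure it on a real machine */
    for (int size = MAXBYTES; size >= MINBYTES; size >>= 1)
        for (int stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%d\t%d\t%.0f\n", size, stride, run(size, stride, Mhz));
    return 0;
}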
The Memory Mountain (Intel Core i7)
32 KB L1 i-cache, 32 KB L1 d-cache, 256 KB unified L2 cache, 8 MB unified L3 cache; all caches on-chip.

[Figure: read throughput (MB/s, 0-7000) plotted against stride (s1-s32, in units of 8 bytes) and working set size (4 KB-64 MB). Ridges of temporal locality appear where the working set fits in L1, L2, L3, or only main memory; slopes of spatial locality show throughput falling as stride grows.]
Today
- Cache organization and operation
- Performance impact of caches
  - The memory mountain
  - Rearranging loops to improve spatial locality
  - Using blocking to improve temporal locality
Miss Rate Analysis for Matrix Multiply
- Assume:
  - Line size = 32 B (big enough for four 64-bit words)
  - Matrix dimension (N) is very large
    - Approximate 1/N as 0.0
  - Cache is not even big enough to hold multiple rows
- Analysis method:
  - Look at the access pattern of the inner loop

[Figure: for C = A x B, the inner loop scans row i of A (index k), column j of B (index k), and produces element (i,j) of C.]
Matrix Multiplication Example

- Description:
  - Multiply N x N matrices
  - O(N^3) total operations
  - N reads per source element
  - N values summed per destination
    - but may be able to hold in register

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;                    /* Variable sum held in register */
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}
Layout of C Arrays in Memory (review)
- C arrays are allocated in row-major order
  - each row in contiguous memory locations
- Stepping through columns in one row:
  - for (j = 0; j < N; j++)
        sum += a[0][j];
  - accesses successive elements
  - if block size B > 4 bytes, exploit spatial locality
    - compulsory miss rate = 4 bytes / B (for 4-byte elements)
- Stepping through rows in one column:
  - for (i = 0; i < N; i++)
        sum += a[i][0];
  - accesses distant elements
  - no spatial locality!
    - compulsory miss rate = 1 (i.e. 100%)
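A small illustration of the row-major rule; the array size and indices here are arbitrary:

#include <assert.h>

/* For double a[16][16], element a[i][j] lives (i*16 + j) doubles past
   the start of the array, so the inner index j walks memory with stride 1. */
void layout_demo(void) {
    double a[16][16];
    double *base = &a[0][0];
    int i = 3, j = 5;
    assert(&a[i][j] == base + i * 16 + j);  /* row-major addressing */
    /* stepping i instead jumps 16 doubles = 128 bytes per access,
       landing in a different cache line every time */
}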
Matrix Multiplication (ijk)

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed.

Misses per inner loop iteration:
  A      B      C
  0.25   1.0    0.0
Matrix Multiplication (jik)

/* jik */
for (j=0; j<n; j++) {
    for (i=0; i<n; i++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed.

Misses per inner loop iteration:
  A      B      C
  0.25   1.0    0.0
Matrix Multiplication (kij)

/* kij */
for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise.

Misses per inner loop iteration:
  A      B      C
  0.0    0.25   0.25
Matrix Multiplication (ikj)

/* ikj */
for (i=0; i<n; i++) {
    for (k=0; k<n; k++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise.

Misses per inner loop iteration:
  A      B      C
  0.0    0.25   0.25
36
Matrix Multiplication (jki)

/* Inner loop:
/* jki
jki */
*/
for
for (j=0;
(j=0; j<n;
j<n; j++)
j++) {{ (*,k) (*,j)
for
for (k=0;
(k=0; k<n;
k<n; k++)
k++) {{
rr == b[k][j];
(k,j)
b[k][j];
for
for (i=0;
(i=0; i<n;
i<n; i++)
i++) A B C
c[i][j]
c[i][j] +=+= a[i][k]
a[i][k] ** r;
r;
}}
}}
Column- Fixed Column-
wise wise

Misses per inner loop iteration:


A B C
1.0 0.0 1.0

37
Matrix Multiplication (kji)

/*
/* kji
kji */
*/ Inner loop:
for
for (k=0;
(k=0; k<n;
k<n; k++)
k++) {{
for
for (j=0;
(j=0; j<n;
j<n; j++)
j++) {{ (*,k) (*,j)
rr == b[k][j];
b[k][j]; (k,j)
for
for (i=0;
(i=0; i<n;
i<n; i++)
i++)
c[i][j]
c[i][j] +=+= a[i][k]
a[i][k] ** r;
r; A B C
}}
}}
Column- Fixed Column-
wise wise
Misses per inner loop iteration:
A B C
1.0 0.0 1.0

38
Summary of Matrix Multiplication

ijk (& jik): 2 loads, 0 stores; misses/iter = 1.25
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (& ikj): 2 loads, 1 store; misses/iter = 0.5
for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji): 2 loads, 1 store; misses/iter = 2.0
for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}
Core i7 Matrix Multiply Performance

[Figure: cycles per inner loop iteration (0-60) vs. array size n (50-750). jki/kji is slowest (top curves), ijk/jik sits in the middle, and kij/ikj is fastest and stays nearly flat as n grows, matching the miss-per-iteration analysis.]

Today
- Cache organization and operation
- Performance impact of caches
  - The memory mountain
  - Rearranging loops to improve spatial locality
  - Using blocking to improve temporal locality
Example: Matrix Multiplication

c = (double *) calloc(n*n, sizeof(double));

/* Multiply n x n matrices a and b */
void mmm(double *a, double *b, double *c, int n) {
    int i, j, k;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            for (k = 0; k < n; k++)
                c[i*n + j] += a[i*n + k] * b[k*n + j];
}

[Figure: c = a * b; row i of a times column j of b produces element (i,j) of c.]
Cache Miss Analysis
- Assume:
  - Matrix elements are doubles
  - Cache block = 8 doubles
  - Cache size C << n (much smaller than n)

- First iteration:
  - n/8 + n = 9n/8 misses
    (n/8 for the row of a, read stride-1; n for the column of b, which has no spatial locality)

[Schematic: after the first iteration, the first row of a and an 8-wide strip of b's columns are in cache.]
Cache Miss Analysis
- Assume:
  - Matrix elements are doubles
  - Cache block = 8 doubles
  - Cache size C << n (much smaller than n)

- Second iteration:
  - Again: n/8 + n = 9n/8 misses

- Total misses:
  - 9n/8 * n^2 = (9/8) * n^3
Blocked Matrix Multiplication

c = (double *) calloc(n*n, sizeof(double));

/* Multiply n x n matrices a and b, in B x B blocks (B divides n) */
void mmm(double *a, double *b, double *c, int n) {
    int i, j, k, i1, j1, k1;
    for (i = 0; i < n; i += B)
        for (j = 0; j < n; j += B)
            for (k = 0; k < n; k += B)
                /* B x B mini matrix multiplications */
                for (i1 = i; i1 < i+B; i1++)
                    for (j1 = j; j1 < j+B; j1++)
                        for (k1 = k; k1 < k+B; k1++)
                            c[i1*n + j1] += a[i1*n + k1] * b[k1*n + j1];
}

[Figure: block (i1,j1) of c accumulates the product of a block row of a and a block column of b; block size B x B.]
Cache Miss Analysis
- Assume:
  - Cache block = 8 doubles
  - Cache size C << n (much smaller than n)
  - Three blocks fit into cache: 3B^2 < C

- First (block) iteration:
  - B^2/8 misses for each block
  - 2n/B * B^2/8 = nB/4 misses (omitting matrix c)
    (a block row of a and a block column of b each contain n/B blocks)

[Schematic: after the first block iteration, one block row of a and one block column of b are in cache; block size B x B.]
Cache Miss Analysis
- Assume:
  - Cache block = 8 doubles
  - Cache size C << n (much smaller than n)
  - Three blocks fit into cache: 3B^2 < C

- Second (block) iteration:
  - Same as first iteration
  - 2n/B * B^2/8 = nB/4 misses

- Total misses:
  - nB/4 * (n/B)^2 = n^3/(4B)
Summary
- No blocking: (9/8) * n^3 misses
- Blocking: 1/(4B) * n^3 misses

- This suggests using the largest possible block size B, subject to the limit 3B^2 < C!

- Reason for the dramatic difference:
  - Matrix multiplication has inherent temporal locality:
    - Input data: 3n^2 elements; computation: 2n^3 operations
    - Every array element is used O(n) times!
  - But the program has to be written properly
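As a rough worked example (the cache size here is assumed, not from the slides): suppose the cache holds C = 4096 doubles (32 KB). Then 3B^2 < 4096 requires B <= 36, so B = 32 is a natural power-of-two choice, and total misses fall from (9/8)n^3 to n^3/128, roughly a 144x reduction.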
Concluding Observations
- Programmer can optimize for cache performance
  - How data structures are organized
  - How data are accessed
    - Nested loop structure
    - Blocking is a general technique
- All systems favor "cache friendly code"
  - Getting absolute optimum performance is very platform specific
    - Cache sizes, line sizes, associativities, etc.
  - Can get most of the advantage with generic code
    - Keep working set reasonably small (temporal locality)
    - Use small strides (spatial locality)
