
The Memory Hierarchy

Brad Karp
UCL Computer Science

CS 0019
31st January 2019

(lecture notes derived from material from Phil Gibbons, Randy Bryant, and Dave O’Hallaron)

1
Today
¢ Storage technologies and trends
¢ Locality of reference
¢ Caching in the memory hierarchy

2
Random-Access Memory (RAM)
¢ Key features
§ RAM is traditionally packaged as a chip.
§ Basic storage unit is normally a cell (one bit per cell).
§ Multiple RAM chips form a memory.

¢ RAM comes in two varieties:
§ SRAM (Static RAM)
§ DRAM (Dynamic RAM)

3
SRAM vs DRAM Summary

        Trans.    Access    Needs
        per bit   time      refresh?   Cost   Applications
SRAM    6         1X        No         100x   Cache memories
DRAM    1         10X       Yes        1X     Main memories, frame buffers

4
Enhanced DRAMs
¢ Basic DRAM cell has not changed since its invention in 1966.
§ Commercialized by Intel in 1970.
¢ DRAM cores with better interface logic and faster I/O:
§ Synchronous DRAM (SDRAM)
§ Uses a conventional clock signal instead of asynchronous control
§ Allows reuse of the row addresses (e.g., RAS, CAS, CAS, CAS)
§ Double data-rate synchronous DRAM (DDR SDRAM)
§ Double-edge clocking sends two bits per cycle per pin
§ Different types distinguished by size of small prefetch buffer:
– DDR (2 bits), DDR2 (4 bits), DDR3 (8 bits)
§ By 2010, standard for most server and desktop systems
§ Intel Core i7 supports DDR3 and DDR4 SDRAM

5
Nonvolatile Memories
¢ DRAM and SRAM are volatile memories
§ Lose information if powered off.
¢ Nonvolatile memories retain value even if powered off
§ Read-only memory (ROM): programmed during production
§ Programmable ROM (PROM): can be programmed once
§ Erasable PROM (EPROM): can be bulk erased (UV, X-ray)
§ Electrically erasable PROM (EEPROM): electronic erase capability
§ Flash memory: EEPROMs with partial (block-level) erase capability
§ Wears out after about 100,000 erase cycles
§ 3D XPoint (Intel Optane) & emerging NVMs
§ New materials

¢ Uses for Nonvolatile Memories
§ Firmware programs stored in a ROM (BIOS, controllers for disks,
network cards, graphics accelerators, security subsystems, …)
§ Solid state disks (replace rotating disks in thumb drives, smart phones,
mp3 players, tablets, laptops, data centers, …)
§ Disk caches
6
Traditional Bus Structure Connecting CPU and Memory
¢ Bus: collection of parallel wires that carry address, data, and control signals
¢ Typically shared by multiple devices

[Diagram: the CPU chip (register file, ALU) connects through its bus interface to the system bus; an I/O bridge joins the system bus to the memory bus, which leads to main memory.]

7
Memory Read Transaction (1)
¢ CPU places address A on the memory bus.

[Diagram: load operation movq A, %rax — the bus interface drives address A onto the bus; main memory holds word x at address A.]

8
Memory Read Transaction (2)
¢ Main memory reads A from the memory bus, retrieves word x, and places it on the bus.

[Diagram: main memory puts word x on the bus; the I/O bridge passes it toward the CPU.]

10
Memory Read Transaction (3)
¢ CPU reads word x from the bus and copies it into register %rax.

[Diagram: word x arrives at the bus interface and is written into %rax.]

12
Memory Write Transaction (1)
¢ CPU places address A on bus. Main memory reads it and waits for the corresponding data word to arrive.

[Diagram: store operation movq %rax, A — %rax holds word y; the bus interface drives address A onto the bus.]

14
Memory Write Transaction (2)
¢ CPU places data word y on the bus.

[Diagram: the bus interface drives word y onto the bus toward main memory.]

16
Memory Write Transaction (3)
¢ Main memory reads data word y from the bus and stores it at address A.

[Diagram: main memory stores word y at address A. A C sketch of these load/store transactions follows below.]
18
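To make the load and store transactions above concrete, here is a minimal C sketch (not from the slides; variable names are illustrative) whose pointer dereferences correspond to the movq load and store in the diagrams:

#include <stdio.h>

int main(void) {
    long x = 25;            /* word x stored in main memory            */
    long *A = &x;           /* A: the address placed on the memory bus */

    long rax = *A;          /* load:  read transaction, like movq into %rax  */
    *A = rax + 1;           /* store: write transaction, like movq from %rax */

    printf("x = %ld\n", x); /* prints x = 26 */
    return 0;
}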
What’s Inside A Disk Drive?

[Labeled photo: spindle, arm, platters, actuator, SCSI connector, and electronics (including a processor and memory!).]
Image courtesy of Seagate Technology
19
Disk Geometry
¢ Disks consist of platters, each with two surfaces.
¢ Each surface consists of concentric rings called tracks.
¢ Each track consists of sectors separated by gaps.

[Diagram: one surface, showing the spindle, track k among the concentric tracks, and the sectors and gaps along a track.]
20
Disk Geometry (Multiple-Platter View)
¢ Aligned tracks form a cylinder.

[Diagram: cylinder k cuts across surfaces 0–5 of platters 0–2, all mounted on the spindle.]

21
Disk Capacity
¢ Capacity: maximum number of bits that can be stored.
§ Vendors express capacity in units of gigabytes (GB), where
1 GB = 10⁹ bytes
¢ Capacity is determined by these technology factors:
§ Recording density (bits/in): number of bits that can be
squeezed into a 1-inch segment of a track
§ Track density (tracks/in): number of tracks that can be
squeezed into a 1-inch radial segment
§ Areal density (bits/in²): product of recording and track density

22
Recording zones
¢ Modern disks partition tracks into disjoint subsets called
recording zones
§ Each track in a zone has the same number of sectors, determined
by the circumference of the innermost track
§ Each zone has a different number of sectors/track; outer zones
have more sectors/track than inner zones
§ So we use the average number of sectors/track when computing
capacity

[Diagram: a surface divided into concentric zones around the spindle; one sector and one zone are highlighted.]

23
Computing Disk Capacity
Capacity = (# bytes/sector) x (avg. # sectors/track) x
           (# tracks/surface) x (# surfaces/platter) x
           (# platters/disk)
Example:
§ 512 bytes/sector
§ 300 sectors/track (on average)
§ 20,000 tracks/surface
§ 2 surfaces/platter
§ 5 platters/disk

Capacity = 512 x 300 x 20,000 x 2 x 5
         = 30,720,000,000 bytes
         = 30.72 GB

24
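As a sanity check, the same arithmetic in a short C program (a sketch mirroring the slide's example, not part of the original deck):

#include <stdio.h>

int main(void) {
    long long bytes_per_sector     = 512;
    long long sectors_per_track    = 300;   /* average across zones */
    long long tracks_per_surface   = 20000;
    long long surfaces_per_platter = 2;
    long long platters_per_disk    = 5;

    long long capacity = bytes_per_sector * sectors_per_track *
                         tracks_per_surface * surfaces_per_platter *
                         platters_per_disk;

    /* Vendors use 1 GB = 10^9 bytes */
    printf("%lld bytes = %.2f GB\n", capacity, capacity / 1e9);
    return 0;
}

Running it prints 30720000000 bytes = 30.72 GB, matching the slide.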
Disk Operation (Single-Platter View)
¢ The disk surface spins at a fixed rotational rate
¢ The read/write head is attached to the end of the arm and flies
over the disk surface on a thin cushion of air
¢ By moving radially, the arm can position the read/write head
over any track

[Diagram: a single platter rotating on its spindle, with the arm's head positioned over a track.]
27
Disk Operation (Multi-Platter View)
¢ Read/write heads move in unison from cylinder to cylinder

[Diagram: the arm carries one head per surface; all heads move together over the spindle-mounted platters.]

28
Disk Structure - top view of single platter
¢ Surface organized into tracks
¢ Tracks divided into sectors

[Diagram: top view of a platter showing concentric tracks and pie-slice sectors.]
31
Disk Access
[Animation: head in position above a track; rotation is counter-clockwise.]
¢ Read: about to read blue sector; after reading blue sector, red request scheduled next
¢ Seek: seek to red's track
¢ Rotational latency: wait for red sector to rotate around
¢ Read: complete read of red

Disk Access – Service Time Components
[Timeline: data transfer (blue read) | seek | rotational latency | data transfer (red read)]

40
Disk Access Time
¢ Average time to access some target sector approximated by:
§ Taccess = Tavg seek + Tavg rotation + Tavg transfer
¢ Seek time (Tavg seek)
§ Time to position heads over cylinder containing target sector
§ Typical Tavg seek is 3–9 ms
¢ Rotational latency (Tavg rotation)
§ Time waiting for first bit of target sector to pass under r/w head
§ Tavg rotation = 1/2 x (60 secs / RPM)
§ Typical rotation speed = 7,200 RPM
¢ Transfer time (Tavg transfer)
§ Time to read bits in target sector
§ Tavg transfer = (60 secs / RPM) x 1/(avg # sectors/track)
41
Disk Access Time Example
¢ Given:
§ Rotational rate = 7,200 RPM
§ Average seek time = 9 ms
§ Avg # sectors/track = 400
¢ Derived:
§ Tavg rotation = 1/2 x (60 secs / 7,200 RPM) x 1000 ms/sec = 4 ms
§ Tavg transfer = (60 secs / 7,200 RPM) x 1/(400 sectors/track) x 1000 ms/sec = 0.02 ms
§ Taccess = 9 ms + 4 ms + 0.02 ms
¢ Important points:
§ Access time dominated by seek time and rotational latency
§ First bit in a sector is the most expensive; the rest are free
§ SRAM access time is about 4 ns/doubleword, DRAM about 60 ns
§ Disk is about 40,000 times slower than SRAM,
§ 2,500 times slower than DRAM

44
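The derivation above, written out as a small C program (a sketch, not from the deck; note the slide rounds the 4.17 ms rotational latency to 4 ms):

#include <stdio.h>

int main(void) {
    double rpm               = 7200.0;  /* rotational rate       */
    double t_seek            = 9.0;     /* average seek time, ms */
    double sectors_per_track = 400.0;

    double t_rev      = 60.0 / rpm * 1000.0;      /* one revolution: 8.33 ms      */
    double t_rotation = 0.5 * t_rev;              /* avg rotational latency: 4.17 ms */
    double t_transfer = t_rev / sectors_per_track;/* one sector: 0.02 ms          */

    printf("Taccess = %.2f ms\n", t_seek + t_rotation + t_transfer);
    return 0;
}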
Logical Disk Blocks
¢ Modern disks present a simpler abstract view of the
complex sector geometry:
§ The set of available sectors is modeled as a sequence of
b-sized logical blocks (0, 1, 2, ...)
¢ Mapping between logical blocks and actual (physical) sectors
§ Maintained by hardware/firmware device called disk controller
(N.B. not host’s disk controller; embedded in disk package!)
§ Converts requests for logical blocks into (surface, track,
sector) triples (see the sketch below)
¢ Allows controller to set aside spare cylinders for each zone.
§ Accounts for difference in “formatted capacity” and
“maximum capacity”
45
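To illustrate the controller's job, here is an idealized sketch (my own, not from the slides) of converting a logical block number into a (surface, track, sector) triple. It assumes a constant number of sectors per track and fills whole cylinders before moving the arm; real controllers must also account for zones and spare cylinders:

#include <stdio.h>

struct chs { int surface, track, sector; };

/* Idealized mapping: constant sectors/track, no zones or spares. */
struct chs logical_to_chs(long block, int surfaces, int sectors_per_track) {
    struct chs c;
    c.sector  = block % sectors_per_track;
    c.surface = (block / sectors_per_track) % surfaces;       /* fill a cylinder  */
    c.track   = block / ((long)sectors_per_track * surfaces); /* then move the arm */
    return c;
}

int main(void) {
    struct chs c = logical_to_chs(123456, 10, 400);
    printf("surface %d, track %d, sector %d\n", c.surface, c.track, c.sector);
    return 0;
}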
I/O Bus

[Diagram: the CPU chip (register file, ALU, bus interface) connects via the system bus to the I/O bridge, which connects to main memory over the memory bus and to the I/O bus. On the I/O bus sit a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and expansion slots for other devices such as network adapters.]
46
Reading a Disk Sector (1)
CPU initiates disk read by writing command, logical block number, and
destination memory address to port (memory-mapped address) associated
with disk controller.

[Diagram: the write travels from the CPU's bus interface over the I/O bus to the disk controller.]

47
Reading a Disk Sector (2)
Disk controller reads sector and performs direct memory access (DMA)
transfer into main memory.

[Diagram: data flows from the disk through the disk controller and I/O bus directly into main memory, bypassing the CPU.]

49
Reading a Disk Sector (3)
When DMA transfer completes, disk controller notifies CPU with an
interrupt (i.e., asserts physical “interrupt” pin on the CPU).

[Diagram: the interrupt signal travels from the disk controller to the CPU chip.]

51
Solid State Disks (SSDs)

[Diagram: requests to read and write logical disk blocks arrive over the I/O bus; inside the SSD, a flash translation layer sits in front of flash memory organized as blocks 0 … B-1, each containing pages 0 … P-1.]

¢ Pages: 512B to 4KB, Blocks: 32 to 128 pages
¢ Data read/written in units of pages
¢ Page can be written only after its block has been erased
¢ A block wears out after about 100,000 repeated writes
(see the sketch below)
53
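A toy sketch (my own, with hypothetical names; real flash translation layers are far more sophisticated) of why the FTL exists: to rewrite one logical page it appends to a fresh physical page and remaps, rather than erasing a block in place, which spreads the ~100,000-erase budget across blocks:

#include <stdio.h>
#include <string.h>

#define PAGES_PER_BLOCK 64
#define PAGE_SIZE 4096
#define NBLOCKS 16

/* Hypothetical flash block: pages plus a lifetime erase counter. */
struct flash_block {
    char pages[PAGES_PER_BLOCK][PAGE_SIZE];
    int erase_count;     /* block wears out near ~100,000 */
    int next_free;       /* next unwritten page           */
};

static struct flash_block flash[NBLOCKS];
static int map[NBLOCKS * PAGES_PER_BLOCK];  /* logical page -> physical page */

/* Write one logical page: append into a block with a free page instead
   of erasing the old block in place (wear leveling in miniature). */
int ftl_write(int lpage, const char *data) {
    for (int b = 0; b < NBLOCKS; b++) {
        if (flash[b].next_free < PAGES_PER_BLOCK) {
            int p = flash[b].next_free++;
            memcpy(flash[b].pages[p], data, PAGE_SIZE);
            map[lpage] = b * PAGES_PER_BLOCK + p;   /* remap logical page */
            return 0;
        }
    }
    /* Out of free pages: a real FTL would garbage-collect here, copying
       live pages elsewhere and erasing a victim block (bumping its
       erase_count). */
    return -1;
}

int main(void) {
    char buf[PAGE_SIZE] = "hello";
    ftl_write(7, buf);
    ftl_write(7, buf);   /* rewrite lands on a new physical page */
    printf("logical page 7 -> physical page %d\n", map[7]);
    return 0;
}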
SSD Performance Characteristics

Sequential read tput   550 MB/s     Sequential write tput   470 MB/s
Random read tput       365 MB/s     Random write tput       303 MB/s
Avg seq read time      50 us        Avg seq write time      60 us

¢ Sequential access faster than random access
§ Common theme in the memory hierarchy
¢ Random writes are somewhat slower
§ Erasing a block takes a long time (~1 ms)
§ Modifying a block page requires all other pages to be copied
to new block
§ In earlier SSDs, the read/write gap was much greater; today
largely hidden by flash translation layer

Source: Intel SSD 730 product specification


54
Trade-offs: SSDs vs. Rotating Disks
¢ Advantages
§ No moving parts → faster, less power, more rugged
¢ Disadvantages
§ Have the potential to wear out
§ Mitigated by “wear leveling logic” in flash translation layer
§ E.g., Intel SSD 730 guarantees 128 petabytes (128 x 10¹⁵ bytes)
of writes before it wears out
§ In 2015, about 30 times more expensive per byte
¢ Applications
§ MP3 players, smart phones, laptops
§ Increasingly common in desktops and servers

55
The CPU-Memory Gap
The gap between DRAM, disk, and CPU speeds.

[Log-scale plot, 1985–2015, time in ns: curves for disk seek time, SSD access time, DRAM access time, SRAM access time, CPU cycle time, and effective CPU cycle time. Disk sits many orders of magnitude above DRAM and SRAM, which in turn sit well above the CPU cycle time, and the gaps widen over the years.]

56
Locality to the Rescue!

The key to bridging this CPU-Memory gap is a fundamental property
of computer programs known as locality.

57
Today
¢ Storage technologies and trends
¢ Locality of reference
¢ Caching in the memory hierarchy

58
Locality
¢ Principle of Locality: Programs tend to use data and
instructions with addresses near or equal to those
they have used recently

¢ Temporal locality:
§ Recently referenced items are likely
to be referenced again in the near future

¢ Spatial locality:
§ Items with nearby addresses tend
to be referenced close together in time

59
Locality Example
sum = 0;
for (i = 0; i < n; i++)
    sum += a[i];
return sum;

Spatial or Temporal Locality?
¢ Data references
§ Reference array elements in succession (stride-1 reference pattern): spatial
§ Reference variable sum each iteration: temporal
¢ Instruction references
§ Reference instructions in sequence: spatial
§ Cycle through loop repeatedly: temporal
65
Qualitative Estimates of Locality
¢ Claim: Being able to look at code and get a qualitative
sense of its locality is a key skill for a programmer

¢ Question: Does this function have good locality with
respect to array a? Hint: array layout is row-major order.

int sum_array_rows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Answer: yes (see the address sketch below)
68
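Why row-major order makes this stride-1: element a[i][j] lives at an offset of (i*N + j) elements from the base, so incrementing j in the inner loop walks through adjacent addresses. A small C illustration (mine, not from the slides):

#include <stdio.h>

#define M 3
#define N 4

int main(void) {
    int a[M][N];
    /* Row-major layout: &a[i][j] == base + (i*N + j) * sizeof(int),
       so consecutive j values are 4 bytes apart (stride-1). */
    printf("&a[1][2] = %p\n", (void *)&a[1][2]);
    printf("computed = %p\n", (void *)((char *)a + (1 * N + 2) * sizeof(int)));
    return 0;
}

Both lines print the same address.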
Locality Example
¢ Question: Does this function have good locality with
respect to array a?

int sum_array_cols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

Answer: no, unless… M is very small
71
Locality Example
¢ Question: Can you permute the loops so that the
function scans the 3-d array a with a stride-1
reference pattern (and thus has good spatial locality)?

int sum_array_3d(int a[M][N][N])
{
    int i, j, k, sum = 0;

    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < M; k++)
                sum += a[k][i][j];
    return sum;
}

Answer: make j the inner loop (spelled out below)
73
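Spelled out, the permutation the answer describes (my rendering; the slide gives only the answer). Any loop order with j innermost gives stride-1 access, since a[k][i][j] and a[k][i][j+1] are adjacent in row-major order; ordering k, then i, then j scans the whole array sequentially:

int sum_array_3d_fixed(int a[M][N][N])
{
    int i, j, k, sum = 0;

    /* j is now the innermost loop: stride-1 through each row a[k][i][...] */
    for (k = 0; k < M; k++)
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                sum += a[k][i][j];
    return sum;
}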
Memory Hierarchies
¢ Some fundamental and enduring properties of
hardware and software:
§ Fast storage technologies cost more per byte, have less
capacity, and require more power (heat!)
§ The gap between CPU and main memory speeds is widening
§ Well-written programs tend to exhibit good locality

¢ These fundamental properties complement each other beautifully

¢ They suggest an approach for organizing memory and storage
systems known as the memory hierarchy

74
Today
¢ Storage technologies and trends
¢ Locality of reference
¢ Caching in the memory hierarchy

75
Example Memory Hierarchy

[Pyramid: smaller, faster, and costlier (per byte) storage devices toward the top; larger, slower, and cheaper (per byte) storage devices toward the bottom.]

L0: Regs — CPU registers hold words retrieved from the L1 cache.
L1: L1 cache (SRAM) — holds cache lines retrieved from the L2 cache.
L2: L2 cache (SRAM) — holds cache lines retrieved from the L3 cache.
L3: L3 cache (SRAM) — holds cache lines retrieved from main memory.
L4: Main memory (DRAM) — holds disk blocks retrieved from local disks.
L5: Local secondary storage (local disks) — holds files retrieved from disks on remote servers.
L6: Remote secondary storage (e.g., Web servers)
76
Caches
¢ Cache: A smaller, faster storage device that acts as a
staging area for a subset of the data in a larger, slower
device
¢ Fundamental idea of a memory hierarchy:
§ For each k, the faster, smaller device at level k serves as a cache
for the larger, slower device at level k+1
¢ Why do memory hierarchies work?
§ Because of locality, programs tend to access the data at level k
more often than they access the data at level k+1
§ Thus, the storage at level k+1 can be slower, and thus larger and
cheaper per bit
¢ Aim (Idealized): The memory hierarchy creates a large
pool of storage that costs as much as the cheap storage
near the bottom, but that serves data to programs at the
rate of the fast storage near the top
77
General Cache Concepts

[Diagram: a small cache above a large memory. The cache holds blocks 8, 9, 14, and 3; memory is partitioned into blocks 0–15. As the animation proceeds, blocks 4 and 10 are copied up into the cache.]

¢ Smaller, faster, more expensive memory caches a subset of the blocks
¢ Data is copied in block-sized transfer units
¢ Larger, slower, cheaper memory viewed as partitioned into “blocks”
85
General Cache Concepts: Hit
Request: 14 — data in block b is needed.
Block b is in cache: Hit!

[Diagram: the request for block 14 is satisfied directly from the cache, which holds blocks 8, 9, 14, and 3.]
88
General Cache Concepts: Miss
Request: 12 — data in block b is needed.
Block b is not in cache: Miss!
Block b is fetched from memory.
Block b is stored in cache:
§ Placement policy: determines where b goes
§ Replacement policy: determines which block gets evicted (victim)

[Diagram: the request for block 12 misses in the cache (8, 9, 14, 3); block 12 is fetched from memory and stored in the cache, replacing block 9.]
95
General Caching Concepts:
3 Types of Cache Misses
¢ Cold (compulsory) miss
§ Cold misses occur because the cache starts empty and this is the
first reference to the block
¢ Capacity miss
§ Occurs when the set of active cache blocks (working set) is larger
than the cache
¢ Conflict miss
§ Most caches limit blocks at level k+1 to a small subset (sometimes
a singleton) of the block positions at level k
§ e.g., block i at level k+1 must be placed in block (i mod 4) at
level k
§ Conflict misses occur when the level k cache is large enough, but
multiple data objects all map to the same level k block
§ e.g., referencing blocks 0, 8, 0, 8, 0, 8, ... would miss every
time
96
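To make conflict misses concrete, here is a small C sketch (mine, not from the slides) of the slide's "(i mod 4)" placement rule: a 4-block direct-mapped cache in which the reference string 0, 8, 0, 8, ... misses every time, because both blocks map to position 0:

#include <stdio.h>

#define CACHE_BLOCKS 4

int cache[CACHE_BLOCKS];  /* cache[p] holds the block number cached at position p */

/* Direct-mapped lookup: block i may only live at position (i mod 4). */
int access(int block) {
    int pos = block % CACHE_BLOCKS;
    if (cache[pos] == block)
        return 1;          /* hit */
    cache[pos] = block;    /* miss: fetch and replace whatever was there */
    return 0;
}

int main(void) {
    for (int p = 0; p < CACHE_BLOCKS; p++)
        cache[p] = -1;     /* start empty: the first touches are cold misses */

    int refs[] = {0, 8, 0, 8, 0, 8};
    for (int r = 0; r < 6; r++)
        printf("block %d: %s\n", refs[r], access(refs[r]) ? "hit" : "miss");
    /* Every access misses: blocks 0 and 8 both map to position 0 and keep
       evicting each other, even though the cache has room for 4 blocks. */
    return 0;
}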
Examples of Caching in the Memory Hierarchy

Cache Type            What is Cached?       Where is it Cached?   Latency (cycles)  Managed By
Registers             4-8 byte words        CPU core              0                 Compiler
TLB                   Address translations  On-Chip TLB           0                 Hardware MMU
L1 cache              64-byte blocks        On-Chip L1            4                 Hardware
L2 cache              64-byte blocks        On-Chip L2            10                Hardware
Virtual Memory        4-KB pages            Main memory           100               Hardware + OS
Buffer cache          Parts of files        Main memory           100               OS
Disk cache            Disk sectors          Disk controller       100,000           Disk firmware
Network buffer cache  Parts of files        Local disk            10,000,000        NFS client
Browser cache         Web pages             Local disk            10,000,000        Web browser
Web cache             Web pages             Remote server disks   1,000,000,000     Web proxy server
97
Summary
¢ The speed gap between CPU, memory, and mass
storage continues to widen

¢ Well-written programs exhibit locality

¢ Memory hierarchies based on caching close the gap by
exploiting locality

¢ Next lecture: details of modern CPU caches and (non
sequitur) the LZW compression algorithm

98
