Chapter 6
SRAM: value is stored on a pair of inverting gates; very fast, but takes up more space than DRAM (4 to 6 transistors per bit)
DRAM: value is stored as a charge on a capacitor (must be refreshed); very small, but slower than SRAM (by a factor of 5 to 10)
[Figure: DRAM cell. Word line, pass transistor, capacitor, bit line]
[Figure: SRAM cell. A pair of cross-coupled inverting gates (A, B)]
[Figure: memory hierarchy. CPU at the top, then Level 1, Level 2, ..., Level n]
Locality
A principle that makes having a memory hierarchy a good idea
If an item is referenced:
   temporal locality: it will tend to be referenced again soon
   spatial locality: nearby items will tend to be referenced soon
Why does code have locality? (A small sketch follows.)
Our initial focus: two levels (upper, lower)
   block: minimum unit of data
   hit: data requested is in the upper level
   miss: data requested is not in the upper level
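As a small illustration (a sketch, not from the original slides), the loop below exhibits both kinds of locality: sum and the loop instructions are reused on every iteration (temporal), while the array is traversed through consecutive addresses (spatial).

```c
/* Sketch: why typical code has locality. */
#include <stdio.h>

int main(void) {
    int a[1024];
    for (int i = 0; i < 1024; i++)
        a[i] = i;

    long sum = 0;
    for (int i = 0; i < 1024; i++) {
        /* temporal locality: sum (and the loop instructions)
           are referenced on every iteration */
        /* spatial locality: a[i] and a[i+1] occupy adjacent
           words, typically in the same cache block */
        sum += a[i];
    }
    printf("%ld\n", sum);
    return 0;
}
```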
Cache
Two issues:
   How do we know if a data item is in the cache?
   If it is, how do we find it?
Our first example:
   block size is one word of data
   "direct mapped"
For each item of data at the lower level, there is exactly one location in the cache where it might be; i.e., lots of items at the lower level share locations in the upper level. (A sketch of the address breakdown follows.)
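A minimal sketch of how hardware would decompose an address, assuming the 1024-entry, one-word-block configuration pictured below (the field widths are tied to that assumption):

```c
/* Sketch: splitting a 32-bit byte address for a direct-mapped
   cache with 1024 one-word (4-byte) blocks. */
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 2    /* 4 bytes per word          */
#define INDEX_BITS 10    /* 2^10 = 1024 cache entries */

int main(void) {
    uint32_t addr  = 0x12345678;
    uint32_t index = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag   = addr >> (OFFSET_BITS + INDEX_BITS);
    /* a hit means: cache[index].valid && cache[index].tag == tag */
    printf("index = %u, tag = 0x%x\n", (unsigned)index, (unsigned)tag);
    return 0;
}
```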
[Figure: memory addresses 00001, 00101, 01001, 11001, 11101, ... mapping to their locations in a small direct-mapped cache]
[Figure: direct-mapped cache implementation. The address supplies a 20-bit tag and a 10-bit index; the indexed entry's valid bit and stored tag determine Hit, and the 32-bit Data word is read out]
[Figure: direct-mapped cache with multiword blocks. 4K entries, 16-bit tags, four 32-bit words per block; a multiplexor selects the requested word]
Hardware Issues
Make reading multiple words easier by using banks of memory (see the interleaving sketch after the figure)
[Figure: three memory organizations. (a) one-word-wide memory; (b) wide memory; (c) interleaved memory with four banks, bank 0 through bank 3]
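A minimal sketch of word interleaving across four banks; the modulo mapping is the standard scheme, and the constants are assumptions:

```c
/* Sketch: word-interleaved addressing across 4 memory banks.
   Consecutive words land in different banks, so the words of a
   block can be fetched with overlapping accesses. */
#include <stdio.h>

#define NBANKS 4

int main(void) {
    for (unsigned word = 0; word < 8; word++) {
        unsigned bank = word % NBANKS;  /* which bank holds it  */
        unsigned row  = word / NBANKS;  /* row within that bank */
        printf("word %u -> bank %u, row %u\n", word, bank, row);
    }
    return 0;
}
```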
Performance
Increasing the block size tends to decrease miss rate:
[Figure: miss rate vs. block size]
Performance
Simplified model:
   execution time = (execution cycles + stall cycles) × cycle time
   stall cycles = # of instructions × miss ratio × miss penalty
Two ways of improving performance:
   decreasing the miss ratio
   decreasing the miss penalty
(A worked instance of the model follows.)
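A worked instance of the simplified model; every number below is an assumption chosen for illustration:

```c
/* Sketch: the simplified performance model with assumed numbers. */
#include <stdio.h>

int main(void) {
    double instructions = 1e9;   /* assumed instruction count         */
    double base_cpi     = 1.0;   /* cycles per instruction, no stalls */
    double miss_ratio   = 0.05;  /* assumed: 5% of references miss    */
    double miss_penalty = 40.0;  /* assumed: 40 cycles per miss       */
    double cycle_time   = 2e-9;  /* assumed: 500 MHz clock            */

    double exec_cycles  = instructions * base_cpi;
    double stall_cycles = instructions * miss_ratio * miss_penalty;
    double time = (exec_cycles + stall_cycles) * cycle_time;

    printf("execution time = %.3f s\n", time);  /* prints 6.000 s */
    return 0;
}
```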
[Figure: set associative caches. A four-way set associative cache (sets 0 and 1, four tag/data pairs per set) and an eight-way set associative cache (fully associative: one set of eight tag/data pairs)]
Compared to direct mapped, give a series of references that:
   results in a lower miss ratio using a 2-way set associative cache
   results in a higher miss ratio using a 2-way set associative cache
assuming we use the least recently used (LRU) replacement strategy (one possible answer is simulated below)
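One possible answer, checked by a small simulation; the cache sizes (four blocks direct mapped vs. two sets of two ways) are assumptions chosen to keep the traces short:

```c
/* Sketch: miss counts for a 4-block direct-mapped cache vs. a
   2-set, 2-way set associative cache with LRU replacement. */
#include <stdio.h>

static int dm_misses(const int *refs, int n) {
    int block[4], valid[4] = {0}, misses = 0;
    for (int i = 0; i < n; i++) {
        int idx = refs[i] % 4;                     /* direct mapped */
        if (!valid[idx] || block[idx] != refs[i]) {
            misses++; block[idx] = refs[i]; valid[idx] = 1;
        }
    }
    return misses;
}

static int sa2_misses(const int *refs, int n) {
    int way[2][2], valid[2][2] = {{0}}, lru[2] = {0}, misses = 0;
    for (int i = 0; i < n; i++) {
        int set = refs[i] % 2, hit = -1;
        for (int w = 0; w < 2; w++)
            if (valid[set][w] && way[set][w] == refs[i]) hit = w;
        if (hit < 0) {                    /* miss: evict the LRU way */
            misses++; hit = lru[set];
            way[set][hit] = refs[i]; valid[set][hit] = 1;
        }
        lru[set] = 1 - hit;               /* the other way is now LRU */
    }
    return misses;
}

int main(void) {
    int a[] = {0, 4, 0, 4, 0, 4};   /* 2-way wins: 0 and 4 share a set */
    int b[] = {0, 2, 4, 0, 2, 4};   /* 2-way loses: LRU thrashes set 0 */
    printf("a: direct mapped %d misses, 2-way %d\n",
           dm_misses(a, 6), sa2_misses(a, 6));   /* 6 vs. 2 */
    printf("b: direct mapped %d misses, 2-way %d\n",
           dm_misses(b, 6), sa2_misses(b, 6));   /* 5 vs. 6 */
    return 0;
}
```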
An implementation
[Figure: four-way set associative cache implementation. The address supplies a 22-bit tag and an 8-bit index (256 sets); four comparators check the four stored tags in parallel, and a 4-to-1 multiplexor drives Hit and the 32-bit Data output]
Performance
[Figure: miss rate (0% to 15%) vs. associativity (one-way, two-way, four-way, eight-way) for cache sizes from 1 KB to 128 KB]
Using multilevel caches:
   try to optimize the hit time on the 1st level cache
   try to optimize the miss rate on the 2nd level cache
(a worked example of the payoff follows)
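A worked example of why the second level pays off; every latency and miss rate below is assumed:

```c
/* Sketch: average memory access time (AMAT) with and without an
   L2 cache, under assumed latencies and miss rates. */
#include <stdio.h>

int main(void) {
    double l1_hit = 1.0;     /* cycles, assumed             */
    double l1_mr  = 0.05;    /* L1 miss rate, assumed       */
    double l2_hit = 10.0;    /* cycles, assumed             */
    double l2_mr  = 0.25;    /* L2 local miss rate, assumed */
    double mem    = 100.0;   /* cycles to main memory       */

    double amat_no_l2 = l1_hit + l1_mr * mem;
    double amat_l2    = l1_hit + l1_mr * (l2_hit + l2_mr * mem);

    printf("AMAT without L2: %.2f cycles\n", amat_no_l2);  /* 6.00 */
    printf("AMAT with L2:    %.2f cycles\n", amat_l2);     /* 2.75 */
    return 0;
}
```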
Virtual Memory
Main memory can act as a cache for the secondary storage (disk)
[Figure: virtual addresses translated (address translation) to physical addresses in main memory, or mapped to disk addresses]
Page Tables
[Figure: page table. Virtual page numbers index a table of valid bits and physical page or disk addresses; entries with valid bit 1 point into physical memory, entries with valid bit 0 point to disk storage]
Page Tables
[Figure: address translation. A 32-bit virtual address splits into a virtual page number (bits 31-12) and a 12-bit page offset; the page table register locates the page table, the virtual page number indexes it, and the entry's valid bit (if 0, the page is not present in memory) and 18-bit physical page number are read out; the physical page number concatenated with the page offset forms the physical address]
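A minimal translation sketch following the figure (4 KB pages; the toy page-table contents are assumptions):

```c
/* Sketch: virtual-to-physical address translation, 4 KB pages. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                      /* 4 KB pages */

struct pte { int valid; uint32_t ppn; };  /* one page table entry */

int main(void) {
    struct pte page_table[16] = {         /* toy contents, assumed */
        [0] = {1, 0x2A}, [1] = {1, 0x07}, [2] = {0, 0},
    };

    uint32_t va     = (1u << PAGE_BITS) | 0x123;  /* vpn 1, offset 0x123 */
    uint32_t vpn    = va >> PAGE_BITS;
    uint32_t offset = va & ((1u << PAGE_BITS) - 1);

    if (page_table[vpn].valid) {
        uint32_t pa = (page_table[vpn].ppn << PAGE_BITS) | offset;
        printf("VA 0x%x -> PA 0x%x\n", (unsigned)va, (unsigned)pa);
    } else {
        printf("page fault: bring the page in from disk\n");
    }
    return 0;
}
```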
[Figure: flowchart for a memory access. TLB access; on a TLB miss, the translation must be fetched first; on a TLB hit, a read proceeds to the cache, while a write will write the data into the cache, update the tag, and put the data and the address into the write buffer]
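A hedged rendering of the flowchart's write path; every function here is a hypothetical stand-in for a hardware action (stubbed so the sketch compiles), not a real API:

```c
/* Sketch: the store path from the flowchart, write-through with a
   write buffer. All functions are hypothetical stubs. */
#include <stdint.h>
#include <stdio.h>

static int  tlb_lookup(uint32_t va, uint32_t *pa) { *pa = va; return 1; }
static void cache_write(uint32_t pa, uint32_t d)  { (void)pa; (void)d; }
static void write_buffer_push(uint32_t pa, uint32_t d) {
    printf("write buffer <- addr 0x%x, data 0x%x\n",
           (unsigned)pa, (unsigned)d);
}

static void store_word(uint32_t va, uint32_t data) {
    uint32_t pa;
    if (!tlb_lookup(va, &pa))
        return;                   /* TLB miss: fetch translation first */
    cache_write(pa, data);        /* write data into cache, update tag */
    write_buffer_push(pa, data);  /* queue the write-through to memory */
}

int main(void) {
    store_word(0x1000, 42);
    return 0;
}
```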
Modern Systems
Very complicated memory systems:
Characteristic      Intel Pentium Pro                 PowerPC 604
Virtual address     32 bits                           52 bits
Physical address    32 bits                           32 bits
Page size           4 KB, 4 MB                        4 KB, selectable, and 256 MB
TLB organization    A TLB for instructions and        A TLB for instructions and
                    a TLB for data                    a TLB for data
                    Both four-way set associative     Both two-way set associative
                    Pseudo-LRU replacement            LRU replacement
                    Instruction TLB: 32 entries       Instruction TLB: 128 entries
                    Data TLB: 64 entries              Data TLB: 128 entries
                    TLB misses handled in hardware    TLB misses handled in hardware
Characteristic       Intel Pentium Pro                   PowerPC 604
Cache organization   Split instruction and data caches   Split instruction and data caches
Cache size           8 KB each for instructions/data     16 KB each for instructions/data
Cache associativity  Four-way set associative            Four-way set associative
Replacement          Approximated LRU replacement        LRU replacement
Block size           32 bytes                            32 bytes
Write policy         Write-back                          Write-back or write-through
Some Issues
Processor speeds continue to increase very fast, much faster than either DRAM or disk access times
Design challenge: dealing with this growing disparity
Trends:
   synchronous SRAMs (provide a burst of data)
   redesign DRAM chips to provide higher bandwidth or processing
   restructure code to increase locality
   use prefetching (make the cache visible to the ISA; a sketch follows)
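One concrete form of prefetching, sketched with the GCC/Clang __builtin_prefetch hint; the prefetch distance is a tuning assumption:

```c
/* Sketch: software prefetching a few elements ahead while
   summing an array (GCC/Clang builtin). */
#include <stdio.h>

#define N        4096
#define DISTANCE 16     /* assumed prefetch distance, in elements */

int main(void) {
    static double a[N];
    for (int i = 0; i < N; i++) a[i] = i;

    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        if (i + DISTANCE < N)
            __builtin_prefetch(&a[i + DISTANCE]);  /* hint only */
        sum += a[i];
    }
    printf("%f\n", sum);
    return 0;
}
```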
Chapters 8 & 9
(partial coverage)
[Figure: a typical I/O system. A processor with cache and interrupts connects over a bus to main memory and to I/O controllers serving disks, graphics output, and a network]
I/O
Important but neglected
   the difficulties in assessing and designing I/O systems have often relegated I/O to second-class status
   courses in every aspect of computing, from programming to computer architecture, often ignore I/O or give it scanty coverage
   textbooks leave the subject to near the end, making it easier for students and instructors to skip it!
GUILTY!
   we won't be looking at I/O in much detail
   be sure to read Chapter 8 in its entirety
   you should probably take a networking class!
I/O Devices
Very diverse devices:
   behavior (i.e., input vs. output)
   partner (who is at the other end?)
   data rate
Device            Behavior         Partner  Data rate (KB/sec)
Keyboard          input            human    0.01
Mouse             input            human    0.02
Voice input       input            human    0.02
Scanner           input            human    400.00
Voice output      output           human    0.60
Line printer      output           human    1.00
Laser printer     output           human    200.00
Graphics display  output           human    60,000.00
Modem             input or output  machine  2.00-8.00
Network/LAN       input or output  machine  500.00-6000.00
Floppy disk       storage          machine  100.00
Optical disk      storage          machine  1000.00
Magnetic tape     storage          machine  2000.00
Magnetic disk     storage          machine  2000.00-10,000.00
[Figure: disk organization. Each platter is divided into concentric tracks, and each track into sectors]
To access data:
   seek: position the head over the proper track (8 to 20 ms avg.)
   rotational latency: wait for the desired sector (0.5 rotations on average, i.e., 0.5/RPM minutes)
   transfer: grab the data (one or more sectors) at 2 to 15 MB/sec
(a worked access-time calculation follows)
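A worked access-time calculation; the drive parameters (5400 RPM, 10 ms average seek, 5 MB/sec, one 512-byte sector) are assumptions:

```c
/* Sketch: average disk access time = seek + rotational latency
   + transfer time, with assumed drive parameters. */
#include <stdio.h>

int main(void) {
    double seek_ms     = 10.0;       /* assumed average seek      */
    double rpm         = 5400.0;
    double rot_ms      = 0.5 / rpm * 60.0 * 1000.0;  /* half turn */
    double bytes       = 512.0;      /* one sector                */
    double rate        = 5.0e6;      /* 5 MB/sec, assumed         */
    double transfer_ms = bytes / rate * 1000.0;

    printf("seek %.1f + rotation %.2f + transfer %.2f = %.2f ms\n",
           seek_ms, rot_ms, transfer_ms,
           seek_ms + rot_ms + transfer_ms);   /* about 15.66 ms */
    return 0;
}
```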
[Figure: asynchronous bus handshake. ReadReq, Data, Ack, and DataRdy signals, with the seven steps of the protocol numbered]
Let's look at some examples from the text:
   Performance Analysis of Synchronous vs. Asynchronous
   Performance Analysis of Two Bus Schemes
Multiprocessors
Idea: create powerful computers by connecting many smaller ones
   good news: works for timesharing (better than a supercomputer); vector processing may be coming back
   bad news: it's really hard to write good concurrent programs (the race sketch below shows one reason); many commercial failures
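A classic illustration of why concurrent programs are hard to get right (a sketch using POSIX threads; the iteration count is arbitrary):

```c
/* Sketch: a data race. Two threads increment a shared counter
   without synchronization, so increments are lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;               /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* usually < 2000000 */
    return 0;
}
```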
[Figure: two multiprocessor organizations. (a) shared memory: processors with caches share a single bus with memory and I/O; (b) message passing: processor/cache/memory nodes connected by a network]
Questions
How do parallel processors share data?
   single address space (SMP vs. NUMA)
   message passing
How do parallel processors coordinate?
   synchronization (locks, semaphores)
   built into send / receive primitives
   operating system protocols
How are they implemented?
   connected by a single bus
   connected by a network
[Figure: bus-based cache coherence. Each cache keeps a snoop tag so it can monitor (snoop on) the single bus it shares with memory and I/O]
Synchronization: provide special atomic instructions (test-and-set, swap, etc.); a spinlock sketch built on such an instruction follows
Network Topology
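A minimal spinlock sketch built on C11's atomic flag, the software-visible form of a test-and-set instruction (single-threaded driver shown just so it runs):

```c
/* Sketch: a test-and-set spinlock using C11 atomics. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void acquire(void) {
    /* atomically set the flag and test its previous value;
       spin while another thread already holds the lock */
    while (atomic_flag_test_and_set(&lock))
        ;                        /* busy-wait */
}

static void release(void) {
    atomic_flag_clear(&lock);
}

int main(void) {
    acquire();
    puts("in critical section");
    release();
    return 0;
}
```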
Concluding Remarks
Evolution vs. Revolution
More often the expense of innovation comes from being too disruptive to computer users
[Figure: technologies arranged on a spectrum from Evolutionary to Revolutionary: microprogramming, pipelining, cache, virtual memory, RISC, massive SIMD, and timeshared, CC-UMA, CC-NUMA, not-CC-NUMA, message-passing, and parallel processing multiprocessors]
Acceptance of hardware ideas requires acceptance by software people; therefore hardware people should learn about software. And if software people want good machines, they must learn more about hardware to be able to communicate with and thereby influence hardware engineers.