Chapter 6

The document discusses memory technologies like SRAM and DRAM and how they are used in a memory hierarchy. SRAM is very fast but uses more transistors while DRAM is smaller but slower. A memory hierarchy with multiple cache levels is used to exploit locality and bridge the speed gap between fast CPU and slower main memory. The key concepts covered are direct mapped caches, cache hits/misses, block size, and miss rate. Later levels discuss virtual memory, page tables, and TLBs.


Memories: Review

- SRAM: value is stored on a pair of inverting gates; very fast, but takes up more space than DRAM (4 to 6 transistors)
- DRAM: value is stored as a charge on a capacitor (must be refreshed); very small, but slower than SRAM (by a factor of 5 to 10)
[Figure: a DRAM cell (word line, pass transistor, capacitor, bit line) and an SRAM cell built from a pair of cross-coupled inverting gates]

Exploiting Memory Hierarchy


Users want large and fast memories! 1997 figures:
- SRAM access times are 2-25 ns at a cost of $100 to $250 per Mbyte
- DRAM access times are 60-120 ns at a cost of $5 to $10 per Mbyte
- Disk access times are 10 to 20 million ns at a cost of $0.10 to $0.20 per Mbyte

Try and give it to them anyway: build a memory hierarchy.

[Figure: the memory hierarchy as a pyramid, with the CPU at the top, then Level 1, Level 2, down to Level n; access time (distance from the CPU) increases and the size of the memory at each level grows as you move down the levels]

Locality
- A principle that makes having a memory hierarchy a good idea
- If an item is referenced:
  - temporal locality: it will tend to be referenced again soon
  - spatial locality: nearby items will tend to be referenced soon
- Why does code have locality? (see the sketch below)
- Our initial focus: two levels (upper, lower)
  - block: the minimum unit of data
  - hit: the data requested is in the upper level
  - miss: the data requested is not in the upper level
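As a small illustration of why code has locality (our own example, not from the slides), a C loop that sums an array reuses the same instructions and loop variables over and over (temporal locality) while walking through consecutive array elements (spatial locality):

```c
/* summing an array: the loop instructions and the variables sum and i are
   reused every iteration (temporal locality), while a[0], a[1], ... occupy
   consecutive memory locations (spatial locality) */
int sum_array(const int *a, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}
```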

Cache
Two issues:
- How do we know if a data item is in the cache?
- If it is, how do we find it?
Our first example:
- block size is one word of data
- "direct mapped"

For each item of data at the lower level, there is exactly one location in the cache where it might be. e.g., lots of items at the lower level share locations in the upper level

Direct Mapped Cache


Mapping: address is modulo the number of blocks in the cache
[Figure: a direct-mapped cache with eight blocks (indices 000 through 111); the memory words at addresses 00001, 00101, 01001, 01101, 10001, 10101, 11001, and 11101 each map to the cache block given by the low-order three bits of the address]
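A minimal sketch of the mapping rule above (our own code, not part of the slides): with eight blocks in the cache, a memory block's cache index is its address modulo 8, i.e., its low-order three bits.

```c
#include <stdio.h>

int main(void) {
    /* the 5-bit memory block addresses shown in the figure */
    unsigned blocks[] = {0x01, 0x05, 0x09, 0x0D, 0x11, 0x15, 0x19, 0x1D};
    for (int i = 0; i < 8; i++)
        printf("memory block %2u -> cache index %u\n",
               blocks[i], blocks[i] % 8);  /* address modulo the number of blocks */
    return 0;
}
```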

Direct Mapped Cache


For MIPS:
[Figure: a direct-mapped cache for MIPS with 1024 one-word blocks; the 32-bit address (bit positions 31..0) is split into a 20-bit tag, a 10-bit index, and a 2-bit byte offset; each of the 1024 entries holds a valid bit, a 20-bit tag, and 32 bits of data, and a hit is signalled when the indexed entry is valid and its tag matches the address tag]

What kind of locality are we taking advantage of?
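A sketch of the address breakdown in the figure (our own helper, not from the slides): the 32-bit byte address for a 1024-entry, one-word-block cache splits into a 2-bit byte offset, a 10-bit index, and a 20-bit tag.

```c
#include <stdint.h>

/* split a 32-bit byte address for a direct-mapped cache with 1024
   one-word (4-byte) blocks */
void split_address(uint32_t addr,
                   uint32_t *tag, uint32_t *index, uint32_t *byte_offset)
{
    *byte_offset = addr & 0x3;           /* bits 1..0             */
    *index       = (addr >> 2) & 0x3FF;  /* bits 11..2 (10 bits)  */
    *tag         = addr >> 12;           /* bits 31..12 (20 bits) */
}
```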

Direct Mapped Cache


Taking advantage of spatial locality:
[Figure: a direct-mapped cache with four-word (16-byte) blocks and 4K entries; the 32-bit address is split into a 16-bit tag, a 12-bit index, a 2-bit block offset, and a 2-bit byte offset; each entry holds a valid bit, a 16-bit tag, and 128 bits of data, and a multiplexor driven by the block offset selects the requested 32-bit word]

Hits vs. Misses


- Read hits: this is what we want!
- Read misses: stall the CPU, fetch the block from memory, deliver it to the cache, restart the access
- Write hits, two policies (sketched below):
  - write the data into both the cache and memory (write-through)
  - write the data only into the cache, and write the block back to memory later (write-back)
- Write misses: read the entire block into the cache, then write the word
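A minimal sketch of the two write-hit policies (our own code; the cache_line type and the write_memory helper are hypothetical, not from the slides):

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     valid;
    bool     dirty;   /* only meaningful for a write-back cache */
    uint32_t tag;
    uint32_t data;
} cache_line;

extern void write_memory(uint32_t addr, uint32_t value);  /* hypothetical helper */

/* write-through: update the cache and memory on every store */
void write_hit_write_through(cache_line *line, uint32_t addr, uint32_t value) {
    line->data = value;
    write_memory(addr, value);
}

/* write-back: update only the cache now; memory is updated later,
   when the dirty block is evicted */
void write_hit_write_back(cache_line *line, uint32_t value) {
    line->data = value;
    line->dirty = true;
}
```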

Hardware Issues
Make reading multiple words easier by using banks of memory
[Figure: three memory organizations: (a) one-word-wide memory, (b) wide memory, with a multiplexor between the cache and the CPU, and (c) interleaved memory, with four memory banks sharing one bus]

10

Performance
Increasing the block size tends to decrease miss rate:
[Figure: miss rate (0% to 40%) versus block size (4 to 256 bytes) for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB, and 256 KB]
Use split caches because there is more spatial locality in code:


Program   Block size (words)   Instruction miss rate   Data miss rate   Effective combined miss rate
gcc       1                    6.1%                    2.1%             5.4%
gcc       4                    2.0%                    1.7%             1.9%
spice     1                    1.2%                    1.3%             1.2%
spice     4                    0.3%                    0.6%             0.4%

11

Performance
Simplified model:
- execution time = (execution cycles + stall cycles) × cycle time
- stall cycles = number of instructions × miss ratio × miss penalty
Two ways of improving performance:
- decreasing the miss ratio
- decreasing the miss penalty
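A quick worked illustration with assumed numbers (not from the slides): for 1,000,000 instructions, a 5% miss ratio, and a 40-cycle miss penalty, stall cycles = 1,000,000 × 0.05 × 40 = 2,000,000, i.e., the misses add 2 stall cycles per instruction on top of the base CPI.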

What happens if we increase block size?

12

Decreasing miss ratio with associativity


[Figure: possible associativity structures for an eight-block cache: one-way set associative (direct mapped, eight sets of one block), two-way set associative (four sets of two blocks), four-way set associative (two sets of four blocks), and eight-way set associative (fully associative, one set of eight blocks); each block holds a tag and data]

Compared to direct mapped, give a series of references that:
- results in a lower miss ratio using a 2-way set associative cache
- results in a higher miss ratio using a 2-way set associative cache
assuming we use the least recently used replacement strategy (one possible answer is sketched below).
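A minimal sketch of one possible answer (our own, assuming an eight-block cache with one-word blocks and block addresses used directly):

```c
#include <stdint.h>

uint32_t index_direct_mapped(uint32_t block_addr) { return block_addr % 8; }
uint32_t set_index_two_way(uint32_t block_addr)   { return block_addr % 4; }

/* Reference sequence 0, 8, 0, 8, ...: blocks 0 and 8 collide at index 0 of
 * the direct-mapped cache (every reference misses), but they fit together in
 * set 0 of the 2-way cache, so the 2-way cache has the LOWER miss ratio.
 *
 * Reference sequence 0, 4, 8, 0, 4, 8, ...: all three blocks map to set 0 of
 * the 2-way cache and keep evicting each other under LRU (every reference
 * misses), while the direct-mapped cache keeps block 4 at index 4, so the
 * 2-way cache has the HIGHER miss ratio.
 */
```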

13

An implementation
[Figure: a four-way set-associative cache with 256 sets (indices 0 through 255); the 32-bit address is split into a 22-bit tag, an 8-bit index, and the byte offset; the four tag/data ways are read in parallel, four comparators check the tags, and a 4-to-1 multiplexor selects the hit data]

14

Performance
[Figure: miss rate (0% to 15%) versus associativity (one-way, two-way, four-way, eight-way) for cache sizes from 1 KB to 128 KB]

15

Decreasing miss penalty with multilevel caches


Add a second-level cache:
- often the primary cache is on the same chip as the processor
- use SRAMs to add another cache above primary memory (DRAM)
- the miss penalty goes down if the data is in the 2nd-level cache
Example:
- CPI of 1.0 on a 500 MHz machine with a 5% miss rate and 200 ns DRAM access
- adding a 2nd-level cache with 20 ns access time decreases the miss rate to 2%
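One way to work this example (assuming a 2 ns cycle at 500 MHz, a base CPI of 1.0 with all hits, and that the remaining 2% of misses still pay the full main-memory penalty):
- miss penalty to main memory = 200 ns / 2 ns = 100 cycles, so with only the primary cache, CPI = 1.0 + 5% × 100 = 6.0
- miss penalty to the 2nd-level cache = 20 ns / 2 ns = 10 cycles, so with both caches, CPI = 1.0 + 5% × 10 + 2% × 100 = 3.5
- the machine with the 2nd-level cache is therefore about 6.0 / 3.5 ≈ 1.7 times faster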

Using multilevel caches:
- try and optimize the hit time on the 1st-level cache
- try and optimize the miss rate on the 2nd-level cache

16

Virtual Memory
Main memory can act as a cache for the secondary storage (disk)
[Figure: virtual addresses go through address translation to become physical addresses in main memory, or map to disk addresses on secondary storage]

Advantages:
- the illusion of having more physical memory
- program relocation
- protection

17

Pages: virtual memory blocks


Page faults: the data is not in memory, so retrieve it from disk
- huge miss penalty, thus pages should be fairly large (e.g., 4 KB)
- reducing page faults is important (LRU is worth the price)
- can handle the faults in software instead of hardware
- using write-through is too expensive, so we use write-back
[Figure: a 32-bit virtual address split into a virtual page number (bits 31..12) and a 12-bit page offset; translation replaces the virtual page number with a physical page number while the page offset is unchanged]
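A minimal sketch of translation with 4 KB pages (our own code; the flat one-level page_table, the pte type, and page_fault are hypothetical, not from the slides):

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     valid;           /* is the page present in physical memory? */
    uint32_t physical_page;   /* physical page number, if valid */
} pte;

extern pte  page_table[1 << 20];       /* one entry per 20-bit virtual page number */
extern void page_fault(uint32_t vpn);  /* handled in software: fetch the page from disk */

uint32_t translate(uint32_t virtual_addr) {
    uint32_t vpn    = virtual_addr >> 12;    /* virtual page number (bits 31..12) */
    uint32_t offset = virtual_addr & 0xFFF;  /* page offset (low 12 bits) */
    if (!page_table[vpn].valid)
        page_fault(vpn);                     /* page fault: huge miss penalty */
    return (page_table[vpn].physical_page << 12) | offset;
}
```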

18

Page Tables
[Figure: a page table indexed by virtual page number; each entry holds a valid bit and either a physical page number (valid, page in physical memory) or a disk address (invalid, page in disk storage)]

19

Page Tables
[Figure: address translation through the page table; the page table register points to the page table in memory, the 20-bit virtual page number indexes the table, the entry's valid bit is checked (if 0, the page is not present in memory), and the 18-bit physical page number from the entry is concatenated with the 12-bit page offset to form the physical address]

20

Making Address Translation Fast


A cache for address translations: translation lookaside buffer
[Figure: the TLB caches recently used translations; each TLB entry holds a valid bit, a tag, and a physical page address, and on a TLB miss the full page table (one valid bit plus a physical page number or disk address per virtual page) is consulted]

21

TLBs and caches


[Figure: flowchart for a memory reference with a TLB and cache: access the TLB with the virtual address; on a TLB miss, raise a TLB miss exception; on a TLB hit, form the physical address; for a read, try to read the data from the cache, stall on a cache miss, and deliver the data to the CPU on a hit; for a write, check the write access bit (raise a write protection exception if it is off), then write the data into the cache, update the tag, and put the data and the address into the write buffer]

22

Modern Systems
Very complicated memory systems:
Characteristic: Intel Pentium Pro | PowerPC 604
- Virtual address: 32 bits | 52 bits
- Physical address: 32 bits | 32 bits
- Page size: 4 KB, 4 MB | 4 KB, selectable, and 256 MB
- TLB organization:
  - Pentium Pro: a TLB for instructions and a TLB for data, both four-way set associative, pseudo-LRU replacement; instruction TLB: 32 entries, data TLB: 64 entries; TLB misses handled in hardware
  - PowerPC 604: a TLB for instructions and a TLB for data, both two-way set associative, LRU replacement; instruction TLB: 128 entries, data TLB: 128 entries; TLB misses handled in hardware

Characteristic: Intel Pentium Pro | PowerPC 604
- Cache organization: split instruction and data caches | split instruction and data caches
- Cache size: 8 KB each for instructions/data | 16 KB each for instructions/data
- Cache associativity: four-way set associative | four-way set associative
- Replacement: approximated LRU replacement | LRU replacement
- Block size: 32 bytes | 32 bytes
- Write policy: write-back | write-back or write-through

23

Some Issues
- Processor speeds continue to increase very fast, much faster than either DRAM or disk access times
- Design challenge: dealing with this growing disparity
- Trends:
  - synchronous SRAMs (provide a burst of data)
  - redesign DRAM chips to provide higher bandwidth or processing
  - restructure code to increase locality
  - use prefetching (make the cache visible to the ISA)

24

Chapters 8 & 9

(partial coverage)

25

Interfacing Processors and Peripherals


- I/O design is affected by many factors (expandability, resilience)
- Performance:
  - access latency
  - throughput
  - the connection between devices and the system
  - the memory hierarchy
  - the operating system

A variety of different users (e.g., banks, supercomputers, engineers)

26

[Figure: a typical I/O organization: a processor (with cache and interrupt lines) and main memory connected by a memory-I/O bus; I/O controllers on the bus attach two disks, a graphics output, and a network]

27

I/O
Important but neglected:
- the difficulties in assessing and designing I/O systems have often relegated I/O to second-class status
- courses in every aspect of computing, from programming to computer architecture, often ignore I/O or give it scanty coverage
- textbooks leave the subject to near the end, making it easier for students and instructors to skip it!
GUILTY!
- we won't be looking at I/O in much detail
- be sure to read Chapter 8 in its entirety
- you should probably take a networking class!

28

I/O Devices
Very diverse devices:
- behavior (i.e., input vs. output)
- partner (who is at the other end?)
- data rate

Device             Behavior          Partner   Data rate (KB/sec)
Keyboard           input             human     0.01
Mouse              input             human     0.02
Voice input        input             human     0.02
Scanner            input             human     400.00
Voice output       output            human     0.60
Line printer       output            human     1.00
Laser printer      output            human     200.00
Graphics display   output            human     60,000.00
Modem              input or output   machine   2.00-8.00
Network/LAN        input or output   machine   500.00-6000.00
Floppy disk        storage           machine   100.00
Optical disk       storage           machine   1000.00
Magnetic tape      storage           machine   2000.00
Magnetic disk      storage           machine   2000.00-10,000.00

29

I/O Example: Disk Drives


[Figure: a disk drive: a stack of platters, with each platter surface divided into concentric tracks and each track into sectors]

To access data:
- seek: position the head over the proper track (8 to 20 ms average)
- rotational latency: wait for the desired sector (on average half a rotation, 0.5 / RPM)
- transfer: grab the data (one or more sectors) at 2 to 15 MB/sec
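A quick worked example with an assumed rotation speed (not given on the slide): at 5400 RPM, the average rotational latency is 0.5 rotation / (5400 rotations per minute) = (0.5 / 5400) × 60 s ≈ 5.6 ms.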

30

I/O Example: Buses


- Shared communication link (one or more wires)
- Difficult design:
  - may be a bottleneck
  - length of the bus
  - number of devices
  - tradeoffs (buffering for higher bandwidth increases latency)
  - support for many different devices
  - cost
- Types of buses:
  - processor-memory (short, high speed, custom design)
  - backplane (high speed, often standardized, e.g., PCI)
  - I/O (lengthy, different devices, standardized, e.g., SCSI)
- Synchronous vs. asynchronous:
  - synchronous: use a clock and a synchronous protocol; fast and small, but every device must operate at the same rate, and clock skew requires the bus to be short
  - asynchronous: don't use a clock and instead use handshaking

31

Some Example Problems

[Figure: asynchronous handshaking for a read: the ReadReq, Data, Ack, and DataRdy signal lines, with the numbered steps of the protocol]

Let's look at some examples from the text:
- Performance Analysis of Synchronous vs. Asynchronous
- Performance Analysis of Two Bus Schemes

32

Other important issues


- Bus arbitration:
  - daisy chain arbitration (not very fair)
  - centralized arbitration (requires an arbiter), e.g., PCI
  - self selection, e.g., NuBus used in the Macintosh
  - collision detection, e.g., Ethernet
- Operating system: polling, interrupts, DMA
- Performance analysis techniques:
  - queuing theory
  - simulation
  - analysis, i.e., find the weakest link (see "I/O System Design")
- Many new developments

33

Multiprocessors
Idea: create powerful computers by connecting many smaller ones
- good news: it works for timesharing (better than a supercomputer); vector processing may be coming back
- bad news: it's really hard to write good concurrent programs; many commercial failures

[Figure: multiprocessor organizations: processors, each with its own cache, connected to memories and I/O either by a single shared bus or by a network]

34

Questions
How do parallel processors share data?
- single address space (SMP vs. NUMA)
- message passing

How do parallel processors coordinate?
- synchronization (locks, semaphores)
- built into send / receive primitives
- operating system protocols
How are they implemented?
- connected by a single bus
- connected by a network

35

Some Interesting Problems


Cache Coherency
[Figure: a bus-based multiprocessor with snooping caches: each processor's cache has a snoop tag alongside its cache tag and data, and all caches monitor the single bus shared with memory and I/O]

- Synchronization: provide special atomic instructions (test-and-set, swap, etc.); a sketch follows below
- Network topology
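A minimal sketch of a spin lock built on an atomic test-and-set, using C11 <stdatomic.h> (our own code, not from the slides; the acquire/release names are ours):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* atomically set the flag and get its old value; spin while it was
       already set, i.e., while another processor holds the lock */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy-wait */
}

void release(void) {
    atomic_flag_clear(&lock);   /* hand the lock back */
}
```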

36

Concluding Remarks
Evolution vs. revolution: more often, the expense of innovation comes from being too disruptive to computer users.
[Figure: ideas placed on a spectrum from evolutionary to revolutionary: microprogramming, pipelining, cache, virtual memory, RISC, timeshared multiprocessor, CC-UMA multiprocessor, CC-NUMA multiprocessor, not-CC-NUMA multiprocessor, message-passing multiprocessor, massive SIMD, and the parallel processing multiprocessor]

Acceptance of hardware ideas requires acceptance by software people; therefore hardware people should learn about software. And if software people want good machines, they must learn more about hardware to be able to communicate with and thereby influence hardware engineers.

37
