KTMTSS Shared Memory Multiprocessor

1) Shared memory multiprocessors allow symmetric access to main memory from any processor and dominate the server market. They are attractive for throughput and parallel programs due to fine-grained resource sharing and automatic data movement. 2) The key challenge is extending the memory hierarchy to support multiple processors accessing data through loads and stores while maintaining cache coherence. Private processor caches can hold stale copies of data, violating the expectation that reads return the most recent value. 3) Defining the "last write" in a parallel system is challenging because program order is defined only within each process, not across processes. The memory must impose a serial order on operations to each location that respects the individual program orders to provide a consistent view.


L7

Shared Memory Multiprocessors


Logical design and software interactions

Shared Memory Multiprocessors

Symmetric Multiprocessors (SMPs)


• Symmetric access to all of main memory from any processor
Dominate the server market
• Building blocks for larger systems; arriving on the desktop
Attractive as throughput servers and for parallel programs
• Fine-grain resource sharing
• Uniform access via loads/stores
• Automatic data movement and coherent replication in caches
• Useful for operating system too
Normal uniprocessor mechanisms to access data (reads and writes)
• Key is extension of memory hierarchy to support multiple processors

1
Supporting Programming Models

[Layers figure: programming models (message passing, shared address space, multiprogramming) sit above the communication abstraction; the user/system boundary separates them from compilation/library and operating systems support, and the hardware/software boundary separates those from the communication hardware and the physical communication medium.]

• Address translation and protection in hardware (hardware SAS)


• Message passing using shared memory buffers
– can be very high performance since no OS involvement necessary
• Focus here on supporting coherent shared address space
3

Natural Extensions of Memory System


[Figure: four natural extensions of the memory system —
(a) Shared cache: processors P1..Pn share a first-level cache through a switch, in front of interleaved main memory and I/O devices.
(b) Bus-based shared memory: each processor has a private cache; caches, main memory, and I/O devices sit on a shared bus.
(c) Dancehall: processors with private caches on one side of an interconnection network, memory modules on the other.
(d) Distributed memory: each node couples a processor and cache with local memory; nodes communicate over an interconnection network.]

2
Caches and Cache Coherence

Caches play key role in all cases


• Reduce average data access time
• Reduce bandwidth demands placed on shared interconnect
But private processor caches create a problem
• Copies of a variable can be present in multiple caches
• A write by one processor may not become visible to others
– They’ll keep accessing stale value in their caches
• Cache coherence problem
• Need to take actions to ensure visibility

Focus: Bus-based, Centralized Memory

Shared cache
• Low-latency sharing and prefetching across processors
• Sharing of working sets
• No coherence problem (and hence no false sharing either)
• But high bandwidth needs and negative interference (e.g. conflicts)
• Hit and miss latency increased due to intervening switch and cache size
• Mid 80s: used to connect a couple of processors on a board (Encore, Sequent)
• Today: for multiprocessor on a chip (for small-scale systems or nodes)
Dancehall
• No longer popular: everything is uniformly far away
Distributed memory
• Most popular way to build scalable systems, discussed later
6

3
Outline

Coherence and Consistency

Snooping Cache Coherence Protocols

Quantitative Evaluation of Cache Coherence Protocols

Synchronization

Implications for Parallel Software

A Coherent Memory System: Intuition

Reading a location should return latest value written (by any process)

Easy in uniprocessors
• Except for I/O: coherence between I/O devices and processors
• But infrequent so software solutions work
– uncacheable memory, uncacheable operations, flush pages, pass I/O data
through caches
Would like same to hold when processes run on different processors
• E.g. as if the processes were interleaved on a uniprocessor

But coherence problem much more critical in multiprocessors


• Pervasive
• Performance-critical
• Must be treated as a basic hardware design issue
8

4
Example Cache Coherence Problem
[Figure: processors P1, P2, P3 with private caches on a shared bus with memory and I/O devices. Events: (1) P1 reads u from memory and caches the value 5; (2) P3 reads u and caches 5; (3) P3 writes u = 7 in its own cache; (4) P1 reads u; (5) P2 reads u — both may still see the stale value 5.]

• Processors see different values for u after event 3


• With write back caches, value written back to memory depends on
happenstance of which cache flushes or writes back value when
– Processes accessing main memory may see very stale value
• Unacceptable to programs, and frequent!
9
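To make the stale-copy scenario concrete, here is a minimal C sketch that models the example above with one shared memory word and per-processor write-back caches; the Cache struct and function names are illustrative only, not part of the original slides.

/* Minimal model of the stale-copy problem with private write-back
 * caches and no coherence actions: a single shared word "u" and one
 * cache line per processor.                                          */
#include <stdio.h>
#include <stdbool.h>

static int memory_u = 5;                 /* main memory copy of u */

typedef struct { bool valid; bool dirty; int value; } Cache;

static int cache_read(Cache *c) {
    if (!c->valid) {                     /* miss: fetch from memory */
        c->value = memory_u;
        c->valid = true;
        c->dirty = false;
    }
    return c->value;                     /* hit: no bus traffic     */
}

static void cache_write(Cache *c, int v) {
    c->value = v;                        /* write-back: update cache only */
    c->valid = true;
    c->dirty = true;                     /* memory is now stale           */
}

int main(void) {
    Cache p1 = {0}, p3 = {0};
    printf("P1 reads u: %d\n", cache_read(&p1));          /* 5 */
    printf("P3 reads u: %d\n", cache_read(&p3));          /* 5 */
    cache_write(&p3, 7);                                  /* u = 7 only in P3's cache */
    printf("P1 reads u: %d (stale!)\n", cache_read(&p1)); /* still 5 */
    printf("memory u:   %d (also stale)\n", memory_u);
    return 0;
}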

Problems with the Intuition

Recall: Value returned by read should be last value written


But “last” is not well-defined
Even in seq. case, last defined in terms of program order, not time
• Order of operations in the machine language presented to processor
• “Subsequent” defined in analogous way, and well defined
In parallel case, program order defined within a process, but need to
make sense of orders across processes
Must define a meaningful semantics

10

5
Some Basic Definitions
Extend from definitions in uniprocessors to those in multiprocessors
Memory operation: a single read (load), write (store) or read-modify-
write access to a memory location
• Assumed to execute atomically w.r.t each other
Issue: a memory operation issues when it leaves processor’s internal
environment and is presented to memory system (cache, buffer …)
Perform: operation appears to have taken place, as far as processor
can tell from other memory operations it issues
• A write performs w.r.t. the processor when a subsequent read by the
processor returns the value of that write or a later write
• A read performs w.r.t. the processor when subsequent writes issued by
the processor cannot affect the value returned by the read
In multiprocessors, the definitions stay the same, but replace "the" by "a" processor
• Also, complete: perform with respect to all processors
• Still need to make sense of order in operations from different processes
11

Sharpening the Intuition


Imagine a single shared memory and no caches
• Every read and write to a location accesses the same physical location
• Operation completes when it does so
Memory imposes a serial or total order on operations to the location
• Operations to the location from a given processor are in program order
• The order of operations to the location from different processors is
some interleaving that preserves the individual program orders
“Last” now means most recent in a hypothetical serial order that
maintains these properties
For the serial order to be consistent, all processors must see writes to
the location in the same order (if they bother to look, i.e. to read)
Note that the total order is never really constructed in real systems
• Don’t even want memory, or any hardware, to see all operations
But program should behave as if some serial order is enforced
• Order in which things appear to happen, not actually happen
12

6
Formal Definition of Coherence
Results of a program: values returned by its read operations
A memory system is coherent if the results of any execution of a program
are such that, for each location, it is possible to construct a hypothetical serial
order of all operations to the location that is consistent with the results of
the execution and in which:
1. operations issued by any particular process occur in the order issued by
that process, and
2. the value returned by a read is the value written by the last write to that
location in the serial order
Two necessary features:
• Write propagation: value written must become visible to others
• Write serialization: writes to location seen in same order by all
– if I see w1 after w2, you should not see w1 before w2
– no need for analogous read serialization since reads not visible to others

13

Cache Coherence Using a Bus

Built on top of two fundamentals of uniprocessor systems


• Bus transactions
• State transition diagram in cache
Uniprocessor bus transaction:
• Three phases: arbitration, command/address, data transfer
• All devices observe addresses, one is responsible
Uniprocessor cache states:
• Effectively, every block is a finite state machine
• Write-through, write no-allocate has two states: valid, invalid
• Writeback caches have one more state: modified (“dirty”)

Multiprocessors extend both these somewhat to implement coherence

14

7
Snooping-based Coherence
Basic Idea
Transactions on bus are visible to all processors
Processors or their representatives can snoop (monitor) bus and take
action on relevant events (e.g. change state)
Implementing a Protocol
Cache controller now receives inputs from both sides:
• Requests from processor, bus requests/responses from snooper
In either case, takes zero or more actions
• Updates state, responds with data, generates new bus transactions
Protocol is distributed algorithm: cooperating state machines
• Set of states, state transition diagram, actions
Granularity of coherence is typically cache block
• Like that of allocation in cache and transfer to/from cache
15

Coherence with Write-through Caches

[Figure: processors P1..Pn, each with a private cache and a bus snooper, connected by a shared bus to main memory and I/O devices; cache-memory transactions on the bus are visible to every snooper.]

• Key extensions to uniprocessor: snooping, invalidating/updating caches


– no new states or bus transactions in this case
– invalidation- versus update-based protocols
• Write propagation: even in inval case, later reads will see new value
– inval causes miss on later access, and memory up-to-date via write-through
16

8
Write-through State Transition Diagram
[State transition diagram, two states per block, V (valid) and I (invalid):
– V: PrRd/— and PrWr/BusWr stay in V; a snooped BusWr/— moves V -> I
– I: PrRd/BusRd moves I -> V; PrWr/BusWr stays in I (write-through, write no-allocate)
Legend: PrRd/PrWr edges are processor-initiated transactions; BusWr edges are bus-snooper-initiated transactions.]

• Two states per block in each cache, as in uniprocessor


– state of a block can be seen as p-vector
• Hardware state bits associated with only blocks that are in the cache
– other blocks can be seen as being in invalid (not-present) state in that cache
• Write will invalidate all other caches (no local change of state)
– can have multiple simultaneous readers of a block, but a write invalidates them
17
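A hedged sketch of the per-block controller this diagram describes, written in plain C; BlockState, proc_read, proc_write, and snoop_bus_write are made-up names, and the printed strings stand in for real bus transactions.

/* Sketch of the per-block controller for the two-state write-through
 * invalidation protocol.  A real controller keeps one such state per
 * cache block and drives actual bus signals.                          */
#include <stdio.h>

typedef enum { INVALID, VALID } BlockState;

/* Processor-side read: a miss fetches the block with BusRd. */
static BlockState proc_read(BlockState s) {
    if (s == INVALID) { puts("BusRd (read miss)"); }
    return VALID;
}

/* Processor-side write: always goes to the bus (write-through);
 * with no-allocate the state is unchanged.                      */
static BlockState proc_write(BlockState s) {
    puts("BusWr (write-through)");
    return s;
}

/* Snooper side: another processor's BusWr invalidates our copy. */
static BlockState snoop_bus_write(BlockState s) {
    (void)s;
    return INVALID;
}

int main(void) {
    BlockState p1 = INVALID;
    p1 = proc_read(p1);        /* BusRd, now VALID                 */
    p1 = proc_read(p1);        /* hit, no bus traffic              */
    p1 = snoop_bus_write(p1);  /* another processor wrote: INVALID */
    p1 = proc_read(p1);        /* miss again, sees the new value   */
    p1 = proc_write(p1);       /* BusWr                            */
    printf("final state: %s\n", p1 == VALID ? "VALID" : "INVALID");
    return 0;
}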

Is it Coherent?

Construct total order that satisfies program order, write serialization?


Assume atomic bus transactions and memory operations for now
• all phases of one bus transaction complete before next one starts
• processor waits for memory operation to complete before issuing next
• with one-level cache, assume invalidations applied during bus xaction
• (we’ll relax these assumptions in more complex systems later)

All writes go to bus + atomicity


• Writes serialized by order in which they appear on bus (bus order)
• Per above assumptions, invalidations applied to caches in bus order

How to insert reads in this order?


• Important since processors see writes through reads, so determines
whether write serialization is satisfied
• But read hits may happen independently and do not appear on bus or
enter directly in bus order
18

9
Ordering Reads

Read misses: appear on bus, and will see last write in bus order
Read hits: do not appear on bus
• But value read was placed in cache by either
– most recent write by this processor, or
– most recent read miss by this processor
• Both these transactions appear on the bus
• So read hits also see values as being produced in consistent bus order

19

Determining Orders More Generally


A memory operation M2 is subsequent to a memory operation M1 if the operations
are issued by the same processor and M2 follows M1 in program order.
Read is subsequent to write W if read generates bus xaction that follows that for W.
Write is subsequent to read or write M if M generates bus xaction and the xaction
for the write follows that for M.
Write is subsequent to read if read does not generate a bus xaction and is not already
separated from the write by another bus xaction.
P0: R R R W R R

P1 : R R R R R W

P2 : R R R R R R

•Writes establish a partial order


•Doesn’t constrain ordering of reads, though bus will order read misses too
–any order among reads between writes is fine, as long as in program order
20

10
Problem with Write-Through

High bandwidth requirements


• Every write from every processor goes to shared bus and memory
• Consider 200MHz, 1CPI processor, and 15% instrs. are 8-byte stores
• Each processor generates 30M stores or 240MB data per second
• 1GB/s bus can support only about 4 processors without saturating
• Write-through especially unpopular for SMPs
Write-back caches absorb most writes as cache hits
• Write hits don’t go on bus
• But now how do we ensure write propagation and serialization?
• Need more sophisticated protocols: large design space
But first, let’s understand other ordering issues

21
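The slide's back-of-the-envelope numbers can be checked with a few lines of C (parameters taken straight from the bullets above):

/* The slide's bandwidth calculation for write-through. */
#include <stdio.h>

int main(void) {
    double clock_hz    = 200e6;   /* 200 MHz, 1 CPI => 200M instrs/s */
    double store_frac  = 0.15;    /* 15% of instructions are stores  */
    double store_bytes = 8.0;     /* 8-byte stores                   */
    double bus_bw      = 1e9;     /* 1 GB/s shared bus               */

    double stores_per_s = clock_hz * store_frac;       /* 30M stores/s */
    double bytes_per_s  = stores_per_s * store_bytes;  /* 240 MB/s     */
    double max_procs    = bus_bw / bytes_per_s;        /* about 4      */

    printf("%.0fM stores/s, %.0f MB/s per processor, ~%.1f processors per bus\n",
           stores_per_s / 1e6, bytes_per_s / 1e6, max_procs);
    return 0;
}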

Memory Consistency
Writes to a location become visible to all in the same order
But when does a write become visible?
• How to establish orders between a write and a read by different
procs?
– Typically use event synchronization, by using more than
one location
P1 P2
/*Assume initial values of A and flag are 0*/
A = 1; while (flag == 0); /*spin idly*/
flag = 1; print A;
• Intuition not guaranteed by coherence
• Sometimes expect memory to respect order between accesses to
different locations issued by a given process
– to preserve orders among accesses to same location by different processes
• Coherence doesn’t help: pertains only to single location
22
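For reference, the same flag idiom written with C11 atomics; this is a modern restatement (the slides predate C11), and it assumes a toolchain that provides <stdatomic.h> and the optional <threads.h>. Sequentially consistent atomics give the ordering the code intuitively expects.

/* The flag idiom from the slide, restated with C11 atomics so the
 * ordering is guaranteed by the language rather than assumed.       */
#include <stdio.h>
#include <stdatomic.h>
#include <threads.h>

atomic_int A    = 0;
atomic_int flag = 0;

int producer(void *arg) {
    (void)arg;
    atomic_store(&A, 1);        /* A = 1;    */
    atomic_store(&flag, 1);     /* flag = 1; */
    return 0;
}

int consumer(void *arg) {
    (void)arg;
    while (atomic_load(&flag) == 0)
        ;                       /* spin idly */
    printf("A = %d\n", atomic_load(&A));   /* must print 1 */
    return 0;
}

int main(void) {
    thrd_t t1, t2;
    thrd_create(&t1, producer, NULL);
    thrd_create(&t2, consumer, NULL);
    thrd_join(t1, NULL);
    thrd_join(t2, NULL);
    return 0;
}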

11
Another Example of Orders

P1 P2
/*Assume initial values of A and B are 0*/
(1a) A = 1; (2a) print B;
(1b) B = 2; (2b) print A;

• What’s the intuition?


• Whatever it is, we need an ordering model for clear semantics
– across different locations as well
– so programmers can reason about what results are possible
• This is the memory consistency model

23

Memory Consistency Model

Specifies constraints on the order in which memory operations (from


any process) can appear to execute with respect to one another
• What orders are preserved?
• Given a load, constrains the possible values returned by it

Without it, can’t tell much about an SAS program’s execution


Implications for both programmer and system designer
• Programmer uses to reason about correctness and possible results
• System designer can use to constrain how much accesses can be
reordered by compiler or hardware
Contract between programmer and system

24

12
Sequential Consistency
[Figure: the sequential consistency abstraction — processors P1..Pn issue memory references in program order; a conceptual "switch" connecting them to a single shared memory is randomly set after each memory reference, picking which processor's next reference the memory services.]
• (as if there were no caches, and a single memory)


• Total order achieved by interleaving accesses from different processes
• Maintains program order, and memory operations from all processes
appear to [issue, execute, complete] atomically w.r.t. others
• Programmer’s intuition is maintained
“A multiprocessor is sequentially consistent if the result of any execution is
the same as if the operations of all the processors were executed in some
sequential order, and the operations of each individual processor appear in
this sequence in the order specified by its program.” [Lamport, 1979]
25

What Really is Program Order?

Intuitively, order in which operations appear in source code


• Straightforward translation of source code to assembly
• At most one memory operation per instruction
But not the same as order presented to hardware by compiler
So which is program order?
Depends on which layer, and who’s doing the reasoning
We assume order as seen by programmer

26

13
SC Example
What matters is the order in which operations appear to execute, not the order in which they actually execute
P1 P2
/*Assume initial values of A and B are 0*/
(1a) A = 1; (2a) print B;
(1b) B = 2; (2b) print A;

– possible outcomes for (A,B): (0,0), (1,0), (1,2); impossible under SC: (0,2)
– we know 1a->1b and 2a->2b by program order
– A = 0 implies 2b->1a, which implies 2a->1b
– B = 2 implies 1b->2a, which leads to a contradiction

– BUT, actual execution 1b->1a->2b->2a is SC, despite not program order


• appears just like 1a->1b->2a->2b as visible from results
– actual execution 1b->2a->2b->1a is not SC

27
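A small brute-force check of this example: the C sketch below enumerates every interleaving of P1's two writes with P2's two prints that respects the two program orders and records which (A,B) pairs can be printed; it confirms (0,2) never appears.

/* Enumerate all interleavings of P1: {A=1; B=2} and P2: {print B; print A}
 * that preserve each process's program order.  There are 6 of them.      */
#include <stdio.h>

int main(void) {
    int possible[2][2] = {{0, 0}, {0, 0}};     /* [printed A][printed B == 2] */
    for (int mask = 0; mask < 16; mask++) {
        int bits = 0;
        for (int i = 0; i < 4; i++) bits += (mask >> i) & 1;
        if (bits != 2) continue;               /* exactly two slots belong to P1 */
        int A = 0, B = 0, printedA = 0, printedB = 0;
        int p1 = 0, p2 = 0;                    /* next op index per process      */
        for (int slot = 0; slot < 4; slot++) {
            if (mask & (1 << slot)) {          /* P1: (1a) A=1 then (1b) B=2     */
                if (p1++ == 0) A = 1; else B = 2;
            } else {                           /* P2: (2a) print B then (2b) print A */
                if (p2++ == 0) printedB = B; else printedA = A;
            }
        }
        possible[printedA][printedB == 2] = 1;
        printf("interleaving %2d: prints (A,B) = (%d,%d)\n", mask, printedA, printedB);
    }
    printf("(A,B) = (0,2) reachable under SC? %s\n", possible[0][1] ? "yes" : "no");
    return 0;
}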

Implementing SC

Two kinds of requirements


• Program order
– memory operations issued by a process must appear to become visible (to
others and itself) in program order
• Atomicity
– in the overall total order, one memory operation should appear to complete
with respect to all processes before the next one is issued
– needed to guarantee that total order is consistent across processes
– tricky part is making writes atomic

28

14
Write Atomicity
Write Atomicity: Position in total order at which a write appears to
perform should be the same for all processes
• Nothing a process does after it has seen the new value produced by a
write W should be visible to other processes until they too have seen W
• In effect, extends write serialization to writes from multiple processes

P1:  A=1;
P2:  while (A==0);  B=1;
P3:  while (B==0);  print A;

•Transitivity implies A should print as 1 under SC


•Problem if P2 leaves loop, writes B, and P3 sees new B but old
A (from its cache, say)
29

More Formally

Each process’s program order imposes partial order on set of all operations
Interleaving of these partial orders defines a total order on all operations
Many total orders may be SC (SC does not define particular interleaving)

SC Execution: An execution of a program is SC if the results it produces are


the same as those produced by some possible total order (interleaving)
SC System: A system is SC if any possible execution on that system is an SC
execution

30

15
Sufficient Conditions for SC

• Every process issues memory operations in program order


• After a write operation is issued, the issuing process waits for the write
to complete before issuing its next operation
• After a read operation is issued, the issuing process waits for the read to
complete, and for the write whose value is being returned by the read to
complete, before issuing its next operation (provides write atomicity)
Sufficient, not necessary, conditions
Clearly, compilers should not reorder for SC, but they do!
• Loop transformations, register allocation (eliminates!)
Even if issued in order, hardware may violate for better performance
• Write buffers, out of order execution
Reason: uniprocessors care only about dependences to same location
• Makes the sufficient conditions very restrictive for performance
31

Our Treatment of Ordering

Assume for now that compiler does not reorder


Hardware needs mechanisms to:
• Detect write completion (read completion is easy)
• Ensure write atomicity
For all protocols and implementations, we will see
• How they satisfy coherence, particularly write serialization
• How they satisfy sufficient conditions for SC (write completion and
write atomicity)
• How they can ensure SC but not through sufficient conditions
Will see that centralized bus interconnect makes it easier

32

16
SC in Write-through Example

Provides SC, not just coherence

Extend arguments used for coherence


• Writes and read misses to all locations serialized by bus into bus order
• If read obtains value of write W, W guaranteed to have completed
– since it caused a bus transaction
• When write W is performed w.r.t. any processor, all previous writes in
bus order have completed

33

Design Space for Snooping Protocols

No need to change processor, main memory, cache …


• Extend cache controller and exploit bus (provides serialization)
Focus on protocols for write-back caches
Dirty state now also indicates exclusive ownership
• Exclusive: only cache with a valid copy (main memory may be too)
• Owner: responsible for supplying block upon a request for it
Design space
• Invalidation versus Update-based protocols
• Set of states

34

17
Invalidation-based Protocols

Exclusive means can modify without notifying anyone else


• i.e. without bus transaction
• Must first get block in exclusive state before writing into it
• Even if already in valid state, need transaction, so called a write miss
Store to non-dirty data generates a read-exclusive bus transaction
• Tells others about impending write, obtains exclusive ownership
– makes the write visible, i.e. write is performed
– may be actually observed (by a read miss) only later
– write hit made visible (performed) when block updated in writer’s cache
• Only one RdX can succeed at a time for a block: serialized by bus
Read and Read-exclusive bus transactions drive coherence actions
• Writeback transactions also, but not caused by memory operation and
quite incidental to coherence protocol
– note: replaced block that is not in modified state can be dropped
35

Update-based Protocols

A write operation updates values in other caches


• New, update bus transaction
Advantages
• Other processors don’t miss on next access: reduced latency
– In invalidation protocols, they would miss and cause more transactions
• Single bus transaction to update several caches can save bandwidth
– Also, only the word written is transferred, not whole block
Disadvantages
• Multiple writes by same processor cause multiple update transactions
– In invalidation, first write gets exclusive ownership, others local
Detailed tradeoffs more complex

36

18
Invalidate versus Update

Basic question of program behavior


• Is a block written by one processor read by others before it is rewritten?
Invalidation:
• Yes => readers will take a miss
• No => multiple writes without additional traffic
– and clears out copies that won’t be used again
Update:
• Yes => readers will not miss if they had a copy previously
– single bus transaction to update all copies
• No => multiple useless updates, even to dead copies
Need to look at program behavior and hardware complexity
Invalidation protocols much more popular (more later)
• Some systems provide both, or even hybrid
37

Basic MSI Writeback Inval Protocol

States
• Invalid (I)
• Shared (S): one or more
• Dirty or Modified (M): one only
Processor Events:
• PrRd (read)
• PrWr (write)
Bus Transactions
• BusRd: asks for copy with no intent to modify
• BusRdX: asks for copy with intent to modify
• BusWB: updates memory
Actions
• Update state, perform bus transaction, flush value onto bus
38

19
State Transition Diagram
[State transition diagram, states M, S, I:
– M: PrRd/— and PrWr/— (hits); a snooped BusRd/Flush moves M -> S; a snooped BusRdX/Flush moves M -> I
– S: PrRd/— and a snooped BusRd/— stay in S; PrWr/BusRdX moves S -> M; a snooped BusRdX/— moves S -> I
– I: PrRd/BusRd moves I -> S; PrWr/BusRdX moves I -> M]

• Write to shared block:


– Already have latest data; can use upgrade (BusUpgr) instead of BusRdX
• Replacement changes state of two blocks: outgoing and incoming

39
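A minimal C sketch of the MSI transitions in the diagram above, one function per input; the state names, function names, and printed transaction strings are illustrative, not a real controller implementation.

/* Sketch of the per-block MSI write-back controller. */
#include <stdio.h>

typedef enum { MSI_I, MSI_S, MSI_M } MsiState;

static MsiState msi_proc_read(MsiState s) {
    if (s == MSI_I) { puts("BusRd"); return MSI_S; }   /* read miss        */
    return s;                                          /* S or M: hit      */
}

static MsiState msi_proc_write(MsiState s) {
    if (s != MSI_M) { puts("BusRdX"); }                /* obtain ownership */
    return MSI_M;
}

/* Snooper side: another cache issued BusRd / BusRdX for this block. */
static MsiState msi_snoop_busrd(MsiState s) {
    if (s == MSI_M) { puts("Flush (supply dirty data)"); return MSI_S; }
    return s;
}

static MsiState msi_snoop_busrdx(MsiState s) {
    if (s == MSI_M) puts("Flush (supply dirty data)");
    return MSI_I;                                      /* invalidate copy  */
}

int main(void) {
    MsiState st = MSI_I;
    st = msi_proc_read(st);     /* I -> S via BusRd             */
    st = msi_proc_write(st);    /* S -> M via BusRdX            */
    st = msi_snoop_busrd(st);   /* M -> S, flush to requester   */
    st = msi_snoop_busrdx(st);  /* S -> I                       */
    printf("final state: %d (0=I, 1=S, 2=M)\n", st);
    return 0;
}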

Satisfying Coherence

Write propagation is clear


Write serialization?
• All writes that appear on the bus (BusRdX) ordered by the bus
– Write performed in writer’s cache before it handles other transactions, so
ordered in same way even w.r.t. writer
• Reads that appear on the bus ordered wrt these
• Writes that don’t appear on the bus:
– sequence of such writes between two bus xactions for the block must come
from same processor, say P
– in serialization, the sequence appears between these two bus xactions
– reads by P will see them in this order w.r.t. other bus transactions
– reads by other processors separated from sequence by a bus xaction, which
places them in the serialized order w.r.t the writes
– so reads by all processors see writes in same order
40

20
Satisfying Sequential Consistency
1. Appeal to definition:
• Bus imposes total order on bus xactions for all locations
• Between xactions, procs perform reads/writes locally in program order
• So any execution defines a natural partial order
– Mj subsequent to Mi if (i) Mj follows Mi in program order on the same processor, (ii)
Mj generates bus xaction that follows the memory operation for Mi
• In segment between two bus transactions, any interleaving of ops from
different processors leads to consistent total order
• In such a segment, writes observed by processor P serialized as follows
– Writes from other processors by the previous bus xaction P issued
– Writes from P by program order
2. Show sufficient conditions are satisfied
• Write completion: can detect when write appears on bus
• Write atomicity: if a read returns the value of a write, that write has
already become visible to all others (can reason through the different cases)
41

Lower-level Protocol Choices

BusRd observed in M state: what transition to make?


Depends on expectations of access patterns
• S: assumption that I’ll read again soon, rather than that the other will write
– good for mostly read data
– what about “migratory” data
• I read and write, then you read and write, then X reads and writes...
• better to go to I state, so I don’t have to be invalidated on your write

• Synapse transitioned to I state


• Sequent Symmetry and MIT Alewife use adaptive protocols
Choices can affect performance of memory system (later)

42

21
MESI (4-state) Invalidation Protocol
Problem with MSI protocol
• Reading and modifying data is 2 bus xactions, even if no one is sharing
– e.g. even in sequential program
– BusRd (I->S) followed by BusRdX or BusUpgr (S->M)
Add exclusive state: write locally without xaction, but not modified
• Main memory is up to date, so cache not necessarily owner
• States
– invalid
– exclusive or exclusive-clean (only this cache has copy, but not modified)
– shared (two or more caches may have copies)
– modified (dirty)
• I -> E on PrRd if no one else has a copy
– needs “shared” signal on bus: wired-or line asserted in response to BusRd

43

MESI State Transition Diagram


[State transition diagram, states M, E, S, I:
– M: PrRd/— and PrWr/— (hits); a snooped BusRd/Flush moves M -> S; a snooped BusRdX/Flush moves M -> I
– E: PrRd/— (hit); PrWr/— moves E -> M silently; a snooped BusRd/Flush' moves E -> S; a snooped BusRdX/Flush' moves E -> I
– S: PrRd/— (hit); PrWr/BusRdX moves S -> M; a snooped BusRd/Flush' stays in S; a snooped BusRdX/Flush' moves S -> I
– I: PrRd/BusRd goes to S if the shared line is asserted (BusRd(S)) and to E if it is not; PrWr/BusRdX moves I -> M]

• BusRd(S) means shared line asserted on BusRd transaction


• Flush’: if cache-to-cache sharing (see next), only one cache flushes data
• MOESI protocol: Owned state: exclusive but memory not valid

44
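The MESI additions can be sketched the same way; the point of the example is the I -> E transition when the wired-OR shared line is not asserted and the silent E -> M upgrade on a write hit. The names and the shared_line flag are assumptions for illustration.

/* Sketch of the MESI additions on top of MSI. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } MesiState;

static MesiState mesi_proc_read(MesiState s, bool shared_line) {
    if (s == INVALID) {
        puts("BusRd");
        return shared_line ? SHARED : EXCLUSIVE;  /* BusRd(S) vs BusRd(not S) */
    }
    return s;                                     /* hit in S/E/M             */
}

static MesiState mesi_proc_write(MesiState s) {
    switch (s) {
    case MODIFIED:  return MODIFIED;              /* hit, nothing to do        */
    case EXCLUSIVE: return MODIFIED;              /* silent upgrade            */
    default:        puts("BusRdX (or BusUpgr)");  /* S or I: need ownership    */
                    return MODIFIED;
    }
}

int main(void) {
    /* the sequential-program case that motivated the E state: */
    MesiState st = INVALID;
    st = mesi_proc_read(st, /*shared_line=*/false);  /* BusRd, -> E            */
    st = mesi_proc_write(st);                        /* E -> M, no bus xaction */
    printf("read-then-write took 1 bus transaction, state=%d (3=M)\n", st);
    return 0;
}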

22
Lower-level Protocol Choices

Who supplies data on miss when not in M state: memory or cache


Original (Illinois) MESI: cache, since assumed faster than memory
• Cache-to-cache sharing
Not true in modern systems
• Intervening in another cache more expensive than getting from memory
Cache-to-cache sharing also adds complexity
• How does memory know it should supply data (must wait for caches)
• Selection algorithm if multiple caches have valid data
But valuable for cache-coherent machines with distributed memory
• May be cheaper to obtain from nearby cache than distant memory
• Especially when constructed out of SMP nodes (Stanford DASH)

45

Dragon Write-back Update Protocol

4 states
• Exclusive-clean or exclusive (E): I and memory have it
• Shared clean (Sc): I, others, and maybe memory, but I’m not owner
• Shared modified (Sm): I and others but not memory, and I’m the owner
– Sm and Sc can coexist in different caches, with only one Sm
• Modified or dirty (M): I have it and no one else does (memory is stale)
No invalid state
• If in cache, cannot be invalid
• If not present in cache, can view as being in not-present or invalid state
New processor events: PrRdMiss, PrWrMiss
• Introduced to specify actions when block not present in cache
New bus transaction: BusUpd
• Broadcasts single word written on bus; updates other relevant caches

46

23
Dragon State Transition Diagram
[State transition diagram, states E, Sc, Sm, M (no invalid state):
– Block not present: PrRdMiss/BusRd(S) goes to Sc if the shared line is asserted, to E if not; PrWrMiss/(BusRd(S); BusUpd) goes to Sm if shared, PrWrMiss/BusRd(S) to M if not
– E: PrRd/— (hit); PrWr/— moves E -> M; a snooped BusRd/— moves E -> Sc
– Sc: PrRd/— and a snooped BusRd/— (no action); a snooped BusUpd/Update applies the remote write; PrWr/BusUpd(S) moves Sc -> Sm if shared, Sc -> M if not
– Sm: PrRd/— (hit); a snooped BusRd/Flush (owner supplies the block); a snooped BusUpd/Update moves Sm -> Sc; PrWr/BusUpd(S) stays in Sm if shared, goes to M if not
– M: PrRd/— and PrWr/— (hits); a snooped BusRd/Flush moves M -> Sm]
47
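A compact sketch of just the write path in Dragon, following the diagram above: writes in Sc or Sm broadcast a BusUpd, and the shared line observed during that transaction decides between Sm and M. As before, the C names are illustrative only.

/* Sketch of the Dragon write path: updates instead of invalidations. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { DR_E, DR_SC, DR_SM, DR_M } DragonState;   /* no invalid state */

static DragonState dragon_proc_write(DragonState s, bool shared_line) {
    switch (s) {
    case DR_M:  return DR_M;                  /* PrWr/-: already sole owner      */
    case DR_E:  return DR_M;                  /* PrWr/-: silent upgrade          */
    case DR_SC:                               /* fall through                     */
    case DR_SM:
        puts("BusUpd (broadcast written word)");
        return shared_line ? DR_SM : DR_M;    /* PrWr/BusUpd(S)                  */
    }
    return s;
}

int main(void) {
    DragonState st = DR_SC;                   /* block currently shared          */
    st = dragon_proc_write(st, true);         /* BusUpd, become owner in Sm      */
    st = dragon_proc_write(st, false);        /* last sharer gone: BusUpd, -> M  */
    printf("final state: %d (3 = M)\n", st);
    return 0;
}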

Lower-level Protocol Choices

Can shared-modified state be eliminated?


• If update memory as well on BusUpd transactions (DEC Firefly)
• Dragon protocol doesn’t (assumes DRAM memory slow to update)
Should replacement of an Sc block be broadcast?
• Would allow last copy to go to E state and not generate updates
• Replacement bus transaction is not in critical path, later update may be
Shouldn’t update local copy on write hit before controller gets bus
• Can mess up serialization
Coherence, consistency considerations much like write-through case

In general, many subtle race conditions in protocols


But first, let’s illustrate quantitative assessment at logical level
48

24
Assessing Protocol Tradeoffs

Tradeoffs affected by performance and organization characteristics


Decisions affect pressure placed on these
Part art and part science
• Art: experience, intuition and aesthetics of designers
• Science: Workload-driven evaluation for cost-performance
– want a balanced system: no expensive resource heavily underutilized
Methodology:
• Use simulator; choose parameters per earlier methodology (default
1MB, 4-way cache, 64-byte block, 16 processors; 64K cache for some)
• Focus on frequencies, not end performance for now
– transcends architectural details, but not what we’re really after
• Use idealized memory performance model to avoid changes of
reference interleaving across processors with machine parameters
– Cheap simulation: no need to model contention
49

Impact of Protocol Optimizations


(Computing traffic from state transitions discussed in book)
Effect of E state, and of BusUpgr instead of BusRdX
[Bar charts: address-bus and data-bus traffic (MB/s) for the parallel applications (Barnes, LU, Ocean, Radiosity, Radix, Raytrace) and for the multiprogrammed Appl-Code/Appl-Data and OS-Code/OS-Data workloads, each run under the Illinois MESI protocol (Ill), the three-state MSI protocol with upgrades (3St), and the three-state protocol using read-exclusive transactions instead of upgrades (3St-RdEx).]
• MSI versus MESI doesn’t seem to matter for bandwidth for these workloads
• Upgrades instead of read-exclusive helps
• Same story when working sets don’t fit for Ocean, Radix, Raytrace
50
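The traffic numbers behind charts like this come from weighting state-transition frequencies by the bytes each bus transaction carries. The sketch below shows the bookkeeping; the per-transaction byte costs and the sample frequencies are illustrative assumptions, not the inputs used for the figure.

/* Traffic from state-transition frequencies: each transition maps to a
 * bus transaction (or none), each transaction to address and data bytes. */
#include <stdio.h>

typedef struct {
    const char *transition;     /* e.g. "I->S (BusRd)"         */
    double per_1000_instr;      /* how often it occurs          */
    int addr_bytes;             /* command + address overhead   */
    int data_bytes;             /* block transferred, if any    */
} Transition;

int main(void) {
    const int block = 64;                            /* assumed block size */
    Transition t[] = {
        { "I->S  (BusRd)",           8.0, 6, block },
        { "I->M  (BusRdX)",          2.0, 6, block },
        { "S->M  (BusUpgr)",         3.0, 6, 0     },   /* no data transfer */
        { "M->.. (writeback BusWB)", 4.0, 6, block },
    };
    double addr = 0, data = 0;
    for (size_t i = 0; i < sizeof t / sizeof t[0]; i++) {
        addr += t[i].per_1000_instr * t[i].addr_bytes;
        data += t[i].per_1000_instr * t[i].data_bytes;
    }
    printf("address traffic: %.2f bytes / 1000 instr\n", addr);
    printf("data traffic:    %.2f bytes / 1000 instr\n", data);
    return 0;
}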

25
Impact of Cache Block Size
Multiprocessors add a new kind of miss to the cold, capacity, and conflict misses
• Coherence misses: true sharing and false sharing
– latter due to granularity of coherence being larger than a word
• Both miss rate and traffic matter
Reducing misses architecturally in invalidation protocol
• Capacity: enlarge cache; increase block size (if spatial locality)
• Conflict: increase associativity
• Cold and Coherence: only block size
Increasing block size has advantages and disadvantages
• Can reduce misses if spatial locality is good
• Can hurt too
– increase misses due to false sharing if spatial locality not good
– increase misses due to conflicts in fixed-size cache
– increase traffic due to fetching unnecessary data and due to false sharing
– can increase miss penalty and perhaps hit cost
51
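False sharing is easy to provoke deliberately. The C11 sketch below runs two threads that increment counters sharing one (assumed 64-byte) cache block, then a padded variant; timing the two runs externally (e.g. with `time`) shows the cost of the extra coherence traffic. It assumes C11 <threads.h> support.

/* Minimal illustration of false sharing: two threads update different
 * words that share one cache block; padding separates them.           */
#include <stdio.h>
#include <threads.h>

#define ITERS 10000000L

struct { long a; long b; } shared_line;                 /* same block        */
struct { long a; char pad[64]; long b; } padded_line;   /* different blocks  */

int bump_a_shared(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) shared_line.a++; return 0; }
int bump_b_shared(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) shared_line.b++; return 0; }
int bump_a_padded(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) padded_line.a++; return 0; }
int bump_b_padded(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) padded_line.b++; return 0; }

static void run(int (*fa)(void *), int (*fb)(void *), const char *label) {
    thrd_t t1, t2;
    thrd_create(&t1, fa, NULL);
    thrd_create(&t2, fb, NULL);
    thrd_join(t1, NULL);
    thrd_join(t2, NULL);
    printf("%s done\n", label);
}

int main(void) {
    run(bump_a_shared, bump_b_shared, "falsely shared counters");
    run(bump_a_padded, bump_b_padded, "padded counters");
    return 0;
}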

A Classification of Cache Misses


[Decision-tree figure, "Miss classification": each miss is classified by asking, in order, whether it is the first systemwide access to the block (1. cold), whether it is this processor's first access (2. cold), whether the old copy was lost to invalidation or to replacement, whether the block was written before the miss, and whether the modified word(s) were actually accessed during the block's lifetime. The remaining leaves are: 3. false-sharing-cold, 4. true-sharing-cold, 5. false-sharing-inval-cap, 6. true-sharing-inval-cap, 7. pure-false-sharing, 8. pure-true-sharing, 9. pure-capacity, 10. true-sharing-capacity, 11. false-sharing-cap-inval, 12. true-sharing-cap-inval.]
• Many mixed categories because a miss may have multiple causes


52

26
Impact of Block Size on Miss Rate
Results shown only for default problem size: varied behavior
• Need to examine impact of problem size and p as well (see text)

[Bar charts: miss rate (%) decomposed into cold, capacity, true-sharing, false-sharing, and upgrade components, for block sizes from 8 to 256 bytes, for Barnes, LU, Radiosity, Ocean, Radix, and Raytrace (left chart scaled to about 0.6%, right chart to about 12%).]

•Working set doesn’t fit: impact on capacity misses much more critical
53

Impact of Block Size on Traffic


Traffic affects performance indirectly through contention
[Bar charts: address-bus and data-bus traffic, in bytes per instruction (and bytes per FLOP where appropriate), for block sizes from 8 to 256 bytes, for Barnes, LU, Ocean, Radiosity, Radix, and Raytrace.]

• Results different than for miss rate: traffic almost always increases
• When working sets fit, overall traffic is still small, except for Radix
• Fixed overhead is significant component
– So total traffic often minimized at 16-32 byte block, not smaller
• Working set doesn’t fit: even 128-byte good for Ocean due to capacity
54

27
Making Large Blocks More Effective

Software
• Improve spatial locality by better data structuring (more later)
• Compiler techniques
Hardware
• Retain granularity of transfer but reduce granularity of coherence
– use subblocks: same tag but different state bits
– one subblock may be valid but another invalid or dirty
• Reduce both granularities, but prefetch more blocks on a miss
• Proposals for adjustable cache size
• More subtle: delay propagation of invalidations and perform all at once
– But can change consistency model: discuss later in course
• Use update instead of invalidate protocols to reduce false sharing effect

55
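As a sketch of the subblock idea (one tag for the transfer unit, separate state bits per coherence unit), assuming a 64-byte line split into four 16-byte subblocks; the struct and names are illustrative.

/* Cache line with per-subblock state: the tag covers the whole transfer
 * unit, while coherence state is kept for each subblock, so one subblock
 * can be invalidated without losing the others.                          */
#include <stdint.h>
#include <stdio.h>

enum SubState { SUB_INVALID, SUB_SHARED, SUB_MODIFIED };

#define SUBBLOCKS 4          /* 64-byte line, 16-byte coherence units */

struct CacheLine {
    uint64_t tag;                        /* one tag for the whole line     */
    enum SubState state[SUBBLOCKS];      /* per-subblock coherence state   */
    uint8_t  data[64];
};

/* Invalidate only the subblock a remote write touched. */
static void invalidate_subblock(struct CacheLine *line, int sub) {
    line->state[sub] = SUB_INVALID;      /* other subblocks stay usable    */
}

int main(void) {
    struct CacheLine line = { .tag = 0x1234,
        .state = { SUB_SHARED, SUB_SHARED, SUB_SHARED, SUB_SHARED } };
    invalidate_subblock(&line, 2);       /* remote write hit subblock 2    */
    for (int i = 0; i < SUBBLOCKS; i++)
        printf("subblock %d: %s\n", i,
               line.state[i] == SUB_INVALID ? "invalid" : "shared");
    return 0;
}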

Update versus Invalidate

Much debate over the years: tradeoff depends on sharing patterns


Intuition:
• If those that used continue to use, and writes between use are few,
update should do better
– e.g. producer-consumer pattern
• If those that use unlikely to use again, or many writes between reads,
updates not good
– “pack rat” phenomenon particularly bad under process migration
– useless updates where only last one will be used
Can construct scenarios where one or other is much better
Can combine them in hybrid schemes (see text)
• E.g. competitive: observe patterns at runtime and change protocol
Let’s look at real workloads
56

28
Update vs Invalidate: Miss Rates
[Bar charts: miss rate (%) decomposed into cold, capacity, true-sharing, and false-sharing components for LU, Ocean, Radix, and Raytrace, each under an invalidation protocol (inv), an update protocol (upd), and a hybrid (mix); left chart scaled to about 0.6%, right chart to about 2.5%.]
• Lots of coherence misses: updates help
• Lots of capacity misses: updates hurt (keep data in cache uselessly)
• Updates seem to help, but this ignores upgrade and update traffic
57

Upgrade and Update Rates (Traffic)


[Bar charts: upgrade/update rate (%) for LU, Ocean, and Raytrace (up to about 2.5%) and for Radix (up to about 8%), each under the invalidation (inv), hybrid (mix), and update (upd) protocols.]

• Update traffic is substantial
• Main cause is multiple writes by a processor before a read by
another processor
– many bus transactions versus one in the invalidation case
– could delay updates or use merging
• Overall, trend is away from update-based protocols as the default
– bandwidth, complexity, the trend toward large blocks, and the pack-rat
problem with process migration
• Will see later that updates have greater problems for scalable systems
58

29
