Multithreading 2
Krste Asanovic
Electrical Engineering and Computer Sciences
University of California, Berkeley
http://www.eecs.berkeley.edu/~krste
http://inst.eecs.berkeley.edu/~cs252
Recap: Directory Coherence Protocols
Pipeline Hazards
                 t0 t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14
LW r1, 0(r2)      F  D  X  M  W
LW r5, 12(r1)        F  D  D  D  D  X  M  W
ADDI r5, r5, #12        F  F  F  F  D  D  D  D  X  M  W
SW 12(r1), r5              F  F  F  F  D  D  D  D
Multithreading
How can we guarantee no dependencies between instructions in a pipeline?
One way is to interleave execution of instructions from different program threads on the same pipeline.
[Figure: instructions from different threads interleaved through the pipeline, cycles t0-t9]
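A minimal sketch (not from the slides) of why interleaving removes hazards: if instructions enter a 5-stage pipeline round-robin from N threads, with N at least the pipeline depth, no two in-flight instructions ever belong to the same thread, so no intra-thread dependency can create a hazard. The thread count and stage names below are illustrative.

```python
# Sketch: round-robin interleave of N threads through a 5-stage pipeline.
# With N >= number of stages, every in-flight instruction belongs to a
# different thread, so forwarding and interlocks are unnecessary.

STAGES = ["F", "D", "X", "M", "W"]
N_THREADS = 5  # illustrative; must be >= len(STAGES) for this guarantee

def simulate(cycles):
    pipeline = [None] * len(STAGES)  # pipeline[i] = thread id in stage i
    for t in range(cycles):
        pipeline = [t % N_THREADS] + pipeline[:-1]  # issue next thread, shift
        in_flight = [tid for tid in pipeline if tid is not None]
        # No thread ever appears twice among the in-flight instructions:
        assert len(in_flight) == len(set(in_flight))
    return pipeline

print(simulate(20))  # e.g. [4, 3, 2, 1, 0]: one thread per stage
```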
CDC 6600 Peripheral Processors
(Cray, 1964)
[Figure: multithreaded pipeline datapath with a PC and register file (GPR1) per thread, shared I$, IR, D$, and execution units (X, Y), and a thread select signal]
• Have to carry thread select down pipeline to ensure correct state bits read/written at each pipe stage
• Appears to software (including OS) as multiple, albeit slower, CPUs
Multithreading Costs
• Other costs?
Thread Scheduling Policies
• Fixed interleave (CDC 6600 PPUs, 1964)
– each of N threads executes one instruction every N cycles
– if thread not ready to go in its slot, insert pipeline bubble
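A sketch of the fixed-interleave policy in simulator form (the function and example below are mine, not from the slides): each thread owns every Nth issue slot, and a bubble is inserted when the slot's owner is not ready.

```python
# Sketch: CDC 6600-style fixed interleave. Thread i owns issue slots
# i, i+N, i+2N, ...; if the owner is stalled, the slot becomes a bubble.

def fixed_interleave(ready, n_threads, cycles):
    """ready(tid, cycle) -> bool; returns the issue schedule."""
    schedule = []
    for cycle in range(cycles):
        tid = cycle % n_threads          # slot ownership is fixed
        if ready(tid, cycle):
            schedule.append(tid)         # issue thread tid's next instruction
        else:
            schedule.append(None)        # bubble: slot cannot be reassigned
    return schedule

# Example: thread 2 is blocked (e.g. on memory) for the first 6 cycles.
sched = fixed_interleave(lambda tid, c: not (tid == 2 and c < 6), 4, 8)
print(sched)  # [0, 1, None, 3, 0, 1, 2, 3]
```

Note the cost of the fixed policy: thread 2's slot at cycle 2 is wasted even though threads 0, 1, and 3 were ready.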
Denelcor HEP
(Burton Smith, 1982)
Tera MTA (1990-97)
• Up to 256 processors
• Up to 128 active threads per processor
• Processors and memory modules populate a sparse 3D torus
interconnection fabric
• Flat, shared main memory
– No data cache
– Sustains one main memory access per cycle per processor
• GaAs logic in prototype, 1 kW/processor @ 260 MHz
– CMOS version, MTA-2, 50 W/processor
MTA Architecture
• Each processor supports 128 active hardware threads
– 1 x 128 = 128 stream status word (SSW) registers,
– 8 x 128 = 1024 branch-target registers,
– 32 x 128 = 4096 general-purpose registers
MTA Pipeline
• Every cycle, one VLIW instruction from one active thread is launched into the pipeline
• Instruction pipeline is 21 cycles long
• Memory operations incur ~150 cycles of latency
[Figure: MTA pipeline: instruction fetch feeds an issue pool through W, M, A, C stages; memory operations pass through a memory pool, write pool, and retry pool across the interconnection network to the memory pipeline]
Assuming a single thread issues one instruction every 21 cycles, and the clock rate is 260 MHz, what is single-thread performance?
Effective single-thread issue rate is 260/21 = 12.4 MIPS
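The slide's arithmetic, worked as a short calculation (the 21-thread saturation figure at the end is my inference from the 21-cycle pipeline, not stated on the slide):

```python
clock_mhz = 260        # MTA prototype clock (from the slide)
pipeline_depth = 21    # cycles between successive issues of one thread
mem_latency = 150      # approximate memory latency in cycles (from the slide)

# One thread can issue at most once per trip through the pipeline:
single_thread_mips = clock_mhz / pipeline_depth
print(f"{single_thread_mips:.1f} MIPS")   # 12.4 MIPS

# Inference: to issue every cycle, at least pipeline_depth (21) ready
# threads are needed; threads blocked on ~150-cycle memory operations
# push the requirement higher, hence 128 hardware threads per processor.
```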
Coarse-Grain Multithreading
MIT Alewife (1990)
IBM PowerPC RS64-IV (2000)
• Commercial coarse-grain multithreading CPU
• Based on PowerPC with quad-issue in-order five-stage pipeline
• Each physical CPU supports two virtual CPUs
• On L2 cache miss, pipeline is flushed and execution
switches to second thread
– short pipeline minimizes flush penalty (4 cycles), small
compared to memory access latency
– flush pipeline to simplify exception handling
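A sketch of the switch-on-miss policy as a toy model (the two-thread structure, 4-cycle flush, and miss-triggered switch follow the slide; the function and its parameters are my illustration):

```python
# Sketch: coarse-grain multithreading in the RS64-IV style. Run one
# virtual CPU until it misses in the L2, then pay the 4-cycle flush and
# switch to the other; the miss latency overlaps the other thread's work.

FLUSH_PENALTY = 4  # cycles; the short in-order pipeline keeps this small

def run(miss_every, total_cycles):
    """Thread t suffers an L2 miss after every miss_every[t] instructions."""
    done = [0, 0]          # instructions completed per virtual CPU
    active, cycle = 0, 0
    while cycle < total_cycles:
        done[active] += 1  # in-order pipeline: ~1 instruction/cycle here
        cycle += 1
        if done[active] % miss_every[active] == 0:  # L2 miss detected
            cycle += FLUSH_PENALTY   # flush in-flight instructions
            active = 1 - active      # execution switches to the other thread
    return done

print(run(miss_every=[100, 100], total_cycles=10_000))  # roughly even split
```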
CS252 Administrivia
• More practice problems and solutions on
multiprocessor caches coming
– Although practice problems are not graded, we'll expect you to have worked through them before the second quiz
• Reading assignment #4: Memory consistency models
– Tutorial on consistency models + Mark Hill’s Position paper
– Conflict between simpler memory models and simpler/faster
hardware
• Reminder: office hours
– Krste, Monday 1-3pm in 645 Soda
– Rose, Tuesday 5-6pm in 751 Soda
Simultaneous Multithreading (SMT)
for OoO Superscalars
For most apps, most execution units
lie idle in an OoO superscalar
For an 8-way superscalar.
From: Tullsen, Eggers, and Levy, “Simultaneous Multithreading: Maximizing On-chip Parallelism”, ISCA 1995.
Superscalar Machine Efficiency
[Figure: issue slots over time in a superscalar of issue width 4. A completely idle cycle is vertical waste; a partially filled cycle, i.e., IPC < 4, is horizontal waste.]
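The two kinds of waste can be made concrete with a small accounting function (a sketch; the 4-wide machine and the sample trace are my illustration):

```python
# Sketch: classify wasted issue slots in a trace for a 4-wide machine.
# A cycle issuing 0 instructions is vertical waste; a cycle issuing
# fewer than `width` (but more than 0) contributes horizontal waste.

def waste(issued_per_cycle, width=4):
    vertical = sum(width for n in issued_per_cycle if n == 0)
    horizontal = sum(width - n for n in issued_per_cycle if 0 < n < width)
    total_slots = width * len(issued_per_cycle)
    return vertical, horizontal, total_slots

v, h, total = waste([4, 2, 0, 1, 0, 3])
print(v, h, total)  # 8 vertical + 6 horizontal of 24 slots wasted
```

Multithreading attacks the two terms differently: vertical multithreading fills entirely idle cycles, while SMT can also fill the leftover slots within a cycle.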
Vertical Multithreading
[Figure: issue slots over time with cycle-by-cycle interleaving of threads; vertical waste is filled by the second thread, but partially filled cycles, i.e., IPC < 4 (horizontal waste), remain.]
Power 4
Single-threaded predecessor to Power 5. 8 execution units in out-of-order engine, each may issue an instruction each cycle.
Power 4 vs. Power 5
[Figure: pipeline diagrams. Relative to Power 4, Power 5 has 2 fetch (PC), 2 initial decodes, and 2 commits (architected register sets).]
Power 5 data flow ...
Pentium-4 Hyperthreading (2002)
• First commercial SMT design (2-way SMT)
– Hyperthreading == SMT
• Logical processors share nearly all resources of the
physical processor
– Caches, execution units, branch predictors
• Die area overhead of hyperthreading ~ 5%
• When one logical processor is stalled, the other can
make progress
– No logical processor can use all entries in queues when two threads
are active
• Processor running only one active software thread runs at approximately the same speed with or without hyperthreading
Pentium-4 Hyperthreading
Front End
[ Intel Technology Journal, Q1 2002 ]
Pentium-4 Hyperthreading
Execution Pipeline
[ Intel Technology Journal, Q1 2002 ]
SMT adaptation to parallelism type
For regions with high thread-level parallelism (TLP), the entire machine width is shared by all threads.
For regions with low thread-level parallelism (TLP), the entire machine width is available for instruction-level parallelism (ILP).
[Figure: issue slots over time for the two regimes]
Initial Performance of SMT
• Pentium 4 Extreme SMT yields 1.01 speedup for
SPECint_rate benchmark and 1.07 for SPECfp_rate
– Pentium 4 is dual-threaded SMT
– SPECRate requires that each SPEC benchmark be run against a vendor-selected number of copies of the same benchmark
• Running on Pentium 4, each of 26 SPEC benchmarks paired with every other (26² runs): speedups from 0.90 to 1.58; average was 1.20
• Power 5, 8-processor server: 1.23× faster for SPECint_rate with SMT, 1.16× faster for SPECfp_rate
• Power 5 running 2 copies of each app: speedup between 0.89 and 1.41
– Most gained some
– Fl.Pt. apps had most cache conflicts and least gains
Power 5 thread performance ...
Relative priority of each thread is controllable in hardware.
For balanced operation, both threads run slower than if they “owned” the machine.
[Figure: Power 5 per-thread performance vs. relative thread priority]
Icount Choosing Policy
Fetch from the thread with the fewest instructions in flight.
• Problem: a spin-looping thread consumes resources
• Solution: provide a quiescing operation that allows a thread to sleep until a memory location changes
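A sketch of the ICOUNT heuristic (the fetch rule follows the slide; the quiesce handling and all names below are my illustration):

```python
# Sketch: ICOUNT fetch policy. Each cycle, fetch from the runnable
# thread with the fewest instructions in flight; threads that executed
# a quiesce operation are skipped until their watched location changes.

def icount_choose(in_flight, quiesced):
    """in_flight: {tid: count}; quiesced: set of sleeping thread ids."""
    runnable = [t for t in in_flight if t not in quiesced]
    if not runnable:
        return None  # no thread eligible to fetch this cycle
    return min(runnable, key=lambda t: in_flight[t])

# Thread 1 has the fewest instructions in flight, but is quiesced
# (sleeping on a memory location), so thread 2 is chosen instead.
print(icount_choose({0: 9, 1: 2, 2: 5}, quiesced={1}))  # -> 2
```

Without the quiesce operation, a spinning thread would look attractive to ICOUNT: its instructions complete quickly, keeping its in-flight count low while it burns fetch slots.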