
CS 252 Graduate Computer Architecture

Lecture 13: Multithreading

Krste Asanovic
Electrical Engineering and Computer Sciences
University of California, Berkeley

http://www.eecs.berkeley.edu/~krste
http://inst.eecs.berkeley.edu/~cs252

Recap: Directory Coherence Protocols

[Figure: k processors, each with a private cache, connected by an interconnection network to memory and its directory, which holds k presence bits and a dirty bit per memory block]

• k processors
• With each cache-block in memory: k presence-bits, 1 dirty-bit
• With each cache-block in cache: 1 valid bit, and 1 dirty (owner) bit
• Scale to larger numbers of processors by replacing snoopy broadcast with point-point messages
• Requires additional directory storage
• Usually longer latency than snoopy protocols
• Often combined with snooping
– Snoop within small cluster of processors, use directory between clusters

Multithreading

• Difficult to continue to extract ILP from a single thread
• Many workloads can make use of thread-level parallelism (TLP)
  – TLP from multiprogramming (run independent sequential jobs)
  – TLP from multithreaded applications (run one job faster using parallel threads; see the sketch below)
• Multithreading uses TLP to improve utilization of a single processor
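
A minimal sketch of the second TLP source, using POSIX threads to run one job faster with parallel threads. The work partitioning, array size, and all names here are illustrative assumptions, not from the lecture:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static double a[N];
static double partial[NTHREADS];

/* Each thread sums a contiguous slice of the array. */
static void *worker(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS);
    long hi = (id == NTHREADS - 1) ? N : lo + N / NTHREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += a[i];
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++) a[i] = 1.0;
    /* One job, run faster with parallel threads (TLP). */
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);
    return 0;
}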

Pipeline Hazards

                  t0 t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14
LW   r1, 0(r2)    F  D  X  M  W
LW   r5, 12(r1)      F  D  D  D  D  X  M  W
ADDI r5, r5, #12        F  F  F  F  D  D  D  D  X   M   W
SW   12(r1), r5            F  F  F  F  D  D  D   D

• Each instruction may depend on the previous one (the interlock producing these stalls is sketched below)
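
A minimal sketch of the decode-stage interlock that produces the repeated D and F stages above, assuming a non-bypassed 5-stage pipeline; the function name and interface are illustrative:

#include <stdbool.h>

/* Stall the instruction in Decode if an older instruction still in
 * Execute, Memory, or Writeback will write a register it reads.  On a
 * non-bypassed pipeline a result is visible only after Writeback, so
 * the dependent instruction repeats its D stage (and the instruction
 * behind it repeats F) until the hazard clears. */
bool must_stall(int d_src1, int d_src2,            /* regs read in Decode */
                int older_dest[3], bool older_writes[3]) /* X, M, W stages */
{
    for (int s = 0; s < 3; s++)
        if (older_writes[s] &&
            (older_dest[s] == d_src1 || older_dest[s] == d_src2))
            return true;   /* insert a bubble; hold D (and F) this cycle */
    return false;
}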

What can be done to cope with this?


Multithreading

How can we guarantee no dependencies between instructions in a pipeline?
-- One way is to interleave execution of instructions from different program threads on the same pipeline (sketched below)

Interleave 4 threads, T1-T4, on non-bypassed 5-stage pipe

                      t0 t1 t2 t3 t4 t5 t6 t7 t8 t9
T1: LW   r1, 0(r2)     F  D  X  M  W
T2: ADD  r7, r1, r4       F  D  X  M  W
T3: XORI r5, r4, #12         F  D  X  M  W
T4: SW   0(r7), r5              F  D  X  M  W
T1: LW   r5, 12(r1)                F  D  X  M  W

The prior instruction in a thread always completes writeback before the next instruction in the same thread reads the register file.
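
A minimal sketch of this fixed round-robin interleave as a cycle-level loop; the structure is illustrative:

#include <stdio.h>

#define NTHREADS 4   /* T1-T4 */

/* Fixed interleave on a non-bypassed 5-stage pipe: a thread re-enters
 * the pipeline only every NTHREADS cycles, so its previous instruction
 * has completed writeback before the next one reads the register file. */
int main(void) {
    int pc[NTHREADS] = {0};            /* one PC per hardware thread */
    for (int cycle = 0; cycle < 12; cycle++) {
        int t = cycle % NTHREADS;      /* round-robin thread select  */
        printf("cycle %2d: fetch T%d at pc=%d\n", cycle, t + 1, pc[t]);
        pc[t] += 4;                    /* advance that thread's PC   */
    }
    return 0;
}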

CDC 6600 Peripheral Processors (Cray, 1964)

• First multithreaded hardware
• 10 “virtual” I/O processors
• Fixed interleave on simple pipeline
• Pipeline has 100 ns cycle time
• Each virtual processor executes one instruction every 1000 ns
• Accumulator-based instruction set to reduce processor state

Simple Multithreaded Pipeline

[Figure: 5-stage pipeline with four PCs and four GPR files, one per thread; a 2-bit thread select chooses which PC feeds the I$ and which GPR file supplies the X and Y operands, and accompanies the instruction through the D$ and writeback]

• Have to carry thread select down pipeline to ensure correct state bits read/written at each pipe stage
• Appears to software (including OS) as multiple, albeit slower, CPUs

Multithreading Costs

• Each thread requires its own user state (sketched as a struct below)
  – PC
  – GPRs
• Also needs its own system state
  – virtual memory page table base register
  – exception handling registers
• Other costs?
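
A minimal sketch of the per-thread context these bullets imply, written as a C struct; the field names and widths are illustrative assumptions:

#include <stdint.h>

#define NGPRS 32

/* Replicated once per hardware thread: user state plus the system
 * state a thread needs to run correctly under an OS. */
typedef struct {
    uint64_t pc;                 /* user state: program counter       */
    uint64_t gpr[NGPRS];         /* user state: general-purpose regs  */
    uint64_t page_table_base;    /* system state: VM page-table root  */
    uint64_t exc_regs[4];        /* system state: exception handling  */
} hw_thread_context_t;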

Thread Scheduling Policies

• Fixed interleave (CDC 6600 PPUs, 1964)
  – each of N threads executes one instruction every N cycles
  – if thread not ready to go in its slot, insert pipeline bubble
• Software-controlled interleave (TI ASC PPUs, 1971)
  – OS allocates S pipeline slots amongst N threads
  – hardware performs fixed interleave over S slots, executing whichever thread is in that slot
• Hardware-controlled thread scheduling (HEP, 1982)
  – hardware keeps track of which threads are ready to go
  – picks next thread to execute based on hardware priority scheme (contrasted with fixed interleave in the sketch below)
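
A minimal sketch contrasting the first and third policies, assuming one ready bit per thread; the round-robin priority scheme is an illustrative choice, not HEP's actual one:

#include <stdbool.h>

#define NTHREADS 8

/* Set elsewhere by the miss/wakeup logic (illustrative). */
static bool ready[NTHREADS];

/* Fixed interleave (CDC 6600 style): the slot owns the cycle; if its
 * thread is not ready, issue a pipeline bubble (-1). */
int pick_fixed(int cycle) {
    int t = cycle % NTHREADS;
    return ready[t] ? t : -1;
}

/* Hardware-controlled scheduling (HEP style): scan from the thread
 * after the last one issued and pick the first ready thread; bubble
 * only if no thread is ready. */
int pick_hw(int last_issued) {
    for (int i = 1; i <= NTHREADS; i++) {
        int t = (last_issued + i) % NTHREADS;
        if (ready[t]) return t;
    }
    return -1;
}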

Denelcor HEP (Burton Smith, 1982)

First commercial machine to use hardware threading in main CPU
  – 120 threads per processor
  – 10 MHz clock rate
  – Up to 8 processors
  – precursor to Tera MTA (Multithreaded Architecture)

Tera MTA (1990-97)

• Up to 256 processors
• Up to 128 active threads per processor
• Processors and memory modules populate a sparse 3D torus interconnection fabric
• Flat, shared main memory
  – No data cache
  – Sustains one main memory access per cycle per processor
• GaAs logic in prototype, 1 kW/processor @ 260 MHz
  – CMOS version, MTA-2, 50 W/processor

MTA Architecture

• Each processor supports 128 active hardware threads
  – 1 x 128 = 128 stream status word (SSW) registers,
  – 8 x 128 = 1024 branch-target registers,
  – 32 x 128 = 4096 general-purpose registers
• Three operations packed into 64-bit instruction (short VLIW)
  – One memory operation,
  – One arithmetic operation, plus
  – One arithmetic or branch operation
• Thread creation and termination instructions
• Explicit 3-bit “lookahead” field in instruction gives number of subsequent instructions (0-7) that are independent of this one
  – c.f. instruction grouping in VLIW
  – allows fewer threads to fill machine pipeline (see the sketch below)
  – used for variable-sized branch delay slots
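
A back-of-the-envelope sketch of why lookahead lets fewer threads fill the pipeline. The formula is my simplification, assuming every instruction in a thread carries the same lookahead value; the 21-stage depth comes from the next slide:

#include <stdio.h>

/* With lookahead L, up to L+1 consecutive instructions from one thread
 * can be in flight at once, so roughly depth/(L+1) threads suffice. */
int threads_needed(int depth, int lookahead) {
    int group = lookahead + 1;
    return (depth + group - 1) / group;   /* ceiling division */
}

int main(void) {
    for (int l = 0; l <= 7; l++)          /* 3-bit field: 0-7 */
        printf("lookahead %d -> %2d threads for a 21-stage pipe\n",
               l, threads_needed(21, l));
    return 0;
}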

MTA Pipeline

[Figure: instructions from the issue pool are fetched and launched into W, M, A, and C pipeline stages; memory operations travel through the memory pool, write pool, and retry pool via the interconnection network and memory pipeline]

• Every cycle, one VLIW instruction from one active thread is launched into pipeline
• Instruction pipeline is 21 cycles long
• Memory operations incur ~150 cycles of latency

Assuming a single thread issues one instruction every 21 cycles, and clock rate is 260 MHz, what is single-thread performance?
Effective single-thread issue rate is 260/21 = 12.4 MIPS

Coarse-Grain Multithreading

Tera MTA designed for supercomputing applications with large data sets and low locality
  – No data cache
  – Many parallel threads needed to hide large memory latency

Other applications are more cache friendly
  – Few pipeline bubbles when cache getting hits
  – Just add a few threads to hide occasional cache miss latencies
  – Swap threads on cache misses (sketched below)
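
A minimal sketch of the switch-on-miss policy for two hardware threads; the names and the single-alternate-thread model are illustrative assumptions:

#include <stdbool.h>

#define NTHREADS 2

static bool waiting_on_miss[NTHREADS];   /* thread blocked on a miss?  */
static int  current = 0;                 /* thread now in the pipeline */

/* On a cache miss: mark the running thread blocked and switch to the
 * other thread if it is runnable, so the pipeline stays busy during
 * the long miss instead of stalling. */
int on_cache_miss(void) {
    waiting_on_miss[current] = true;
    int other = 1 - current;
    if (!waiting_on_miss[other])
        current = other;                 /* swap threads on the miss */
    return current;
}

/* When thread t's miss data returns, it becomes runnable again. */
void on_fill(int t) {
    waiting_on_miss[t] = false;
}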

MIT Alewife (1990)

• Modified SPARC chips
  – register windows hold different thread contexts
• Up to four threads per node
• Thread switch on local cache miss

IBM PowerPC RS64-IV (2000)

• Commercial coarse-grain multithreading CPU
• Based on PowerPC with quad-issue in-order five-stage pipeline
• Each physical CPU supports two virtual CPUs
• On L2 cache miss, pipeline is flushed and execution switches to second thread
  – short pipeline minimizes flush penalty (4 cycles), small compared to memory access latency
  – flush pipeline to simplify exception handling

CS252 Administrivia

• More practice problems and solutions on multiprocessor caches coming
  – Although practice problems are not graded, we’ll expect you to have worked through them before the second quiz
• Reading assignment #4: Memory consistency models
  – Tutorial on consistency models + Mark Hill’s position paper
  – Conflict between simpler memory models and simpler/faster hardware
• Reminder: office hours
  – Krste, Monday 1-3pm in 645 Soda
  – Rose, Tuesday 5-6pm in 751 Soda

Simultaneous Multithreading (SMT) for OoO Superscalars

• Techniques presented so far have all been “vertical” multithreading, where each pipeline stage works on one thread at a time
• SMT uses fine-grain control already present inside an OoO superscalar to allow instructions from multiple threads to enter execution on same clock cycle. Gives better utilization of machine resources.

For most apps, most execution units lie idle in an OoO superscalar

[Figure: per-application breakdown of wasted issue slots for an 8-way superscalar]

From: Tullsen, Eggers, and Levy, “Simultaneous Multithreading: Maximizing On-chip Parallelism,” ISCA 1995.

Superscalar Machine Efficiency

[Figure: 4-wide instruction issue slots over time; a completely idle cycle is vertical waste, and a partially filled cycle (IPC < 4) is horizontal waste. A sketch that counts both kinds of waste follows.]
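
A minimal sketch that classifies issue slots in these terms from a per-cycle issue count; the trace format is an illustrative assumption:

#include <stdio.h>

#define WIDTH 4

/* A cycle that issues nothing is vertical waste (all WIDTH slots);
 * a cycle that issues fewer than WIDTH leaves horizontal waste. */
void count_waste(const int *issued, int ncycles) {
    int vertical = 0, horizontal = 0;
    for (int c = 0; c < ncycles; c++) {
        if (issued[c] == 0)
            vertical += WIDTH;
        else
            horizontal += WIDTH - issued[c];
    }
    printf("vertical waste: %d slots, horizontal waste: %d slots\n",
           vertical, horizontal);
}

int main(void) {
    int trace[] = {4, 2, 0, 3, 0, 1};   /* instructions issued per cycle */
    count_waste(trace, 6);
    return 0;
}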

Vertical Multithreading

[Figure: issue slots over time with a second thread interleaved cycle-by-cycle; partially filled cycles (IPC < 4, horizontal waste) remain]

• What is the effect of cycle-by-cycle interleaving?
  – removes vertical waste, but leaves some horizontal waste

Chip Multiprocessing (CMP)

[Figure: the issue slots split between two narrower processors over time]

• What is the effect of splitting into multiple processors?
  – reduces horizontal waste,
  – leaves some vertical waste, and
  – puts upper limit on peak throughput of each thread.

Ideal Superscalar Multithreading [Tullsen, Eggers, Levy, UW, 1995]

[Figure: issue slots over time with instructions from multiple threads filling every slot]

• Interleave multiple threads to multiple issue slots with no restrictions

O-o-O Simultaneous Multithreading [Tullsen, Eggers, Emer, Levy, Stamm, Lo, DEC/UW, 1996]

• Add multiple contexts and fetch engines and allow instructions fetched from different threads to issue simultaneously
• Utilize wide out-of-order superscalar processor issue queue to find instructions to issue from multiple threads
• OOO instruction window already has most of the circuitry required to schedule from multiple threads
• Any single thread can utilize whole machine

Power 4

Single-threaded predecessor to Power 5. 8 execution units in out-of-order engine, each may issue an instruction each cycle.

Power 4 vs. Power 5

[Figure: the two pipelines side by side. Power 5 duplicates the thread-visible resources: 2 fetch units (one PC per thread), 2 initial decode stages, and 2 commits (two architected register sets)]

Power 5 data flow ...

Why only 2 threads? With 4, one of the shared resources (physical registers, cache, memory bandwidth) would be prone to bottleneck

Changes in Power 5 to support SMT

• Increased associativity of L1 instruction cache and the instruction address translation buffers
• Added per-thread load and store queues
• Increased size of the L2 (1.92 vs. 1.44 MB) and L3 caches
• Added separate instruction prefetch and buffering per thread
• Increased the number of virtual registers from 152 to 240
• Increased the size of several issue queues
• The Power5 core is about 24% larger than the Power4 core because of the addition of SMT support

Pentium-4 Hyperthreading (2002)

• First commercial SMT design (2-way SMT)
  – Hyperthreading == SMT
• Logical processors share nearly all resources of the physical processor
  – Caches, execution units, branch predictors
• Die area overhead of hyperthreading ~ 5%
• When one logical processor is stalled, the other can make progress
  – No logical processor can use all entries in queues when two threads are active
• Processor running only one active software thread runs at approximately same speed with or without hyperthreading

Pentium-4 Hyperthreading Front End

[Figure: front-end pipeline, marking which resources are divided between logical CPUs and which are shared. From Intel Technology Journal, Q1 2002]

Pentium-4 Hyperthreading Execution Pipeline

[Figure: execution pipeline resource sharing between logical CPUs. From Intel Technology Journal, Q1 2002]

SMT adaptation to parallelism type

For regions with high thread-level parallelism (TLP), the entire machine width is shared by all threads. For regions with low TLP, the entire machine width is available for instruction-level parallelism (ILP).

[Figure: issue slots over time in the two regimes]

Initial Performance of SMT

• Pentium 4 Extreme SMT yields 1.01 speedup for SPECint_rate benchmark and 1.07 for SPECfp_rate
  – Pentium 4 is dual-threaded SMT
  – SPECRate requires that each SPEC benchmark be run against a vendor-selected number of copies of the same benchmark
• Running on Pentium 4, each of 26 SPEC benchmarks paired with every other (26² runs) gave speed-ups from 0.90 to 1.58; average was 1.20
• Power 5, 8-processor server 1.23 faster for SPECint_rate with SMT, 1.16 faster for SPECfp_rate
• Power 5 running 2 copies of each app gave speedup between 0.89 and 1.41
  – Most gained some
  – Fl.Pt. apps had most cache conflicts and least gains

Power 5 thread performance ...

Relative priority of each thread controllable in hardware.

For balanced operation, both threads run slower than if they “owned” the machine.

Icount Choosing Policy

Fetch from thread with the least instructions in flight.

Why does this enhance throughput? (see the note and sketch below)
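
A minimal sketch of the ICOUNT heuristic; the counter bookkeeping is an illustrative assumption. Favoring the thread with the fewest instructions in flight steers fetch toward threads that are draining quickly, and keeps a stalled thread from filling the issue queue with instructions that cannot yet execute:

#define NTHREADS 2

/* Instructions fetched but not yet committed, per thread; the front
 * end increments on fetch, the commit stage decrements. */
static int in_flight[NTHREADS];

/* ICOUNT: fetch this cycle from the thread with the fewest
 * instructions in flight. */
int icount_pick(void) {
    int best = 0;
    for (int t = 1; t < NTHREADS; t++)
        if (in_flight[t] < in_flight[best])
            best = t;
    return best;
}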

SMT Fetch Policies (Locks)

• Problem: Spin looping thread consumes resources
• Solution: Provide quiescing operation that allows a thread to sleep until a memory location changes

loop:   ARM     r1, 0(r2)     ; load and start watching 0(r2)
        BEQ     r1, got_it
        QUIESCE               ; inhibit scheduling of thread until
        BR      loop          ;   activity observed on 0(r2)
got_it:

Summary: Multithreaded Categories

[Figure: issue slots over time (processor cycles) for five organizations: superscalar, fine-grained, coarse-grained, multiprocessing, and simultaneous multithreading; colors distinguish Threads 1-5 and idle slots]
