
18-447

Computer Architecture
Lecture 14: SIMD Processing
(Vector and Array Processors)

Prof. Onur Mutlu


Carnegie Mellon University
Spring 2015, 2/18/2015
Agenda for Today & Next Few Lectures
 Single-cycle Microarchitectures

 Multi-cycle and Microprogrammed Microarchitectures

 Pipelining

 Issues in Pipelining: Control & Data Dependence Handling, State Maintenance and Recovery, …

 Out-of-Order Execution

 Issues in OoO Execution: Load-Store Handling, …

 Alternative Approaches to Instruction Level Parallelism


2
Approaches to (Instruction-Level) Concurrency
 Pipelining
 Out-of-order execution
 Dataflow (at the ISA level)
 SIMD Processing (Vector and array processors, GPUs)
 VLIW
 Decoupled Access Execute
 Systolic Arrays

3
Reminder: Announcements
 Lab 3 due this Friday (Feb 20)
 Pipelined MIPS
 Competition for high performance
 You can optimize both cycle time and CPI
 Document and clearly describe what you do during check-off

 Homework 3 due Feb 25


 A lot of questions that enable you to learn the concepts via
hands-on exercise
 Remember this is all for your benefit (to learn and prepare for
exams)
 HWs have very little contribution to overall grade
 Solutions to almost all questions are online anyway
 But I would still like you to do them (for your benefit)!
4
Homework 3.1: Feedback Form
 Due Monday Feb 23

 I would like your feedback on the course


 Easy to fill in
 Can submit anonymously, if you wish
 Worth 0.25% of your grade
 Need to get checked off after submitting to get your grade points
 Can email
 If anonymous, show that you are turning in and have a TA check you off

5
Readings for Today
 Lindholm et al., "NVIDIA Tesla: A Unified Graphics and
Computing Architecture," IEEE Micro 2008.

 Fatahalian and Houston, “A Closer Look at GPUs,” CACM 2008.

6
Recap of Last Lecture
 OoO Execution as Restricted Data Flow
 Memory Disambiguation or Unknown Address Problem
 Memory Dependence Handling
 Conservative, Aggressive, Intelligent Approaches
 Load Store Queues
 Design Choices in an OoO Processor
 Combining OoO+Superscalar+Branch Prediction
 Example OoO Processor Designs

 Data Flow (at the ISA level) Approach to Concurrency


 Characteristics
 Supporting dynamic instances of a node: Tagging, Context IDs, Frames
 Example Operation
 Advantages and Disadvantages
 Combining Data Flow and Control Flow: Getting the Best of Both Worlds

7
Reminder: Intel Pentium 4 Simplified
Mutlu+, “Runahead Execution,”
HPCA 2003.

8
Reminder: Alpha 21264

Kessler, “The Alpha 21264 Microprocessor,” IEEE Micro, March-April 1999. 9


Review: Data Flow:
Exploiting Irregular Parallelism
Review: Pure Data Flow Pros and Cons
 Advantages
 Very good at exploiting irregular parallelism
 Only real dependencies constrain processing

 Disadvantages
 Debugging difficult (no precise state)
 Interrupt/exception handling is difficult (what is precise state
semantics?)
 Implementing dynamic data structures difficult in pure data
flow models
 Too much parallelism? (Parallelism control needed)
 High bookkeeping overhead (tag matching, data storage)
 Instruction cycle is inefficient (delay between dependent
instructions), memory locality is not exploited

11
Review: Combining Data Flow and Control Flow
 Can we get the best of both worlds?

 Two possibilities

 Model 1: Keep control flow at the ISA level, do dataflow underneath, preserving sequential semantics

 Model 2: Keep dataflow model, but incorporate some control flow at the ISA level to improve efficiency, exploit locality, and ease resource management
 Incorporate threads into dataflow: statically ordered instructions; when the first instruction is fired, the remaining instructions execute without interruption in control flow order (e.g., one can pipeline them)

12
Review: Data Flow Summary
 Data Flow at the ISA level has not been (as) successful

 Data Flow implementations under the hood (while preserving sequential ISA semantics) have been very successful
 Out of order execution

13
Approaches to (Instruction-Level) Concurrency
 Pipelining
 Out-of-order execution
 Dataflow (at the ISA level)
 SIMD Processing (Vector and array processors, GPUs)
 VLIW
 Decoupled Access Execute
 Systolic Arrays

14
SIMD Processing:
Exploiting Regular (Data) Parallelism
Flynn’s Taxonomy of Computers
 Mike Flynn, “Very High-Speed Computing Systems,” Proc.
of IEEE, 1966

 SISD: Single instruction operates on single data element


 SIMD: Single instruction operates on multiple data elements
 Array processor
 Vector processor
 MISD: Multiple instructions operate on single data element
 Closest form: systolic array processor, streaming processor
 MIMD: Multiple instructions operate on multiple data
elements (multiple instruction streams)
 Multiprocessor
 Multithreaded processor
16
Data Parallelism
 Concurrency arises from performing the same operations
on different pieces of data
 Single instruction multiple data (SIMD)
 E.g., dot product of two vectors

 Contrast with data flow
 Concurrency arises from executing different operations in parallel (in a data driven manner)

 Contrast with thread (“control”) parallelism
 Concurrency arises from executing different threads of control in parallel

 SIMD exploits instruction-level parallelism
 Multiple “instructions” (more appropriately, operations) are concurrent: instructions happen to be the same
17
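The dot product mentioned above makes the contrast concrete. A minimal C sketch (the function name and types are illustrative, not from the lecture): every iteration applies the identical multiply-accumulate to different data elements, which is exactly the pattern SIMD hardware exploits.

/* Data parallelism: the same operation (multiply-accumulate) is applied
 * to element i of both vectors for every i, with no cross-iteration
 * dependence other than the final reduction into s. */
float dot(const float *a, const float *b, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}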
SIMD Processing
 Single instruction operates on multiple data elements
 In time or in space
 Multiple processing elements

 Time-space duality

 Array processor: Instruction operates on multiple data elements at the same time using different spaces

 Vector processor: Instruction operates on multiple data elements in consecutive time steps using the same space

18
Array vs. Vector Processors
ARRAY PROCESSOR vs. VECTOR PROCESSOR

Instruction stream (both):
LD VR ← A[3:0]
ADD VR ← VR, 1
MUL VR ← VR, 2
ST A[3:0] ← VR

Array processor (same op @ same time across four spaces; different ops across time):
t0: LD0 LD1 LD2 LD3
t1: AD0 AD1 AD2 AD3
t2: MU0 MU1 MU2 MU3
t3: ST0 ST1 ST2 ST3

Vector processor (same op @ same space; different ops @ same time, pipelined through one set of units):
t0: LD0
t1: LD1 AD0
t2: LD2 AD1 MU0
t3: LD3 AD2 MU1 ST0
t4:     AD3 MU2 ST1
t5:         MU3 ST2
t6:             ST3

19
SIMD Array Processing vs. VLIW
 VLIW: Multiple independent operations packed together by the compiler

20
SIMD Array Processing vs. VLIW
 Array processor: Single operation on multiple (different) data elements

21
Vector Processors
 A vector is a one-dimensional array of numbers
 Many scientific/commercial programs use vectors
for (i = 0; i<=49; i++)
C[i] = (A[i] + B[i]) / 2

 A vector processor is one whose instructions operate on vectors rather than scalar (single data) values
 Basic requirements
 Need to load/store vectors → vector registers (contain vectors)
 Need to operate on vectors of different lengths → vector length register (VLEN)
 Elements of a vector might be stored apart from each other in memory → vector stride register (VSTR)
 Stride: distance between two elements of a vector

22
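A minimal scalar model of what these control registers buy you (a sketch; the function and parameter names are illustrative): a strided vector load reads VLEN elements spaced VSTR apart into a vector register.

/* Scalar model of a strided vector load (VLD): reads vlen elements,
 * each vstr elements apart, starting at base. */
void vld(double *vreg, const double *base, int vlen, int vstr) {
    for (int i = 0; i < vlen; i++)
        vreg[i] = base[(long)i * vstr];  /* stride measured in elements */
}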
Vector Processors (II)
 A vector instruction performs an operation on each element
in consecutive cycles
 Vector functional units are pipelined
 Each pipeline stage operates on a different data element

 Vector instructions allow deeper pipelines
 No intra-vector dependencies → no hardware interlocking within a vector
 No control flow within a vector
 Known stride allows prefetching of vectors into registers/cache/memory

23
Vector Processor Advantages
+ No dependencies within a vector
 Pipelining, parallelization work well
 Can have very deep pipelines, no dependencies!

+ Each instruction generates a lot of work
 Reduces instruction fetch bandwidth requirements

+ Highly regular memory access pattern
 Can interleave vector data elements across multiple memory banks for higher memory bandwidth (to tolerate memory bank access latency)
 Prefetching a vector is relatively easy

+ No need to explicitly code loops
 Fewer branches in the instruction sequence
24
Vector Processor Disadvantages
-- Works (only) if parallelism is regular (data/SIMD parallelism)
++ Vector operations
-- Very inefficient if parallelism is irregular
-- How about searching for a key in a linked list?

Fisher, “Very Long Instruction Word architectures and the ELI-512,” ISCA 1983. 25
Vector Processor Limitations
-- Memory (bandwidth) can easily become a bottleneck,
especially if
1. compute/memory operation balance is not maintained
2. data is not mapped appropriately to memory banks

26
Vector Processing in More Depth
Vector Registers
 Each vector data register holds N M-bit values
 Vector control registers: VLEN, VSTR, VMASK
 Maximum VLEN can be N
 Maximum number of elements stored in a vector register
 Vector Mask Register (VMASK)
 Indicates which elements of vector to operate on

 Set by vector test instructions

 e.g., VMASK[i] = (Vk[i] == 0)


[Figure: two vector data registers, V0 and V1, each holding elements V0,0 … V0,N-1 and V1,0 … V1,N-1; each element is M bits wide]

28
Vector Functional Units
 Use deep pipeline to execute element operations → fast clock cycle

 Control of deep pipeline is simple because elements in vector are independent

[Figure: a six-stage multiply pipeline computing V1 * V2 → V3]

Slide credit: Krste Asanovic 29


Vector Machine Organization (CRAY-1)
 CRAY-1
 Russell, “The CRAY-1
computer system,”
CACM 1978.

 Scalar and vector modes
 8 64-element vector registers
 64 bits per element
 16 memory banks
 8 64-bit scalar registers
 8 24-bit address registers

30
Loading/Storing Vectors from/to Memory
 Requires loading/storing multiple elements

 Elements separated from each other by a constant distance (stride)
 Assume stride = 1 for now

 Elements can be loaded in consecutive cycles if we can start the load of one element per cycle
 Can sustain a throughput of one element per cycle

 Question: How do we achieve this with a memory that takes more than 1 cycle to access?
 Answer: Bank the memory; interleave the elements across banks
31
Memory Banking
 Memory is divided into banks that can be accessed independently;
banks share address and data buses (to minimize pin cost)
 Can start and complete one bank access per cycle
 Can sustain N parallel accesses if all N go to different banks

[Figure: Banks 0, 1, 2, …, 15, each with its own MDR and MAR, connected to the CPU over shared data and address buses]
Picture credit: Derek Chiou 32
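A sketch of the word-interleaved mapping in the figure (assuming 16 banks and word addresses; the function names are illustrative): the low bits of the word address pick the bank, so consecutive elements land in consecutive banks and their accesses can overlap.

/* Word-interleaved banking: bank = low 4 bits, row = remaining bits. */
unsigned bank_of(unsigned word_addr) { return word_addr % 16; }
unsigned row_of(unsigned word_addr)  { return word_addr / 16; }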
Vector Memory System
 Next address = Previous address + Stride
 If stride = 1 & consecutive elements interleaved across
banks & number of banks >= bank latency, then can
sustain 1 element/cycle throughput
[Figure: base and stride registers feed an address generator (+), which indexes 16 memory banks (0–F) that supply the vector registers]
Picture credit: Krste Asanovic 33
Scalar Code Example
 For i = 0 to 49
 C[i] = (A[i] + B[i]) / 2

 Scalar code (instruction and its latency)


MOVI R0 = 50 1
MOVA R1 = A 1 304 dynamic instructions
MOVA R2 = B 1
MOVA R3 = C 1
X: LD R4 = MEM[R1++] 11 ;autoincrement addressing
LD R5 = MEM[R2++] 11
ADD R6 = R4 + R5 4
SHFR R7 = R6 >> 1 1
ST MEM[R3++] = R7 11
DECBNZ --R0, X 2 ;decrement and branch if NZ
34
Scalar Code Execution Time (In Order)
 Scalar execution time on an in-order processor with 1 bank
 First two loads in the loop cannot be pipelined: 2*11 cycles
 Each iteration takes 11 + 11 + 4 + 1 + 11 + 2 = 40 cycles
 4 + 50*40 = 2004 cycles

 Scalar execution time on an in-order processor with 16 banks (word-interleaved: consecutive words are stored in consecutive banks)
 First two loads in the loop can be pipelined, so each iteration takes 30 cycles
 4 + 50*30 = 1504 cycles

 Why 16 banks?
 11 cycle memory access latency
 Having 16 (>11) banks ensures there are enough banks to
overlap enough memory operations to cover memory latency
35
Vectorizable Loops
 A loop is vectorizable if each iteration is independent of any
other

 For i = 0 to 49
 C[i] = (A[i] + B[i]) / 2
 Vectorized loop (each instruction and its latency); 7 dynamic instructions:
MOVI VLEN = 50 1
MOVI VSTR = 1 1
VLD V0 = A 11 + VLEN - 1
VLD V1 = B 11 + VLEN - 1
VADD V2 = V0 + V1 4 + VLEN - 1
VSHFR V3 = V2 >> 1 1 + VLEN - 1
VST C = V3 11 + VLEN - 1
36
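For reference, the same loop in C (a sketch; the function name is illustrative). The >> 1 matches the VSHFR in the vector code and equals division by 2 for non-negative sums.

/* The source loop the vector code above implements. */
void avg50(const int *A, const int *B, int *C) {
    for (int i = 0; i <= 49; i++)
        C[i] = (A[i] + B[i]) >> 1;   /* VADD, then VSHFR */
}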
Basic Vector Code Performance
 Assume no chaining (no vector data forwarding)
 i.e., output of a vector functional unit cannot be used as the
direct input of another
 The entire vector register needs to be ready before any
element of it can be used as part of another operation
 One memory port (one address generator)
 16 memory banks (word-interleaved)

Execution timeline (cycles): 1 (MOVI), 1 (MOVI), 11 + 49 (VLD V0 = A[0..49]), 11 + 49 (VLD V1 = B[0..49]), 4 + 49 (VADD), 1 + 49 (VSHFR), 11 + 49 (VST), all serialized

 1 + 1 + 60 + 60 + 53 + 50 + 60 = 285 cycles
37
Vector Chaining
 Vector chaining: Data forwarding from one vector
functional unit to another

LV v1
MULV v3, v1, v2
ADDV v5, v3, v4

[Figure: registers V1–V5; the load unit’s result (v1) is chained into the multiplier, and the multiplier’s result (v3) is chained into the adder, so each dependent instruction starts as soon as the first element of its source arrives]

Slide credit: Krste Asanovic 38


Vector Code Performance - Chaining
 Vector chaining: Data forwarding from one vector
functional unit to another
Execution timeline (cycles): 1, 1 (MOVIs); 11 + 49 (VLD V0); 11 + 49 (VLD V1); 4 + 49 (VADD, chained); 1 + 49 (VSHFR, chained); 11 + 49 (VST)

Strict assumption: each memory bank has a single port (memory bandwidth bottleneck)
The two VLDs cannot be pipelined with each other. WHY? They compete for the single memory port.
VLD and VST cannot be pipelined with each other. WHY? Same reason: one memory port.

 182 cycles
39
Vector Code Performance – Multiple Memory Ports
 Chaining and 2 load ports, 1 store port in each bank: all vector instructions overlap
Execution timeline: the second VLD starts 1 cycle after the first; with full chaining, the first element streams through in 1 + 1 + 11 + 1 + 4 + 1 + 11 = 30 cycles, and the remaining 49 elements complete one per cycle

 79 cycles
 19X perf. improvement over the 1504-cycle scalar version!
40
Questions (I)
 What if # data elements > # elements in a vector register?
 Idea: Break loops so that each iteration operates on #
elements in a vector register
 E.g., 527 data elements, 64-element VREGs
 8 iterations where VLEN = 64
 1 iteration where VLEN = 15 (need to change value of VLEN)
 Called vector stripmining
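A stripmined version of the running example in C (a sketch; MAX_VLEN and the function name are illustrative):

/* Stripmining: process N elements in chunks of at most MAX_VLEN;
 * the final chunk shortens VLEN. */
#define MAX_VLEN 64
void stripmined_avg(const int *A, const int *B, int *C, int N) {
    for (int i = 0; i < N; i += MAX_VLEN) {
        int vlen = (N - i < MAX_VLEN) ? (N - i) : MAX_VLEN;
        /* conceptually: set VLEN = vlen, then VLD/VADD/VSHFR/VST */
        for (int j = 0; j < vlen; j++)
            C[i + j] = (A[i + j] + B[i + j]) >> 1;
    }
}

For 527 elements this yields 8 full 64-element iterations plus one final iteration with VLEN = 15, as above.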

 What if vector data is not stored in a strided fashion in memory? (irregular memory access to a vector)
 Idea: Use indirection to combine/pack elements into vector registers
 Called scatter/gather operations

41
Gather/Scatter Operations
 Want to vectorize loops with indirect accesses:
for (i=0; i<N; i++)
A[i] = B[i] + C[D[i]]

 Indexed load instruction (Gather):


LV vD, rD # Load indices in D vector
LVI vC, rC, vD # Load indirect from rC base
LV vB, rB # Load B vector
ADDV.D vA,vB,vC # Do add
SV vA, rA # Store result

42
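Scalar models of the indexed load and its store dual (a sketch; the names are illustrative): the hardware performs the same base-plus-index address arithmetic, one element per lane.

/* Gather (as in LVI above): vC[i] = rC_base[vD[i]] */
void gather(double *vC, const double *rC, const int *vD, int vlen) {
    for (int i = 0; i < vlen; i++)
        vC[i] = rC[vD[i]];
}

/* Scatter (the indexed store dual): rA_base[vD[i]] = vA[i] */
void scatter(double *rA, const int *vD, const double *vA, int vlen) {
    for (int i = 0; i < vlen; i++)
        rA[vD[i]] = vA[i];
}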
Gather/Scatter Operations
 Gather/scatter operations often implemented in hardware
to handle sparse matrices
 Vector loads and stores use an index vector which is added
to the base register to generate the addresses
Index Vector   Data Vector (to Store)   Stored Vector (in Memory)
0              3.14                     Base+0  3.14
2              6.5                      Base+1  X
6              71.2                     Base+2  6.5
7              2.71                     Base+3  X
                                        Base+4  X
                                        Base+5  X
                                        Base+6  71.2
                                        Base+7  2.71

43
Conditional Operations in a Loop
 What if some operations should not be executed on a vector
(based on a dynamically-determined condition)?
loop: if (a[i] != 0) then b[i]=a[i]*b[i]
goto loop

 Idea: Masked operations
 VMASK register is a bit mask determining which data elements should not be acted upon
VLD V0 = A
VLD V1 = B
VMASK = (V0 != 0)
VMUL V1 = V0 * V1
VST B = V1
 Does this look familiar? This is essentially predicated execution.
44
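A scalar model of the masked sequence above (a sketch; array names follow the slide): lanes whose mask bit is 0 keep their old value.

/* Masked VMUL: V1[i] = V0[i] * V1[i] only where VMASK[i] is set. */
void masked_mul(int *V1, const int *V0, int vlen) {
    for (int i = 0; i < vlen; i++) {
        int vmask = (V0[i] != 0);       /* VMASK = (V0 != 0) */
        if (vmask)
            V1[i] = V0[i] * V1[i];      /* masked element update */
    }
}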
Another Example with Masking
for (i = 0; i < 64; ++i)
    if (a[i] >= b[i])
        c[i] = a[i]
    else
        c[i] = b[i]

Steps to execute the loop in SIMD code:
1. Compare A, B to get VMASK
2. Masked store of A into C
3. Complement VMASK
4. Masked store of B into C

A    B    VMASK
1    2    0
2    2    1
3    2    1
4    10   0
-5   -4   0
0    -3   1
6    5    1
-7   -8   1
45
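The same if/else pattern on a real SIMD ISA (a sketch using x86 AVX intrinsics; assumes an AVX-capable CPU and n a multiple of 8): the compare produces a per-lane mask, and a blend performs both masked stores in one step.

#include <immintrin.h>

/* c[i] = (a[i] >= b[i]) ? a[i] : b[i], 8 floats per iteration. */
void select_ge(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 m  = _mm256_cmp_ps(va, vb, _CMP_GE_OQ);  /* VMASK */
        /* blend: take va where the mask is set, vb elsewhere */
        _mm256_storeu_ps(c + i, _mm256_blendv_ps(vb, va, m));
    }
}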
Masked Vector Instructions
Simple Implementation: execute all N operations, turn off result writeback according to mask
Density-Time Implementation: scan mask vector and only execute elements with non-zero masks

[Figure: the simple implementation computes A[i] op B[i] in every lane M[7..0] and gates each result’s write enable with M[i]; the density-time implementation compresses out the masked-off elements and sends only the enabled results to the write data port]

Which one is better? Tradeoffs?
Slide credit: Krste Asanovic 46
Some Issues
 Stride and banking
 As long as they are relatively prime to each other and there
are enough banks to cover bank access latency, we can
sustain 1 element/cycle throughput

 Storage of a matrix
 Row major: Consecutive elements in a row are laid out
consecutively in memory
 Column major: Consecutive elements in a column are laid out
consecutively in memory
 You need to change the stride when accessing a row versus
column

47
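A quick way to check the relatively-prime condition (a sketch; 16 banks and an 11-cycle bank latency as in the earlier example, with simple modulo interleaving): full throughput requires that any window of BANK_LATENCY consecutive accesses touch distinct banks.

#include <stdbool.h>

/* Can a given element stride sustain 1 element/cycle with
 * NUM_BANKS modulo-interleaved banks of BANK_LATENCY cycles each? */
#define NUM_BANKS    16
#define BANK_LATENCY 11

bool full_rate(unsigned stride) {
    bool busy[NUM_BANKS] = { false };
    unsigned bank = 0;
    for (int i = 0; i < BANK_LATENCY; i++) {
        if (busy[bank]) return false;   /* bank conflict -> stall */
        busy[bank] = true;
        bank = (bank + stride) % NUM_BANKS;
    }
    return true;                        /* e.g., stride 1 or 3: yes; 4: no */
}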
Minimizing Bank Conflicts
 More banks

 Better data layout to match the access pattern
 Is this always possible?

 Better mapping of address to bank
 E.g., randomized mapping
 Rau, “Pseudo-randomly interleaved memory,” ISCA 1991.

49
Array vs. Vector Processors, Revisited
 Array vs. vector processor distinction is a “purist’s” distinction

 Most “modern” SIMD processors are a combination of both
 They exploit data parallelism in both time and space
 GPUs are a prime example we will cover in a bit more detail

50
Remember: Array vs. Vector Processors
ARRAY PROCESSOR vs. VECTOR PROCESSOR

Instruction stream (both):
LD VR ← A[3:0]
ADD VR ← VR, 1
MUL VR ← VR, 2
ST A[3:0] ← VR

Array processor (same op @ same time across four spaces; different ops across time):
t0: LD0 LD1 LD2 LD3
t1: AD0 AD1 AD2 AD3
t2: MU0 MU1 MU2 MU3
t3: ST0 ST1 ST2 ST3

Vector processor (same op @ same space; different ops @ same time, pipelined through one set of units):
t0: LD0
t1: LD1 AD0
t2: LD2 AD1 MU0
t3: LD3 AD2 MU1 ST0
t4:     AD3 MU2 ST1
t5:         MU3 ST2
t6:             ST3

51
Vector Instruction Execution
VADD A,B → C

Execution using one pipelined functional unit: one element pair enters per cycle, and results C[0], C[1], C[2], … complete in consecutive cycles

Execution using four pipelined functional units: four element pairs enter per cycle, and results complete four at a time (C[0..3], then C[4..7], then C[8..11], …)
Slide credit: Krste Asanovic 52


Vector Unit Structure
[Figure: a four-lane vector unit; the vector register file is partitioned across lanes (elements 0, 4, 8, … in lane 0; elements 1, 5, 9, … in lane 1; elements 2, 6, 10, … in lane 2; elements 3, 7, 11, … in lane 3), and each lane has its own slice of every functional unit plus its own connection to the memory subsystem]
Slide credit: Krste Asanovic 53


Vector Instruction Level Parallelism
Can overlap execution of multiple vector instructions
 Example machine has 32 elements per vector register and 8 lanes
 Completes 24 operations/cycle (8 lanes × 3 functional units) while issuing 1 vector instruction/cycle

[Figure: successive load, mul, and add vector instructions overlap in time across the Load Unit, Multiply Unit, and Add Unit, keeping all three units busy as one instruction issues per cycle]
Slide credit: Krste Asanovic 54


Automatic Code Vectorization
for (i=0; i < N; i++)
C[i] = A[i] + B[i];
Scalar Sequential Code: each iteration performs load, load, add, store in order (Iter. 1, then Iter. 2, …)
Vectorized Code: one vector load, vector load, vector add, vector store covers all iterations (one vector instruction per operation)

Vectorization is a compile-time reordering of operation sequencing
 Requires extensive loop dependence analysis
Slide credit: Krste Asanovic 55
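One example of why dependence analysis matters (a sketch; the function name is illustrative): a loop-carried dependence forces the compiler to keep the loop scalar.

/* Not vectorizable as-is: iteration i reads A[i-1], which the
 * previous iteration just wrote (a loop-carried dependence). */
void prefix_add(float *A, const float *B, int N) {
    for (int i = 1; i < N; i++)
        A[i] = A[i-1] + B[i];
}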
Vector/SIMD Processing Summary
 Vector/SIMD machines are good at exploiting regular data-
level parallelism
 Same operation performed on many data elements
 Improve performance, simplify design (no intra-vector
dependencies)

 Performance improvement limited by vectorizability of code
 Scalar operations limit vector machine performance
 Remember Amdahl’s Law
 CRAY-1 was the fastest SCALAR machine at its time!

 Many existing ISAs include (vector-like) SIMD operations
 Intel MMX/SSEn/AVX, PowerPC AltiVec, ARM Advanced SIMD
56
SIMD Operations in Modern ISAs
Intel Pentium MMX Operations
 Idea: One instruction operates on multiple data elements simultaneously
 À la array processing (yet much more limited)
 Designed with multimedia (graphics) operations in mind
 No VLEN register
 Opcode determines data type:
 8 8-bit bytes
 4 16-bit words
 2 32-bit doublewords
 1 64-bit quadword
 Stride always equal to 1
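A taste of this style on its successors (a sketch using SSE2 intrinsics, since MMX intrinsics are effectively retired; assumes eight 16-bit words per call): one instruction adds 8 words in parallel, just as MMX's packed-add did 4.

#include <emmintrin.h>   /* SSE2: the 128-bit successor of MMX's 64-bit ops */

/* out[0..7] = a[0..7] + b[0..7], all in one packed-add instruction. */
void add8w(const short *a, const short *b, short *out) {
    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)out, _mm_add_epi16(va, vb));
}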

Peleg and Weiser, “MMX Technology Extension to the Intel Architecture,” IEEE Micro, 1996.

58
MMX Example: Image Overlaying (I)
 Goal: Overlay the human in image 1 on top of the background in image 2

59
MMX Example: Image Overlaying (II)

60
