Module 01 - HPC: Introduction to Pipeline Computing
Introduction to Pipelining
1
Introduction
– Computer performance has been increasing phenomenally over the last five decades.
– As described by Moore's Law:
● Transistors per square inch roughly double every eighteen months.
– Moore's law is not exactly a law:
● but it has held true for nearly 50 years.
2
Moore’s Law
• Processor performance:
• Twice as fast after every 2 years
(roughly).
• Memory capacity:
• Twice as much after every 18 months
(roughly).
4
Interpreting Moore’s Law
● Moore's law is not just about the density of transistors that can be achieved on a chip:
– But about the density of transistors at which the cost per transistor is the lowest.
● As more transistors are made on a chip:
– The cost to make each transistor reduces.
– But the chance that the chip will not work due to a defect rises.
● Moore observed in 1965 that there is a transistor density or complexity:
– At which "a minimum cost" is achieved.
5
How Did Performance Improve?
● Until the 1980s, most of the performance improvements came from innovations in manufacturing technologies:
– VLSI
– Reduction in feature size
● Improvements due to innovations in manufacturing technologies have slowed down since the 1980s:
– Smaller feature sizes give rise to increased resistance, capacitance, and propagation delays.
– Larger power dissipation.
(Aside: What is the power consumption of an Intel Pentium processor? Roughly 100 watts.)
6
How Did Performance Improve?
Cont…
14
Intel Core 2 Duo
• Code named “Conroe”
• Homogeneous cores
• Bus-based chip interconnect.
• Shared on-die cache memory.
Source: Intel Corp.
17
Why Sharing On-Die L2?
18
Xeon and Opteron
[Block diagrams: dual-core Intel Xeon and AMD Opteron system organizations, showing the dual-core chips, memory controller hubs, I/O hubs, and PCI-E bridges.]
20
WHY
Now as for why, let me also try to explain that.
There are numerous reasons; the FSB is one, but not the only one.
The biggest factor is something called instructions per clock cycle (IPC).
The GHz number of a processor shows how fast the clock cycles are.
But the Core 2 Duo has a much higher IPC, meaning it processes more data each clock cycle than the Pentium D.
Put in its simplest terms, think of it this way: say the Pentium D has 10 clock cycles in a given amount of time, while the Core 2 Duo has only 6.
But the Core 2 Duo can process 10 instructions per clock cycle, while the Pentium D can process only 4.
In the given amount of time, the Core 2 Duo would process 60 instructions, but the Pentium D only 40.
This is why it actually performs faster even with a lower clock speed.
21
Today’s Objectives
● Study some preliminary concepts:
– Amdahl’s law, performance
benchmarking, etc.
● RISC versus CISC architectures.
● Types of parallelism in programs
versus types of parallel
computers.
● Basic concepts in pipelining.
22
GO TO SLIDE NO 69
23
Measuring performance
● Real applications
– Problems occur due to dependencies on the OS or compiler.
● Modified applications
– Modified to enhance portability, or to focus on one particular aspect of system performance.
● Kernels
– Used to isolate the performance of individual features of the machine.
24
Toy Benchmarks
● The performance of different
computers can be compared by running
some standard programs:
– Quick sort, Merge sort, etc.
● But, the basic problem remains:
– Even if you select based on a toy
benchmark, the system may not perform
well in a specific application.
– What can be a solution then?
25
Synthetic Benchmarks
● Basic Principle: Analyze the distribution of
instructions over a large number of practical
programs.
● Synthesize a program that has the same
instruction distribution as a typical program:
– Need not compute something meaningful.
● Dhrystone, Khornerstone, Linpack are some of the older synthetic benchmarks:
– More recent is SPEC (Standard Performance Evaluation Corporation).
26
SPEC Benchmarks
● SPEC: Standard Performance Evaluation
Corporation:
– A non-profit organization (www.spec.org)
● CPU-intensive benchmarks for evaluating the processor performance of workstations:
– Generations: SPEC89, SPEC92, SPEC95,
and SPEC2000 …
– Emphasizing memory system performance
in SPEC2000.
27
Problems with Benchmarks
● The SPEC89 benchmark included a small kernel called Matrix300:
– Consists of 8 different 300×300 matrix operations.
– Optimization of this inner loop resulted in a performance improvement by a factor of 9.
● Compiler optimization can discard up to 25% of the Dhrystone code.
● Solution: use a benchmark suite.
28
Other SPEC Benchmarks
● SPECviewperf: 3D graphics performance
– For applications such as CAD/CAM, visualization, content creation, etc.
● SPEC JVM98: performance of client-side
Java virtual machine.
● SPEC JBB2000: Server-side Java application
● SPEC WEB2005: evaluating WWW servers
– Contains multiple workloads utilizing both http
and https, dynamic content implemented in PHP
and JSP.
29
BAPCo
● Non-profit consortium
www.bapco.com
● SYSmark 2004 SE
– Office productivity benchmark
30
Instruction Set Architecture
(ISA)
● Programmer visible part of a processor:
– Instruction Set (what operations can be
performed?)
– Instruction Format (how are instructions
specified?)
– Registers (where are data located?)
– Addressing Modes (how is data accessed?)
– Exceptional Conditions (what happens if
something goes wrong?)
31
ISA cont…
● ISA is important:
– Not only from the programmer’s
perspective.
– From processor design and
implementation perspectives as well.
32
Evolution of Instruction Sets
Single Accumulator
(Manchester Mark I,
IBM 700 series 1953)
Stack
(Burroughs, HP-3000 1960-70)
33
Different Types of ISAs
36
Types of GPR Computers
● Register-Register (0,3)
● Register-Memory (1,2)
● Memory-Memory (2,2) or (3,3)
37
Quantitative Principle of
Computer Design
● MAKE THE COMMON CASE FAST
– Favor the frequent case over the infrequent case.
– Improvement is easier for the frequent case.
– Quantifies the overall performance gain due to an improvement in a part of the computation.
38
Computer System Components
[Block diagram: the CPU and its caches sit on the processor-memory bus; a bus adapter connects to RAM and to peripheral buses, whose controllers attach I/O devices such as displays, keyboards, and networks.]
39
Amdahl’s Law
● Quantifies the overall performance gain due to improvement in a part of a computation (CPU bound).
● Amdahl's Law:
– The performance improvement gained from using some faster mode of execution is limited by the amount of time the enhancement is actually used.
Speedup = (Performance for entire task using enhancement when possible) / (Performance for entire task without using enhancement)
OR
Speedup = (Execution time for the task without enhancement) / (Execution time for the task using enhancement)
40
Amdahl’s Law and Speedup
● Speedup tells us:
– How much faster a machine will run due to an enhancement.
● To apply Amdahl's law, two factors must be considered:
– 1st: FRACTION ENHANCED, the fraction of the computation time in the original machine that can use the enhancement.
– It is always less than or equal to 1.
● If a program executes in 30 seconds and 15 seconds of the execution can use the enhancement, the fraction = 1/2.
41
Amdahl’s Law and Speedup
– 2nd: SPEEDUP ENHANCED, the improvement gained by the enhanced mode, i.e. how much faster the task would run if the enhanced mode were used for the entire program.
– It is the execution time of the original mode divided by the execution time of the enhanced mode.
– It is always greater than 1.
● If the enhanced task takes 3.5 seconds and the original task took 7 seconds, we say the speedup is 2.
42
Amdahl’s Law Equations
Execution time_new = Execution time_old × [(1 – Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced]

Speedup_overall = Execution time_old / Execution time_new
                = 1 / [(1 – Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced]

Using the previous equation with Fraction_enhanced = 0.4 and Speedup_enhanced = 10:
Speedup_overall = 1 / [(1 – 0.4) + (0.4 / 10)] = 1 / 0.64 ≈ 1.56
44
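A minimal Python sketch of the overall-speedup formula above (the function name amdahl_speedup and the 0.4 / 10 values simply restate the worked example; they are not part of the slides):

    def amdahl_speedup(fraction_enhanced, speedup_enhanced):
        # Speedup_overall = 1 / [(1 - F) + F / S]
        return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

    # Worked example: 40% of the time can use a mode that is 10x faster
    print(amdahl_speedup(0.4, 10))   # 1.5625, i.e. ~1.56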
Corollary of Amdahl’s Law
1. Amdahl's law expresses the law of diminishing returns: the incremental improvement in speedup gained by an additional improvement in the performance of just a portion of the computation diminishes as improvements are added.
45
Amdahl’s Law Example
Assume that we make an enhancement to a computer that improves some mode of execution by a factor of 10. The enhanced mode is used 50% of the time, measured as a percentage of the execution time when the enhanced mode is in use. Recall that Amdahl's law depends on the fraction of the original, unenhanced execution time that could make use of the enhanced mode, so we cannot directly use this 50% measurement to compute the speedup with Amdahl's law. What is the speedup we have obtained from the fast mode? What percentage of the original execution time has been converted to fast mode?
● Solution: Note first that if an enhancement is only usable for a fraction of a task, we cannot speed up the task by more than the reciprocal of (1 – Fraction).
● Let the enhanced execution time be T. Half of it (0.5T) runs in the enhanced mode and would have taken 10 × 0.5T = 5T without the enhancement; the other 0.5T is unaffected.
● Original execution time = 5T + 0.5T = 5.5T, so Speedup = 5.5T / T = 5.5.
● Fraction of the original execution time converted to fast mode = 5T / 5.5T ≈ 0.909, i.e. about 91%.
● Check with Amdahl's law: Speedup_overall = 1 / [(1 – 0.909) + 0.909 / 10] ≈ 1 / 0.182 ≈ 5.5.
46
Amdahl’s Law -Example
● A common transformation required in graphics engines is SQRT. Implementations of FP square root (FPSQRT) vary significantly in performance, especially among processors designed for graphics. Suppose FPSQRT is responsible for 20% of the execution time of a critical graphics benchmark. One proposal is to enhance the FPSQRT hardware and speed up this operation by a factor of 10. The other alternative is simply to make all FP instructions in the graphics processor run faster by a factor of 1.6; FP instructions are responsible for a total of 50% of the execution time of the application. The design team believes that it can make all FP instructions run 1.6 times faster with the same effort as required for the fast SQRT. Compare these two design alternatives and suggest which one is better.
● Solution:
Design 1 (fast FPSQRT): Fraction_enhanced = 0.2, Speedup_enhanced = 10
Speedup_FPSQRT = 1 / [(1 – 0.2) + (0.2 / 10)] = 1 / 0.82 ≈ 1.22
Design 2 (faster FP): Fraction_enhanced = 0.5, Speedup_enhanced = 1.6
Speedup_FP = 1 / [(1 – 0.5) + (0.5 / 1.6)] = 1 / 0.8125 ≈ 1.23
Improving the performance of the FP operations overall is slightly better because of their higher frequency.
47
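The same comparison as a quick Python check (fractions and speedups taken from the example above):

    design1 = 1.0 / ((1 - 0.2) + 0.2 / 10)    # enhance FPSQRT only, 10x faster
    design2 = 1.0 / ((1 - 0.5) + 0.5 / 1.6)   # all FP instructions 1.6x faster
    print(round(design1, 2), round(design2, 2))   # 1.22 1.23 -> design 2 is slightly better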
High Performance
Computing
Lecture-2:
A Few Basic Concepts
48
CPU Performance Equation
● Computers use a clock running at a constant rate.
● CPU time = CPU clock cycles for a program × Clock cycle time
           = CPU clock cycles for a program / Clock rate
● CPI can be defined in terms of the number of clock cycles and the instruction count (IPC is the reciprocal of CPI):
CPI = CPU clock cycles for a program / Instruction count
49
CPU Performance Equation
● CPU time = Instruction count × Cycles per instruction (CPI) × Clock cycle time.
50
CPU Performance Equation
● CPU clock cycles = Σ (i = 1 to n) IC_i × CPI_i
where IC_i is the number of times instruction i is executed in the program, and CPI_i is the average number of clock cycles for instruction i.
● CPI_overall = [ Σ (i = 1 to n) IC_i × CPI_i ] / Instruction count
51
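A small Python sketch of these equations; the instruction classes, counts, and the 1 ns (1 GHz) cycle time below are made-up placeholders, not values from the slides:

    # CPU clock cycles = sum over instruction types of IC_i * CPI_i
    ic  = {"alu": 50e6, "load_store": 30e6, "branch": 20e6}   # hypothetical counts
    cpi = {"alu": 1, "load_store": 2, "branch": 3}            # hypothetical CPIs

    clock_cycles = sum(ic[t] * cpi[t] for t in ic)
    instruction_count = sum(ic.values())
    cpi_overall = clock_cycles / instruction_count            # weighted-average CPI
    cpu_time = instruction_count * cpi_overall * 1e-9         # IC x CPI x cycle time (1 ns)
    print(cpi_overall, cpu_time)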
Performance Measurements
● Performance measurement is
important:
– Helps us to determine if one
processor (or computer) works
faster than another.
– A computer exhibits higher
performance if it executes programs
faster.
52
Clock-Rate Based Performance
Measurement
● Comparing performance based on clock rates alone is meaningless:
– Execution time = Instruction count × CPI × Clock cycle time
– Please remember:
● A lower CPI by itself need not mean better performance.
● Also, a processor with a higher clock rate need not be the faster one.
54
MIPS and MFLOPS
● Used extensively 30 years back.
● MIPS: Millions of Instructions Per Second.
● MFLOPS: Millions of FLoating-point OPerations per Second.
55
Problems with MIPS
● Three significant problems with using MIPS:
● So severe that someone coined the expansion:
– “Meaningless Information about Processing Speed”
● Problem 1:
– MIPS is instruction set dependent.
56
Problems with MIPS
cont…
● Problem 2:
– MIPS varies between programs on
the same computer.
● Problem 3:
– MIPS can vary inversely to
performance!
● Let’s look at an example as to why
MIPS doesn’t work…
57
A MIPS Example
● Consider the following computer:
Instruction counts (in millions) for each instruction class
Code type      A (1 cycle)   B (2 cycles)   C (3 cycles)
Compiler 1          5              1              1
Compiler 2         10              1              1
The machine runs at 100 MHz. Instruction A requires 1 clock cycle, instruction B requires 2 clock cycles, and instruction C requires 3 clock cycles.
CPI_1 = (5×1 + 1×2 + 1×3) / 7 ≈ 1.43          CPI_2 = (10×1 + 1×2 + 1×3) / 12 = 1.25
MIPS_1 = 100 MHz / 1.43 ≈ 69.9                MIPS_2 = 100 MHz / 1.25 = 80
CPU Time_1 = (7 × 10^6 × 1.43) / (100 × 10^6) = 0.10 seconds
CPU Time_2 = (12 × 10^6 × 1.25) / (100 × 10^6) = 0.15 seconds
Compiler 2 yields the higher MIPS rating but the longer execution time: here MIPS varies inversely to performance.
61
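A quick Python check of this example (the per-class counts are the table values; the exact arithmetic gives MIPS_1 = 70, while the slide's 69.9 comes from rounding the CPI to 1.43):

    clock_hz = 100e6
    # map: cycles per instruction -> number of instructions of that class (from the table)
    mixes = {"compiler1": {1: 5e6, 2: 1e6, 3: 1e6},
             "compiler2": {1: 10e6, 2: 1e6, 3: 1e6}}

    for name, mix in mixes.items():
        instructions = sum(mix.values())
        cycles = sum(c * n for c, n in mix.items())
        cpi = cycles / instructions
        mips = clock_hz / (cpi * 1e6)
        cpu_time = cycles / clock_hz
        print(name, round(cpi, 2), round(mips, 1), round(cpu_time, 2))
    # compiler2 has the higher MIPS (80 vs ~70) yet the longer CPU time (0.15 s vs 0.10 s)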
CPU Performance Equation
● Suppose we have made the following measurements:
– Frequency of FP operations (other than FPSQRT) = 25%
– Average CPI of FP operations = 4.0
– Average CPI of other operations = 1.33
– Frequency of FPSQRT = 2%
– CPI of FPSQRT = 20
– Assume that the two design alternatives are to decrease the CPI of FPSQRT to 2, or to decrease the average CPI of all FP operations to 2.5. Compare these two design alternatives using the CPU performance equation.
● Solution: First observe that only the CPI changes; the clock rate and instruction count remain identical. We start by finding the original CPI without enhancement:
CPI_original = CPU clock cycles / Instruction count = [ Σ (i = 1 to n) CPI_i × IC_i ] / Instruction count
             = (4 × 25%) + (1.33 × 75%) ≈ 2
We can compute the CPI for the enhanced FPSQRT by subtracting the cycles saved from the original CPI:
CPI_with new FPSQRT = CPI_original – [2% × (CPI_old FPSQRT – CPI_new FPSQRT)]
                    = 2 – [0.02 × (20 – 2)] = 1.64
62
CPU Performance Equation
● We can compute the CPI for the enhancement of all FP instructions the same way, or by adding the FP and non-FP CPI contributions:
● CPI_new FP = (75% × 1.33) + (25% × 2.5) = 1.6225
● Since the CPI of the overall FP enhancement is slightly lower, its performance will be marginally better. Specifically, the speedups are:
● Speedup_overall for FP = 2 / 1.6225 ≈ 1.23
● Speedup_overall for FPSQRT = 2 / 1.64 ≈ 1.22
63
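A short Python check of both alternatives (the percentages and CPIs are the measurements listed above; speedup is computed as the ratio of old to new CPI, since IC and clock rate are unchanged):

    cpi_original   = 0.25 * 4.0 + 0.75 * 1.33          # ~2.0
    cpi_new_fpsqrt = cpi_original - 0.02 * (20 - 2)     # cycles saved by the fast FPSQRT
    cpi_new_fp     = 0.75 * 1.33 + 0.25 * 2.5           # all FP instructions at CPI 2.5
    print(cpi_original / cpi_new_fpsqrt,                # ~1.22
          cpi_original / cpi_new_fp)                    # ~1.23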
Today’s Objectives
● Study some preliminary concepts:
– Amdahl’s law, performance
benchmarking, etc.
● RISC versus CISC architectures.
● Types of parallelism in programs
versus types of parallel
computers.
● Basic concepts in pipelining.
64
RISC/CISC Controversy
● RISC: Reduced Instruction Set Computer
● CISC: Complex Instruction Set Computer
● Genesis of CISC architecture:
– Implementing commonly used instructions in hardware can lead to significant performance benefits.
– For example, use of an FP processor can lead to performance improvements.
● Genesis of RISC architecture:
– The rarely used instructions can be eliminated to save chip space; the space saved can provide an on-chip cache and a large number of registers.
65
Features of A CISC Processor
● Rich instruction set:
– Some simple, some very complex
● Complex addressing modes:
– Orthogonal addressing (every possible addressing mode for every instruction)
● Many instructions take multiple cycles:
– Large variation in CPI
● Instructions are of variable sizes
● Small number of registers
● Microcode control
● No (or inefficient) pipelining
66
Features of a RISC Processor
● Small number of instructions
● Small number of addressing modes
● Large number of registers (>32)
● Instructions execute in one or two clock cycles
● Uniform-length instructions and fixed instruction format
● Register-register architecture:
– Separate memory instructions (load/store)
● Separate instruction and data caches
● Hardwired control
● Pipelining (why are CISC processors not pipelined?)
67
CISC vs. RISC Organizations
[Block diagrams: CISC organization with a microprogrammed control unit, microprogram control memory, and a unified cache; RISC organization with a hardwired control unit and separate instruction and data caches.]
70
Architectural Classifications
● Flynn's Classification [1966]
– Based on the multiplicity of instruction streams and data streams in a computer.
● Feng's Classification [1972]
– Based on serial versus parallel processing.
● Handler's Classification [1977]
– Determined by the degree of parallelism and pipelining at various subsystem levels.
71
Flynn’s Classification
● SISD (Single Instruction Single Data):
– Uniprocessors.
● MISD (Multiple Instruction Single
Data):
– No practical examples exist
● SIMD (Single Instruction Multiple
Data):
– Specialized processors
● MIMD (Multiple Instruction Multiple
Data):
– General purpose, commercially important
72
SISD
[Diagram: a control unit sends an instruction stream (IS) to a single processing unit, which exchanges a data stream (DS) with a memory module.]
73
SIMD
[Diagram: a single control unit broadcasts one instruction stream (IS) to processing units 1..n, each of which exchanges its own data stream (DS) with its own memory module.]
74
MIMD
[Diagram: control units 1..n each issue their own instruction stream (IS) to their own processing unit, and each processing unit exchanges a data stream (DS) with a memory module.]
75
Classification for MIMD
Computers
● Shared Memory:
– Processors communicate through a shared
memory.
– Typically processors connected to each
other and to the shared memory through a
bus.
● Distributed Memory:
– Processors do not share any physical
memory.
– Processors connected to each other through a network.
76
Shared Memory
● Shared memory located at a
centralized location:
– May consist of several interleaved modules, all at the same distance (access time) from any processor.
– Also called Uniform Memory Access
(UMA) model.
77
Distributed Memory
● Memory is distributed to each processor:
– Improves scalability.
● Non-Uniform Memory Access (NUMA)
– (a) Message passing architectures – No
processor can directly access another
processor’s memory.
– (b) Distributed Shared Memory (DSM)–
Memory is distributed, but the address
space is shared.
78
UMA vs. NUMA Computers
[Diagrams: UMA: processors P1..Pn, each with a cache, share a single main memory over a common bus. NUMA: each processor has its own local main memory, and the processor-memory nodes communicate through an interconnection network.]
82
Parallel Computer Structure
● Emphasis is on parallel processing.
● The basic architectural approaches of parallel computers are:
– Pipelined computers, which perform overlapped computation to exploit temporal parallelism (the task is broken into multiple stages).
– Array processors, which use multiple synchronized arithmetic logic units to achieve spatial parallelism (duplicate hardware performs multiple tasks at once).
– Multiprocessor systems, which achieve asynchronous parallelism through a set of interacting processors with shared resources.
84
Pipelined Computer
● Segments: IF, ID, OF, EX.
● Multiple pipeline cycles.
● A pipeline cycle can be set equal to the delay of the slowest stage.
● Operation of all stages is synchronized under a common clock.
● Interface latches are used between the segments to hold the intermediate results.
85
Scalar and Vector Pipelines
[Diagram: memory feeds an instruction-preprocessing pipeline (IF, ID, OF). Scalar operands are fetched into scalar registers and processed by scalar pipelines SP1..SPn; vector operands are fetched into vector registers and processed by vector pipelines VP1..VPn.]
88
Array Processing
[Diagram: a control unit containing a control processor and control memory handles scalar processing and I/O; a data bus and control lines connect processing elements PE1..PEn, each with its own processor P and memory M. Duplicate hardware performs multiple tasks at once.]
91
Multiprocessor System
[Diagram: processors P1..Pn, each with a local memory LM1..LMn, reach shared-memory modules 1..n through an interprocessor-memory interconnection network (bus, crossbar, or multiport), reach I/O channels through an input-output interconnection network, and signal one another through an interprocessor interrupt network.]
94
GO TO MIPS DATAPATH
SLIDES
95
ILP Exploitation Through
Pipelining
96
Pipelining
● Pipelining incorporates the concept of overlapped execution:
– Used in many everyday applications without our noticing.
● Has proved to be a very popular and successful way to exploit ILP:
– Instruction pipes are used in almost all modern processors.
97
A Pipeline Example
● Consider two alternate ways in which
an engineering college can work:
– Approach 1. Admit a batch of students, and admit the next batch only after the already admitted batch completes (i.e. admit once every 4 years).
– Approach 2. Admit students every year.
– In the second approach:
● Average number of students graduating per
year increases four times.
98
Pipelining
99
Pipelined Execution
Time
100
Advantages of Pipelining
● An n-stage pipeline:
– Can improve performance up to n times.
● Not much investment in hardware:
– No replication of hardware resources
necessary.
– The principle deployed is to keep the
units as busy as possible.
● Transparent to the programmers:
– Easy to use
101
Basic Pipelining Terminologies
● Pipeline cycle (or Processor cycle):
– The time required to move an instruction
one step further in the pipeline.
– Not to be confused with clock cycle.
● Synchronous pipeline:
– Pipeline cycle is constant (clock-driven).
● Asynchronous pipeline:
– Time for moving from stage to stage varies
– Handshaking communication between stages
102
Principle of linear pipelining
● Assembly lines, where items are assembled continuously from separate parts along a moving conveyor belt.
● Partitioning of the assembly depends on factors such as:
– Quality of the working units
– Desired processing speed
– Cost effectiveness
● Ideally, all assembly stations should have equal processing speed.
● The slowest station becomes the bottleneck, or point of congestion.
103
Precedence relation
● A set of subtasks {T1, T2, ..., Tn} for a given task T, such that a subtask Tj cannot start until some earlier subtask Ti (i < j) finishes.
● A pipeline consists of a cascade of processing stages.
● Stages are combinational circuits operating on the data stream flowing through the pipe.
● Stages are separated by high-speed interface latches (holding intermediate results between stages).
● Control must be under a common clock.
104
Synchronous Pipeline
- Transfers between stages are simultaneous.
- One task or operation enters the pipeline per cycle.
[Diagram: stages S1, S2, ..., Sk separated by latches L and driven by a common clock; stage delay τm, latch delay d.]
105
Asynchronous Pipeline
- Transfers are performed when individual stages are ready.
- Handshaking protocol between adjacent stages.
106
A Few Pipeline Concepts
[Diagram: adjacent stages Si and Si+1 separated by a latch; stage delay τm, latch delay d.]
Pipeline cycle τ :
τ = max {τm} + d
Latch delay : d
Pipeline frequency : f
f = 1 / τ
107
Example on Clock period
● Suppose the time delays of the 4 stages are τ1 = 60 ns, τ2 = 50 ns, τ3 = 90 ns, τ4 = 80 ns, and the interface latch has a delay of d = 10 ns. What is the pipeline cycle time?
● The cycle time of this pipeline is τ = max {τm} + d = 90 + 10 = 100 ns.
● Clock frequency of the pipeline: f = 1 / 100 ns = 10 MHz.
● If it were not pipelined, the total delay would be 60 + 50 + 90 + 80 = 280 ns.
108
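A small Python sketch of this calculation (stage and latch delays are the example values above):

    stage_delays_ns = [60, 50, 90, 80]   # tau_1 .. tau_4
    latch_delay_ns = 10                  # d

    cycle_ns = max(stage_delays_ns) + latch_delay_ns   # tau = max{tau_m} + d = 100 ns
    freq_mhz = 1e3 / cycle_ns                          # 1 / (100 ns) = 10 MHz
    unpipelined_ns = sum(stage_delays_ns)              # 280 ns without pipelining
    print(cycle_ns, freq_mhz, unpipelined_ns)          # 100 10.0 280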
Ideal Pipeline Speedup
● A k-stage pipeline processes n tasks in k + (n – 1) clock cycles:
– k cycles for the first task and n – 1 cycles for the remaining n – 1 tasks.
● Total time to process n tasks:
Tk = [k + (n – 1)] τ
● For the non-pipelined processor:
T1 = n k τ
109
Pipeline Speedup Expression
Speedup:
Sk = T1 / Tk = n k τ / ([k + (n – 1)] τ) = n k / [k + (n – 1)]
111
Throughput of pipeline
● Number of tasks that can be completed by the pipeline per unit time:
W = n / Tk = n / ([k + (n – 1)] τ)
112
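A minimal Python sketch of the speedup and throughput formulas (the k, n, and cycle-time values in the example calls are illustrative, not from the slides):

    def pipeline_speedup(k, n):
        # S_k = n*k / (k + n - 1); approaches k as n grows large
        return n * k / (k + n - 1)

    def pipeline_throughput(k, n, cycle_s):
        # W = n / ([k + n - 1] * tau), tasks completed per second
        return n / ((k + n - 1) * cycle_s)

    print(pipeline_speedup(5, 1000))             # ~4.98, close to the ideal factor of 5
    print(pipeline_throughput(5, 1000, 100e-9))  # tasks per second with a 100 ns cycle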
Pipelines: A Few Basic
Concepts
● Pipeline increases instruction throughput:
– But, does not decrease the execution time of
the individual instructions.
– In fact, slightly increases execution time of
each instruction due to pipeline overheads.
● Pipeline overhead arises due to a
combination of:
– Pipeline register delay / Latch between stages
– Clock skew
113
Pipelines: A Few Basic
Concepts
● Pipeline register delay:
– Caused by the setup time of the pipeline registers.
● Clock skew:
– The maximum difference in clock arrival time between any two registers.
● Once the clock cycle is as small as the pipeline overhead:
– No further pipelining would be useful.
– Very deep pipelines may not be useful.
114
Drags on Pipeline Performance
115
Exercise
● Consider an unpipelined processor:
– Takes 4 cycles for ALU and other operations
– 5 cycles for memory operations.
– Assume the relative frequencies:
● ALU and other=60%,
● memory operations=40%
– Cycle time =1ns
● Compute speedup due to pipelining:
– Ignore effects of branching.
– Assume pipeline overhead = 0.2ns
116
Solution
● Average instruction execution time for
large number of instructions:
– Unpipelined = 1 ns × (60% × 4 + 40% × 5) = 4.4 ns
– Pipelined = 1 + 0.2 = 1.2 ns
● Speedup = 4.4 / 1.2 ≈ 3.7 times
117
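The same computation as a short Python check (values taken from the exercise above):

    unpipelined_ns = 1.0 * (0.6 * 4 + 0.4 * 5)   # average cycles per instruction x 1 ns cycle = 4.4 ns
    pipelined_ns = 1.0 + 0.2                     # one cycle per instruction plus 0.2 ns overhead
    print(unpipelined_ns / pipelined_ns)         # ~3.67, i.e. about 3.7x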
Pipeline Hazards
● Hazards can result in incorrect
operations:
– Structural hazards: Two instructions
requiring the same hardware unit at same
time.
– Data hazards: Instruction depends on result
of a prior instruction that is still in pipeline
● Data dependency
– Control hazards: Caused by delay in decisions
about changes in control flow (branches and
jumps).
● Control dependency
118
Pipeline Interlock
● Pipeline interlock:
– Resolving pipeline hazards through hardware mechanisms.
● Interlock hardware detects all hazards:
– Stalls appropriate stages of the pipeline to resolve hazards.
119
MIPS Pipelining Stages
● 5 stages of MIPS Pipeline:
– IF stage:
● Needs access to memory to fetch the instruction.
● Needs an adder to update the PC.
– ID stage:
● Needs access to the register file to read operands.
● Needs an adder (to compute the potential branch target).
– EX stage:
● Needs an ALU.
– MEM stage:
● Needs access to memory.
– WB stage:
● Needs access to the register file to write the result.
120
Further MIPS Enhancements
cont…
122
Summary
Cont…
● Two main types of parallel computers:
– SIMD
– MIMD
● Instruction pipelines are found in almost all
modern processors:
– Exploits instruction-level parallelism
– Transparent to the programmers
● Hazards can slow down a pipeline:
– In the next lecture, we shall examine hazards in
more detail and available ways to resolve hazards.
123
References
[1] J.L. Hennessy & D.A. Patterson, “Computer Architecture: A Quantitative Approach”, Morgan Kaufmann Publishers, 3rd Edition, 2003.
[2] John Paul Shen and Mikko Lipasti, “Modern Processor Design”, Tata McGraw-Hill, 2005.
124