
Computer Architecture: Pipelining Basics

Pipelining is a technique used in computer architecture to overlap the execution of instructions to increase throughput. It breaks down instruction execution into stages, including fetch, decode, execute, memory, and writeback. This allows subsequent instructions to begin execution before previous instructions finish. While pipelining improves performance, it introduces hazards such as structural, data, and control hazards. Solutions to hazards include duplicating resources, forwarding results between stages, freezing the pipeline, branch prediction, and delayed branching.


Computer Architecture

Pipelining Basics
Sequential Processing

In sequential processing, each instruction completes all of its steps before the next instruction begins.


Pipelined Processing

In pipelined processing, the steps of successive instructions are overlapped, so a new instruction can begin each cycle.


Basic Steps of Execution

1. Instruction fetch step (IF)
2. Instruction decode/register fetch step (ID)
3. Execution/effective address step (EX)
4. Memory access (MEM)
5. Register write-back step (WB)
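
To make the five steps concrete, here is a minimal sketch (not from the lecture) that walks a single ADD through them in order; the tuple encoding of instructions and the imem/regs/dmem dictionaries are assumptions made for this illustration.

# Minimal sketch: one instruction passes through IF, ID, EX, MEM, WB in order.
imem = {0: ("ADD", 3, 1, 2)}     # address 0 holds ADD $3, $1, $2
regs = {1: 10, 2: 20, 3: 0}      # register file
dmem = {}                        # data memory (unused by an ALU instruction)

def run_one(pc):
    # IF: fetch the instruction at the current PC
    instr = imem[pc]
    # ID: decode the instruction and read the source registers
    op, rd, rs, rt = instr
    a, b = regs[rs], regs[rt]
    # EX: perform the ALU operation (only ADD is modeled here)
    result = a + b if op == "ADD" else None
    # MEM: an ALU instruction does not access data memory
    # WB: write the result back to the destination register
    regs[rd] = result
    return pc + 4

run_one(0)
print(regs[3])                   # 30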



Pipelined Instruction Execution

 Sequential Execution
 Pipelined Execution

ADD $3, $1, $2
SUB $4, $5, $6
AND $7, $8, $9
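
As a hedged illustration of the overlap, the small Python sketch below prints an ideal pipeline diagram for the three independent instructions above: instruction i occupies stage s in clock cycle i + s. The STAGES list and the print layout are my own choices, not the lab's tooling.

# Print an ideal pipeline diagram: instruction i occupies stage s in clock cycle i + s.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
instrs = ["ADD $3, $1, $2", "SUB $4, $5, $6", "AND $7, $8, $9"]

n_cycles = len(instrs) + len(STAGES) - 1      # 3 + 5 - 1 = 7 cycles in total
for i, text in enumerate(instrs):
    row = ["    "] * n_cycles                 # one 4-character slot per clock cycle
    for s, stage in enumerate(STAGES):
        row[i + s] = f"{stage:<4}"            # this instruction is in stage s at cycle i + s
    print(f"{text:<16}" + " ".join(row))

Executed sequentially, the same three instructions would need 3 × 5 = 15 cycles; pipelined, they finish in 7.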



Basic Pipeline



Major Hurdles of Pipelining

 Structural Hazard
 Data Hazard
 Control Hazard



Structural Hazard

With a single memory port, a data memory access and an instruction fetch need the port in the same clock cycle. In the example below (columns are clock cycles 1 to 9), the load's MEM access in cycle 4 conflicts with the fetch of instruction i + 3, which must therefore stall for one cycle:

Instruction          1     2     3     4     5     6     7     8     9
Load instruction     IF    ID    EX    MEM   WB
Instruction i + 1          IF    ID    EX    MEM   WB
Instruction i + 2                IF    ID    EX    MEM   WB
Instruction i + 3                      stall IF    ID    EX    MEM   WB
Instruction i + 4                            IF    ID    EX    MEM



Solutions to Structural Hazard

 Resource Duplication
 Examples:
− Separate I- and D-caches for the memory-access conflict
− A time-multiplexed or multi-ported register file for the register-file access conflict
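
A rough sketch of why duplication helps, under the simplifying assumption that the pipeline is otherwise full, so each load/store sharing a single memory port steals one fetch cycle; the instruction mix and the cycle accounting below are invented for illustration.

# Sketch: with one shared memory port, each load/store's MEM access collides with a
# later instruction's IF and costs one stall cycle; separate I- and D-caches avoid this.
def structural_stalls(ops, split_caches):
    if split_caches:
        return 0                                           # fetch and data use different ports
    return sum(1 for op in ops if op in ("LW", "SW"))      # one stall per data memory access

program = ["LW", "ADD", "SUB", "LW", "SW", "AND"]          # invented instruction mix
ideal = len(program) + 4                                   # 5-stage fill time + one instruction per cycle
print("single memory port :", ideal + structural_stalls(program, split_caches=False))   # 13 cycles
print("separate I/D caches:", ideal + structural_stalls(program, split_caches=True))    # 10 cycles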



Data Hazard (RAW hazard)

ADD $1, $2, $3   IF    ID    EX    MEM   WB         (writes $1 in its WB stage)
SUB $4, $1, $5         IF    ID    EX    MEM   WB   (reads $1 in its ID stage)

SUB reads $1 in ID before ADD has written the new value back in WB, so without extra measures SUB would read a stale value: a read-after-write (RAW) hazard.
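
A minimal sketch of detecting this RAW dependence, assuming instructions are written as (op, dest, src1, src2) tuples; that encoding is mine, not the slides'.

# Detect a read-after-write (RAW) hazard between a producer and a consumer instruction.
def raw_hazard(producer, consumer):
    _, dest, _, _ = producer                 # register written by the earlier instruction
    _, _, src1, src2 = consumer              # registers read by the later instruction
    return dest in (src1, src2)

add = ("ADD", 1, 2, 3)                       # ADD $1, $2, $3  (writes $1)
sub = ("SUB", 4, 1, 5)                       # SUB $4, $1, $5  (reads  $1)
print(raw_hazard(add, sub))                  # True: SUB needs $1 before ADD has written it back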



Solutions to Data Hazard

 Freezing the pipeline
 (Internal) Forwarding
 Compiler scheduling (sketched below)
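
Freezing and forwarding are illustrated on the next slides; compiler scheduling is not, so here is a hedged sketch of the idea: move an instruction that is independent of a load into the slot right after it, so the load-use delay is filled with useful work. The three-instruction example and the (op, dest, sources) encoding are invented.

# Sketch of compiler scheduling: fill a load-use delay with an independent instruction.
before = [
    ("LW",  1, [6]),        # LW  $1, 32($6)
    ("SUB", 4, [1, 5]),     # SUB $4, $1, $5   uses $1 right after the load -> 1 stall
    ("ADD", 7, [8, 9]),     # ADD $7, $8, $9   independent of the load
]

def schedule(instrs):
    out = list(instrs)
    for i in range(len(out) - 2):
        op, dest, _ = out[i]
        uses_load = dest in out[i + 1][2]        # next instruction needs the loaded value
        filler_ok = dest not in out[i + 2][2]    # candidate filler does not need it
        # (A real scheduler would also check dependences between the two swapped instructions.)
        if op == "LW" and uses_load and filler_ok:
            out[i + 1], out[i + 2] = out[i + 2], out[i + 1]
    return out

for op, dest, srcs in schedule(before):
    print(op, dest, srcs)     # order becomes LW, ADD (filler), SUB: the stall disappears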



Freezing The Pipeline

 ALU result to next instruction

ADD $1, $2, $3   IF    ID    EX    MEM   WB          (writes $1 in WB)
SUB $4, $1, $5   stall stall IF    ID    EX    ...   (reads $1 in ID)

 Load result to next instruction

LW  $1, 32($6)   IF    ID    EX    MEM   WB          (writes $1 in WB)
SUB $4, $1, $5   stall stall IF    ID    EX    ...   (reads $1 in ID)

In both cases the pipeline is simply frozen until the producing instruction has written $1 back.



(Internal) Forwarding

 ALU result to next instruction (no stall)

ADD $1, $2, $3   IF    ID    EX    MEM   WB
SUB $4, $1, $5         IF    ID    EX    MEM   WB    (ADD's ALU result is forwarded directly into SUB's EX stage)

 Load result to next instruction (1 stall)

LW  $1, 32($6)   IF    ID    EX    MEM   WB
SUB $4, $1, $5   stall IF    ID    EX    MEM   ...   (the loaded value is available only after MEM, so one stall remains)
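
A hedged sketch of the forwarding decision, in the spirit of the usual EX/MEM and MEM/WB forwarding conditions; the dictionary-based pipeline registers and field names below are assumptions for illustration, not the lab's actual design.

# Decide where an ALU source operand should come from: the EX/MEM pipeline register,
# the MEM/WB pipeline register, or the ordinary register file.
def forward_select(src_reg, ex_mem, mem_wb):
    # Prefer the most recent producer (EX/MEM), then MEM/WB, otherwise the register file.
    if ex_mem["reg_write"] and ex_mem["rd"] == src_reg and src_reg != 0:
        return "EX/MEM"
    if mem_wb["reg_write"] and mem_wb["rd"] == src_reg and src_reg != 0:
        return "MEM/WB"
    return "REGFILE"

# ADD $1, $2, $3 is one cycle ahead of SUB $4, $1, $5, so its result sits in EX/MEM.
ex_mem = {"reg_write": True,  "rd": 1}
mem_wb = {"reg_write": False, "rd": 0}
print(forward_select(1, ex_mem, mem_wb))   # EX/MEM  -> SUB needs no stall
print(forward_select(5, ex_mem, mem_wb))   # REGFILE -> $5 comes from the register file
# For LW $1, ... followed immediately by SUB, the value exists only after MEM,
# so even with forwarding one stall cycle is still required.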



Control Hazard

 Caused by PC-changing instructions (branch, jump, call/return)

 For a 5-stage pipeline, 3-cycle branch penalty

 With a 15% branch frequency, CPI = 1 + 0.15 × 3 = 1.45
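
The 1.45 figure is just the base CPI of 1 plus the branch frequency times the penalty; the tiny sketch below reproduces that arithmetic so the parameters can be varied.

# Effective CPI when every branch costs a fixed penalty.
def effective_cpi(base_cpi, branch_freq, branch_penalty):
    return base_cpi + branch_freq * branch_penalty

print(f"{effective_cpi(1.0, 0.15, 3):.2f}")   # 1.45, as on the slide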



Solutions to Control Hazard

 Optimized branch processing
 Branch prediction
 Delayed branch



Optimized Branch Processing

1. Determine whether the branch is taken or not as early as possible
   → requires a simplified branch condition

2. Compute the branch target address as early as possible
   → requires extra hardware



Branch Prediction

 Predict-not-taken: keep fetching along the fall-through path and squash the fetched instructions only if the branch turns out to be taken
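
A minimal sketch of how predict-not-taken changes the CPI estimate from the previous slide: the penalty is paid only on branches that are actually taken. The 60% taken fraction below is an assumed number for illustration, not one from the slides.

# Predict-not-taken: the fall-through path is fetched by default, so the penalty
# is paid only on branches that are actually taken.
def cpi_predict_not_taken(branch_freq, taken_frac, penalty):
    return 1.0 + branch_freq * taken_frac * penalty

# 15% branches comes from the earlier slide; the 60% taken fraction is assumed here.
print(f"{cpi_predict_not_taken(0.15, 0.60, 3):.2f}")   # 1.27, versus 1.45 when every branch stalls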



Delayed Branch

 Semantics of delayed branch: the instruction in the branch-delay slot is always executed, whether or not the branch is taken; the branch takes effect only after the delay-slot instruction
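
A hedged sketch of that semantics, using a toy instruction list of my own: the instruction after the branch always executes, and only then does the PC move to the branch target (or fall through).

# Sketch: one branch-delay slot. The instruction after the branch always executes;
# the branch outcome only redirects the PC afterwards.
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "BEQ":
            rs, rt, target = args
            taken = regs[rs] == regs[rt]
            # Execute the delay-slot instruction unconditionally (only ADD is modeled).
            slot_op, rd, a, b = program[pc + 1]
            regs[rd] = regs[a] + regs[b]
            pc = target if taken else pc + 2
        elif op == "ADD":
            rd, a, b = args
            regs[rd] = regs[a] + regs[b]
            pc += 1
    return regs

regs = {1: 5, 2: 5, 3: 0, 4: 0, 5: 1, 6: 2}
program = [
    ("BEQ", 1, 2, 3),          # branch is taken; target is the instruction at index 3
    ("ADD", 3, 5, 6),          # delay slot: executes even though the branch is taken
    ("ADD", 4, 5, 5),          # skipped when the branch is taken
    ("ADD", 4, 6, 6),          # branch target
]
print(run(program, regs))      # $3 == 3 shows the delay slot ran; $4 == 4 comes from the target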



