Module 3 Pipelining

Pipelining is a technique that enhances CPU performance by allowing multiple tasks to be processed concurrently through dedicated segments. While it reduces cycle time and increases system throughput, it also introduces complexities and potential hazards such as data, instruction, and structural hazards that can cause stalls. Instruction pipelines typically consist of multiple stages, including fetching, decoding, and executing instructions, each contributing to the overall efficiency of the processing system.

Basic concepts of pipelining:

The performance of a computer can be increased by increasing the performance of the CPU.
One way to do this is to execute more than one task at a time; this procedure is referred to
as pipelining.
The idea of pipelining is to allow processing of a new task to begin even before processing
of the previous task has ended.
Pipelining:
Definition: Pipelining is a technique of decomposing a sequential process into
suboperations, with each subprocess being executed in a special dedicated segment that
operates concurrently with all other segments.
A pipeline can be visualized as a collection of processing segments through which binary
information flows. Each segment performs partial processing dictated by the way the task is
partitioned. The result obtained from the computation in each segment is transferred to the
next segment in the pipeline. The final result is obtained after the data have passed through
all segments.
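
As a concrete illustration of this flow, the following Python sketch models a hypothetical
three-segment arithmetic pipeline computing A[i]*B[i] + C[i] (an assumed example, not taken
from these notes); each segment's output register feeds the next segment on every clock:

def arithmetic_pipeline(A, B, C):
    """3-segment pipeline for A[i]*B[i] + C[i]; one new task enters per clock."""
    n = len(A)
    seg1 = None        # output register of segment 1: latched (A[i], B[i], C[i])
    seg2 = None        # output register of segment 2: (A[i]*B[i], C[i])
    results = []
    for clock in range(n + 2):          # n tasks drain through 3 segments in n + 2 clocks
        # Update back-to-front so all segment registers are clocked "simultaneously"
        if seg2 is not None:            # segment 3: add the latched C to the product
            product, c = seg2
            results.append(product + c)
        seg2 = (seg1[0] * seg1[1], seg1[2]) if seg1 is not None else None   # segment 2: multiply
        seg1 = (A[clock], B[clock], C[clock]) if clock < n else None        # segment 1: latch inputs
    return results

print(arithmetic_pipeline([1, 2, 3], [4, 5, 6], [7, 8, 9]))   # -> [11, 18, 27]

Once the pipeline is full, one result emerges per clock even though each individual task
still passes through all three segments.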
Pipeline Performance:
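The benefit of pipelining can be quantified with a standard calculation (the numeric values
below are illustrative assumptions). A k-segment pipeline with clock period t_p completes n
tasks in (k + n - 1) * t_p clocks, because the first task needs k clocks and every subsequent
task finishes one clock later. A non-pipelined unit taking t_n per task needs n * t_n. The
speedup is therefore S = (n * t_n) / ((k + n - 1) * t_p). For example, with k = 4 segments,
t_p = 20 ns, n = 100 tasks and t_n = 4 * 20 ns = 80 ns: pipelined time = 103 * 20 ns = 2060 ns,
non-pipelined time = 100 * 80 ns = 8000 ns, giving S = 8000 / 2060 ≈ 3.88. As n grows, S
approaches the theoretical maximum of k.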

Advantages of Pipelining:
1. The cycle time of the processor is reduced.
2. It increases the throughput of the system.
3. It allows more efficient use of the processor hardware, since all segments operate in parallel.
Disadvantages of Pipelining:
1. The design of a pipelined processor is complex and costly to manufacture.
2. The latency of an individual instruction increases.
Instruction Pipeline:
Four-segment instruction pipeline:
Six-segment instruction pipeline:
If instruction processing is split into six phases, the instruction pipeline will have six
different segments for the execution of an instruction.
Let us consider the following decomposition of the instruction execution:
● Fetch Instruction (FI): Fetch the instruction from memory
● Decode Instruction (DI): Decode the instruction.
● Calculate Address (CA): Calculate the effective address.
● Fetch Operands (FO): Fetch each operand from memory.
● Execute Instruction (EI): Execute the instruction.
● Write Operand (WO): Store the result in memory.

Stage 1: Fetch Instruction (FI):


• The CPU fetches the instruction from memory using the Program Counter (PC).
• The PC is updated to point to the next instruction.

Stage 2: Decode Instruction (DI):

• The fetched instruction is decoded to determine the operation type.


• The control unit generates signals for execution.

Stage 3: Calculate address (CA):

• Address calculations for memory operations are performed based on the addressing mode.

Stage 4: Fetch Operands (FO)

• Operands are fetched from the address calculated in stage 3.

Stage 5: Execute Instruction (EI):

• Instruction is executed in this stage.

Stage 6: Write Operand (WO):

• The computed result is written back to registers/memory.


• For store instructions, no register write-back occurs; the operand is written to memory instead.
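
The overlap among these six stages can be pictured with a space-time (reservation) diagram.
The following Python sketch is illustrative only; it assumes an ideal pipeline with no stalls
and uses the stage abbreviations defined above:

STAGES = ["FI", "DI", "CA", "FO", "EI", "WO"]

def space_time_diagram(num_instructions):
    depth = len(STAGES)
    total_clocks = num_instructions + depth - 1   # first instruction: 6 clocks; each later one: +1
    print("      " + " ".join(f"{c + 1:>3}" for c in range(total_clocks)))
    for i in range(num_instructions):
        row = []
        for c in range(total_clocks):
            stage = c - i                         # stage occupied by instruction i at clock c
            row.append(f"{STAGES[stage]:>3}" if 0 <= stage < depth else "   ")
        print(f"I{i + 1:<4} " + " ".join(row))

space_time_diagram(4)   # 4 instructions complete in 4 + 6 - 1 = 9 clocks

In steady state one instruction completes every clock, even though each instruction still
spends six clocks in the pipeline.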
Pipeline Hazards
Difficulties in Instruction Pipeline:

Stalls: The periods in which the decode unit, execute unit, and the write unit are idle are
called stalls. They are also referred to as bubbles in the pipeline.
Hazard: Any condition that causes the pipeline to stall is called a hazard. Three types of
hazards are possible:
• Data Hazard: A data hazard is any condition in which either the source or the
destination operands of an instruction are not available at the time expected in the
pipeline. As a result, some operation has to be delayed, and the pipeline stalls.
• Instruction hazards: The pipeline may also be stalled because of a delay in the
availability of an instruction. For example, this may be a result of a miss in the cache,
requiring the instruction to be fetched from the main memory. Such hazards are often
called control hazards or instruction hazards.
• Structural hazard: Structural hazard is the situation when two instructions require
the use of a given hardware resource at the same time. The most common case in
which this hazard may arise is in access to memory.
Data Hazard:
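A typical example (register names are illustrative): suppose the instruction ADD R1, R2, R3
(R1 <- R2 + R3) is immediately followed by SUB R4, R1, R5. The SUB needs the new value of R1
when it fetches its operands, but the ADD has not yet written R1 back, so the pipeline must
stall the SUB (or forward the ADD's result directly between stages) until the value becomes
available. This read-after-write dependence is the most common form of data hazard.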

Control Hazards:
Control hazards occur due to branch (conditional jump) instructions.
Example: the pipeline may fetch the instruction that follows a conditional branch before the
branch outcome is known; if the branch turns out to be taken, the fetched instruction must be
discarded and the pipeline stalls.
A variety of approaches have been taken for dealing with conditional branches:
● Multiple streams
● Prefetch branch target
● Loop buffer
● Branch prediction
● Delayed branch
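
Among these, branch prediction is especially important in modern processors. Below is a
minimal Python sketch of a 2-bit saturating-counter predictor; the table size and indexing
are illustrative assumptions, not from these notes:

class TwoBitPredictor:
    """Each branch address maps to a 2-bit counter: 0-1 predict not taken, 2-3 predict taken."""
    def __init__(self, table_size=1024):
        self.size = table_size
        self.counters = [1] * table_size          # start weakly "not taken"

    def predict(self, pc):
        return self.counters[pc % self.size] >= 2   # True = predict taken

    def update(self, pc, taken):
        i = pc % self.size
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

# Example: a loop branch at address 0x40 that is taken 9 times, then falls through
p = TwoBitPredictor()
correct = 0
for taken in [True] * 9 + [False]:
    if p.predict(0x40) == taken:
        correct += 1
    p.update(0x40, taken)
print(f"{correct}/10 predictions correct")   # 8/10: mispredicts the first iteration and the loop exit

Correct predictions let the pipeline keep fetching from the predicted path; a misprediction
forces the wrongly fetched instructions to be discarded, which is exactly the stall the other
techniques also try to avoid.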
