Computer Architecture Note 2
Bells University of Technology, Ota, Nigeria
…Only the best is good for Bells.
CEN 401: Computer Organization and Architecture
Lecture Note 2
www.bellsuniversity.edu.n
Outline
Pipelining
Pipeline Hazards
Parallel Processing
The performance of a computer can be increased by increasing the performance of the CPU. One
way to do this is to execute more than one task at a time, a technique referred to as pipelining.
The idea of pipelining is to begin processing a new task before the processing of the previous
task has finished. Pipelining is a technique of decomposing a sequential process into
suboperations, with each subprocess executed in a special dedicated segment that operates
concurrently with all the other segments. A pipeline can be visualized as a collection of processing
segments through which binary information flows. Each segment performs partial processing
dictated by the way the task is partitioned. The result obtained from the computation in each
segment is transferred to the next segment in the pipeline. The final result is obtained after the data
have passed through all segments.
For example, a task such as evaluating (A + B) * C can be decomposed into two suboperations:
one segment performs the addition A + B, while the next segment multiplies the sum by C.
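The segment overlap described above can be sketched in code. The two-segment split (add, then multiply) is an illustrative assumption; the point is that once the pipe is full, a new task enters every clock, so n tasks finish in m + n - 1 clocks rather than n * m.

```python
# A minimal sketch of a two-segment pipeline computing (A[i] + B[i]) * C[i]
# for a stream of operands. Segment 1 adds; segment 2 multiplies. Both
# segments operate concurrently, communicating through a latch (r1).

def pipeline(A, B, C):
    n = len(A)
    r1 = None            # latch between segment 1 and segment 2
    results = []
    clocks = 0
    for i in range(n + 1):           # n+1 clocks fill and drain a 2-segment pipe
        clocks += 1
        # Segment 2: multiply the sum latched in the previous clock
        if r1 is not None:
            s, c = r1
            results.append(s * c)
        # Segment 1: add the next pair and latch it together with its C operand
        r1 = (A[i] + B[i], C[i]) if i < n else None
    return results, clocks

res, clocks = pipeline([1, 2, 3], [4, 5, 6], [7, 8, 9])
print(res)     # [35, 56, 81]
print(clocks)  # 4 clocks = m + n - 1, with m = 2 segments and n = 3 tasks
```

Note that the multiply in clock i uses the sum produced in clock i - 1, which is exactly the concurrency the text describes: each segment works on a different task during the same clock.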
Pipeline Hazards
There are situations in pipelining when the next instruction cannot execute in the following clock
cycle. These events are called hazards, and there are three different types:
Structural hazard: The hardware cannot support the combination of instructions that we want to
execute in the same clock cycle. A structural hazard occurs when two (or more) instructions in
the pipeline require the same resource; a resource can be a register, memory, or the ALU. As a
result, for a portion of the pipeline, instructions must be performed in series rather than in
parallel. For example, a structural hazard in the laundry room would occur if we used a
washer-dryer combination instead of a separate washer and dryer, or if our roommate was busy
doing something else and wouldn't put clothes away.
Data hazard: The pipeline must be stalled because one step must wait for another to complete.
Suppose you found a sock at the folding station for which no match existed. One possible
strategy is to run down to your room and search through your clothes bureau to see if you can
find the match. Obviously, while you are doing the search, loads that have completed drying and
are ready to fold must wait, as must those that have finished washing and are ready to dry. In a
pipeline, data hazards arise from the dependence of one instruction on an earlier one that is still
in the pipeline.
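The dependence described above is often called a read-after-write (RAW) hazard. The sketch below is illustrative, not from the note: it scans a tiny assumed instruction format for cases where an instruction reads a register written by the instruction just before it.

```python
# Detect read-after-write (RAW) data hazards between adjacent instructions.
# Instruction format is an assumption for illustration: (dest_reg, source_regs).
# Without forwarding hardware, each detected pair forces the pipeline to stall
# the dependent instruction until the producer has written its result back.

def raw_hazards(program):
    """program: list of (dest_reg, [source_regs]) tuples in issue order."""
    hazards = []
    for i in range(1, len(program)):
        dest_prev = program[i - 1][0]
        if dest_prev in program[i][1]:
            hazards.append((i - 1, i))    # instruction i depends on i-1
    return hazards

prog = [
    ("r1", ["r2", "r3"]),   # add r1, r2, r3
    ("r4", ["r1", "r5"]),   # sub r4, r1, r5  <- reads r1 just written: hazard
    ("r6", ["r7", "r8"]),   # add r6, r7, r8  <- independent, no hazard
]
print(raw_hazards(prog))    # [(0, 1)]
```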
Control hazard: This arises from the need to make a decision based on the results of one instruction
while others are executing. Suppose our laundry crew was given the happy task of cleaning the
uniforms of a football team. Given how filthy the laundry is, we need to determine whether the
detergent and water temperature setting we select is strong enough to get the uniforms clean but
not so strong that the uniforms wear out sooner. In our laundry pipeline, we have to wait until
after the second stage to examine the dry uniform to see if we need to change the washer setup or
not.
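In a processor, the decision in question is typically a branch: the fetch stage cannot know which instruction comes next until the branch resolves. A rough cost model (the one-cycle penalty is an assumption for illustration, not from the note) is:

```python
# Count the extra clocks a pipeline loses when every taken branch forces the
# fetch stage to wait for the branch outcome. The penalty per branch is an
# assumed parameter; real penalties depend on the pipeline depth and on
# whether branch prediction is used.

def clocks_with_branch_stalls(instr_count, taken_branches, penalty=1):
    # base: one instruction completes per clock once the pipe is full
    return instr_count + taken_branches * penalty

print(clocks_with_branch_stalls(100, 20))  # 120
```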
Parallel processing is a term used to denote a large class of techniques that are used to provide
simultaneous data-processing tasks to increase the computational speed of a computer system. The
purpose of parallel processing is to speed up the computer processing capability and increase its
throughput.
Throughput: The amount of processing that can be accomplished during a given interval of
time. Parallel processing increases the amount of hardware and, with it, the cost of the system.
However, technological developments have reduced hardware costs to the point where parallel
processing techniques are economically feasible.
Speedup of pipeline processing: The speedup of pipeline processing over an equivalent
non-pipeline processing is defined by the ratio

S = Tseq / Tpipe = (n * m) / (m + n - 1)

where n is the number of tasks and m is the number of pipeline segments, assuming each
segment takes one clock cycle. As n grows large, S approaches m, the number of segments.
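A quick numeric check of the ratio above, with the symbols as defined in the text (n tasks, m segments, one clock per segment):

```python
# Speedup of an m-segment pipeline over non-pipelined execution of n tasks.
def speedup(n, m):
    return (n * m) / (m + n - 1)

print(speedup(3, 2))      # 1.5  (matches the 3-task, 2-segment example)
print(speedup(100, 4))    # ~3.88, approaching the limit m = 4 as n grows
```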
There are a variety of ways that parallel processing can be classified. It can be considered from the
internal organization of the processors, from the interconnection structure between processors, or
from the flow of information through the system.
One classification introduced by M. J. Flynn considers the organization of a computer system by
the number of instructions and data items that are manipulated simultaneously. The normal operation
of a computer is to fetch instructions from memory and execute them in the processor. The sequence
of instructions read from memory constitutes an instruction stream. The operations performed on the
data in the processor constitute a data stream. Parallel processing may occur in the instruction
stream, in the data stream, or both.