
BELLS UNIVERSITY OF TECHNOLOGY
…Only the best is good for Bells.
OTA, NIGERIA

CEN 401
COMPUTER ORGANIZATION AND ARCHITECTURE

LECTURE NOTE 2

www.bellsuniversity.edu.n
Outlines
• Pipelining
• Pipeline Hazards
• Parallel Processing

Pipelining

The performance of a computer can be increased by increasing the performance of the CPU. This can be done by executing more than one task at a time; this procedure is referred to as pipelining. The idea of pipelining is to allow the processing of a new task to begin even though the processing of the previous task has not ended. Pipelining is a technique of decomposing a sequential process into suboperations, with each subprocess being executed in a special dedicated segment that operates concurrently with all other segments. A pipeline can be visualized as a collection of processing segments through which binary information flows. Each segment performs partial processing dictated by the way the task is partitioned. The result obtained from the computation in each segment is transferred to the next segment in the pipeline. The final result is obtained after the data have passed through all segments.
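To make the overlap concrete, here is a short illustrative Python sketch (not part of the original note; the segment and task counts are arbitrary) that prints which task each segment is working on at every clock cycle:

# Illustrative sketch only: which task each segment holds at each clock cycle.
# Task t enters segment 0 at clock t and moves one segment forward per clock.

def occupancy(n_tasks, m_segments):
    total_cycles = m_segments + n_tasks - 1   # cycles needed to drain the pipeline
    table = []
    for cycle in range(total_cycles):
        row = []
        for seg in range(m_segments):
            task = cycle - seg                # task index currently in this segment
            row.append(f"T{task + 1}" if 0 <= task < n_tasks else "--")
        table.append(row)
    return table

for clock, row in enumerate(occupancy(n_tasks=4, m_segments=3), start=1):
    print(f"clock {clock}: {row}")
# clock 1: ['T1', '--', '--']  ...  clock 6: ['--', '--', 'T4']

After the first few cycles the pipeline is full, and every segment is busy with a different task in the same clock cycle, which is where the speedup of pipelining comes from.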

Pipelining
Consider the following operation: Result = (A + B) * C
• First, the values of A and B are fetched, which is nothing but a "Fetch Operation".
• The results of the fetch operations are given as input to the addition operation, which is an arithmetic operation.
• The result of that arithmetic operation, together with the data operand C fetched from memory, is given to another arithmetic operation, which in this scenario is multiplication.
• Finally, the result is stored in the "Result" variable.
In this process, five suboperations (pipeline steps) were used, which are listed below; a short code sketch follows the list:
• Fetch Operation (A), Fetch Operation (B)
• Addition of (A & B)
• Fetch Operation (C)
• Multiplication of ((A+B), C)
• Load ((A+B)*C)
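As a rough sketch of the suboperations listed above (Python is used purely for illustration; the memory dictionary and the operand values are assumptions, not from the note):

# Illustrative decomposition of Result = (A + B) * C into the five suboperations.
memory = {"A": 2, "B": 3, "C": 4, "Result": None}   # assumed example values

def fetch(name):              # Fetch Operation
    return memory[name]

def add(x, y):                # arithmetic operation: addition
    return x + y

def multiply(x, y):           # arithmetic operation: multiplication
    return x * y

def store(name, value):       # final step: place the result in "Result"
    memory[name] = value

a, b = fetch("A"), fetch("B")           # Fetch Operation (A), Fetch Operation (B)
s = add(a, b)                           # Addition of (A & B)
c = fetch("C")                          # Fetch Operation (C)
store("Result", multiply(s, c))         # Multiplication of ((A+B), C), then Result
print(memory["Result"])                 # 20

In a pipelined processor these steps would run in dedicated segments, so the fetches for the next set of operands could begin while the current multiplication is still in progress.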
Pipeline Hazards

There are situations in pipelining when the next instruction cannot execute in the following clock
cycle. These events are called hazards, and there are three different types:
• Structural hazard: This means that the hardware cannot support the combination of instructions that we want to execute in the same clock cycle. A structural hazard occurs when two (or more) instructions in the pipeline require the same resource; a resource can be a register, memory, or the ALU. As a result, for a portion of the pipeline, instructions must be performed in series rather than in parallel. For example, a structural hazard in the laundry room would occur if we used a washer-dryer combination instead of a separate washer and dryer, or if our roommate was busy doing something else and wouldn't put clothes away.
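A minimal illustrative sketch (the resource names below are assumptions, not from the note) of the idea that a structural hazard is simply two instructions contending for the same hardware in the same cycle:

# Illustrative check: do two instructions scheduled in the same cycle share a resource?
def structural_hazard(resources_a, resources_b):
    return bool(set(resources_a) & set(resources_b))

load_stage  = {"memory"}        # a LOAD in its memory-access stage
fetch_stage = {"memory"}        # a later instruction being fetched from the same memory
print(structural_hazard(load_stage, fetch_stage))   # True -> serialize for one cycle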

Pipeline Hazards

• Data hazards occur when the pipeline must be stalled because one step must wait for another to complete. Suppose you found a sock at the folding station for which no match existed. One possible strategy is to run down to your room and search through your clothes bureau to see if you can find the match. Obviously, while you are doing the search, loads that have completed drying and are ready to fold, as well as those that have finished washing and are ready to dry, must wait. In a pipeline, data hazards arise from the dependence of one instruction on an earlier one that is still in the pipeline.
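A minimal illustrative sketch of the most common data hazard, a read-after-write (RAW) dependence (the register-tuple encoding below is an assumption for illustration, not from the note):

# Illustrative check for a read-after-write (RAW) data hazard.
# Each instruction is written as a (destination, source1, source2) tuple of registers.

def has_raw_hazard(earlier, later):
    dest, _, _ = earlier
    _, src1, src2 = later
    return dest in (src1, src2)     # later instruction reads what earlier still has to write

add_instr = ("R1", "R2", "R3")      # ADD R1, R2, R3
sub_instr = ("R4", "R1", "R5")      # SUB R4, R1, R5  (needs R1 from the ADD)
print(has_raw_hazard(add_instr, sub_instr))   # True -> the pipeline must stall or forward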

Pipeline Hazards

• Control hazards arise from the need to make a decision based on the results of one instruction while others are executing. Suppose our laundry crew was given the happy task of cleaning the uniforms of a football team. Given how filthy the laundry is, we need to determine whether the detergent and water temperature setting we select is strong enough to get the uniforms clean but not so strong that the uniforms wear out sooner. In our laundry pipeline, we have to wait until after the second stage to examine the dry uniform to see if we need to change the washer setup or not.
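One rough way to see the cost of control hazards (illustrative only; the five-stage pipeline and the two-cycle branch penalty are assumptions, not from the note) is to count the bubble cycles that unresolved branches add to the ideal pipelined cycle count:

# Illustrative cycle count for an assumed 5-stage pipeline with branch stalls.
def cycles_with_branches(n_instructions, stages, branch_penalty, n_taken_branches):
    ideal = n_instructions + stages - 1            # pipelined count with no hazards
    return ideal + branch_penalty * n_taken_branches

print(cycles_with_branches(100, stages=5, branch_penalty=2, n_taken_branches=10))  # 124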

Parallel Processing

Parallel processing is a term used to denote a large class of techniques that are used to provide
simultaneous data-processing tasks to increase the computational speed of a computer system. The
purpose of parallel processing is to speed up the computer processing capability and increase its
throughput.
• Throughput: It is the amount of processing that can be accomplished during a given interval of time. The amount of hardware increases with parallel processing, and with it the cost of the system increases. However, technological developments have reduced hardware costs to the point where parallel processing techniques are economically feasible.
• Speedup of pipeline processing: The speedup of pipeline processing over an equivalent non-pipeline processing is defined by the ratio

S = Tseq / Tpipe = n*m / (m + n - 1)

where n is the number of tasks processed and m is the number of pipeline segments, each segment being assumed to take one clock cycle.
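As a quick numerical check of this ratio (the example values below are assumptions for illustration, not taken from the note), written as a short Python function:

# Speedup of an m-segment pipeline over non-pipelined processing of n tasks.
def pipeline_speedup(n_tasks, m_segments):
    return (n_tasks * m_segments) / (m_segments + n_tasks - 1)

print(pipeline_speedup(n_tasks=100, m_segments=4))   # ~3.88

For a large number of tasks the speedup approaches m, the number of segments.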

Parallel Processing

Parallel processing can be viewed from various levels of complexity:


• At the lowest level, parallel and serial operations can be distinguished by the type of registers used. Shift registers operate serially one bit at a time, while registers with parallel load operate with all the bits of the word simultaneously.
• Parallel processing at a higher level of complexity can be achieved by having a multiplicity of functional units that perform identical or different operations simultaneously. Parallel processing is established by distributing the data among the multiple functional units. For example, the arithmetic, logic, and shift operations can be separated into three units, and the operands are diverted to each unit under the supervision of a control unit.
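An illustrative sketch of this idea using Python's standard library (the choice of operations and operand values is assumed, not from the note): a "control unit" dispatches operands to an arithmetic unit, a logic unit, and a shift unit that run concurrently.

# Illustrative distribution of operands to three concurrently running functional units.
from concurrent.futures import ThreadPoolExecutor

def arithmetic_unit(a, b):
    return a + b            # e.g. addition

def logic_unit(a, b):
    return a & b            # e.g. bitwise AND

def shift_unit(a, n):
    return a << n           # e.g. left shift

with ThreadPoolExecutor(max_workers=3) as control_unit:
    futures = [
        control_unit.submit(arithmetic_unit, 6, 3),
        control_unit.submit(logic_unit, 6, 3),
        control_unit.submit(shift_unit, 6, 1),
    ]
    print([f.result() for f in futures])   # [9, 2, 12]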

Parallel Processing

There are a variety of ways that parallel processing can be classified. It can be considered from the
internal organization of the processors, from the interconnection structure between processors, or
from the flow of information through the system.
One classification introduced by M. J. Flynn considers the organization of a computer system by
the number of instructions and data items that are manipulated simultaneously. The normal operation
of a computer is to fetch instructions from memory and execute them in the processor. The sequence
of instructions read from memory constitutes an instruction stream. The operations performed on the
data in the processor constitute a data stream. Parallel processing may occur in the instruction
stream, in the data stream, or both.

Parallel Processing

Flynn's classification divides computers into four major groups as follows:


• Single instruction stream, single data stream (SISD)
• Single instruction stream, multiple data stream (SIMD)
• Multiple instruction stream, single data stream (MISD)
• Multiple instruction stream, multiple data stream (MIMD)

Parallel Processing
• SISD represents the organization of a single computer containing a control unit, a processor unit, and a memory unit. Instructions are executed sequentially, and the system may or may not have internal parallel processing capabilities. Parallel processing in this case may be achieved using multiple functional units or by pipeline processing.
• SIMD represents an organization that includes many processing units under the supervision of a common control unit. All processors receive the same instruction from the control unit but operate on different items of data (see the sketch after this list). The shared memory unit must contain multiple modules so that it can communicate with all the processors simultaneously.
• MISD structure is only of theoretical interest, since no practical system has been constructed using this organization.
• MIMD organization refers to a computer system capable of processing several programs at the same time. Most multiprocessor and multicomputer systems can be classified in this category.
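A small illustrative contrast between the SISD and SIMD styles (NumPy is used here purely for illustration and is not part of the note):

# Illustrative SISD vs. SIMD flavour: the same addition applied to four data items.
import numpy as np

x = np.array([1, 2, 3, 4])
y = np.array([10, 20, 30, 40])

sisd_result = [int(x[i]) + int(y[i]) for i in range(len(x))]   # one item per "instruction"
simd_result = x + y                                            # one vector instruction for all items

print(sisd_result)            # [11, 22, 33, 44]
print(simd_result.tolist())   # [11, 22, 33, 44]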
