Pipeline and Vector Processing

The document discusses computer arithmetic, focusing on instruction sets, CISC and RISC characteristics, and parallel processing techniques. It explains parallel processing as a method to enhance computational speed by executing tasks simultaneously, and classifies computer systems based on instruction and data streams. Additionally, it covers pipelining as a technique for overlapping execution phases and vector processing for handling large data sets.

Uploaded by

GADDAM SRUJAN

COMPUTER ARITHMETIC

UNIT-5

P1:- INSTRUCTION SET


CISC CHARACTERISTICS
RISC CHARACTERISTICS

P2 – PIPELINE AND VECTOR PROCESSING

PARALLEL PROCESSING
PIPELINING
ARITHMETIC PIPELINE
INSTRUCTION PIPELINE
RISC PIPELINE
VECTOR PROCESSING
ARRAY PROCESSING

G SRUJAN REDDY COMPUTER ORGANIZATION AND ARCHITECTURE


I. PARALLEL PROCESSING

 Processing each instruction sequentially takes a long time, and much CPU time is
wasted.
 If the instructions are executed together in parallel, the time taken is ideally the
same as the time for a single instruction executed in a serial manner. In this case,
CPU time is saved.
 Parallel processing is a technique used to perform simultaneous data-processing
tasks in order to increase the computational speed of a computer system.
 The purpose of parallel processing is to speed up the computer processing capability
and increase its throughput. Throughput is the amount of processing that can be
accomplished during a given interval of time.
 Due to parallel processing, the hardware requirement increases and the cost of
the system also increases.
 Parallel Processing can be viewed at various levels of complexity.
i. At the lowest level of complexity, we distinguish between serial and parallel
operations by the type of registers used. Shift registers operate serially one bit
at a time, while registers with parallel load operate on all bits of the word
simultaneously.
ii. At a higher level of complexity, we use functional units that perform identical
or different operations simultaneously. Parallel processing is established by
distributing the data among multiple functional units.
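As a software sketch of this idea (illustrative only, not from the original notes), data can be distributed among several workers that operate simultaneously, playing the role of multiple functional units:

```python
# Illustrative sketch: parallel processing by distributing data among
# multiple workers, analogous to multiple functional units. The worker
# function and data values here are made up for the example.
from concurrent.futures import ThreadPoolExecutor

def add(pair):       # one "functional unit" operation
    a, b = pair
    return a + b

data = [(1, 2), (3, 4), (5, 6), (7, 8)]

# Serial: each pair is processed one after another.
serial = [add(p) for p in data]

# Parallel: the pairs are distributed among several workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(add, data))

assert serial == parallel == [3, 7, 11, 15]
```

The results are identical; only the organization of the work differs, which is the point of the distinction drawn above.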
 Multiple Functional Units: Consider an example of ALU as explained below.

 Here the execution unit (ALU) is separated into eight functional units operating in parallel.
 Depending on the operation specified by the instruction, the operands in the registers are
applied to one of the units.
 The adder and integer multiplier perform the arithmetic operations.
 The floating-point operations are separated into 3 units operating in parallel.
 The logic unit and the incrementer can operate concurrently on different data.
 A multifunctional organization is usually associated with a complex control unit to
coordinate all the activities among the various components.
 There are a variety of ways that parallel processing can be classified. It can be
considered from the internal organization of the processors, from the interconnection
structure between processors, or from the flow of information through the system.
 One classification introduced by M. J. Flynn considers the organization of a computer
system by the number of instructions and data items that are manipulated
simultaneously.
 The normal operation of a computer is to fetch instructions from memory and execute
them in the processor.
 The sequence of instructions read from memory constitutes an instruction stream. The
operations performed on the data in the processor constitute a data stream.
 Parallel processing may occur in the instruction stream, in the data stream, or both.
Flynn's classification divides computers into four major groups as follows:
i. SISD -Single Instruction Stream, Single Data Stream
ii. SIMD - Single Instruction Stream, Multiple Data Stream
iii. MISD - Multiple Instruction Stream, Single Data Stream
iv. MIMD - Multiple Instruction Stream, Multiple Data Stream
 SISD represents the organization of a single computer containing a control unit, a
processor unit, and a memory unit. Instructions are executed sequentially and the system
may or may not have internal parallel processing capabilities. Parallel processing in this
case may be achieved using multiple functional units or by pipeline processing.
 SIMD represents an organization that includes many processing units under the
supervision of a common control unit. All processors receive the same instruction from
the control unit but operate on different items of data. The shared memory unit must
contain multiple modules so that it can communicate with all the processors
simultaneously.
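As a purely software sketch (not hardware, and not from the original notes), the SIMD idea of one instruction stream applied to many data items can be mimicked like this:

```python
# SIMD in spirit: a single "instruction" (add 10) is applied to every
# element of a multiple-data stream. Real SIMD hardware applies the
# operation across parallel processing units in one step; here a list
# comprehension stands in for them. The operation and data are made up.
def op(x):           # the single instruction stream
    return x + 10

data = [1, 2, 3, 4]              # the multiple data stream
result = [op(x) for x in data]   # same operation on different data items
assert result == [11, 12, 13, 14]
```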
 MISD structure is only of theoretical interest since no practical system has been
constructed using this organization.
 MIMD organization refers to a computer system capable of processing several programs
at the same time. Most multiprocessor and multicomputer systems can be classified in
this category.
 Pipeline processing is an implementation technique where arithmetic suboperations or
the phases of a computer instruction cycle overlap in execution.
 Vector processing deals with computations involving large vectors and matrices.
 Array processors perform computations on large arrays of data.
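The vector-processing idea of expressing one operation over whole vectors, rather than a scalar loop, can be sketched as follows (the vector lengths and values are assumed for illustration; a library such as NumPy would execute the vector form with genuine hardware support):

```python
# A "vector instruction" in the style C(1:N) = A(1:N) + B(1:N): one
# operation expressed over entire vectors instead of element by element.
N = 100
A = list(range(N))            # 0 .. 99
B = list(range(N, 2 * N))     # 100 .. 199

# Scalar (SISD-style) version: one element per instruction.
C_scalar = []
for i in range(N):
    C_scalar.append(A[i] + B[i])

# Vector-style version: the whole operation expressed at once.
C_vector = [a + b for a, b in zip(A, B)]

assert C_scalar == C_vector
```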

II. PIPELINING

 A pipeline can be visualized as a collection of processing segments through which
binary information flows.
 Each segment performs partial processing dictated by the way the task is partitioned.
 The result obtained from the computation in each segment is transferred to the next
segment in the pipeline.
 The final result is obtained after the data have passed through all segments.
 The registers provide isolation between each segment so that each can operate on
distinct data simultaneously.
 Perhaps the simplest way of viewing the pipeline structure is to imagine that each
segment consists of an input register followed by a combinational circuit. The register
holds the data and the combinational circuit performs the suboperation in the
particular segment. The output of the combinational circuit in a given segment is
applied to the input register of the next segment. A clock is applied to all registers
after enough time has elapsed to perform all segment activity. In this way, the
information flows through the pipeline one step at a time.
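The register-plus-combinational-circuit view described above can be modeled in software (an illustrative sketch with made-up suboperations, not the notes' own notation):

```python
# Model of a pipeline: regs[i] is the input register of segment i, and
# circuits[i] is that segment's combinational circuit. One call to
# clock() is one clock pulse: every register loads simultaneously.
def clock(regs, circuits, new_input):
    # Output of the last segment, computed before the registers shift.
    out = None if regs[-1] is None else circuits[-1](regs[-1])
    # Each register loads the result of the previous segment's circuit.
    for i in range(len(regs) - 1, 0, -1):
        regs[i] = None if regs[i - 1] is None else circuits[i - 1](regs[i - 1])
    regs[0] = new_input
    return out

# Two example segments: add 1, then double, so each item x yields 2*(x+1).
circuits = [lambda x: x + 1, lambda x: x * 2]
regs = [None, None]
# Feed three items, then two empty pulses to drain the pipeline.
outs = [clock(regs, circuits, x) for x in [1, 2, 3, None, None]]
# After the pipeline fills, one result emerges per clock pulse.
assert [o for o in outs if o is not None] == [4, 6, 8]
```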
 An example of a pipeline is shown here.
Consider the evaluation of Ai*Bi + Ci for i = 1, 2, ..., 7.

The suboperations performed in each segment are as follows.

R1 ← Ai, R2 ← Bi (Segment 1: input Ai and Bi)
R3 ← R1 * R2, R4 ← Ci (Segment 2: multiply and input Ci)
R5 ← R3 + R4 (Segment 3: add Ci to the product)
 The first clock pulse places the operands A1 and B1 into registers R1 and R2.
 During the second clock pulse, the product of R1 and R2 is placed in R3, operands
A2 and B2 are placed in R1 and R2, and C1 is placed in R4.

 The complete pipelining process for the above instruction is shown in the table
below
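A small simulation can trace the three segments clock by clock (the operand values here are assumed for illustration and are not from the notes) and confirm that each emerging result equals Ai*Bi + Ci:

```python
# Three-segment pipeline for Ai*Bi + Ci, i = 1..7 (example operand values).
# Each loop iteration is one clock pulse; all registers load simultaneously,
# so segment 3 is updated from the old segment-2 contents, and so on.
A = [2, 4, 6, 8, 1, 3, 5]
B = [1, 2, 3, 4, 5, 6, 7]
C = [9, 8, 7, 6, 5, 4, 3]

r12 = None   # segment 1: (R1, R2) plus the staged Ci
r34 = None   # segment 2: (R3 = R1 * R2, R4 = Ci)
r5 = None    # segment 3: R5 = R3 + R4
results = []
for item in list(zip(A, B, C)) + [None, None]:        # two extra pulses to drain
    r5 = None if r34 is None else r34[0] + r34[1]             # R5 <- R3 + R4
    r34 = None if r12 is None else (r12[0] * r12[1], r12[2])  # R3 <- R1*R2, R4 <- Ci
    r12 = item                                                # R1 <- Ai, R2 <- Bi
    if r5 is not None:
        results.append(r5)

assert results == [a * b + c for a, b, c in zip(A, B, C)]
```

Each result appears one clock pulse apart once the pipeline is full, which is what the space-time table for this example illustrates.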

