BENGAL INSTITUTE OF TECHNOLOGY


CONTINUOUS ASSESSMENT 2 (CA2): Report Writing

Name: SOURAV BAG        Student Roll: 12100121010

Semester: SEM-4         Stream: CSE(A)

Paper Code: PCC-CS402   Paper Name: Computer Architecture

Topic: Discuss the concept of parallelism in computer architecture.

TITLE: Discuss the concept of parallelism in computer architecture.

1.ABSTRACT:
Parallel computer architecture adds a new dimension to the development of computer systems by using an increasing number of processors. In principle, the performance achieved by utilizing a large number of processors is higher than that of a single processor at any given point in time.

2.INTRODUCTION:
* Parallel Processing:
1. Parallel processing can be described as a class of techniques that enable a system to perform simultaneous data-processing tasks in order to increase the computational speed of a computer system.
2. A parallel processing system can carry out simultaneous data processing to achieve a faster execution time.
3. For instance, while one instruction is being processed in the ALU component of the CPU, the next instruction can be read from memory.
4. The primary purpose of parallel processing is to enhance the computer's processing capability and increase its throughput.
5. A parallel processing system can be achieved by having a multiplicity of functional units that perform identical or different operations simultaneously.
6. The data can be distributed among the various functional units.
7. The following diagram shows one possible way of separating the execution unit into eight functional units operating in parallel.
8. The operation performed by each functional unit is indicated in each block of the diagram.
* Advantages of Parallel Computing over Serial Computing are as follows:
1. It saves time and money, as many resources working together reduce the time taken and cut potential costs.
2. It can be impractical to solve larger problems with serial computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial computing 'wastes' potential computing power; parallel computing makes better use of the hardware.

3.MAIN CONTENT:
*Types of Parallelism:
1. Bit-level parallelism: This is the form of parallel computing based on increasing the processor's word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data. Example: consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first sum the 8 lower-order bits and then add the 8 higher-order bits (including the carry), thus requiring two instructions to perform the operation. A 16-bit processor can perform the operation with just one instruction.
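The two-step addition described above can be modelled in software. The following Python sketch (an illustration of the idea, not actual processor code) adds two 16-bit integers using only 8-bit operations, mirroring what an 8-bit processor must do:

```python
def add16_via_8bit(a, b):
    """Add two 16-bit values using only 8-bit operations,
    mimicking the two instructions an 8-bit processor needs."""
    lo = (a & 0xFF) + (b & 0xFF)                          # step 1: add the low bytes
    carry = lo >> 8                                       # carry out of the low byte
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry    # step 2: add high bytes plus carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)               # 16-bit wraparound result
```

A 16-bit processor performs the same computation with a single add instruction; doubling the data-path width eliminates the second step entirely.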
2. Instruction-level parallelism: Without it, a processor can issue at most one instruction per clock cycle. Instructions that are independent of one another can be reordered and grouped, and then executed concurrently without affecting the result of the program. This is called instruction-level parallelism.
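A toy illustration of the dependency constraint involved (the statement labels are hypothetical; Python is used only to make the dependencies concrete):

```python
def compute():
    # (1) and (2) are independent of each other, so a processor that
    # exploits instruction-level parallelism could issue them in the
    # same clock cycle, or in either order.
    a = 2 * 3      # (1)
    b = 5 + 1      # (2) no dependency on (1)
    # (3) has a true data dependency on both (1) and (2),
    # so it cannot be reordered before them.
    c = a + b      # (3)
    return c
```

Reordering (1) and (2) does not change the result, and that is exactly the property that allows hardware to execute them concurrently.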
3. Task parallelism: Task parallelism employs the decomposition of a task into subtasks and then allocates each of the subtasks for execution. The processors execute the subtasks concurrently.
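A minimal sketch of this decomposition in Python, using the standard-library `concurrent.futures` module (thread-based here for brevity; in CPython, CPU-bound subtasks would normally be given to processes to run truly in parallel):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # One subtask: sum a slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Decompose the overall task into roughly 'workers' subtasks...
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...and hand each subtask to a worker for concurrent execution.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(partial_sum, chunks))
```

The decomposition (chunking) and the allocation of subtasks to workers are exactly the two steps the definition above describes.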
4. Data-level parallelism (DLP): Instructions from a single stream operate concurrently on several data items. It is limited by non-regular data-manipulation patterns and by memory bandwidth.
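Data-level parallelism can be sketched even without special hardware by using the well-known "SIMD within a register" (SWAR) trick, which is an illustrative technique rather than anything specific to this report: pack four 8-bit lanes into one 32-bit word and add all four lanes with a single sequence of word-wide operations.

```python
LANE_MASK = 0x7F7F7F7F   # low 7 bits of each 8-bit lane
TOP_BITS  = 0x80808080   # top bit of each 8-bit lane

def swar_add4(a, b):
    """Add four packed 8-bit lanes at once (one 'instruction'
    operating on multiple data), with no carry between lanes."""
    low = (a & LANE_MASK) + (b & LANE_MASK)  # per-lane add of the low 7 bits
    return low ^ ((a ^ b) & TOP_BITS)        # fold in the top bit without inter-lane carry

def pack4(xs):
    # Pack [x3, x2, x1, x0] into one 32-bit word, x3 in the high lane.
    w = 0
    for x in xs:
        w = (w << 8) | (x & 0xFF)
    return w
```

For example, `swar_add4(pack4([1, 2, 3, 4]), pack4([10, 20, 30, 40]))` yields the packed word for `[11, 22, 33, 44]`: one operation, four data items, which is the essence of DLP.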
*Architectural Trends:
1. When multiple operations are executed in parallel, the number of cycles needed to execute the program is reduced.
2. However, resources are needed to support each of the concurrent activities.
3. Resources are also needed to allocate local storage.
4. The best performance is achieved by an intermediate action plan that uses resources to exploit both a degree of parallelism and a degree of locality.
5. Generally, the history of computer architecture has been divided into four generations based on the following basic technologies: vacuum tubes, transistors, integrated circuits, and VLSI.
6. Until 1985, the era was dominated by growth in bit-level parallelism:
7. 4-bit microprocessors were followed by 8-bit, 16-bit, and so on.
8. To reduce the number of cycles needed to perform a full 32-bit operation, the width of the data path was doubled. Later on, 64-bit operations were introduced.
9. Growth in instruction-level parallelism dominated from the mid-80s to the mid-90s.
10. The RISC approach showed that it was simple to pipeline the steps of instruction processing so that, on average, an instruction is executed in almost every cycle.
11. Growth in compiler technology has made instruction pipelines more productive.
12. In the mid-80s, microprocessor-based computers consisted of an integer processing unit, a floating-point unit, a cache controller, SRAMs for the cache data, and tag storage.
13. As chip capacity increased, all these components were merged into a single chip.
14. Thus, a single chip consisted of separate hardware for integer arithmetic, floating-point operations, memory operations, and branch operations.
15. Beyond pipelining individual instructions, a processor can fetch multiple instructions at a time and send them in parallel to different functional units whenever possible. This type of instruction-level parallelism is called superscalar execution.
*FLYNN'S CLASSIFICATION:

Flynn's taxonomy is a classification of parallel computer architectures based on the number of concurrent instruction streams (single or multiple) and data streams (single or multiple) available in the architecture.
The four categories in Flynn's taxonomy are the following:

1. SISD: single instruction, single data
2. SIMD: single instruction, multiple data
3. MISD: multiple instruction, single data
4. MIMD: multiple instruction, multiple data

Instruction stream: the sequence of instructions as executed by the machine.
Data stream: a sequence of data, including input and partial or temporary results, called for by the instruction stream.
Instructions are decoded by the control unit, which then sends them to the processing units for execution.
The data stream flows between the processors and memory bidirectionally.

4.CONCLUSION:
The term parallelism refers to techniques that make programs faster by performing several computations at the same time. This requires hardware with multiple processing units. In many cases the sub-computations have the same structure, but this is not necessary. Graphics computations on a GPU are an example of parallelism.

5.REFERENCES:
https://www.javatpoint.com/what-is-parallel-computing
https://www.geeksforgeeks.org/introduction-to-parallel-computing
