
Types of parallel computing

From open-source and proprietary parallel computing vendors, three types of parallel
computing are generally available, which are discussed below:

1. Bit-level parallelism: The form of parallel computing in which performance depends
   on the processor word size. When a task operates on data larger than the word size,
   the operation must be split into a series of narrower instructions; increasing the
   word size therefore reduces the number of instructions the processor must execute.
   For example, suppose an 8-bit processor must perform an operation on 16-bit numbers.
   It must first operate on the 8 lower-order bits and then on the 8 higher-order bits,
   so two instructions are needed, whereas a 16-bit processor can perform the operation
   with a single instruction. (A sketch of this arithmetic appears after the list.)
2. Instruction-level parallelism: The processor executes more than one instruction
   during a single clock cycle. In the hardware approach, the processor decides at run
   time how many instructions can be issued together; the software approach relies on
   static parallelism, in which the compiler decides which instructions can be executed
   simultaneously.
3. Task parallelism: The form of parallelism in which a task is decomposed into
   subtasks, each subtask is allocated to a processor for execution, and the subtasks
   are executed concurrently by the processors.
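
Below is a minimal Python sketch of the bit-level example above. It illustrates only the arithmetic (the 8-bit processor, the operand values, and the add16_on_8bit helper are hypothetical), showing how one 16-bit addition becomes two 8-bit additions joined by a carry:

def add16_on_8bit(a, b):
    # Hypothetical illustration: an 8-bit processor adding two 16-bit numbers.
    a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF    # split each operand into two bytes
    b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF

    lo_sum = a_lo + b_lo                      # first 8-bit add (lower-order bits)
    carry = lo_sum >> 8                       # carry out of the low byte
    lo = lo_sum & 0xFF

    hi = (a_hi + b_hi + carry) & 0xFF         # second 8-bit add (higher-order bits)
    return (hi << 8) | lo                     # recombine into the 16-bit result

# A 16-bit processor would do this in one ADD; here it takes two 8-bit additions.
assert add16_on_8bit(0x12F0, 0x0340) == (0x12F0 + 0x0340) & 0xFFFF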

Applications of Parallel Computing


There are various applications of Parallel Computing, which are as follows:

o One of the primary applications of parallel computing is databases and data mining.
o The real-time simulation of systems is another use of parallel computing.
o Technologies such as networked video and multimedia.
o Science and engineering.
o Collaborative work environments.
o The concept of parallel computing is used by augmented reality, advanced
graphics, and virtual reality.

Advantages of Parallel Computing


Parallel computing advantages are discussed below:

o In parallel computing, more resources are used to complete a task, which decreases
the time taken and can cut costs. Parallel clusters can also be constructed from
cheap components.
o Compared with serial computing, parallel computing can solve larger problems in a
shorter time.
o Parallel computing is much better suited than serial computing for simulating,
modeling, and understanding complex, real-world phenomena.
o When local resources are finite, parallel computing lets you benefit from non-local
resources.
o Many problems are so large that it is impractical or impossible to solve them on a
single computer; the concept of parallel computing helps to remove this kind of
limitation.
o One of the best advantages of parallel computing is that it allows you to do several
things at a time by using multiple computing resources.
o Furthermore, parallel computing makes better use of the hardware, whereas serial
computing wastes potential computing power.

Disadvantages of Parallel Computing


There are many limitations of parallel computing, which are as follows:

o Parallel architectures can be difficult to achieve.

o In the case of clusters, better cooling technologies are needed.
o It requires algorithms that are designed to be managed and executed in a parallel
manner.
o Multi-core architectures have high power consumption.
o A parallel computing system needs low coupling and high cohesion, which is
difficult to achieve.
o The code for a parallel program can only be written by the most technically skilled
and expert programmers.

o Although parallel computing helps to solve computation- and data-intensive problems
by using multiple processors, it sometimes affects the convergence of the system,
and some control algorithms do not give good results when run in parallel.
o The extra cost of synchronization, thread creation, data transfers, and so on can be
quite large; sometimes it even exceeds the gains from parallelization.
o Moreover, to improve performance, a parallel program needs different code tuning
for different target architectures.

Fundamentals of Parallel Computer Architecture


Parallel computer architecture is classified on the basis of the level at which the hardware
supports parallelism. There are different classes of parallel computer architectures, which
are as follows:

Multi-core computing
A processor integrated circuit that contains two or more distinct processing cores is
known as a multi-core processor; it is capable of executing program instructions on
several cores simultaneously. The cores may implement architectures such as VLIW,
superscalar, multithreading, or vector, and they are integrated on a single integrated
circuit die or onto multiple dies in a single chip package. Multi-core architectures
are classified as heterogeneous, consisting of cores that are not identical, or
homogeneous, consisting of only identical cores.
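
As a rough illustration of how the task parallelism described earlier maps onto a multi-core processor, here is a minimal Python sketch (the subtask function and the input values are hypothetical); it spreads independent subtasks across the available cores with a process pool:

from concurrent.futures import ProcessPoolExecutor
import os

def subtask(n):
    # Hypothetical independent subtask: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [200_000, 400_000, 600_000, 800_000]   # independent work items
    # One worker process per core; the subtasks run concurrently on separate cores.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(subtask, inputs))
    print(results)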

Symmetric multiprocessing
In symmetric multiprocessing, a single operating system manages a multiprocessor
computer architecture with two or more homogeneous, independent processors and treats
all processors equally. Each processor can work on any task, regardless of where the
data for that task resides in memory, and the processors may be connected using
on-chip mesh networks. Each processor also has a private cache memory.

Distributed computing
The components of a distributed system are located on different networked computers.
These networked computers coordinate their actions by communicating through HTTP,
RPC-like connectors, and message queues. The concurrency of components and the
independent failure of components are the characteristics of distributed systems.
Typically, distributed programming is classified into peer-to-peer, client-server,
three-tier, or n-tier architectures. Sometimes the terms parallel computing and
distributed computing are used interchangeably, as there is much overlap between the
two.

Massively parallel computing


In massively parallel computing, a large number of computers are used simultaneously
to execute a set of instructions in parallel. Grid computing is a related approach in
which numerous distributed computer systems execute simultaneously and communicate
over the Internet to solve a specific problem.

Why parallel computing?


There are various reasons why we need parallel computing; some of them are discussed below:

o Parallel computing deals with larger problems. In the real world, many things happen
at the same time in many different places, which is difficult to manage. Parallel
computing helps to manage this kind of extremely large data.
o Parallel computing is the key to realistic data modeling and dynamic simulation, so
it is needed for real-world applications as well.
o Serial computing is not ideal for implementing real-time systems; parallel computing
offers the required concurrency and saves time and money.
o Only the concept of parallel computing can organize and manage large, complex
datasets.
o The parallel computing approach guarantees the effective use of resources and
hardware, whereas in serial computation only some parts of the hardware are used
while the rest remain idle.

Future of Parallel Computing


The computational landscape has completely changed from serial computing to parallel
computing. Tech giants like Intel have already started to include multicore processors
in their systems, which is a great step towards parallel computing. Parallel
computation will bring a revolution in the way computers work, and parallel computing
plays an important role in connecting the world more closely than ever before.
Moreover, the parallel computing approach becomes more necessary with multi-processor
computers, faster networks, and distributed systems.

Difference Between Serial Computing and Parallel Computing


Serial computing, also known as sequential computing, refers to the use of a single
processor to execute a program: the program is divided into a sequence of instructions,
and each instruction is processed one at a time. Traditionally, sequentially programmed
software offers a simpler approach, but the processor's speed significantly limits its
ability to execute each series of instructions. Uni-processor machines also use
sequential data structures, whereas the data structures used in parallel computing
environments are concurrent.

Compared with parallel computing, measuring performance in sequential programming is
far less important and less complex, because in parallel computing it also involves
identifying bottlenecks in the system. In parallel computing, benchmarks can be
obtained with benchmarking and performance-regression testing frameworks, which use
measurement methodologies such as multiple repetitions and statistical treatment. The
ability to avoid bottlenecks when moving data through the memory hierarchy is
especially important in parallel computing. Parallel computing comes at a greater cost
and may be more complex; however, it deals with larger problems and helps to solve
them faster.
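
As a small sketch of the kind of measurement just described (the workload function is hypothetical; timeit and statistics come from Python's standard library), a timed run can be repeated several times and summarized statistically rather than trusted from a single sample:

import statistics
import timeit

def work():
    # Hypothetical workload whose performance we want to benchmark.
    return sum(i * i for i in range(100_000))

# Multiple repetitions plus a simple statistical treatment of the samples.
times = timeit.repeat(work, number=10, repeat=5)
print("best: %.4fs  mean: %.4fs  stdev: %.4fs"
      % (min(times), statistics.mean(times), statistics.stdev(times)))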
Flynn's Classification of Computer Architecture

M.J. Flynn proposed a classification for the organization of a computer system by the
number of instructions and data items that are manipulated simultaneously.

The sequence of instructions read from memory constitutes an instruction stream.

The operations performed on the data in the processor constitute a data stream.
SISD
SISD stands for 'Single Instruction and Single Data Stream'. It represents the
organization of a single computer containing a control unit, a processor unit, and a
memory unit.

Instructions are executed sequentially, and the system may or may not have internal
parallel processing capabilities.

Most conventional computers have SISD architecture like the traditional Von-Neumann
computers.

Parallel processing, in this case, may be achieved by means of multiple functional units or
by pipeline processing.

Figure: SISD organization, where CU = Control Unit, PE = Processing Element, M = Memory

Instructions are decoded by the Control Unit and then the Control Unit sends the
instructions to the processing units for execution.

Data Stream flows between the processors and memory bi-directionally.


Examples:

Older generation computers, minicomputers, and workstations

SIMD
SIMD stands for 'Single Instruction and Multiple Data Stream'. It represents an
organization that includes many processing units under the supervision of a common
control unit.

All processors receive the same instruction from the control unit but operate on different
items of data.

The shared memory unit must contain multiple modules so that it can communicate with
all the processors simultaneously.
SIMD is mainly dedicated to array processing machines. However, vector processors can
also be seen as a part of this group.
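
As a rough software-level analogy (NumPy is an assumption here, not something mentioned in the text), a vectorized NumPy expression applies one logical operation to many data elements at once, and on most modern CPUs such operations are carried out with SIMD vector instructions:

import numpy as np

a = np.arange(1_000_000, dtype=np.float32)
b = np.ones(1_000_000, dtype=np.float32)

# One high-level operation applied to every element of the arrays; NumPy's
# compiled loops typically use the CPU's SIMD instructions, processing several
# elements per instruction.
c = a * 2.0 + b
print(c[:5])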

MISD
MISD stands for 'Multiple Instruction and Single Data stream'.

MISD structure is only of theoretical interest since no practical system has been
constructed using this organization.

In MISD, multiple processing units operate on a single data stream. Each processing
unit operates on the data independently via a separate instruction stream.
Figure: MISD organization, where M = Memory Modules, CU = Control Unit, P = Processor Units

MIMD
MIMD stands for 'Multiple Instruction and Multiple Data Stream'.

In this organization, all processors in a parallel computer can execute different instructions
and operate on various data at the same time.

In MIMD, each processor has a separate program and an instruction stream is generated
from each program.
Figure: MIMD organization, where M = Memory Module, PE = Processing Element, CU = Control Unit

Examples:

Cray T90, Cray T3E, IBM-SP2
