Types of Parallel Computing
From open-source and proprietary parallel computing vendors, there are generally
three types of parallel computing available; these are discussed below, after a
summary of the advantages and disadvantages of parallel computing.
Advantages of Parallel Computing
o In parallel computing, more resources are used to complete a task, which
shortens completion time and can cut costs. Parallel clusters can also be built
from cheap, commodity components.
o Compared with serial computing, parallel computing can solve larger problems
in less time.
o For simulating, modeling, and understanding complex real-world phenomena,
parallel computing is far better suited than serial computing.
o When local resources are finite, parallel computing can take advantage of
non-local resources.
o Many problems are so large that solving them on a single computer is
impractical or impossible; parallel computing makes such problems tractable.
o One of the biggest advantages of parallel computing is that it lets you do
several things at once by using multiple computing resources.
o Furthermore, parallel computing makes better use of the hardware, whereas
serial computing wastes much of the potential computing power.
Disadvantages of Parallel Computing
o Although parallel computing helps you solve computation- and data-intensive
problems by using multiple processors, it can complicate the coordination of the
system, and some control algorithms do not deliver good outcomes when run in
parallel.
o Due to synchronization, thread creation, data transfers, and other overheads,
the extra cost can be quite large; sometimes it even exceeds the gains from
parallelization, as the sketch after this list illustrates.
o Moreover, to improve performance, a parallel computing system needs different
code tuning for different target architectures.
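To make the overhead point concrete, here is a minimal sketch in Python using only the standard library. The function names and problem sizes are illustrative assumptions, not from the article: for a tiny job, the cost of starting worker processes usually dominates, while for a large job the parallel version typically wins.

import time
from multiprocessing import Pool

def sum_of_squares(rng):
    # CPU-bound work performed independently on each chunk
    return sum(x * x for x in rng)

def run(n, workers=4):
    t0 = time.perf_counter()
    serial = sum_of_squares(range(n))
    t_serial = time.perf_counter() - t0

    # strided ranges pickle cheaply, keeping data-transfer overhead low
    chunks = [range(i, n, workers) for i in range(workers)]
    t0 = time.perf_counter()
    with Pool(workers) as pool:
        parallel = sum(pool.map(sum_of_squares, chunks))
    t_parallel = time.perf_counter() - t0
    assert serial == parallel
    print(f"n={n}: serial {t_serial:.3f}s, parallel {t_parallel:.3f}s")

if __name__ == "__main__":
    run(1_000)         # tiny job: process start-up overhead dominates
    run(10_000_000)    # large job: the parallel version usually wins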
Multi-core computing
A computer processor integrated circuit containing two or more distinct processing cores
is known as a multi-core processor, which has the capability of executing program
instructions simultaneously. Cores may implement architectures like VLIW, superscalar,
multithreading, or vector and are integrated on a single integrated circuit die or onto
multiple dies in a single chip package. Multi-core architectures are classified as
heterogeneous, consisting of cores that are not identical, or homogeneous,
consisting of only identical cores.
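As a minimal sketch of putting multiple cores to work, the Python code below (standard library only) detects the number of logical cores and sizes a process pool to match; the primality test and the limit of 50,000 are illustrative assumptions.

import os
from concurrent.futures import ProcessPoolExecutor

def is_prime(n):
    # naive primality test: enough CPU work to keep a core busy
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

if __name__ == "__main__":
    cores = os.cpu_count()  # logical cores visible to the OS
    print(f"logical cores: {cores}")
    # one worker process per core, so instruction streams really do
    # execute simultaneously on distinct cores
    with ProcessPoolExecutor(max_workers=cores) as ex:
        primes = sum(ex.map(is_prime, range(2, 50_000), chunksize=1_000))
    print(f"primes below 50000: {primes}")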
Symmetric multiprocessing
In symmetric multiprocessing (SMP), a single operating system manages a
multiprocessor computer architecture with two or more homogeneous, independent
processors and treats all processors equally. Each processor can work on any
task, regardless of where the data for that task resides in memory, and the
processors may be connected by on-chip mesh networks. Also, each processor has
a private cache memory.
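A minimal sketch of the "any task may run on any processor" property follows. It uses os.sched_getaffinity, which is available on Linux only, to report which CPUs the scheduler is allowed to use for a process; by default that is all of them. The worker body is an illustrative assumption.

import os
from multiprocessing import Pool

def report(task_id):
    # Under SMP the OS may place this worker on any of the listed CPUs.
    allowed = sorted(os.sched_getaffinity(0))  # Linux-only call
    return f"task {task_id} in pid {os.getpid()} may run on CPUs {allowed}"

if __name__ == "__main__":
    with Pool(4) as pool:
        for line in pool.map(report, range(4)):
            print(line)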
Distributed computing
The components of a distributed system are located on different networked
computers, which coordinate their actions by communicating through HTTP,
RPC-like message passing, and message queues. Concurrency of components and
independent failure of components are the defining characteristics of
distributed systems. Typically, distributed programming is classified into
peer-to-peer, client-server, n-tier, or three-tier architectures. The terms
parallel computing and distributed computing are sometimes used
interchangeably, as there is much overlap between the two.
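As a minimal sketch of RPC-style coordination over HTTP, the Python code below uses the standard library's XML-RPC modules. Both ends run in one process here for demonstration; in a real distributed system the server would sit on a different networked machine. The service name add and port 8000 are illustrative assumptions.

import threading
import time
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def serve():
    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(lambda a, b: a + b, "add")
    server.serve_forever()

if __name__ == "__main__":
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.5)  # give the server a moment to start listening
    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    # the call travels over HTTP to the other component and back
    print(proxy.add(2, 3))  # -> 5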
Why parallel computing?
o Parallel computing deals with larger problems. In the real world, many things
happen at the same time in many different places, which is difficult to manage;
parallel computing helps handle this kind of extensively huge data.
o Parallel computing is the key to data modeling and dynamic simulation;
therefore, it is needed for real-world applications too.
o Serial computing is not ideal for implementing real-time systems; parallel
computing is, and it also offers concurrency and saves time and money.
o Large, complex datasets and their management can be organized only with the
parallel computing approach.
o The parallel computing approach ensures the effective use of resources and
hardware, whereas serial computation uses only part of the hardware and leaves
some parts idle.
Flynn's Classification
M.J. Flynn proposed a classification of computer organizations based on the
number of instruction streams and data streams that are manipulated
simultaneously. The sequence of instructions read from memory constitutes an
instruction stream, and the operations performed on the data in the processor
constitute a data stream.
SISD
SISD stands for 'Single Instruction and Single Data Stream'. It represents the
organization of a single computer containing a control unit, a processor unit, and a
memory unit.
Instructions are executed sequentially, and the system may or may not have internal
parallel processing capabilities.
Most conventional computers have an SISD architecture, like the traditional
von Neumann machines.
Parallel processing, in this case, may be achieved by means of multiple functional units or
by pipeline processing.
Instructions are decoded by the control unit, which then sends them to the
processing unit for execution.
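A minimal sketch of the SISD model in Python: a single instruction stream operating on a single data stream, one element at a time. The data and operation are illustrative assumptions.

data = [3, 1, 4, 1, 5, 9, 2, 6]
total = 0
for x in data:          # one instruction stream...
    total += x * x      # ...touches one data item per step
print(total)            # -> 173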
SIMD
SIMD stands for 'Single Instruction and Multiple Data Stream'. It represents an
organization that includes many processing units under the supervision of a common
control unit.
All processors receive the same instruction from the control unit but operate on different
items of data.
The shared memory unit must contain multiple modules so that it can communicate with
all the processors simultaneously.
SIMD is mainly dedicated to array processing machines. However, vector processors can
also be seen as a part of this group.
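A minimal sketch of the SIMD idea follows, using NumPy (a third-party library): one vectorized operation is applied to many data elements at once, and on modern CPUs such array operations commonly map to hardware SIMD instructions. The array contents are illustrative assumptions.

import numpy as np

a = np.arange(8)     # data elements: [0, 1, ..., 7]
b = np.full(8, 10)   # data elements: [10, 10, ..., 10]
c = a * b + 1        # one logical "instruction" over all 8 elements at once
print(c)             # -> [ 1 11 21 31 41 51 61 71]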
MISD
MISD stands for 'Multiple Instruction and Single Data stream'.
MISD structure is only of theoretical interest since no practical system has been
constructed using this organization.
In MISD, multiple processing units operate on one single data stream. Each
processing unit operates on the data independently via a separate instruction
stream.
(Figure: MISD organization, where M = memory module, CU = control unit, and P = processor unit.)
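Since no practical MISD machine has been built, the Python code below is only a conceptual sketch: several independent "instruction streams" (functions) are applied to the same single data item, in the spirit of fault-tolerant designs that vote on redundant results. The functions and input are illustrative assumptions.

from collections import Counter

def stream_a(x): return x * 2 + 1       # instruction stream 1
def stream_b(x): return (x << 1) | 1    # stream 2: same result via bit ops
def stream_c(x): return x + x + 1       # stream 3: same result via addition

datum = 21                              # the single data stream
results = [f(datum) for f in (stream_a, stream_b, stream_c)]
winner, votes = Counter(results).most_common(1)[0]
print(winner, votes)                    # -> 43 3 (all streams agree)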
MIMD
MIMD stands for 'Multiple Instruction and Multiple Data Stream'.
In this organization, all processors in a parallel computer can execute different instructions
and operate on various data at the same time.
In MIMD, each processor has a separate program and an instruction stream is generated
from each program.
(Figure: MIMD organization, where M = memory module, PE = processing element, and CU = control unit.)
Examples: most modern parallel computers, including multi-core processors, symmetric multiprocessors, and networked clusters, are MIMD machines.
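As an illustration, here is a minimal MIMD-style sketch in Python (standard library only): two independent processes execute different instruction streams (different functions) on different data streams at the same time. The functions and inputs are illustrative assumptions.

from multiprocessing import Process, Queue

def squares(data, out):
    # instruction stream 1 on data stream 1
    out.put(("squares", [x * x for x in data]))

def negate(data, out):
    # instruction stream 2 on data stream 2
    out.put(("negated", [-x for x in data]))

if __name__ == "__main__":
    out = Queue()
    p1 = Process(target=squares, args=([1, 2, 3], out))
    p2 = Process(target=negate, args=([4, 5, 6], out))
    p1.start(); p2.start()
    for _ in range(2):
        print(out.get())   # results arrive in whichever order they finish
    p1.join(); p2.join()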