Parallel Computing

This document discusses parallel computing concepts including Flynn's taxonomy of computer architectures, permutation networks, data parallelism, grid computing, and hyperthreading. Flynn's taxonomy classifies architectures based on the number of instruction and data streams processed simultaneously. Permutation networks rearrange data elements to allow parallel processing. Data parallelism involves applying the same operations to different data streams simultaneously. Grid computing uses distributed resources across a network as a virtual supercomputer. Hyperthreading allows a single physical processor to behave like multiple logical processors by sharing resources.

Uploaded by

shivrajchangle00

Q1. Discuss Flynn’s classification of parallel computers.

ANS: Parallel computing is computing where the jobs are broken into
discrete parts that can be executed concurrently. Each part is further broken
down into a series of instructions. Instructions from each piece execute
simultaneously on different CPUs. The breaking up of different parts of a
task among multiple processors will help to reduce the amount of time to run
a program. Parallel systems deal with the simultaneous use of multiple
computer resources that can include a single computer with multiple
processors, a number of computers connected by a network to form a
parallel processing cluster, or a combination of both. Parallel systems are
more difficult to program than computers with a single processor because the
architecture of parallel computers varies widely, and the processes of
multiple CPUs must be coordinated and synchronized. A particularly difficult
problem in parallel processing is portability.
An Instruction Stream is a sequence of instructions that are read from
memory. A Data Stream is the sequence of data on which those instructions
operate in the processor.
Flynn’s taxonomy is a classification scheme for computer architectures
proposed by Michael Flynn in 1966. The taxonomy is based on the number
of instruction streams and data streams that can be processed
simultaneously by a computer architecture. It defines four classes:
1. SISD (Single Instruction, Single Data): a single control unit fetches one instruction stream that operates on one data stream, as in a conventional uniprocessor.
2. SIMD (Single Instruction, Multiple Data): one instruction stream is applied simultaneously to many data streams, as in vector processors and GPUs.
3. MISD (Multiple Instruction, Single Data): multiple instruction streams operate on the same data stream; this class has few practical examples.
4. MIMD (Multiple Instruction, Multiple Data): multiple processors execute independent instruction streams on independent data streams, as in multicore and multiprocessor systems.
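The class names follow mechanically from the two stream counts, which a small Python sketch can make concrete (the helper function is purely illustrative; only the class names themselves are standard):

```python
def flynn_class(instruction_streams: int, data_streams: int) -> str:
    """Map stream counts onto Flynn's four architecture classes."""
    i = "S" if instruction_streams == 1 else "M"  # Single vs Multiple instruction
    d = "S" if data_streams == 1 else "M"         # Single vs Multiple data
    return f"{i}I{d}D"

# A classic uniprocessor: one instruction stream, one data stream.
print(flynn_class(1, 1))  # SISD
# A vector/GPU-style machine: one instruction stream over many data streams.
print(flynn_class(1, 8))  # SIMD
# A multicore system: independent instruction and data streams.
print(flynn_class(4, 4))  # MIMD
```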

Q2. Differentiate between perfect shuffle permutation and butterfly permutation. Also,
discuss the role of permutation networks in parallel computing.

In computer science, a permutation network is a type of network used to perform a
permutation on a set of data. It is a sequence of interconnected switches that can
be used to rearrange the order of data elements. The switches are arranged in a way
that allows them to perform a specific permutation on the data.

A perfect shuffle permutation is a type of permutation that rearranges the
elements of an array by interleaving the first half of the array with the second half.
For example, if we have an array of 8 elements, the perfect shuffle permutation
would rearrange the elements as follows: 1, 5, 2, 6, 3, 7, 4, 8.
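The 8-element example above can be reproduced directly (a minimal Python sketch):

```python
def perfect_shuffle(a):
    """Interleave the first half of `a` with the second half."""
    assert len(a) % 2 == 0, "perfect shuffle is defined for even-length arrays"
    half = len(a) // 2
    out = []
    # Take one element from each half in turn, like riffling a deck of cards.
    for x, y in zip(a[:half], a[half:]):
        out.extend([x, y])
    return out

print(perfect_shuffle([1, 2, 3, 4, 5, 6, 7, 8]))  # [1, 5, 2, 6, 3, 7, 4, 8]
```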

On the other hand, a butterfly permutation is a type of permutation that is used in
the context of fast Fourier transforms (FFT). It is a permutation that rearranges the
input data in a way that allows the FFT algorithm to compute the Fourier transform
of the data. The butterfly permutation is so named because the diagram of the
permutation looks like a butterfly.
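One common definition of the butterfly permutation on 2^k elements exchanges the most and least significant bits of each element's index; the sketch below assumes that definition:

```python
def butterfly_permutation(a):
    """Send the element at index i to the index obtained by swapping
    the most and least significant bits of i (len(a) a power of two)."""
    n = len(a)
    assert n > 1 and n & (n - 1) == 0, "length must be a power of two"
    k = n.bit_length() - 1          # number of index bits
    out = a[:]
    for i, x in enumerate(a):
        msb = (i >> (k - 1)) & 1
        lsb = i & 1
        j = i
        if msb != lsb:
            # Flipping both bits swaps their values when they differ.
            j = i ^ (1 << (k - 1)) ^ 1
        out[j] = x
    return out

print(butterfly_permutation([1, 2, 3, 4, 5, 6, 7, 8]))  # [1, 5, 3, 7, 2, 6, 4, 8]
```

Note that the permutation is its own inverse: applying it twice restores the original order.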
The role of a permutation network in parallel computing is to provide a way to
perform a permutation on a set of data in parallel. By using a permutation network, it
is possible to rearrange the order of data elements so that they can be
processed in parallel. This is useful when a large number of computations must be
performed on a large amount of data.

Q3. Discuss the concept of Data parallelism with suitable example.


Data Parallelism
Data parallelism is a different kind of parallelism that, instead of relying on process
or task concurrency, is related to both the flow and the structure of the information.
An analogy might revisit the automobile factory from our example in the previous
section. There we looked at how the construction of an automobile could be
transformed into a pipelined process. Here, because the construction of cars along
one assembly line has no relation to the construction of the same kinds of cars along any
other assembly line, there is no reason why we can’t duplicate the same assembly line
multiple times; two assembly lines will result in twice as many cars being produced
in the same amount of time as a single assembly line.
For data parallelism, the goal is to scale the throughput of processing based on the
ability to decompose the data set into concurrent processing streams, all
performing the same set of operations. For example, a customer
address standardization process iteratively grabs an address and attempts to
transform it into a standard form. This task is adaptable to data parallelism and can
be sped up by a factor of 4 by instantiating four address standardization processes
and streaming one-fourth of the address records through each instantiation
(Figure 14.3). Data parallelism is a more finely grained parallelism in that we achieve
our performance improvement by applying the same small set of tasks iteratively
over multiple streams of data.
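The address-standardization example can be sketched in Python. The `standardize` rule below is hypothetical, and a thread pool stands in for the four process instantiations (genuinely CPU-bound work would normally use a process pool instead):

```python
from concurrent.futures import ThreadPoolExecutor

def standardize(address: str) -> str:
    # Hypothetical standardization rule: uppercase and collapse whitespace.
    return " ".join(address.upper().split())

def standardize_all(addresses, workers=4):
    # Data parallelism: the SAME operation is applied to every record,
    # so the record stream can be split across `workers` concurrent workers.
    # map() preserves input order, so results line up with the input.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(standardize, addresses))

records = ["  12 main st ", "7 Oak Ave", "99  elm   road"]
print(standardize_all(records))  # ['12 MAIN ST', '7 OAK AVE', '99 ELM ROAD']
```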

Q4. Write short notes on Grid computing and Hyperthreading.


Grid computing is a type of distributed computing that involves a network of
computers working together to perform a task that would be difficult for a single
machine. The machines on the network work under the same protocol to act as a
virtual supercomputer. The task that they work on may include analyzing huge
datasets or simulating situations that require high computing power. Computers on
the network contribute resources like processing power and storage capacity to the
network. Grid computing is a subset of distributed computing, where a virtual
supercomputer comprises machines connected by a network, typically
Ethernet or sometimes the Internet. It can also be seen as a form of parallel
computing where, instead of many CPU cores on a single machine, the
system's cores are spread across various locations.
Hyperthreading is a technology that allows a single physical processor core to behave
like two logical processors. The logical cores share the physical core's
resources, such as execution units and cache, which differs from having separate physical
CPU cores. The benefit of hyperthreading is increased utilization of each core, since
shared resources can be used by one logical core while the other is stalled.
