Unit01-Parallel Computing Introduction

The document outlines the course structure for Parallel and Distributed Computing (CS404) for Fall 2023-24, covering topics such as parallel computing, distributed systems, memory architectures, programming models, and algorithms. It details various types of parallel computers, their architectures, and the principles of parallel programming, including message passing and multithreading. Additionally, it introduces fundamental concepts like Von Neumann architecture and Flynn's taxonomy for classifying computer architectures.


Parallel and Distributed Computing (CS404) Fall 2023-24

BSCS – 7th

Parallel and Distributed Computing

Unit 1: Introduction to Parallel Computing..................................... 1


What is Parallel Computing? ............................................................................................................. 1
Parallel Computing .................................................................................................................... 1
Parallel Computer ..................................................................................................................... 1
Parallel Computer Models ......................................................................................................... 2
Classes of Parallel Computer Architecture ................................................................................. 2
Concepts and Terminology ................................................................................................................ 4
Von Neumann Architecture ....................................................................................................... 4
Flynn’s Classical Taxonomy........................................................................................................ 5
Parallel Computing Terminology ............................................................................................... 9
Uses, Limitations and Costs of Parallel Computing .......................................................................... 10
Uses of Parallel Computing...................................................................................................... 10
Limitations of Parallel Computing ............................................................................................ 12

Unit 2: Introduction to Distributed Systems ................................. 14


Distributed Systems ........................................................................................................................ 14
Operational Layers of Distributed Computing.......................................................................... 14
Middleware and Distributed Systems .............................................................................................. 15
Types of Distributed Systems........................................................................................................... 16
High performance distributed computing................................................................................ 16
Distributed Information Systems ............................................................................................. 18
Pervasive systems ................................................................................................................... 21

Unit 3: Parallel Computer Memory ................................................. 24


Memory Hierarchies ........................................................................................................................ 24
Parallel Computer Memory Architecture ......................................................................................... 25
Shared Memory ...................................................................................................................... 25
Distributed Memory ................................................................................................................ 27
Hybrid Distributed-Shared Memory ........................................................................................ 28

I
Qurtuba University of Science and Information Technology Peshawar
(Computer Science Department)
Parallel and Distributed Computing (CS404) Fall 2023-24
th
BSCS – 7

Unit 04: Parallel Programming Models .......................................... 30


Introduction .................................................................................................................................... 30
Parallel Programming Models ......................................................................................................... 30
Data parallel ............................................................................................................................ 30
Task parallel ............................................................................................................................ 31
Process centric ........................................................................................................................ 31
Shared-distributed memory .................................................................................................... 32
Message Passing ..................................................................................................................... 32

Unit 05: Designing Parallel Programs ............................................ 33


Design Methodology ....................................................................................................................... 33
Steps in Designing Parallel Program ................................................................................................ 33
Understand the Problem ......................................................................................................... 33
Partitioning ............................................................................................................................. 34
Communication ....................................................................................................................... 35
Synchronization ...................................................................................................................... 37

Unit 06: Message Passing Interface (MPI) ...................................... 39


Introduction .................................................................................................................................... 39
MPI Programming Model ................................................................................................................ 39
MPI Basics ....................................................................................................................................... 39
C Language Binding ........................................................................................................................ 41

Unit 07: Multithreaded Programming............................................ 43


Threads overview ............................................................................................................................ 43
Kernel-level threads ................................................................................................................ 43
User-level threads ................................................................................................................... 44
Multithreaded Programming .......................................................................................................... 44
Multithreading on a Single Processor ...................................................................................... 44
Multithreaded Programming on Multiple Processors .............................................................. 44
Programming language support .............................................................................................. 44

Unit 08: Parallel Algorithms ........................................................ 45


What is an Algorithm? .................................................................................................................... 45

II
Qurtuba University of Science and Information Technology Peshawar
(Computer Science Department)
Parallel and Distributed Computing (CS404) Fall 2023-24
th
BSCS – 7

Parallel Algorithms.......................................................................................................................... 45
Algorithmic Notations for Parallel Algorithms ................................................................................. 45
Parallel Models for Parallel Algorithms ........................................................................................... 46
Shared-Memory Model ................................................................................................................... 47
PRAM Model ........................................................................................................................... 47
Network Models ............................................................................................................................. 48
Directed Acyclic Graph Models ........................................................................................................ 49
Parallel Algorithm Techniques ......................................................................................................... 49

Unit 09: GPU Architecture and Programming .............................. 50


Graphics Processing Unit (GPU)....................................................................................................... 50
GPU Architecture ............................................................................................................................ 50
Hardware Structure ................................................................................................................. 51
GPU Programming Model ............................................................................................................... 54

Unit 10: ................................................................................................. 55


Fault Tolerance ............................................................................................................................... 55
Basic Concepts ........................................................................................................................ 55
Concurrency control ........................................................................................................................ 58
Interconnection topologies .............................................................................................................. 59
Parallel Networks Topologies .................................................................................................. 60

III
Qurtuba University of Science and Information Technology Peshawar
(Computer Science Department)
Parallel and Distributed Computing (CS404) Fall 2023-24
th
BSCS – 7

Unit 1: Introduction to Parallel Computing


What is Parallel Computing?

Parallel Computing
Parallel computing is a type of computing architecture in which several processors execute or process an
application or computation simultaneously. It refers to the process of breaking down larger problems into
smaller, independent parts, which are often similar. These parts can be executed simultaneously by multiple
processors communicating via shared memory. After processing, the results are combined as part of an
overall algorithm. Parallel computing is also known as parallel processing.

There are generally four types of parallel computing:

1. Bit-level parallelism: increases processor word size, which reduces the quantity of instructions the
processor must execute in order to perform an operation on variables greater than the length of the
word.
2. Instruction-level parallelism: the two forms are hardware approach and software approach.
a. The hardware approach implements dynamic parallelism where the processor decides at
run-time which instructions to execute in parallel.
b. The software approach implements static parallelism where the compiler decides which
instructions to execute in parallel.
3. Task parallelism: the parallelization of computer code across multiple processors, where several
different tasks run at the same time on the same data (see the sketch after this list).
4. Superword-level parallelism: a vectorization technique that can exploit parallelism of inline code. It
involves identifying scalar instructions in a large basic block that perform the same operation, and
combining them into a superword operation on a multi-word object, if dependences do not prevent
it.
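
To make the task-parallel case concrete, here is a minimal sketch assuming POSIX threads (compile with -pthread); the array, the two tasks (sum and maximum) and the variable names are illustrative choices, not part of the handout.

/* Task parallelism sketch: two different tasks run concurrently on the same data. */
#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {4, 7, 1, 9, 3, 8, 2, 6};   /* the shared input        */
static long sum_result;                          /* written by the sum task */
static int  max_result;                          /* written by the max task */

static void *sum_task(void *arg)          /* task 1: total of the array   */
{
    (void)arg;
    long s = 0;
    for (int i = 0; i < N; i++)
        s += data[i];
    sum_result = s;
    return NULL;
}

static void *max_task(void *arg)          /* task 2: maximum of the array */
{
    (void)arg;
    int m = data[0];
    for (int i = 1; i < N; i++)
        if (data[i] > m)
            m = data[i];
    max_result = m;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sum_task, NULL);   /* both tasks start...       */
    pthread_create(&t2, NULL, max_task, NULL);   /* ...and run simultaneously */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("sum = %ld, max = %d\n", sum_result, max_result);
    return 0;
}

On a single core the two tasks would simply be interleaved; on two or more cores they can genuinely run at the same time.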

Based on communication frequency, parallel applications are typically classified as either:

i. Fine-grained parallelism: where subtasks communicate many times per second
ii. Coarse-grained parallelism: where subtasks communicate only a few times per second, or
iii. Embarrassingly parallel: where subtasks rarely or never communicate

Parallel Computer
A parallel computer is a set of processors that are able to work cooperatively to solve a computational
problem. Parallel computers offer the potential to concentrate computational resources on important
computational problems. Computational resources include processors, memory, or I/O bandwidth, etc.
Parallel computers include parallel supercomputers with hundreds or thousands of processors, networks of
workstations, multiple-processor workstations, and embedded systems.

A parallel computer is simply a collection of processors, typically of the same type, interconnected in a certain
fashion to allow the coordination of their activities and the exchange of data. The processors are assumed to
be located within a small distance of one another, and are primarily used to solve a given problem jointly.


In distributed systems, by contrast, a set of possibly many different types of processors is distributed over a
large geographic area. In distributed systems, the primary goals are:
 to use the available distributed resources, and
 to collect information and transmit it over a network connecting the various processors

Parallel Computer Models

Single Machine Model


The von Neumann computer is based on the single machine model. It comprises a central processing unit (CPU)
connected to a storage unit (memory). The CPU executes a stored program that specifies a sequence of
“read and write” operations on the memory. This is also called the sequential machine model.

Multicomputer Model
The multicomputer is an idealized parallel computer model. Each node consists of a von Neumann machine
(a CPU and memory). A node can communicate with other nodes by sending and receiving messages over an
interconnection network. Each computer executes its own program. This program may access local memory
and may send and receive messages over the network.

Figure 1: Multi-computer model
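
The defining feature of the multicomputer model is that nodes cooperate only by exchanging messages. The following sketch assumes MPI (the message-passing library introduced in Unit 6) and is illustrative only: node 0 sends one integer to node 1 over the interconnection network.

/* Multicomputer-style message passing sketch (assumes MPI; run with at
 * least two processes, e.g. mpirun -np 2 ./a.out). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);          /* which node am I? */

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);        /* to node 1   */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                               /* from node 0 */
        printf("node 1 received %d from node 0\n", value);
    }

    MPI_Finalize();
    return 0;
}

Each process runs its own copy of the program on its own local memory; the only way data moves between nodes is through explicit send and receive calls.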

Classes of Parallel Computer Architecture


Parallel computers can be classified according to a variety of architectural features and modes of operation.
Some of these criteria include:
 The type and number of processors.
 The interconnections among the processors and the corresponding communication schemes.
 The overall control and synchronization
 The input/output operations

Distributed-Memory MIMD Computer


MIMD stands for multiple instruction stream, multiple data stream.

 MIMD means that each processor can execute a separate stream of instructions on its own local
data.
 Distributed memory means that memory is distributed among the processors, rather than placed in
a central location.

The principal difference between a multicomputer and the distributed-memory MIMD computer is the cost
of sending and receiving messages among the nodes. In this architecture, the cost of messaging between
two nodes is dependent on the location of nodes and other network traffic. Examples of this class of
machine include the IBM SP, Intel Paragon, Thinking Machines CM5, Cray T3D, Meiko CS-2, and nCUBE.


Figure 2: Distributed-memory MIMD computer

Shared-Memory MIMD Computer


Shared-memory MIMD machines are also known as multiprocessor systems. In multiprocessor computers,
all processors share access to a common memory, typically via a bus or a hierarchy of buses.

Programs developed for multicomputers can also execute efficiently on multiprocessors, because shared
memory permits an efficient implementation of message passing. Examples of this class of machine include
the Silicon Graphics Challenge, Sequent Symmetry, and the many multiprocessor workstations.

Figure 3: Shared-memory MIMD computer

In the idealized multiprocessor model, any processor can access any memory element in the same amount
of time. In practice, however, this architecture usually introduces some form of memory hierarchy: for example,
copies of frequently used data items are stored in a cache associated with each processor, and access to this
cache is much faster than access to the shared memory.
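
A minimal shared-memory sketch, again assuming POSIX threads: every thread reads the same global array directly and deposits a partial sum into a shared results array, so no messages are needed. The array size, thread count and work-splitting scheme are illustrative assumptions.

/* Shared-memory MIMD sketch: all threads address the same memory. */
#include <pthread.h>
#include <stdio.h>

#define N        1000
#define NTHREADS 4

static double data[N];              /* shared by every thread             */
static double partial[NTHREADS];    /* one result slot per thread         */

static void *worker(void *arg)
{
    int id = *(int *)arg;           /* thread index 0..NTHREADS-1         */
    int chunk = N / NTHREADS;
    double s = 0.0;
    for (int i = id * chunk; i < (id + 1) * chunk; i++)
        s += data[i];               /* read shared memory directly        */
    partial[id] = s;                /* write result through shared memory */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    int ids[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    }

    double total = 0.0;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);
        total += partial[i];
    }
    printf("total = %f\n", total);   /* expect 1000.0 */
    return 0;
}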

SIMD (Single Instruction Multiple Data) computer


In SIMD machines, all processors execute the same instruction stream on a different piece of data. This
approach is appropriate only for specialized problems, such as image processing and certain numerical
simulations. These applications are characterized by a high degree of regularity. Multicomputer algorithms
cannot, in general, be executed efficiently on SIMD computers. The “MasPar MP” is an example of this class
of machine.


Figure 4: SIMD computer
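
As an illustration of the SIMD idea (not taken from the handout), the loop below applies one and the same operation to every element of its arrays; on SIMD hardware a vectorizing compiler can typically turn it into instructions that process several elements at once.

/* One instruction pattern, many data elements: a natural fit for SIMD. */
void scale_and_add(float *c, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++)
        c[i] = 2.0f * a[i] + b[i];   /* same operation applied element-wise */
}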

Concepts and Terminology

Von Neumann Architecture


Von Neumann architecture was first published by John von Neumann in 1945. It is also known as the Von
Neumann model or Princeton architecture. This computer architecture design consists of a Control Unit,
Arithmetic and Logic Unit (ALU), Memory Unit, Registers and Inputs/Outputs.

Von Neumann architecture is based on the stored-program computer concept, where program instructions and
data are stored in the same memory. This design is still used in most computers produced today.
Von Neumann architecture is the design upon which many general-purpose computers are based. Its key
elements are listed below; a toy sketch of the resulting fetch-decode-execute cycle follows the list:
 data and instructions are both stored as binary digits
 data and instructions are both stored in primary storage
 instructions are fetched from memory one at a time and in order (serially)
 the processor decodes and executes an instruction, before cycling around to fetch the next
instruction
 the cycle continues until no more instructions are available
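
To make the serial cycle concrete, here is a toy sketch of a fetch-decode-execute loop for a hypothetical accumulator machine; the instruction set and memory layout are invented for illustration and are not part of the von Neumann description above.

/* Toy von Neumann cycle: program and data share one memory array. */
#include <stdio.h>

enum { HALT, LOAD, ADD, STORE };     /* hypothetical opcodes */

int main(void)
{
    /* program: acc = mem[10]; acc += mem[11]; mem[12] = acc; halt */
    int memory[16] = { LOAD, 10, ADD, 11, STORE, 12, HALT,
                       0, 0, 0, 5, 7, 0 };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        int opcode = memory[pc++];                    /* fetch            */
        switch (opcode) {                             /* decode + execute */
        case LOAD:  acc  = memory[memory[pc++]]; break;
        case ADD:   acc += memory[memory[pc++]]; break;
        case STORE: memory[memory[pc++]] = acc;  break;
        case HALT:  running = 0;                 break;
        }
    }
    printf("memory[12] = %d\n", memory[12]);          /* prints 12 (5 + 7) */
    return 0;
}

Instructions are fetched one at a time and in order, exactly as the bullet list above describes; the cycle stops when the HALT instruction is reached.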

Design Elements of Von Neumann Architecture

Figure 5: Von Neumann architecture

Central Processing Unit (CPU)


The Central Processing Unit (CPU) is the electronic circuit responsible for executing the instructions of a
computer program. It is sometimes referred to as the microprocessor or processor. The CPU contains the
ALU, CU and a variety of registers.

Registers
Registers are high speed storage areas in the CPU. All data must be stored in a register before it can be
processed.


MAR (Memory Address Register): holds the memory location of data that needs to be accessed
MDR (Memory Data Register): holds data that is being transferred to or from memory
AC (Accumulator): where intermediate arithmetic and logic results are stored
PC (Program Counter): contains the address of the next instruction to be executed
CIR (Current Instruction Register): contains the current instruction during processing
Table 1: Registers in the CPU – Von Neumann architecture

Arithmetic and Logic Unit (ALU)


The ALU allows arithmetic (add, subtract, etc.) and logic (AND, OR, NOT, etc.) operations to be carried out.

Control Unit (CU)


The control unit controls the operation of the computer’s ALU, memory and input/output devices, telling
them how to respond to the program instructions it has just read and interpreted from the memory unit.
The control unit also provides the timing and control signals required by other computer components.

Buses
Buses are the means by which data is transmitted from one part of a computer to another, connecting all
major internal components to the CPU and memory. A standard CPU system bus comprises a control bus,
a data bus and an address bus.

Address bus: carries the addresses of data between the processor and memory
Data bus: carries data between the processor, the memory unit and the input/output devices
Control bus: carries control signals or commands from the CPU, and status signals from other devices, in order to control and coordinate all the activities within the computer
Table 2: Types of buses

Memory Unit
The memory unit consists of RAM, sometimes referred to as primary or main memory. Unlike secondary
memory, primary memory is faster and directly accessible by the CPU. RAM is split into partitions. Each
partition consists of an address and its contents (both in binary form). The addresses uniquely identify every
location in the memory. Loading data from permanent memory (hard drive), into the faster and directly
accessible temporary memory (RAM), allows the CPU to operate much faster.

Flynn’s Classical Taxonomy


Flynn's taxonomy is a classification of computer architectures, proposed by Michael J. Flynn in 1966 and
extended in 1972. This classification system has been used as a tool in the design of modern processors and their
functionalities. Flynn's classification is based upon the number of concurrent instruction (or control)
streams and data streams available in the architecture.

Flynn's taxonomy distinguishes multi-processor computer architectures according to how they can be
classified along the two independent dimensions of Instruction Stream and Data Stream. Each of these
dimensions can have only one of two possible states – Single or Multiple.

According to Flynn’s taxonomy, the parallel computers can be classified as:


1. Single Instruction stream Single Data stream (SISD)
2. Single Instruction stream Multiple Data stream (SIMD)


3. Multiple Instruction stream Single Data stream (MISD)


4. Multiple Instruction stream Multiple Data stream (MIMD)

Figure 6: Flynn's taxonomy

Single Instruction Stream, Single Data Stream (SISD):


A sequential computer which exploits no parallelism in either the instruction or data streams. A single control
unit (CU) fetches a single instruction stream (IS) from memory. The CU then generates appropriate control
signals to direct a single processing element (PE) to operate on a single data stream (DS), i.e., one operation at a
time. Examples of SISD architecture are traditional uniprocessor machines, such as older-generation
mainframes, minicomputers, workstations and single-processor/core PCs.

Figure 7: Single Instruction Stream, Single Data Stream (SISD)

This class of computers is characterized as:


 A serial (non-parallel) computer
 Single Instruction: Only one instruction stream is being acted on by the CPU during any one clock
cycle
 Single Data: Only one data stream is being used as input during any one clock cycle
 Deterministic execution

Single instruction stream, multiple data streams (SIMD)


A single instruction is simultaneously applied to multiple different data streams. Instructions can be
executed sequentially, such as by pipelining, or in parallel by multiple functional units. Flynn's 1972 paper
subdivided SIMD into three further categories:

1. Array Processor
These receive the one (same) instruction but each parallel processing unit has its own separate and
distinct memory and register file. The modern term for an array processor is "single instruction,
multiple threads" (SIMT).


2. Pipelined Processor
These receive the one (same) instruction but then read data from a central resource, each processes
fragments of that data, then writes back the results to the same central resource. In Flynn's 1977
paper the resource is main memory. For modern CPUs the resource is now more typically the
register file. An alternative name for this type of register-based SIMD is "packed SIMD".

3. Associative Processor
These receive the one (same) instruction but in each parallel processing unit an independent
decision is made, based on data local to the unit, as to whether to perform the execution or whether
to skip it. The modern term for an associative processor is "predicated" (or masked) SIMD.

Some modern designs (GPUs in particular) take features of more than one of these subcategories: today's
GPUs are SIMT (single instruction, multiple threads) but are also associative, i.e., each processing element in
the SIMT array is also predicated, as sketched below.

Figure 8: Single instruction stream, multiple data streams (SIMD)
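
The predication idea can be sketched in plain C (an illustrative example, not from the handout): every element is visited by the same instruction stream, but a per-element condition plays the role of the mask that decides whether the operation takes effect or is skipped.

/* Predicated (masked) operation: same instructions, per-element decision. */
void clamp_negatives(float *x, int n)
{
    for (int i = 0; i < n; i++)
        if (x[i] < 0.0f)      /* per-element predicate (the mask)          */
            x[i] = 0.0f;      /* executed only where the predicate is true */
}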

This class of computers is characterized as:


 Type of parallel computer
 Single Instruction: All processing units execute the same instruction at any given clock cycle
 Multiple Data: Each processing unit can operate on a different data element
 Best suited for specialized problems characterized by a high degree of regularity, such as
graphics/image processing.
 Synchronous (lockstep) and deterministic execution
 Two varieties: Processor Arrays and Vector Pipelines
 Most modern computers and graphics processing units (GPUs) employ SIMD instructions and execution units
 Examples:
o Processor Arrays: Thinking Machines CM-2, MasPar MP-1 & MP-2, ILLIAC IV
o Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP, NEC SX-2, Hitachi S820,
ETA10


Multiple instruction streams, single data stream (MISD)


Multiple instructions operate on one data stream. This is an uncommon architecture which is generally used
for fault tolerance. Heterogeneous systems operate on the same data stream and must agree on the result.
Examples include the Space Shuttle flight control computer.

Figure 9: Multiple instruction streams, single data stream (MISD)

MISD computers are characterized as:


 A type of parallel computer
 Multiple Instructions: Each processing unit operates on the data independently via separate
instruction streams.
 Single Data: A single data stream is fed into multiple processing units.
 Few actual examples of this class of parallel computer have ever existed
 Some conceivable uses might be:
o multiple frequency filters operating on a single signal stream
o multiple cryptography algorithms attempting to crack a single coded message

Multiple instruction streams, multiple data streams (MIMD)


These are multiple autonomous processors that simultaneously execute different instructions on different
data. MIMD architectures include multi-core superscalar processors and distributed systems, using either
one shared memory space or a distributed memory space.

Figure 10: Multiple instruction streams, multiple data streams (MIMD)

MIMD computers have following characteristics:


 A type of parallel computer
 Multiple Instruction: Every processor may be executing a different instruction stream
 Multiple Data: Every processor may be working with a different data stream
 Execution can be synchronous or asynchronous, deterministic or non-deterministic


 Currently, the most common type of parallel computer


 Examples: most current supercomputers, networked parallel computer clusters and "grids", multi-
processor SMP computers, multi-core PCs.
 Many MIMD architectures also include SIMD execution sub-components

Parallel Computing Terminology


Some of the more commonly used terms associated with parallel computing are listed below:

CPU
Modern-day CPUs consist of one or more cores; a core is a distinct execution unit with its own instruction
stream. Cores within a CPU may be organized into one or more sockets, each socket with its own distinct
memory. When a CPU consists of two or more sockets, the hardware infrastructure usually supports memory
sharing across sockets.

Node
A node is a standalone "computer in a box". Node usually comprised of multiple CPUs/processors/cores,
memory, network interfaces, etc. Nodes are networked together to comprise a supercomputer.

Task
A task is a logically discrete section of computational work. A task is typically a program or program-like set
of instructions that is executed by a processor. A parallel program consists of multiple tasks running on
multiple processors.

Pipelining
Pipelining is the breaking of a task into steps performed by different processor units, with inputs streaming
through, much like an assembly line; a type of parallel computing.

Shared Memory
Shared memory describes a computer architecture where all processors have direct access to common physical
memory. As a programming model, it refers to a model in which parallel tasks all have the same "picture" of
memory and can directly address and access the same logical memory locations, regardless of where the
physical memory actually exists.

Symmetric Multi-Processor (SMP)


SMP is the shared memory hardware architecture where multiple processors share a single address space
and have equal access to all resources, such as memory, disk, etc.

Distributed Memory
In hardware, distributed memory refers to network-based memory access for physical memory that is not
common to all processors. As a programming model, tasks can only logically "see" local machine memory and
must use communications to access memory on other machines where other tasks are executing.

Communications
Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as
through a shared memory bus or over a network.


Synchronization
The coordination of parallel tasks in real time, very often associated with communications. Synchronization
usually involves waiting by at least one task. Therefore, it may increase the wall clock execution time of a
parallel application.

Computational Granularity
In parallel computing, granularity is a quantitative or qualitative measure of the ratio of computation to
communication.

 Coarse: relatively large amounts of computational work are done between communication events
 Fine: relatively small amounts of computational work are done between communication events

Observed Speedup
It is one of the simplest and most widely used indicators for measuring the performance of a parallel
program. Speedup is defined as the ratio between the “wall-clock time of serial execution” and “wall-clock
time of parallel execution”

Speedup = (wall-clock time of serial execution) / (wall-clock time of parallel execution)
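
For example, if a program takes 120 seconds of wall-clock time when run serially and 15 seconds in its parallel version, the observed speedup is 120 / 15 = 8.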

Parallel Overhead
It is the required execution time that is unique to parallel tasks, as opposed to that for doing useful work.
Parallel overhead can include factors such as Task start-up time, Synchronizations, Data communications,
Software overhead imposed by parallel languages, libraries, operating system, etc. and Task termination
time.

Massively Parallel
It refers to the hardware that comprises a given parallel system, having many processing elements. The
meaning of "many" keeps increasing, but currently, the largest parallel computers are comprised of
processing elements numbering in the hundreds of thousands to millions.

Embarrassingly (Ideally) Parallel

It is solving many similar but independent tasks simultaneously, with little or no coordination between the
tasks.

Scalability
Scalability refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate
increase in parallel speedup with the addition of more resources. Factors that contribute to scalability
include Hardware (particularly memory-CPU bandwidths) and network communication properties,
Application algorithm, Parallel overhead related, Characteristics of the specific application.

Uses, Limitations and Costs of Parallel Computing

Uses of Parallel Computing


The primary goal of parallel computing is to increase available computation power for faster application
processing and problem solving. Parallel processing is generally implemented in operational environments or
scenarios that require massive computation or processing power.


Parallel computing infrastructure is typically housed within a single datacenter where several processors are
installed in a server rack. Computation requests are distributed in small chunks by the application servers,
and these chunks are then executed simultaneously on each server.

The importance of parallel computing continues to grow with the increasing usage of multicore processors
and GPUs. GPUs work together with CPUs to increase the throughput of data and the number of concurrent
calculations within an application. Using the power of parallelism, a GPU can complete more work than a
CPU in a given amount of time.

Some advantages of Parallel Computing over Serial Computing are:


 It saves time and money, as many resources working together reduce both execution time and
potential costs.
 It can be impractical to solve large problems using serial computing.
 It can take advantage of non-local resources when the local resources are finite. Non-local resources
include resources on wide area network or on the Internet.
 Serial Computing “wastes” the potential computing power. Parallel Computing makes better use of
the hardware.
 A single compute resource can only do one thing at a time. Multiple compute resources can do many
things simultaneously. Parallel computing provides concurrency and saves time and money.
 In the real world, many things happen at the same time in different places, and the resulting data is
extremely large. Real-world data needs dynamic simulation and modeling, and parallel computing is
the key to achieving this.
 Complex, large datasets and their management can only be handled using a parallel computing
approach.
 Moreover, it is impractical to implement real-time systems using serial computing.

Applications of Parallel Computing


Some important applications of parallel computing are:

Real-Time Simulation of Systems


Real-time simulation refers to a “computer model” of a physical system that can execute at the same rate as
actual "wall clock" time. In other words, the computer model runs at the same rate as the actual physical
system. Real-time simulation is extensively used in:

 Engineering fields
o Statistical power grid protection tests
o aircraft design and simulation
o motor drive controller design methods and space robot integration, etc
 Computer gaming
 Industrial market for operator training and off-line controller tuning

Science and Engineering


Parallel computing is being used to model difficult problems in many areas of science and engineering, such
as:

 Atmosphere, Earth, Environment


 Physics - applied, nuclear, particle, condensed matter, high pressure, fusion, photonics


 Bioscience, Biotechnology, Genetics


 Chemistry, Molecular Sciences
 Geology, Seismology
 Mechanical Engineering - from prosthetics to spacecraft
 Electrical Engineering, Circuit Design, Microelectronics
 Computer Science, Mathematics
 Defense, Weapons

Industrial and Commercial Applications


Commercial and industrial applications provide an equal or even greater driving force in the development of
faster computers. These applications require the processing of large amounts of data in sophisticated ways.
For example:

 Data analysis and "Big Data," databases, data mining


 Artificial Intelligence (AI)
 Oil exploration
 Web search engines, web based business services
 Medical imaging and diagnosis
 Pharmaceutical design
 Financial and economic modeling
 Management of national and multi-national corporations
 Image processing, advanced graphics, augmented and virtual reality, particularly in the
entertainment industry
 Networked video and multi-media technologies
 Collaborative work environments

Global Applications
Parallel computing is now being used extensively around the world, in a wide variety of applications. Some
of them are:

 Research
 Finance
 Logistic services
 Information processing services
 Aerospace
 Telecommunication
 Defense
 Health and medicine, and so on

Limitations of Parallel Computing


In the multicomputer model, accesses to local (same-node) memory are less expensive than accesses to remote
(different-node) memory; that is, read and write are less costly than send and receive. Hence, it is desirable
that accesses to local data be more frequent than accesses to remote data. This property is called locality.
Locality is a fundamental requirement for parallel software, in addition to concurrency and scalability. The
importance of locality depends on the ratio of remote to local access costs. This ratio can vary from 10:1 to
1000:1 or greater, depending on the relative performance of the local computer, the network, and the
mechanisms used to move data to and from the network.

