Parallel Programming

Parallel computing enables simultaneous execution of computations to enhance problem-solving speed, crucial in light of limitations posed by Moore's Law. It addresses increasing processing demands from applications like weather forecasting and AI by utilizing multiple processors, thus improving efficiency and scalability. The document outlines the need for parallel computing, its applications, programming platforms, and the importance of optimizing communication costs for performance.


Presented by:

Sushant Kr. Ray (28027), Piyush Singh (28055), Tanish Sinha (28032)
Introduction to Parallel Computing
Parallel computing involves the simultaneous execution of
computations to solve problems faster. This approach is crucial as
Moore's Law, which predicted the doubling of transistors on a
microchip every two years, faces limitations. By dividing tasks
into smaller parts that can be processed concurrently, parallel
computing unlocks new possibilities in scientific simulations,
artificial intelligence, machine learning, and large-scale data
analytics.

In essence, parallel computing is about harnessing the power of
multiple processors to tackle complex problems more efficiently.
This is particularly relevant in today's data-driven world, where
the size and complexity of datasets continue to grow
exponentially.
Need for parallel computing
The clock rate of processors has increased very slowly over the past 20 years, while processing demands
have only risen.

Increasing clock speed without any significant scientific advance would entail less efficient processors,
more power usage, and hence more heat production.

Certain applications, such as weather prediction, physical simulation, 3D graphics, and artificial intelligence,
require a huge amount of processing power.

While it is true that these applications were enabled by the increasing processing power of each new
generation of processors, it is equally true that advancements in these fields drove the demand for
still more processing power.

Many of these applications were already being run before the introduction of multi-core processors, using
clusters of multiple processors.

Even today, weather forecasting is such a complex task that it requires a chain of supercomputers for processing
Moving beyond Clock Speed
[Figure: trend of clock speed vs. core count of processors by year]
Why parallel processing?
While increasing the clock rate would entail more heat output
per unit area of the processor, adding more cores increases the
performance of the processor while keeping the heat output per
unit area from rising.

There are physical limits on the power usage and heat output of
processors; beyond these limits, processors can become unstable
and fail quickly.

Memory idling issue: in a single-processor architecture, the
memory fetch and write phases occur only once per pipeline,
which means the memory remains idle for the majority of
processing time, even though memory is the biggest bottleneck
during processing. Parallel processors can share the same
memory, ensuring it does not remain idle.

There is a limit to the performance gain from implicit
instruction-level pipeline parallelism; beyond it, we must move
to multiple processors for more performance.
Scope of parallel computing
Some application domains where the parallel paradigm has proven
to be very effective are:

High-Performance Computing (HPC)

Cloud Computing

Big Data

Multimedia Processing

Parallel processing is almost a requirement nowadays for the following
applications: computational science, machine learning, and real-time systems.

Ideal conditions for parallelism: minimal or no need for communication
between tasks (e.g., image processing). Parallelism is harder to exploit
when a high degree of interaction among tasks is required (e.g., tightly
coupled simulations).
Parallel Programming Platforms
Following are some ways that data can be transferred between parallel
units:

Shared Memory Systems: Processors share an address space. For example: a multicore CPU. (See the sketch after this list.)

Advantages: Easy communication via shared variables.
Challenges: Synchronization issues, cache coherence problems.

Distributed Memory Systems: Each processor has its own memory and communicates via a network. For example: clusters.

Advantages: Scalability; very large systems are possible.
Challenges: Explicit communication is needed (message passing).

Hybrid Systems: A combination of shared and distributed memory. For example: supercomputers.
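
A minimal sketch of shared-memory communication and its main challenge, assuming a C compiler with OpenMP support (compile with, e.g., gcc -fopenmp); the counter variable and loop bound are illustrative choices, not from the original slides:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        long counter = 0;

        /* All threads see the same `counter`: communication happens
           implicitly through the shared address space. The atomic
           directive is required because unsynchronized increments from
           several threads would race and lose updates, which is exactly
           the synchronization issue named above. */
        #pragma omp parallel for
        for (long i = 0; i < 1000000; i++) {
            #pragma omp atomic
            counter++;
        }

        printf("counter = %ld\n", counter);  /* 1000000 with the atomic */
        return 0;
    }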
Implicit vs Explicit Communication

Implicit Communication: Communication is hidden by the system (shared memory). Programmers are usually
either unaware of, or need not explicitly manage, the data transfer. For example, OpenMP uses pragma
directives that tell the compiler to parallelize certain parts of the application (a minimal sketch follows below).

Pros: Easier to program.
Cons: Harder to optimize performance.
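
A minimal sketch of implicit communication via OpenMP, assuming gcc -fopenmp or similar; the array size and fill value are illustrative:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    static double a[N];  /* static so the large array is not on the stack */

    int main(void) {
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 0.5;

        /* The pragma is the only parallel-specific code: the compiler
           and OpenMP runtime split the iterations across threads, and
           the reduction clause combines each thread's partial sum.
           The programmer writes no explicit data transfer. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (expected %f)\n", sum, 0.5 * N);
        return 0;
    }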

Explicit Communication: The programmer manages data movement (distributed memory); explicit data transfers
between tasks are written by hand. For example, MPI is a standardized, portable library specification
used to send and receive messages among different processes; these processes are aware of the data
transfer (a minimal sketch follows below).

Pros: Fine-grained control over data movement.
Cons: More programming effort needed.
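
A minimal sketch of explicit communication via MPI, assuming an implementation such as MPICH or Open MPI (compile with mpicc, run with mpirun -np 2); the message value and tag are illustrative:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* Explicit send: the programmer names the destination
               rank (1), the tag (0), and the data being moved. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Explicit receive: the matching call on the other
               process, which is fully aware of the transfer. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }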
Impact of Communication Costs

Higher communication overhead reduces parallel efficiency.

We need to optimize partitioning and minimize communication for
better performance gains from our parallel programs.

An example of an application well suited to parallelisation is
matrix multiplication, where very little information needs to be
transferred between processes. It also scales up and down easily
as requirements change (a sketch follows below).
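
A minimal sketch of why matrix multiplication parallelises well, again assuming OpenMP; the matrix size and fill values are illustrative. Each row of the result is computed independently, and the inputs are only read, so threads never exchange intermediate results:

    #include <stdio.h>
    #include <omp.h>

    #define N 512

    static double A[N][N], B[N][N], C[N][N];

    int main(void) {
        /* Fill the inputs with simple known values. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = 1.0;
                B[i][j] = 2.0;
            }

        /* Each thread computes a disjoint set of rows of C; A and B
           are read-only, so almost no communication is needed. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double sum = 0.0;
                for (int k = 0; k < N; k++)
                    sum += A[i][k] * B[k][j];
                C[i][j] = sum;
            }

        printf("C[0][0] = %f (expected %f)\n", C[0][0], 2.0 * N);
        return 0;
    }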
Conclusion
Parallel computing is essential for solving modern,
complex, and large-scale problems efficiently.

By dividing tasks among multiple processors, it allows for
faster, more efficient, and scalable computation.

It plays a critical role in fields like science, engineering,
medicine, AI, and big data analysis.

As technology advances, the importance of mastering parallel
computing will continue to grow.
Understanding parallel computing is key to
unlocking the full potential of today’s and
tomorrow’s computing systems.
