
PARALLEL FUNDAMENTAL CONCEPTS


Parallel computing is a type of computation in which many calculations are carried out simultaneously. This is in contrast to sequential computing, in which calculations are carried out one after the other.
Two main types of parallel computing:

• Shared memory parallelism: In shared memory parallelism, all of the processors have access to the same memory. This is the most common type of parallelism on modern computers, as most CPUs have multiple cores.

• Distributed memory parallelism: In distributed memory parallelism, each processor has its own memory. The processors must communicate with each other in order to share data. This type of parallelism is often used in high-performance computing clusters.
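
As a rough illustration (not part of the original slides), the sketch below contrasts the two styles in Python: threads share one results list (shared memory), while processes keep their own memory and pass results back as messages on a queue. The worker functions and the number of workers are made up for the example.

    import threading
    import multiprocessing as mp

    # Shared memory parallelism: all threads read and write the same list.
    shared_results = []
    lock = threading.Lock()

    def shared_worker(n):
        with lock:                       # synchronize access to the shared memory
            shared_results.append(n * n)

    # Distributed memory style: each process has its own memory, so results
    # must be communicated back explicitly (here as messages on a queue).
    def distributed_worker(n, queue):
        queue.put(n * n)

    if __name__ == "__main__":
        threads = [threading.Thread(target=shared_worker, args=(i,)) for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("shared-memory results:", shared_results)

        queue = mp.Queue()
        procs = [mp.Process(target=distributed_worker, args=(i, queue)) for i in range(4)]
        for p in procs:
            p.start()
        print("message-passing results:", [queue.get() for _ in procs])
        for p in procs:
            p.join()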
Two main ways to parallelize a program:

• Data parallelism: In data parallelism, the same operation is performed on different parts of the data set. For example, if you are sorting a list of numbers, you could split the list into two parts and sort each part in parallel.

• Task parallelism: In task parallelism, different tasks are performed in parallel. For example, if you are rendering a video, you could split the rendering task into smaller tasks and render each part in parallel.

Figure: example of data parallelism (left); example of task parallelism (right).
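
To make the two strategies concrete, here is a minimal Python sketch (an illustration, not code from the slides): the data is made up, and render_chunk is a toy stand-in for the video-rendering example.

    import heapq
    from multiprocessing import Pool

    def render_chunk(frames):          # toy stand-in for one piece of a larger job
        return [f"rendered {f}" for f in frames]

    if __name__ == "__main__":
        numbers = [9, 3, 7, 1, 8, 2, 6, 4]

        with Pool(processes=2) as pool:
            # Data parallelism: the same operation (sorting) runs on different
            # parts of the data set, and the sorted halves are then merged.
            half = len(numbers) // 2
            sorted_halves = pool.map(sorted, [numbers[:half], numbers[half:]])
            print(list(heapq.merge(*sorted_halves)))

            # Task parallelism: different tasks (rendering different groups of
            # frames) run at the same time on different workers.
            jobs = [pool.apply_async(render_chunk, ([0, 1],)),
                    pool.apply_async(render_chunk, ([2, 3],))]
            print([job.get() for job in jobs])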
CONCURRENCY AND PARALLEL EXECUTION

• Concurrency refers to the ability of a system to handle multiple tasks at the same time.

• Parallel execution refers to the actual execution of multiple tasks simultaneously.
Here are some other important concepts and terminology in parallel computing:

• Granularity: The granularity of a parallel program refers to the size of the tasks that are executed in parallel. Fine-grained parallelism involves many small tasks, while coarse-grained parallelism involves a few large tasks.

• Load balancing: Load balancing is the process of distributing the workload evenly across the processors. This is important for achieving good performance.

• Synchronization: Synchronization ensures that the different parts of a parallel program execute in the correct order. This is necessary to prevent errors.

• Amdahl's law: Amdahl's law states that the maximum speedup that can be achieved by parallelizing a program is limited by the fraction of the program that cannot be parallelized (see the sketch after this list).
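
The slide states Amdahl's law in words; the usual formula (an addition here, not from the deck) is speedup = 1 / ((1 - p) + p / N), where p is the fraction of the program that can be parallelized and N is the number of processors. A minimal sketch:

    def amdahl_speedup(parallel_fraction, num_processors):
        # Maximum speedup predicted by Amdahl's law.
        # parallel_fraction: share of the program that can be parallelized (0..1).
        # num_processors:    number of processors applied to that share.
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / num_processors)

    # If 90% of a program can be parallelized, even a huge number of
    # processors cannot speed it up more than 10x.
    print(amdahl_speedup(0.9, 4))      # ~3.08
    print(amdahl_speedup(0.9, 1000))   # ~9.91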
Parallel execution

• Refers to the actual execution of multiple tasks simultaneously. For example, a multi-core CPU can execute multiple instructions simultaneously. This means that multiple tasks can be running at the same time, without having to take turns.
Concurrency

• Multiple tasks may be in progress at the same time, but they may not be executed simultaneously.

• For example, a concurrent operating system may allow multiple users to be logged in and running programs at the same time. However, the operating system may only be able to run one program at a time on the CPU. This means that the tasks may need to take turns running on the CPU (see the sketch below).
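
Here is a minimal sketch of that turn-taking, using Python's asyncio on a single thread; the session names and the 0.1-second pauses are invented for the example. The two sessions overlap in time but are never executed simultaneously:

    import asyncio

    async def user_session(name, steps):
        # Each "user" does a little work, then yields at the await so the
        # other session can take its turn on the same thread.
        for step in range(steps):
            print(f"{name}: step {step}")
            await asyncio.sleep(0.1)   # hand control back to the event loop

    async def main():
        await asyncio.gather(
            user_session("alice", 3),
            user_session("bob", 3),
        )

    asyncio.run(main())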
Concurrency is often used to improve the performance of a system. For example, a concurrent web server can handle multiple requests at the same time. This can improve the responsiveness of the server and allow it to handle more requests overall (see the server sketch below).

Parallel execution can also be used to improve the performance of a system. However, it is important to note that not all tasks can be parallelized. For example, a task that requires sequential access to data cannot be parallelized.
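
As a sketch of such a concurrent server (using Python's standard library; the port number and the one-second delay are arbitrary choices for the example), ThreadingHTTPServer gives each incoming request its own thread, so several slow requests can be in progress at once instead of queueing behind one another:

    import time
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class SlowHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            time.sleep(1)                  # pretend each request does slow work
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello\n")

    if __name__ == "__main__":
        # Each request is handled in its own thread, so many clients can be
        # served concurrently even though each request takes a full second.
        ThreadingHTTPServer(("localhost", 8000), SlowHandler).serve_forever()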
Concurrency: A web server is handling multiple requests at the same time. The server is able to do this by switching between the different requests. This means that the requests are not being executed simultaneously, but they are being handled concurrently.

Parallel execution: A multi-core CPU is executing multiple instructions simultaneously. This means that the instructions are being executed at the same time, on different cores.
LATENCY

Latency is the time it takes for data to travel from one point to
another. It is often measured in milliseconds (ms). Latency can be
caused by a variety of factors, including the distance between the
two points, the type of network connection, and the load on the
network.

Latency is an important factor to consider when designing and using applications. For example, low latency is necessary for real-time communication, such as online gaming and video conferencing. A high-latency application can be frustrating for users, as it can cause delays in the response time of the application.
Here are some examples of latency:

• The time it takes for a web page to load
• The time it takes for a video to start playing
• The time it takes for a file to download
• The time it takes for a multiplayer game to respond to your input

There are a number of things that can be done to reduce latency, such as:

• Using a faster network connection
• Reducing the distance between the two points
• Reducing the load on the network
• Using a caching server (see the sketch below)
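
As an illustrative sketch (not from the slides), the code below measures latency with a timer and shows how a caching layer removes it on repeated requests; the one-second "network fetch" is simulated and the URL is a placeholder.

    import time

    cache = {}

    def fetch(url):
        # Simulated slow network fetch; a real request would travel over the network.
        time.sleep(1.0)
        return f"contents of {url}"

    def fetch_cached(url):
        # Caching layer: the first request pays the full latency,
        # repeated requests are answered from local memory.
        if url not in cache:
            cache[url] = fetch(url)
        return cache[url]

    for attempt in range(2):
        start = time.perf_counter()
        fetch_cached("http://example.com/page")
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"attempt {attempt + 1}: {elapsed_ms:.1f} ms")
    # The first attempt takes about 1000 ms; the second returns in well under a millisecond.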


Here is an example of how latency can affect the user experience of an application: imagine that you are playing a multiplayer game. If the latency is low, your input will be registered quickly and the game will respond smoothly. However, if the latency is high, there will be a delay between your input and the game's response. This can make the game difficult to play and frustrating for the user.
THANK YOU VERY MUCH TO ALL OF YOU
