
Parallel and Distributed Computing
Week 9
Parallel Algorithms

• We live in a world driven by speed and efficiency. From weather forecasting to self-driving cars and even streaming platforms like Netflix, modern systems rely on the ability to process massive amounts of data quickly. This is where parallel algorithms come into play.

• Parallel algorithms allow us to break down complex problems into smaller tasks that can be solved simultaneously, saving time and computing resources.
 Parallel algorithms are the foundation of parallel and distributed systems. Without understanding how to design, optimize, and implement these algorithms, it is impossible to build efficient parallel systems.

 For example, think about how Google searches billions of web pages in seconds, or how Netflix recommends movies to millions of users in real time. These systems rely on efficient parallel algorithms to handle tasks at such a massive scale.
Parallel Algorithms

• A parallel algorithm is an algorithm that can perform multiple operations in a given unit of time.

• The operations of a parallel algorithm are executed concurrently most of the time, as in the sketch below.
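A minimal sketch of this idea in Python (the function and data are illustrative, not from the lecture): a pool of worker processes squares the elements of a list, so several operations happen in the same unit of time.

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5, 6, 7, 8]
    with Pool(processes=4) as pool:
        # Each worker squares a different element concurrently.
        result = pool.map(square, data)
    print(result)  # [1, 4, 9, 16, 25, 36, 49, 64]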
Parallel Algorithms cont…

• Some problems are easy to divide up into pieces, so they can be solved as parallel problems.

• Some problems cannot be split up into parallel portions, as they require the result of a preceding step to effectively carry on with the next step. These are called inherently serial problems (contrast the two loops in the sketch below).
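A small Python sketch of the contrast (the values are illustrative):

data = [3, 1, 4, 1, 5, 9]

# Parallelizable: each result depends only on its own input element,
# so the iterations could run on different processors at once.
doubled = [x * 2 for x in data]
print(doubled)  # [6, 2, 8, 2, 10, 18]

# Inherently serial: each iteration needs the result of the previous
# one, so the steps must run one after another.
x = 0.5
for _ in range(5):
    x = 3.9 * x * (1 - x)  # the next value depends on the current value
print(x)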


Parallel Algorithms cont…

• Parallel algorithms on personal computers have become more common since the early 2000s because of extensive improvements in multiprocessing systems and the rise of multi-core processors.
Parallel Algorithms cont… Structure

• To apply any algorithm properly, it is very important that you select a proper data structure.

• A particular operation performed on one data structure may take more time than the same operation performed on another data structure.


Parallel Algorithms cont… Structure

• Therefore, the selection of a data structure must be done considering the architecture and the type of operations to be performed.

• The following data structures are commonly used in parallel programming:

1. Linked List

2. Arrays

3. Hypercube Network
Parallel Algorithms cont… Structure

• A linked list is a data structure having zero or more nodes connected by pointers.

• Nodes may or may not occupy consecutive memory locations.

• Each node has two or three parts: one data part that stores the data, and one or two link fields that store the address of the next and/or previous node.
Parallel Algorithms cont…
Structure (linked lists)
• The first node’s address is stored in an external pointer called the head. The last node, known as the tail, generally does not contain any address. There are three types of linked lists (node layouts sketched after this list):

1. Singly Linked List

2. Doubly Linked List

3. Circular Linked List
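A minimal Python sketch of the node layouts (class names are illustrative):

class SinglyNode:
    """Node with one data part and one link field (next)."""
    def __init__(self, data):
        self.data = data
        self.next = None  # address of the next node; None marks the tail

class DoublyNode:
    """Node with one data part and two link fields (prev and next)."""
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

# Build and traverse a small singly linked list: 1 -> 2 -> 3
head = SinglyNode(1)
head.next = SinglyNode(2)
head.next.next = SinglyNode(3)

node = head  # start from the external head pointer
while node is not None:
    print(node.data)
    node = node.next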


Parallel Algorithms cont…
Structure (Arrays)
• An array is a data structure in which we can store elements of the same type.

• It can be one-dimensional or multi-dimensional.

• Arrays can be created:

1. Statically

2. Dynamically
Parallel Algorithms cont…
Structure (Arrays)

• In statically declared arrays, the dimension and size of the array are known at compile time.

• In dynamically declared arrays, the dimension and size of the array are known at runtime.


Parallel Algorithms cont…
Structure (Arrays)
• For shared memory programming, arrays can be used as a common memory, and for data parallel programming, they can be used by partitioning them into sub-arrays, as in the sketch below.
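A Python sketch of the data-parallel use of an array (names and sizes are illustrative): the array is partitioned into sub-arrays and each worker process sums its own part.

from multiprocessing import Pool

def partial_sum(sub_array):
    return sum(sub_array)

if __name__ == "__main__":
    data = list(range(1, 17))  # the full array
    k = 4                      # number of partitions / workers
    step = len(data) // k
    sub_arrays = [data[i * step:(i + 1) * step] for i in range(k)]

    with Pool(processes=k) as pool:
        # One partial sum per sub-array, computed concurrently.
        partials = pool.map(partial_sum, sub_arrays)

    print(partials)       # [10, 26, 42, 58]
    print(sum(partials))  # 136, the same as sum(data)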


Parallel Algorithms cont…
Structure (Hypercube Network)
• A hypercube network is a specific type of network topology used in parallel computing systems.

• It is designed to connect multiple processors efficiently in a highly structured and scalable way.

• Each node represents a processor, and each edge represents a direct communication link between processors.

• It is also known as an n-cube, where n is the number of dimensions (see the addressing sketch below).
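In the standard labelling, each of the 2^n processors gets an n-bit label, and two processors are directly linked exactly when their labels differ in a single bit. A short Python sketch (function name illustrative):

def hypercube_neighbours(node, n):
    # Flipping each of the n bits with XOR gives the n directly
    # connected processors.
    return [node ^ (1 << bit) for bit in range(n)]

n = 3  # a 3-cube connects 2**3 = 8 processors
for node in range(2 ** n):
    links = [format(m, "03b") for m in hypercube_neighbours(node, n)]
    print(format(node, "03b"), "->", links)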


Parallel Algorithms cont… Design
Technique
• Selecting a proper design technique for a parallel algorithm is the most difficult and important task.

• Most parallel programming problems may have more than one solution.


Parallel Algorithms cont… Design
Technique
• The following design techniques can be used for parallel algorithms:

1. Divide and conquer

2. Greedy Method

3. Dynamic Programming

4. Backtracking
Parallel Algorithms cont… Design
Technique (Divide and Conquer)

• In the divide and conquer approach, the problem is divided into several small sub-problems.

• Then the sub-problems are solved recursively and combined to get the solution of the original problem.

• The divide and conquer approach involves the following steps at each level:
Parallel Algorithms cont… Design
Technique (Divide and Conquer)

• Divide: The original problem is divided into sub-problems.

• Conquer: The sub-problems are solved recursively.

• Combine: The solutions of the sub-problems are combined to get the solution of the original problem.


Parallel Algorithms cont… Design
Technique (Divide and Conquer)

• The divide and conquer approach is applied in the following algorithms:

1. Binary search

2. Quick sort

3. Merge sort

4. Matrix multiplication
Parallel Algorithms cont… Design
Technique (Divide and Conquer)
Sorting Example: Divide and Conquer Rule

 Sort the array [14, 10, 4, 8, 2, 12, 6, 0] in ascending order.

Solution:

 Divide: split recursively until single elements remain.

[14, 10, 4, 8, 2, 12, 6, 0]
[14, 10, 4, 8]              [2, 12, 6, 0]
[14, 10]     [4, 8]         [2, 12]     [6, 0]
[14]  [10]   [4]  [8]       [2]  [12]   [6]  [0]

 Conquer and Combine: merge the sorted pieces back together.

[10, 14]     [4, 8]         [2, 12]     [0, 6]
[4, 8, 10, 14]              [0, 2, 6, 12]
[0, 2, 4, 6, 8, 10, 12, 14]
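A Python sketch of this example (a simple merge sort; since the two halves are independent, the top-level halves are sorted by separate processes before the final combine):

from multiprocessing import Pool

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

def merge_sort(a):
    if len(a) <= 1:      # conquer: a single element is already sorted
        return a
    mid = len(a) // 2    # divide
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))  # combine

if __name__ == "__main__":
    data = [14, 10, 4, 8, 2, 12, 6, 0]
    mid = len(data) // 2
    with Pool(processes=2) as pool:
        left, right = pool.map(merge_sort, [data[:mid], data[mid:]])
    print(merge(left, right))  # [0, 2, 4, 6, 8, 10, 12, 14]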
Parallel Algorithms cont… Design
Technique (Greedy Algorithm)

• In the greedy algorithm approach, solutions are made from the given solution domain.

• Being greedy, the closest solution that seems to provide an optimum solution is chosen.
Parallel Algorithms cont… Design
Technique (Greedy Algorithm)

• Greedy algorithms try to find a localized optimum solution, which may eventually lead to a globally optimized solution. However, greedy algorithms generally do not provide globally optimized solutions.
Parallel Algorithms cont… Design
Technique (Greedy Algorithm)
• The problem is to count to a desired value by choosing the fewest possible coins, and the greedy approach forces the algorithm to pick the largest possible coin first.

• If we are provided coins of 1, 2, 5 and 10 and we are asked to count 18, the greedy procedure will be:
Parallel Algorithms cont… Design
Technique (Greedy Algorithm)

1. Select one 10 coin; the remaining count is 8.

2. Then select one 5 coin; the remaining count is 3.

3. Then select one 2 coin; the remaining count is 1.

4. Finally, selecting one 1 coin solves the problem.

The greedy approach gives us 10 + 5 + 2 + 1, i.e., 4 coins. This is the optimal solution for this particular setup.
Parallel Algorithms cont… Design
Technique (Greedy Algorithm)
• If we slightly change the problem, the same approach may not be able to produce the same optimum result.

• For a currency system where we have coins of value 1, 7 and 10, counting coins for value 18 will be absolutely optimum, but for a count like 15 it may use more coins than necessary.
Parallel Algorithms cont… Design
Technique (Greedy Algorithm)

• For example, for value 15 the greedy approach will use 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins.

• The same problem could be solved by using only 3 coins (7 + 7 + 1).

• Hence, we may conclude that the greedy approach picks an immediately optimized solution and may fail where global optimization is a major concern. Both cases are shown in the sketch below.
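A Python sketch of the greedy procedure described above (function name illustrative), reproducing both the optimal and the suboptimal case:

def greedy_coins(coins, target):
    # Repeatedly pick the largest coin that does not overshoot the target.
    picked = []
    for coin in sorted(coins, reverse=True):
        while target >= coin:
            picked.append(coin)
            target -= coin
    return picked

print(greedy_coins([1, 2, 5, 10], 18))  # [10, 5, 2, 1] -> 4 coins, optimal
print(greedy_coins([1, 7, 10], 15))     # [10, 1, 1, 1, 1, 1] -> 6 coins,
                                        # although 7 + 7 + 1 needs only 3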
Parallel Algorithms cont…
Design Technique (Dynamic Programming)

• The dynamic programming approach is similar to divide and conquer in breaking down the problem into smaller and yet smaller possible sub-problems.

• But unlike divide and conquer, these sub-problems are not solved independently.
Parallel Algorithms cont…
Design Technique (Dynamic Programming)

• Rather, the results of these smaller sub-problems are remembered and used for similar or overlapping sub-problems.

• Dynamic programming is used where we have problems which can be divided into similar sub-problems, so that their results can be re-used. Mostly, these algorithms are used for optimization.
Parallel Algorithms cont…
Design Technique (Dynamic Programming)
• Before solving the sub-problem at hand, a dynamic algorithm will try to examine the results of the previously solved sub-problems.

• The solutions of sub-problems are combined in order to achieve the best solution, as in the memoized sketch below.
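A Python sketch of this idea (function names illustrative): the results of smaller sub-problems are remembered in a cache and re-used, which finds the optimal coin counts that the greedy approach missed.

from functools import lru_cache

def min_coins(coins, target):
    @lru_cache(maxsize=None)
    def solve(remaining):
        if remaining == 0:
            return 0
        best = float("inf")
        for coin in coins:
            if coin <= remaining:
                # Overlapping sub-problems: solve() re-uses cached results.
                best = min(best, 1 + solve(remaining - coin))
        return best
    return solve(target)

print(min_coins((1, 7, 10), 15))     # 3  (7 + 7 + 1)
print(min_coins((1, 2, 5, 10), 18))  # 4  (10 + 5 + 2 + 1)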
Design Technique
(Backtracking)
• Backtracking is an optimization technique for solving combinatorial problems.

• In backtracking, we start with a possible solution which satisfies all the required conditions.

• Then we move to the next level, and if that level does not produce a satisfactory solution, we return one level back and start with a new option.
Design Technique (Backtracking)

 In a parallel backtracking algorithm, we aim to divide the problem into independent sub-problems, where different branches of the search tree can be explored in parallel by multiple processors or threads. This approach can significantly reduce the running time, especially when the search space is large.
Design Technique (Backtracking)

 Each processor is assigned a different branch of the search space to explore. Since these branches are independent, there are no data dependencies between the processors. The processors work concurrently, each exploring a different path of the search tree.
Design Technique (Backtracking)

 Synchronization:
 Once a solution is found (or all branches are explored), the processors need to synchronize. This can involve checking whether any processor has found the desired solution and then terminating the search across all processors.
 If no solution is found, all processors continue until they have fully explored the search space. A sketch follows below.
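A Python sketch of parallel backtracking on the classic N-queens problem (an illustrative choice, not from the lecture): each first-row placement roots an independent branch of the search tree, and the branches are explored by separate worker processes.

from multiprocessing import Pool

N = 8  # board size

def safe(cols, row, col):
    """True if a queen at (row, col) conflicts with none already placed."""
    return all(c != col and abs(c - col) != row - r
               for r, c in enumerate(cols))

def count_solutions(first_col):
    """Backtracking search over the branch rooted at (row 0, first_col)."""
    def extend(cols):
        row = len(cols)
        if row == N:
            return 1  # a complete placement is one solution
        return sum(extend(cols + [col])
                   for col in range(N) if safe(cols, row, col))
    return extend([first_col])

if __name__ == "__main__":
    with Pool() as pool:
        # One independent branch per first-row column, explored concurrently;
        # no data dependencies exist between the workers.
        per_branch = pool.map(count_solutions, range(N))
    print(sum(per_branch))  # 92 solutions for the 8-queens problem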
Thank you
