
Assignment

on
“Parallel Algorithm”

Submitted to:

Albina Alam

Department of Computer Science & Engineering (CSE)


University of Information Technology and Sciences (UITS)

-------

Submitted by:

Md. Ashadul Haque Arif


Id No – 16251015
------
Date of submission: 13-05-2020
1. Define parallel algorithm. What makes a parallel algorithm better?
Answer: A parallel algorithm is an algorithm that has been specifically written for
execution on a computer with two or more processors, although it can also run on a
computer with a single processor. Such a computer may also have multiple functional
units, pipelined functional units, and a pipelined memory system.
Two basic measures make a parallel algorithm better:
1. Throughput: the number of operations completed per unit of time.
2. Latency: the time needed to complete one operation.
2. Mention three sectors where parallel algorithms work best.
Answer: Three sectors where parallel algorithms work best are:
1. Odd-Even Transposition Sort: Odd-Even Transposition Sort is based on the
Bubble Sort technique. It compares two adjacent numbers and switches them if
the first number is greater than the second, producing an ascending-order list;
the opposite applies for a descending-order list. The sort operates in two
alternating phases, an odd phase and an even phase. In both phases, each
process exchanges its number with the adjacent process on its right.
2. Parallel Merge Sort: Parallel merge sort first divides the unsorted list into
the smallest possible sub-lists, compares each sub-list with its adjacent one,
and merges them in sorted order. It parallelizes very naturally because it
follows the divide-and-conquer approach: the two halves can be sorted
independently.
3. Computing the Sum of a Sequence in Parallel: Pairs of elements are added
simultaneously, halving the number of partial sums at each step, so n numbers
can be summed in about log2(n) parallel steps instead of n - 1 sequential
additions.
3. Briefly explain the limitations of parallel algorithms.
Answer: We face the following limitations when designing a parallel program:
1. Data Dependency
2. Race condition
3. Resource Requirements
4. Scalability
5. Parallel Slowdown
Explaining these limitations:
1. Data Dependency: Results from multiple uses of the same location(s) in
storage by different tasks. In the loop below, each iteration reads the value
written by the previous one, so the iterations cannot run in parallel:

e.g.
for (int i = 1; i < 100; i++)
    array[i] = array[i - 1] * 20;

• Distributed memory architectures: communicate required data at
synchronization points.
• Shared memory architectures: synchronize read/write operations between
tasks.
2. Race Condition: Suppose two tasks, A and B, each update a shared variable
in three instructions: (1) read the variable, (2) add one to it, (3) write the
result back. If instruction 1B is executed between 1A and 3A, or if
instruction 1A is executed between 1B and 3B, one task overwrites the other's
update and the program produces incorrect data. This is known as a race
condition.

3. Resource Requirements:
• The primary intent of parallel programming is to decrease execution
wall-clock time; however, to accomplish this, more CPU time is required. For
example, a parallel code that runs in 1 hour on 8 processors actually uses 8
hours of CPU time.
• The amount of memory required can be greater for parallel codes than for
serial codes, due to the need to replicate data and the overheads associated
with parallel support libraries and subsystems.

4. Scalability: Two types of scaling, based on time to solution:
• Strong scaling: the total problem size stays fixed as more processors are
added.
• Weak scaling: the problem size per processor stays fixed as more processors
are added.

Hardware factors play a significant role in scalability. Examples:
• Memory-CPU bus bandwidth
• Amount of memory available on any given machine or set of machines
• Processor clock speed

5. Parallel Slowdown: Not all parallelization results in speed-up.
• Once a task is split into multiple threads, those threads may spend a large
amount of time communicating with each other, degrading the system's
performance.
• This is known as parallel slowdown.
