Divide and Conquer Algorithm

The document discusses the divide-and-conquer algorithm, emphasizing its effectiveness in parallel programming by breaking problems into independent subproblems that can be solved simultaneously. It highlights the benefits of scalability, improved performance, and modularity, while also addressing challenges such as overhead and load balancing. Examples of applications include parallel merge sort and matrix multiplication, illustrating the algorithm's versatility in optimizing computational tasks.


In the name of Allah, the Most Gracious, the Most Merciful
Relation between Divide and Conquer and Parallel Programming

The relation between divide and conquer and parallel programming is strong, as both involve breaking down problems into smaller, more manageable tasks. Let's explore how they are related:
Divide and Conquer Overview:
•Divide and Conquer is a fundamental algorithm design technique.
•It works by recursively dividing a problem into smaller subproblems,
solving those subproblems independently, and then combining their
results to solve the original problem.
•Classic examples include algorithms like Merge Sort, Quick Sort, and
Binary Search.

Parallel Programming Overview:

•Parallel Programming refers to the execution of multiple tasks simultaneously on different processors or cores.
•In parallel programming, tasks can be executed in parallel to reduce
the overall execution time.
Relation Between Divide and Conquer and Parallel Programming:
Parallelism in Divide and Conquer:
•Independent Subproblems: One of the main characteristics of divide
and conquer algorithms is that the subproblems are often
independent of each other. This independence makes them perfect
candidates for parallel execution. Once a problem is divided into
subproblems, each subproblem can be solved in parallel on a
separate core or processor.
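
As a small illustration (an assumed example, not taken from the slides; the list, the half-sum helper, and the two-worker pool are all illustrative choices), two independent subproblems, here the sums of the two halves of a list, can be handed to separate processes and their results combined afterwards:

# Python: two independent subproblems (the half-sums) solved by separate processes.
from concurrent.futures import ProcessPoolExecutor

def subproblem_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    mid = len(data) // 2
    with ProcessPoolExecutor(max_workers=2) as pool:
        left_sum, right_sum = pool.map(subproblem_sum, [data[:mid], data[mid:]])
    # Combine step: a single addition of the two partial results.
    print(left_sum + right_sum == sum(data))  # True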

Recursive Nature:
•Divide and conquer algorithms are often recursive in nature.
Recursive calls to solve subproblems lend themselves well to parallel
execution because each call can be assigned to a different processor.
This helps in reducing the total execution time.
Parallelizing the Conquer Step:
•In some divide and conquer algorithms, the combine (conquer) step
is the final stage, where the results of the subproblems are merged. In
many cases, this step can also be parallelized, especially if it involves
operations like merging or summing up results from independent
subproblems.

Benefits of Using Divide and Conquer in Parallel Programming:
•Scalability: Divide and conquer naturally fits into a parallel programming
model because it divides the problem into smaller, parallelizable pieces. As
the number of available processors increases, the solution scales well,
reducing overall execution time.
•Improved Performance: By solving subproblems in parallel, the total time
complexity of many divide and conquer algorithms can be reduced
significantly when implemented in a parallel programming environment.
•Modularity: The divide and conquer approach allows for easy parallelization
since the tasks are already broken down into smaller parts. This makes it
straightforward to distribute tasks among multiple processors or cores.

Challenges in Parallelizing Divide and Conquer:

•Overhead: There is some overhead associated with dividing the problem and coordinating between different processors or cores. If the problem
size is too small, the cost of managing parallel tasks might outweigh the
benefits of parallelization.
•Load Balancing: Ensuring that the subproblems are evenly distributed
among processors is essential for achieving efficient parallelization.
Imbalanced work distribution can lead to some processors sitting idle
while others are overloaded.
•Data Dependency: In some divide and conquer problems, subproblems are
not entirely independent, which makes parallelizing difficult.
Synchronization or communication between threads or processes might be
necessary.

Examples of Divide and Conquer in Parallel Programming:

•Parallel Merge Sort: Both halves of the array can be sorted in parallel, and the merge step can also be parallelized by dividing it into smaller sections (a minimal Python sketch follows this list).
•Parallel Matrix Multiplication: The divide and conquer strategy for matrix multiplication (like
Strassen’s algorithm) can be parallelized by solving smaller matrix products simultaneously.
•Parallel FFT (Fast Fourier Transform): The FFT algorithm, which is often used in signal
processing, can be parallelized using the divide and conquer approach.
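
To make the parallel merge sort example concrete, here is a minimal Python sketch (not from the slides): the function names, the cutoff constant, and the use of concurrent.futures.ProcessPoolExecutor with two workers are assumptions made for illustration, and only the top-level split is parallelized while the final merge stays sequential.

# Parallel merge sort sketch: the two halves are sorted by separate processes.
import concurrent.futures

CUTOFF = 10_000  # below this size, parallel overhead outweighs the benefit

def merge(left, right):
    # Combine two sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def merge_sort(a):
    # Ordinary sequential divide and conquer.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

def parallel_merge_sort(a):
    # Divide: the two halves are independent subproblems,
    # so each can be sorted by its own process.
    if len(a) <= CUTOFF:
        return merge_sort(a)
    mid = len(a) // 2
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as pool:
        left, right = pool.map(merge_sort, [a[:mid], a[mid:]])
    # Conquer: merge the two sorted halves.
    return merge(left, right)

if __name__ == "__main__":
    import random
    data = [random.randint(0, 10**6) for _ in range(50_000)]
    assert parallel_merge_sort(data) == sorted(data)
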
Summary:
•Divide and conquer provides a natural framework for parallel programming because of its
recursive division into independent subproblems.
•By executing these subproblems concurrently in parallel, you can achieve significant
performance gains, especially in multi-core or distributed systems.
•While the fit between divide and conquer and parallel programming is strong, careful
consideration of overhead, load balancing, and data dependencies is necessary to realize the
full benefits of parallelism.

2. Parallel Programming Overview:

•Parallel Programming refers to the execution of multiple tasks simultaneously on different processors or cores.
•In parallel programming, tasks can be executed in parallel to reduce the overall
execution time.
3. Relation Between Divide and Conquer and Parallel Programming:
Parallelism in Divide and Conquer:
•Independent Subproblems: One of the main characteristics of divide and
conquer algorithms is that the subproblems are often independent of each other.
This independence makes them perfect candidates for parallel execution. Once a
problem is divided into subproblems, each subproblem can be solved in parallel
on a separate core or processor.
•For example:
• In Merge Sort, the array is divided into two halves, and each half is
sorted independently. These two halves can be sorted in parallel.
• In Quick Sort, after selecting a pivot, the algorithm divides the array
into two partitions. Both partitions can be sorted independently, and
hence, in parallel.


Divide & Conquer Algorithm

Recursion in Words of Wisdom

• Philosopher Lao-tzu:
The journey of a thousand miles begins with a single step

Divide-and-Conquer Algorithm

A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.

The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as:
sorting (e.g., quicksort, merge sort),
multiplying large numbers (e.g., the Karatsuba algorithm),
finding the closest pair of points, and
syntactic analysis (e.g., top-down parsers).

Divide-and-Conquer Algorithm

Parallelism
Divide-and-conquer algorithms are naturally adapted for execution in multi-processor machines, especially shared-memory systems, where the communication of data between processors does not need to be planned in advance because distinct sub-problems can be executed on different processors.

Divide-and-Conquer Algorithm

Memory access
Divide-and-conquer algorithms naturally tend to make efficient use
of memory caches. The reason is that once a sub-problem is small
enough, it and all its sub-problems can, in principle, be solved within
the cache, without accessing the slower main memory.

D&C algorithms can be designed for important problems (e.g., sorting, FFTs, and matrix multiplication) to be optimal cache-oblivious algorithms: they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size.

Divide-and-Conquer Algorithm

Stack size
In recursive implementations of D&C algorithms, one must make sure that
there is sufficient memory allocated for the recursion stack, otherwise, the
execution may fail because of stack overflow.

Stack overflow may be difficult to avoid when using recursive procedures, since many compilers assume that the recursion stack is a contiguous area of
memory, and some allocate a fixed amount of space for it. Compilers may also
save more information in the recursion stack than is strictly necessary, such as
return address, unchanging parameters, and the internal variables of the
procedure. Thus, the risk of stack overflow can be reduced by minimizing the
parameters and internal variables of the recursive procedure or by using an
explicit stack structure.

Divide-and-Conquer Algorithm

Recursion
Divide-and-conquer algorithms are naturally implemented
as recursive procedures. In that case, the partial sub-problems
leading to the one currently being solved are automatically stored in
the procedure call stack. A recursive function is a function that calls
itself within its definition.

H.W.: Write a piece of code that reports the stack size used by your computer.
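
One possible way to approach this homework in Python (a sketch, not the course's reference solution; the resource module is Unix-only, and the printed values are limits and settings rather than current stack usage):

# Report stack-related limits known to Python and to the operating system.
import sys
import threading

def report_stack_limits():
    # Python's guard against runaway recursion (maximum number of frames).
    print("Python recursion limit:", sys.getrecursionlimit())
    # Stack size (bytes) Python requests for new threads; 0 = platform default.
    print("Thread stack size setting:", threading.stack_size())
    try:
        import resource  # available on Unix-like systems only
        soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
        print("OS stack soft/hard limit (bytes):", soft, hard)
    except ImportError:
        print("resource module not available on this platform")

if __name__ == "__main__":
    report_stack_limits()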

Divide-and-Conquer Algorithm

Method
• Divide problem into subproblems
• Solve subproblems
• If subproblems have same nature as original
problem, the strategy can be applied recursively

• Merge subproblem solutions into a solution for the original problem

Breaking a Stone into Dust

Break Stone:
You want to grind a stone into dust (very small stones)
What is your first step?
First Step
• Use a hammer and strike the stone
Next Step
• If a stone piece that results is small enough, we are done with that part
• For pieces that are too large, repeat the
BreakStone process

Breaking a Stone into Dust (cont'd)

If the problem is small enough to be solved directly, do it.

If not, find a smaller problem and use its solution to create the solution to the larger problem.

Divide-and-Conquer Algorithm

• Divide-and-conquer is a general algorithm design paradigm:
– Divide: divide the input data S into two or more disjoint subsets S1, S2, …
– Recur: solve the subproblems recursively
– Conquer: combine the solutions for S1, S2, …, into a solution for S
• The base cases for the recursion are subproblems of constant size
• Analysis can be done using recurrence equations

Divide-and-Conquer Algorithm

Divide_Conquer(problem P)
{
  if Small(P) return S(P);
  else {
    divide P into smaller instances P1, P2, …, Pk, k ≥ 1;
    apply Divide_Conquer to each of these subproblems;
    return Combine(Divide_Conquer(P1),
                   Divide_Conquer(P2), …, Divide_Conquer(Pk));
  }
}
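
A runnable Python rendering of this template (a sketch; the predicate small, the direct solver, and the divide/combine helpers are hypothetical placeholders to be supplied per problem):

# Generic divide-and-conquer template; the helper functions are problem-specific.
def divide_conquer(problem, small, solve_directly, divide, combine):
    if small(problem):                     # base case: solve directly
        return solve_directly(problem)
    subproblems = divide(problem)          # divide into smaller instances
    subsolutions = [divide_conquer(p, small, solve_directly, divide, combine)
                    for p in subproblems]  # recur on each instance
    return combine(subsolutions)           # combine into a solution for the whole

# Example use: summing a list with the template.
total = divide_conquer(
    [3, 1, 4, 1, 5, 9],
    small=lambda p: len(p) <= 1,
    solve_directly=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum,
)
print(total)  # 23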

Divide & Conquer Recurrence Relation

The computing time of Divide & Conquer is

T(n) = g(n)                                    when n is small
T(n) = T(n1) + T(n2) + … + T(nk) + f(n)        otherwise

– T(n) = time for Divide & Conquer on any input of size n.
– g(n) = time to compute the answer directly (for small inputs).
– f(n) = time for dividing P and combining the solutions.

A Typical Divide and Conquer Case

A problem of size n is divided into subproblem 1 and subproblem 2, each of size n/2. A solution is found for each subproblem, and the two solutions are then combined to solve the original problem.

Divide-and-Conquer Examples

Factorial Problem

Example: finding the factorial of n ≥ 1

n! = n(n-1)(n-2)…1

Divide and Conquer strategy:
if n = 1: n! = 1 (direct solution),
else: n! = n * (n-1)!
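
In Python, this strategy is a short recursive function (a minimal sketch):

# Recursive factorial following the divide-and-conquer strategy above.
def factorial(n):
    if n == 1:                        # direct solution for the smallest case
        return 1
    return n * factorial(n - 1)       # reduce to a smaller instance and combine

print(factorial(5))  # 120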

Divide-and-Conquer Examples

• Sorting: mergesort and quicksort
• Binary tree traversals
• Binary search (?)
• Multiplication of large integers
• Matrix multiplication: Strassen’s algorithm
• Closest-pair and convex-hull algorithms

Sorting Problem

• Optimal number of comparisons
– An algorithm must be able to distinguish which one of the n!
permutations is the correct permutation.
– Can be viewed as a binary decision tree
– Each permutation corresponds to a leaf in the tree
– The depth of the tree is the number of comparisons necessary
for distinguishing the sorted permutation
– A binary tree with height h has at most 2^h leaves
– A sorting decision tree therefore has height at least log2 n!
– log2 n! ∈ Θ(n log n)
– Comparison-based sorting therefore requires Ω(n log n) comparisons

Table: Running time of sorting algorithms

• In this table, n is the number of elements to be sorted.
• The columns "Best", "Average", and "Worst" give the time
complexity in each case.
• "Memory" denotes the amount of auxiliary storage needed
beyond that used by the list itself.

Merge Sort

• Merge sort is a sorting algorithm invented by John von Neumann, based on the divide and conquer technique. It always runs in O(n log n) time, but requires O(n) extra space.

• von Neumann developed merge sort for the EDVAC in 1945

Merge Sort Review

Merge-sort on an input sequence S with n elements consists of three steps:
– Divide: partition S into two sequences S1 and S2 of
about n/2 elements each
– Recur: recursively sort S1 and S2
– Conquer: merge S1 and S2 into a unique sorted
sequence

Algorithm: Merge-Sort(A,p,r)

A procedure that sorts the elements in the sub-array A[p..r] using divide & conquer.

• MergeSort(A,p,r)
– if p >= r, do nothing
– if p < r then
• q = ⌊(p + r)/2⌋
• MergeSort(A,p,q)
• MergeSort(A,q+1,r)
• Merge(A,p,q,r)
• Start by calling MergeSort(A,1,n)
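
A runnable Python version of this procedure (a sketch using 0-based, inclusive indices and an auxiliary copy of each half for the merge step):

# Merge sort on A[p..r] (0-based, inclusive), following the procedure above.
def merge_sort(A, p, r):
    if p >= r:                      # zero or one element: already sorted
        return
    q = (p + r) // 2                # divide around the midpoint
    merge_sort(A, p, q)             # recur on the left half
    merge_sort(A, q + 1, r)         # recur on the right half
    merge(A, p, q, r)               # conquer: merge the two sorted halves

def merge(A, p, q, r):
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1

data = [8, 3, 2, 9, 7, 1, 5, 4]
merge_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5, 7, 8, 9]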

Recurrence Equation Analysis

The conquer step of merge-sort consists of merging two sorted sequences, each with about n/2 elements and implemented by means of a doubly linked list; it takes at most bn steps, for some constant b.
Likewise, the base case (n < 2) takes at most b steps.
Therefore, if we let T(n) denote the running time of merge-sort:

T(n) = b               if n < 2
T(n) = 2T(n/2) + bn    if n ≥ 2

We can therefore analyze the running time of merge-sort by finding a closed-form solution to the above equation, that is, a solution that has T(n) only on the left-hand side.

Analysis of Merge Sort

• All cases have the same efficiency: Θ(n log n)
T(n) = 2T(n/2) + Θ(n), T(1) = 0
• Number of comparisons in the worst case is close to
theoretical minimum for comparison-based sorting:
⎡log2 n!⎤ ≈ n log2 n - 1.44n

• Space requirement: θ(n) (not in-place)

Solving the Recurrence Equation for Mergesort
Iterative Substitution
• In the iterative substitution, or “plug-and-chug,” technique, we
iteratively apply the recurrence equation to itself and see if we
can find a pattern:

T(n) = 2T(n/2) + bn
     = 2(2T(n/4) + b(n/2)) + bn = 4T(n/4) + 2bn
     = 8T(n/8) + 3bn
     = …
     = 2^k T(n/2^k) + k·bn

Note that the base case, T(1) = b, occurs when 2^k = n, that is, when k = log2 n.
So T(n) = bn + bn·log2 n; thus, T(n) is O(n log n).

The Recursion Tree

• Draw the recursion tree for the recurrence relation and look for a pattern:

depth   # of T's   size      time per level
0       1          n         bn
1       2          n/2       bn
…       …          …         …
k       2^k        n/2^k     bn

Total time = bn·log2 n + bn
(last level plus all previous levels)

Merge Sort Example

8 3 2 9 7 1 5 4
8 3 2 9 | 7 1 5 4
8 3 | 2 9 | 7 1 | 5 4
8 | 3 | 2 | 9 | 7 | 1 | 5 | 4
3 8 | 2 9 | 1 7 | 4 5
2 3 8 9 | 1 4 5 7
1 2 3 4 5 7 8 9

(The non-recursive version of Mergesort starts from merging single elements into sorted pairs.)

Quicksort

• Select a pivot (partitioning element) – here, the first element
• Rearrange the list so that all the elements in the first s
positions are smaller than or equal to the pivot and all the
elements in the remaining n-s positions are larger than or
equal to the pivot (see the example on the next slide)

(after rearranging: elements with A[i] ≤ p, then elements with A[i] ≥ p)

• Exchange the pivot with the last element in the first (i.e., ≤)
subarray; the pivot is now in its final position
• Sort the two subarrays recursively
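
A compact Python sketch of this scheme (illustrative: it uses the first element as the pivot but builds new lists rather than partitioning in place, so it shows the idea rather than the exact in-place procedure described above):

# Quicksort with the first element as pivot (not in-place; illustrative only).
def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = a[0]
    smaller = [x for x in a[1:] if x <= pivot]   # elements <= pivot
    larger = [x for x in a[1:] if x > pivot]     # elements > pivot
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 3, 1, 9, 8, 2, 4, 7]))  # [1, 2, 3, 4, 5, 7, 8, 9]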

Quicksort Example

Sort [5 3 1 9 8 2 4 7]
2 3 1 4 5 8 9 7    (after the first partition around the pivot 5)
1 2 3 4 5 7 8 9    (after recursively sorting the two subarrays)

Analysis of Quicksort

• Best case: split in the middle – Θ(n log n)
• Worst case: already sorted array – Θ(n²), since T(n) = T(n-1) + Θ(n)
• Average case: random arrays – Θ(n log n)
• Improvements:
– better pivot selection: median of three partitioning
– switch to insertion sort on small subfiles
– elimination of recursion
These combine to 20-25% improvement
• Considered the method of choice for internal sorting of
large files (n ≥ 10000)

Binary Search

Binary Search is a degenerate divide-and-conquer search algorithm: there is no combine phase.
It is a very efficient algorithm for searching in a sorted array.
Problem: Searches for a key in a sorted vector,
returning the index where the key was found or -1
when not found.
Reduces the problem size by half each recursion.

Binary Search Algorithm

• A divide and conquer algorithm to find a key in an array:
• Precondition: S is a sorted list; check whether x ∈ S

binsearch(n, low, high, S[], x)
  if low ≤ high then
    mid = (low + high) / 2
    if x = S[mid] then return mid
    elsif x < S[mid] then return binsearch(n, low, mid-1, S, x)
    else return binsearch(n, mid+1, high, S, x)
  else return -1
end binsearch
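
The same algorithm in Python (a sketch with 0-based indices, returning -1 when the key is absent; the 14-element array below is an illustrative stand-in, since the slide's original array is not reproduced):

# Recursive binary search over a sorted list S: returns the index of x, or -1.
def binsearch(S, low, high, x):
    if low > high:
        return -1                              # empty range: x is not in S
    mid = (low + high) // 2
    if x == S[mid]:
        return mid
    elif x < S[mid]:
        return binsearch(S, low, mid - 1, x)   # search the left half
    else:
        return binsearch(S, mid + 1, high, x)  # search the right half

S = [2, 5, 8, 12, 16, 22, 31, 38, 56, 72, 81, 88, 91, 99]  # 14 elements
print(binsearch(S, 0, 13, 22))  # 5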

Example: binsearch(14, 0, 13, S, 22)
S = [ … ]   (a sorted array of 14 elements)

Analysis of Binary Search

• Time efficiency
– worst-case recurrence: T(n) = 1 + T(⌊n/2⌋), T(1) = 1
solution: T(n) = ⌈log2(n+1)⌉ = O(log n)

This is VERY fast: e.g., T(10^9) = 30

• Limitations: must be a sorted array (not linked list)


Questions ?

CC 412
