
Divide-and-Conquer

Intro
• Divide-and-conquer is probably the best-known general algorithm design
technique.

• Though its fame may have something to do with its catchy name, it is well
deserved: quite a few very efficient algorithms are specific
implementations of this general strategy.

• Divide-and-conquer algorithms work according to the following general plan:
1. A problem is divided into several subproblems of the same type, ideally
of about equal size.
2. The subproblems are solved (typically recursively, though sometimes a
different algorithm is employed, especially when subproblems become
small enough).
3. If necessary, the solutions to the subproblems are combined to get a
solution to the original problem.
Cont.,
• As mentioned above, in the most typical case of divide-and-
conquer a problem’s instance of size n is divided into two
instances of size n/2.

• More generally, an instance of size n can be divided into 'b' instances
of size n/b, with 'a' of them needing to be solved.

• Here, ‘a’ and ‘b’ are constants; a ≥ 1 and b > 1.

• Assuming that size n is a power of b to simplify our analysis, we get
the following recurrence for the running time T(n):

T(n) = aT(n/b) + f(n)
Cont.,
• where f(n) is a function that accounts for the time spent on dividing
an instance of size n into instances of size n/b and combining their
solutions.

• (For example, for the problem of summing n numbers by recursively
summing its two halves, a = b = 2 and f(n) = 1.)

• This recurrence is called the general divide-and-conquer recurrence.

• Obviously, the order of growth of its solution T(n) depends on the
values of the constants 'a' and 'b' and the order of growth of the
function f(n).

• The efficiency analysis of many divide-and-conquer algorithms is
greatly simplified by the Master Theorem.
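
• For reference, the Master Theorem in its commonly stated form: if
f(n) ∈ Θ(n^d) where d ≥ 0, then

T(n) ∈ Θ(n^d)             if a < b^d,
T(n) ∈ Θ(n^d log n)       if a = b^d,
T(n) ∈ Θ(n^(log_b a))     if a > b^d.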
Mergesort
Intro
• Mergesort is a perfect example of a successful application of the
divide-and-conquer technique.

• It sorts a given array A[0..n − 1] by dividing it into two halves
A[0..n/2 − 1] and A[n/2..n − 1], sorting each of them recursively, and
then merging the two smaller sorted arrays into a single sorted one.
The merging of two sorted arrays can be
done as follows:
• Two pointers (array indices) are initialized to point to the first
elements of the arrays being merged.

• The elements pointed to are compared, and the smaller of them is
added to a new array being constructed.

• After that, the index of the smaller element is incremented to point
to its immediate successor in the array it was copied from.

• This operation is repeated until one of the two given arrays is
exhausted, and then the remaining elements of the other array are
copied to the end of the new array.
Example input: 8, 3, 2, 9, 7, 1, 5, 4
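
A rough Python sketch of the procedure just described (the function
names are illustrative; the slides themselves give no code), applied to
the example input above:

def mergesort(A):
    # Base case: arrays of length 0 or 1 are already sorted.
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    left = mergesort(A[:mid])    # sort the first half recursively
    right = mergesort(A[mid:])   # sort the second half recursively
    return merge(left, right)    # combine the two sorted halves

def merge(B, C):
    merged = []
    i = j = 0
    # Compare the front elements and copy the smaller into the new array.
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            merged.append(B[i])
            i += 1
        else:
            merged.append(C[j])
            j += 1
    # One array is exhausted: copy the rest of the other to the end.
    merged.extend(B[i:])
    merged.extend(C[j:])
    return merged

print(mergesort([8, 3, 2, 9, 7, 1, 5, 4]))  # -> [1, 2, 3, 4, 5, 7, 8, 9]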
Quicksort
Intro
• Quicksort is the other important sorting algorithm that
is based on the divide-and-conquer approach.

• Unlike mergesort, which divides its input elements according to their
position in the array, quicksort divides them according to their value.

• A partition is an arrangement of the array’s elements so


that all the elements to the left of some element A[s]
are less than or equal to A[s], and all the elements to
the right of A[s] are greater than or equal to it.
Cont.,
• Obviously, after a partition is achieved, A[s] will be in its final
position in the sorted array, and we can continue sorting the two
subarrays to the left and to the right of A[s] independently.

• Note the difference with mergesort:

 There, the division of the problem into two subproblems is immediate
and the entire work happens in combining their solutions;

 Here, the entire work happens in the division stage, with no work
required to combine the solutions to the subproblems.
Selecting a Pivot point
• We use the simplest strategy of selecting the subarray's first
element as the pivot: p = A[l].

• We will now scan the subarray from both ends, comparing the
subarray's elements to the pivot.

• The left-to-right scan, denoted below by index pointer i, starts with
the second element.

• Since we want elements smaller than the pivot to be in the left part
of the subarray, this scan skips over elements that are smaller than
the pivot and stops upon encountering the first element greater than
or equal to the pivot.
Cont.,
• The right-to-left scan, denoted below by index pointer
j, starts with the last element of the subarray.

• Since we want elements larger than the pivot to be in the right part
of the subarray, this scan skips over elements that are larger than the
pivot and stops on encountering the first element smaller than or equal
to the pivot.
Cont.,
• After both scans stop, three situations may arise,
depending on whether or not the scanning indices have
crossed.

1. If scanning indices i and j have not crossed, i.e., i < j, we simply
exchange A[i] and A[j] and resume the scans by incrementing i and
decrementing j, respectively.
Cont.,
2. If the scanning indices have crossed over, i.e., i > j, we will have
partitioned the subarray after exchanging the pivot with A[j].

3. Finally, if the scanning indices stop while pointing to the same
element, i.e., i = j, the value they are pointing to must be equal to p
(why?). Thus, we have the subarray partitioned, with the split position
s = i = j.
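
A compact Python sketch of this partitioning scheme and the resulting
quicksort (the bounds check on i is our addition to avoid running off
the subarray; the slides give no code):

def hoare_partition(A, l, r):
    p = A[l]                      # pivot: the subarray's first element
    i, j = l, r + 1
    while True:
        # Left-to-right scan: stop at the first element >= pivot.
        i += 1
        while i <= r and A[i] < p:
            i += 1
        # Right-to-left scan: stop at the first element <= pivot.
        j -= 1
        while A[j] > p:
            j -= 1
        if i >= j:                # indices crossed (or met): partition done
            break
        A[i], A[j] = A[j], A[i]   # indices not crossed: exchange and resume
    A[l], A[j] = A[j], A[l]       # put the pivot in its final position
    return j                      # split position s

def quicksort(A, l=0, r=None):
    if r is None:
        r = len(A) - 1
    if l < r:
        s = hoare_partition(A, l, r)
        quicksort(A, l, s - 1)    # sort the left subarray
        quicksort(A, s + 1, r)    # sort the right subarray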
Quicksort’s Efficiency
• The number of key comparisons made before a partition is
achieved is n + 1 if the scanning indices cross over and n
if they coincide.
• If all the splits happen in the middle of corresponding
subarrays, we will have the best case.
• The number of key comparisons in the best case satisfies the
recurrence C_best(n) = 2 C_best(n/2) + n for n > 1, C_best(1) = 0,
whose solution for n = 2^k is C_best(n) = n log2 n.
Worst Case
• In the worst case, all the splits will be skewed to the extreme:
one of the two subarrays will be empty, and the size of the other
will be just 1 less than the size of the subarray being partitioned.

• This unfortunate situation will happen, in particular, for increasing
arrays.

• If A[0..n − 1] is a strictly increasing array and we use A[0] as the
pivot, the left-to-right scan will stop on A[1] while the right-to-left
scan will go all the way to reach A[0], indicating the split at
position 0.

• The total number of key comparisons in this case is
C_worst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n^2).
Average Case
• Thus, the question about the utility of quicksort comes down to
its average case behavior.
• Let Cavg(n) be the average number of key comparisons made
by quicksort on a randomly ordered array of size n.
• A partition can happen in any position s (0 ≤ s ≤ n − 1) after n
+ 1 comparisons are made to achieve the partition.
• After the partition, the left and right subarrays will have s
and n − 1 − s elements, respectively.
• Assuming that the partition split can happen in each position s with
the same probability 1/n, we get the following recurrence relation:

C_avg(n) = (1/n) Σ from s=0 to n−1 of [(n + 1) + C_avg(s) + C_avg(n − 1 − s)]
for n > 1, with C_avg(0) = 0 and C_avg(1) = 0.

• Its solution turns out to be C_avg(n) ≈ 2n ln n ≈ 1.39 n log2 n, so on
average quicksort makes only about 39% more comparisons than in the
best case.
Binary Tree Traversals
and Related Properties
Intro
• A binary tree T is defined as a finite set of nodes that is
either empty or consists of a root and two disjoint
binary trees TL and TR called, respectively, the left and
right subtree of the root.
• We usually think of a binary tree as a special case of an ordered
tree.
Cont.,
• Since the definition itself divides a binary tree into two smaller
structures of the same type, the left subtree and the right subtree,
many problems about binary trees can be solved by applying the
divide-and-conquer technique.

• As an example, let us consider a recursive algorithm for computing
the height of a binary tree.

• Recall that the height is defined as the length of the longest path
from the root to a leaf.

• Hence, it can be computed as the maximum of the heights of the root's
left and right subtrees plus 1. The height of the empty tree is defined
as −1.
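
A minimal Python sketch of this computation, assuming a simple Node
representation (not given in the slides):

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left      # left subtree (None if empty)
        self.right = right    # right subtree (None if empty)

def height(tree):
    # The height of the empty tree is defined as -1.
    if tree is None:
        return -1
    # Divide: heights of the two subtrees; combine: their maximum plus 1.
    return 1 + max(height(tree.left), height(tree.right))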
Three Classic Tree Traversals
• The most important divide-and-conquer algorithms for
binary trees are the three classic traversals:
 preorder, inorder, and postorder.
• All three traversals visit nodes of a binary tree
recursively, i.e., by visiting the tree’s root and its left and
right subtrees.
• They differ only by the timing of the root’s visit:
 In the preorder traversal: the root is visited before the left and
right subtrees are visited (in that order).
 In the inorder traversal: the root is visited after visiting its left
subtree but before visiting the right subtree.
 In the postorder traversal: the root is visited after visiting the left
and right subtrees (in that order).
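
A minimal Python sketch of the three traversals, reusing the Node class
from the height sketch above and treating a print of the node's value
as the "visit":

def preorder(node):
    if node is not None:
        print(node.value)     # visit the root first
        preorder(node.left)
        preorder(node.right)

def inorder(node):
    if node is not None:
        inorder(node.left)
        print(node.value)     # visit the root between the subtrees
        inorder(node.right)

def postorder(node):
    if node is not None:
        postorder(node.left)
        postorder(node.right)
        print(node.value)     # visit the root last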
Cont.,
• Finally, we should note that, obviously, not all questions about
binary trees require traversals of both left and right subtrees.

• For example, the search and insert operations for a binary search
tree require processing only one of the two subtrees.
Multiplication of Large
Integers and Strassen’s
Matrix Multiplication
Multiplication of Large Integers

• To demonstrate the basic idea of the algorithm, let us start with a
case of two-digit integers, say, 23 and 14. These numbers can be
represented as follows:

23 = 2 · 10^1 + 3 · 10^0 and 14 = 1 · 10^1 + 4 · 10^0

• Now let us multiply them:

23 ∗ 14 = (2 · 10^1 + 3 · 10^0) ∗ (1 · 10^1 + 4 · 10^0)
        = (2 ∗ 1)10^2 + (2 ∗ 4 + 3 ∗ 1)10^1 + (3 ∗ 4)10^0 = 322
Cont.,
• The last formula yields the correct answer of 322, of
course, but it uses the same four digit multiplications as
the pen-and-pencil algorithm.

• Fortunately, we can compute the middle term with just one digit
multiplication by taking advantage of the products 2 ∗ 1 and 3 ∗ 4 that
need to be computed anyway:

2 ∗ 4 + 3 ∗ 1 = (2 + 3) ∗ (1 + 4) − 2 ∗ 1 − 3 ∗ 4 = 25 − 2 − 12 = 11
Cont.,
• Of course, there is nothing special about the numbers we just
multiplied. For any pair of two-digit numbers a = a1a0 and b = b1b0,
their product c can be computed by the formula

c = a ∗ b = c2 · 10^2 + c1 · 10^1 + c0, where
c2 = a1 ∗ b1 is the product of the numbers' first digits,
c0 = a0 ∗ b0 is the product of their second digits, and
c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0) is the product of the sums of
the digits minus the sum of c2 and c0.
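
Applied recursively to numbers with more digits, this idea gives the
following Python sketch (a standard Karatsuba-style implementation;
the slides show only the two-digit case):

def multiply(a, b):
    # Base case: single-digit factors are multiplied directly.
    if a < 10 or b < 10:
        return a * b
    # Split both numbers around half the digit count of the longer one.
    half = max(len(str(a)), len(str(b))) // 2
    m = 10 ** half
    a1, a0 = divmod(a, m)      # a = a1 * 10^half + a0
    b1, b0 = divmod(b, m)      # b = b1 * 10^half + b0
    c2 = multiply(a1, b1)                        # first-parts product
    c0 = multiply(a0, b0)                        # second-parts product
    c1 = multiply(a1 + a0, b1 + b0) - c2 - c0    # middle term: one extra multiplication
    return c2 * m * m + c1 * m + c0

print(multiply(23, 14))  # -> 322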
Cont.,
• How many digit multiplications does this algorithm make? Since
multiplication of n-digit numbers requires three multiplications of
n/2-digit numbers, the recurrence for the number of multiplications
M(n) is

M(n) = 3M(n/2) for n > 1, M(1) = 1.

• Solving it by backward substitutions for n = 2^k yields
M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585.
Strassen’s Matrix Multiplication
• Now that we have seen that the divide-and-conquer approach can reduce
the number of one-digit multiplications in multiplying two integers, we
should not be surprised that a similar feat can be accomplished for
multiplying matrices.

• Such an algorithm was published by V. Strassen in 1969 [Str69].
• The principal insight of the algorithm lies in the
discovery that we can find the product C of two 2 × 2
matrices A and B with just seven multiplications as
opposed to the eight required by the brute-force
algorithm.
• Strassen's algorithm computes the seven products

m1 = (a11 + a22) ∗ (b11 + b22)
m2 = (a21 + a22) ∗ b11
m3 = a11 ∗ (b12 − b22)
m4 = a22 ∗ (b21 − b11)
m5 = (a11 + a12) ∗ b22
m6 = (a21 − a11) ∗ (b11 + b12)
m7 = (a12 − a22) ∗ (b21 + b22)

and then combines them into the entries of C = AB:

C11 = m1 + m4 − m5 + m7
C12 = m3 + m5
C21 = m2 + m4
C22 = m1 + m3 − m2 + m6
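
A direct Python transcription of these formulas for the 2 × 2 case
(a sketch; the full algorithm applies the same formulas recursively to
matrix blocks):

def strassen_2x2(A, B):
    # Unpack the entries of the two 2x2 matrices.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Seven multiplications instead of the brute-force eight.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Combine the products into the entries of C = AB.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 + m3 - m2 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# -> [[19, 22], [43, 50]]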
