M3 Chapter 5
1. Divide the problem into a number of subproblems that are smaller instances of the same problem.
2. Conquer the subproblems by solving them recursively. If they are small enough, solve the subproblems as base cases.
3. Combine the solutions to the subproblems into the solution for the original problem (a small illustrative sketch of these steps follows).
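As a concrete illustration of these three steps, here is a small divide-and-conquer sketch in C for a toy problem (finding the largest element of an array). The function and the problem are our own, chosen purely to illustrate the template; they are not one of the chapter's algorithms.

/* Largest element of A[lo..hi] by divide and conquer. */
int max_element(const int A[], int lo, int hi) {
    if (lo == hi)                            /* base case: a single element        */
        return A[lo];
    int mid = lo + (hi - lo) / 2;            /* 1. divide into two smaller halves  */
    int left  = max_element(A, lo, mid);     /* 2. conquer each half recursively   */
    int right = max_element(A, mid + 1, hi);
    return left > right ? left : right;      /* 3. combine the two partial answers */
}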
View of divide-and-conquer
• ‘n’ is the size of the problem.
• ‘a’ is the number of subproblems in the recursion.
• ‘n/b’ is the size of each subproblem. (Here it is assumed that all subproblems are essentially the same size.)
• a and b are constants.
• ‘f(n)’ is the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of combining the solutions to the subproblems.
• It is not always possible to bound f(n) exactly as required, so we distinguish three cases that tell us what kind of bound we can place on T(n).
Master Theorem Cases: To solve recurrences of the form T(n) = aT(n/b) + Θ(n^k log^p n) using the Master theorem, we compare a with b^k. Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.
• Case 1: If a > b^k, then T(n) = Θ(n^(log_b a)).
• Case 2: If a = b^k, then T(n) = Θ(n^k log^(p+1) n) if p > −1; Θ(n^k log log n) if p = −1; and Θ(n^k) if p < −1.
• Case 3: If a < b^k, then T(n) = Θ(n^k log^p n) if p ≥ 0; and Θ(n^k) if p < 0.
Practice Problems: For each of the following recurrences, give an expression for the runtime T(n) if the recurrence can be solved with the Master Theorem. Otherwise, indicate that the Master Theorem does not apply. For all the problems, the initial condition is T(1) = 1.
1. T(n) = 3T(n/2) + n^2
2. T(n) = 4T(n/2) +
3. T(n) = T(n/2) +
• Solution 1: We compare the given recurrence relation with T(n) = aT(n/b) + Θ(n^k log^p n).
• Then we have a = 3, b = 2, k = 2, and p = 0.
• Now, a = 3 and b^k = 2^2 = 4.
• Since a < b^k and p = 0 ≥ 0, Case 3 applies, so T(n) = Θ(n^k log^p n) = Θ(n^2).
1. Mergesort
• Mergesort is a perfect example of a successful application of the divide-and-conquer technique. It sorts a given array A[0..n − 1] by dividing it into two halves A[0..n/2 − 1] and A[n/2..n − 1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one (a minimal C sketch follows).
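A minimal C sketch of this scheme is given below. The half-open index convention, the helper function merge, and the temporary buffer are our own choices for the sketch; they are not taken from the slides.

#include <stdlib.h>
#include <string.h>

/* Merge the two sorted runs A[left..mid-1] and A[mid..right-1] into one sorted run. */
static void merge(int A[], int left, int mid, int right) {
    int n = right - left;
    int *tmp = malloc(n * sizeof(int));      /* temporary buffer for the merge */
    int i = left, j = mid, k = 0;
    while (i < mid && j < right)
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i < mid)   tmp[k++] = A[i++];
    while (j < right) tmp[k++] = A[j++];
    memcpy(A + left, tmp, n * sizeof(int));
    free(tmp);
}

/* Sort A[left..right-1] by mergesort. */
void mergesort(int A[], int left, int right) {
    if (right - left < 2) return;            /* base case: 0 or 1 element      */
    int mid = left + (right - left) / 2;     /* divide the array into halves   */
    mergesort(A, left, mid);                 /* sort the left half recursively */
    mergesort(A, mid, right);                /* sort the right half            */
    merge(A, left, mid, right);              /* combine the two sorted halves  */
}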
2. Quicksort
• In quicksort a partition is an arrangement of the array’s elements so that all the elements
to the left of some element A[s] are less than or equal to A[s], and all the elements to
the right of A[s] are greater than or equal to it:
• Obviously, after a partition is achieved, A[s] will be in its final position in the sorted
array, and we can continue sorting the two subarrays to the left and to the right of A[s]
independently (e.g., by the same method). Note the difference with mergesort: there, the
division of the problem into two subproblems is immediate and the entire work happens
in combining their solutions; here, the entire work happens in the division stage, with
no work required to combine the solutions to the subproblems.
Procedure:
• First, we start by selecting a pivot element (it can be any element from the list, here we
are selecting the subarray’s first element)—an element with respect to whose value we
are going to divide the subarray. We will now scan the subarray from both ends,
comparing the subarray’s elements to the pivot.
• The left-to-right scan, denoted by index pointer i, starts with the second element. Since
we want elements smaller than the pivot to be in the left part of the subarray, this scan
skips over elements that are smaller than the pivot and stops upon encountering the first
element greater than or equal to the pivot.
• The right-to-left scan, denoted by index pointer j, starts with the last element of the
subarray. Since we want elements larger than the pivot to be in the right part of the
subarray, this scan skips over elements that are larger than the pivot and stops on
encountering the first element smaller than or equal to the pivot.
• Note: Why is it worth stopping the scans after encountering an element equal to the
pivot? Because doing this tends to yield more even splits for arrays with a lot of
duplicates, which makes the algorithm run faster.
• After both scans stop, three situations may arise, depending on whether or not the scanning indices have
crossed:
1. If scanning indices i and j have not crossed, i.e., i < j, we simply exchange A[i] and A[j ] and resume the
scans by incrementing i and decrementing j, respectively:
2. If the scanning indices have crossed over, i.e., i > j, we will have partitioned the subarray after exchanging the
pivot with A[j ]:
3. Finally, if the scanning indices stop while pointing to the same element, i.e., i = j, the value they are pointing to must be equal to the pivot p (why?). Thus, we have the subarray partitioned, with the split position s = i = j:
We can combine the last case with the case of crossed-over indices (i > j) by exchanging the pivot with A[j] whenever i ≥ j.
int partition(int A[], int l, int r);   /* defined below */

void quicksort(int A[], int l, int r) {
    if (l < r) {
        int s = partition(A, l, r);      /* A[s] is now in its final position            */
        quicksort(A, l, s - 1);          /* sort the subarray to the left of the pivot   */
        quicksort(A, s + 1, r);          /* sort the subarray to the right of the pivot  */
    }
}
int partition(int A[], int l, int r)
{
    int pivot = A[l];                     /* the subarray's first element is the pivot */
    int i = l;
    int j = r + 1;
    int temp;
    while (i < j)
    {
        do {
            i++;
        } while (i < r && A[i] < pivot);  /* left-to-right scan: skip elements smaller than the pivot;
                                             the i < r check guards against running past the subarray
                                             when the pivot is its largest element                     */
        do {
            j--;
        } while (A[j] > pivot);           /* right-to-left scan: skip elements larger than the pivot  */
        if (i < j) {                      /* indices have not crossed: exchange A[i] and A[j]         */
            temp = A[i];
            A[i] = A[j];
            A[j] = temp;
        }
    }
    temp = A[l];                          /* exchange the pivot with A[j]: the split position          */
    A[l] = A[j];
    A[j] = temp;
    return j;
}
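As a quick check, a small driver (our own, not part of the slides) can exercise the two routines above on the list that appears in the tutorial at the end of this chapter:

#include <stdio.h>

int main(void) {
    int A[] = {9, 8, 4, 2, 7, 1, 5, 3};  /* list from the tutorial slide */
    int n = sizeof(A) / sizeof(A[0]);
    quicksort(A, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);              /* prints: 1 2 3 4 5 7 8 9 */
    printf("\n");
    return 0;
}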
Quicksort time complexity
Best Case
The best case occurs when all the partitions are balanced, i.e., the two subarrays are each of size about n/2. Since partitioning itself takes linear time, this gives the recurrence C_best(n) = 2C_best(n/2) + n for n > 1, C_best(1) = 0, and the Master theorem gives O(n log n).
Average Case
In the average case the partitions will not be perfectly balanced, but they still produce a recursion tree with no more than cn cost per level over O(log n) levels, giving an average running time that is also O(n log n).
Worst Case
• In the worst case, all the splits will be skewed to the extreme: one of the two subarrays will be
empty, and the size of the other will be just 1 less than the size of the subarray being partitioned.
• This unfortunate situation will happen, in particular, for increasing arrays, i.e., for inputs for which
the problem is already solved!
• Indeed, if A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot, the left-to-right scan
will stop on A[1] while the right-to-left scan will go all the way to reach A[0], indicating the split at
position 0.
• So, after making n + 1 comparisons to get to this partition and exchanging the pivot A[0] with itself,
the algorithm will be left with the strictly increasing array A[1..n − 1] to sort. This sorting of strictly
increasing arrays of diminishing sizes will continue until the last one A[n − 2..n − 1] has been
processed.
• The total number of key comparisons made will be equal to C_worst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n^2).
3. Binary Tree Traversals and Related Properties
• A binary tree T is defined as a finite set of nodes that is either empty or
consists of a root and two disjoint binary trees TL and TR called,
respectively, the left and right subtree of the root.
• Since the definition itself divides a binary tree into two smaller structures
of the same type, the left subtree and the right subtree, many problems
about binary trees can be solved by applying the divide-and-conquer
technique.
• Example: A recursive algorithm for computing the height of a binary tree.
• Height of a binary tree is defined as the length of the longest path from
the root to a leaf. Hence, it can be computed as the maximum of the
heights of the root’s left and right subtrees plus 1. (We have to add 1 to
account for the extra level of the root.) Also note that it is convenient to
define the height of the empty tree as −1.
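A minimal C sketch of this recursive height computation is given below; the node structure and function name are our own illustration, not taken from the slides.

#include <stddef.h>

struct node {
    int key;
    struct node *left, *right;
};

/* Height of the (sub)tree rooted at t; the height of the empty tree is -1. */
int height(const struct node *t) {
    if (t == NULL)                        /* comparison: is the tree empty?          */
        return -1;
    int hl = height(t->left);             /* height of the left subtree              */
    int hr = height(t->right);            /* height of the right subtree             */
    return (hl > hr ? hl : hr) + 1;       /* maximum of the two, plus 1 for the root */
}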
• We measure the problem's instance size by the number of nodes n(T) in a given binary tree T. Obviously, the number of comparisons made to compute the maximum of two numbers and the number of additions A(n(T)) made by the algorithm are the same.
• We have the following recurrence relation for A(n(T)):
A(n(T)) = A(n(T_left)) + A(n(T_right)) + 1 for n(T) > 0,
A(0) = 0.
• For example, for the empty tree, the comparison T = ∅ is executed once but there are no additions, and for a single-node tree, the numbers of comparisons and additions are 3 (root, left, and right) and 1, respectively.
• It is easy to see that the Height algorithm makes exactly one addition for every internal node of the extended tree, and it makes one comparison to check whether the tree is empty for every internal and external node.
• Therefore, to ascertain the algorithm's efficiency, we need to know how many external nodes an extended binary tree with n internal nodes can have.
• The number of external nodes x is always 1 more than the number of internal nodes n, i.e., x = n + 1.
• This equality also applies to any nonempty full binary tree, in which, by definition, every node has either zero or two children: for a full binary tree, n and x denote the numbers of parental nodes and leaves, respectively.
• To prove this equality, consider the total number of nodes, both internal and external. Since every node, except the root, is one of the two children of an internal node, we have the equation 2n + 1 = x + n, which immediately implies x = n + 1.
• Returning to algorithm Height, the number of comparisons to check whether the tree is empty is C(n) = n + x = 2n + 1, and the number of additions is A(n) = n.
• The most important divide-and-conquer algorithms for binary trees are the three classic traversals: preorder, inorder, and postorder. All three traversals visit nodes of a binary tree recursively, i.e., by visiting the tree's root and its left and right subtrees. They differ only by the timing of the root's visit (minimal C sketches follow the list):
1. In the preorder traversal, the root is visited before the left and right subtrees are visited (in that order).
2. In the inorder traversal, the root is visited after visiting its left subtree but before visiting the right subtree.
3. In the postorder traversal, the root is visited after visiting the left and right subtrees (in that order).
• Example (for the tree shown on the slide):
Preorder: a, b, d, g, e, c, f
Inorder: d, g, b, e, a, f, c
Postorder: g, d, e, b, f, c, a
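Minimal C sketches of the three traversals are given below; they assume the same struct node type as in the height sketch above, and printing each visited key is our own illustrative choice of "visit".

#include <stdio.h>

void preorder(const struct node *t) {
    if (t == NULL) return;
    printf("%d ", t->key);      /* visit the root first            */
    preorder(t->left);          /* then traverse the left subtree  */
    preorder(t->right);         /* then traverse the right subtree */
}

void inorder(const struct node *t) {
    if (t == NULL) return;
    inorder(t->left);           /* traverse the left subtree       */
    printf("%d ", t->key);      /* visit the root in between       */
    inorder(t->right);          /* traverse the right subtree      */
}

void postorder(const struct node *t) {
    if (t == NULL) return;
    postorder(t->left);         /* traverse the left subtree       */
    postorder(t->right);        /* traverse the right subtree      */
    printf("%d ", t->key);      /* visit the root last             */
}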
Practice problems:
4. Multiplication of Large Integers and Strassen's Matrix Multiplication
1. Multiplication of Large Integers:
• We multiply two n-digit integers a and b using the divide-and-conquer method, where n is a positive even number.
• We denote the first half of a's digits by a1 and the second half by a0; for b, the notations are b1 and b0, respectively. In these notations,
• a = a1a0 implies that a = a1 * 10^(n/2) + a0, and
• b = b1b0 implies that b = b1 * 10^(n/2) + b0.
• Therefore,
c = a * b = (a1 * 10^(n/2) + a0) * (b1 * 10^(n/2) + b0) = c2 * 10^n + c1 * 10^(n/2) + c0,
where
c2 = a1 * b1 is the product of the first halves,
c0 = a0 * b0 is the product of the second halves,
c1 = (a1 + a0) * (b1 + b0) − (c2 + c0) is the product of the sums of the halves minus the sum of c2 and c0.
Algorithm: DC_LI_MULTIPLICATION(A, B)
// Description: Perform multiplication of large numbers using the divide-and-conquer strategy.
// Input: Numbers A and B, where A = an-1…a1a0 and B = bn-1…b1b0
// Output: Multiplication of A and B as C, i.e., C = A * B
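A C sketch of this divide-and-conquer multiplication is given below for operands small enough to fit in a long long (a real large-integer implementation would keep the digits in arrays). All helper names are ours, not taken from the slides.

#include <stdio.h>

static int digits(long long x) {              /* number of decimal digits of x   */
    int d = 1;
    while (x >= 10) { x /= 10; d++; }
    return d;
}

static long long pow10ll(int e) {             /* 10^e as a long long             */
    long long p = 1;
    while (e-- > 0) p *= 10;
    return p;
}

long long dc_multiply(long long a, long long b) {
    if (a < 10 || b < 10)                     /* base case: a one-digit operand  */
        return a * b;
    int n = digits(a) > digits(b) ? digits(a) : digits(b);
    long long p = pow10ll(n / 2);             /* 10^(n/2)                        */
    long long a1 = a / p, a0 = a % p;         /* a = a1 * 10^(n/2) + a0          */
    long long b1 = b / p, b0 = b % p;         /* b = b1 * 10^(n/2) + b0          */
    long long c2 = dc_multiply(a1, b1);       /* product of the first halves     */
    long long c0 = dc_multiply(a0, b0);       /* product of the second halves    */
    long long c1 = dc_multiply(a1 + a0, b1 + b0) - (c2 + c0);
    return c2 * p * p + c1 * p + c0;          /* c2*10^n + c1*10^(n/2) + c0      */
}

int main(void) {
    printf("%lld\n", dc_multiply(123456, 786932));   /* prints 97151476992 */
    return 0;
}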
Example:
• a = 123456 and b = 786932 (a and b are 6-digit numbers, n = 6).
• We split each number into two equal parts: a1 = 123, a0 = 456; b1 = 786, b0 = 932.
• a = a1a0 implies that a = 123 * 10^3 + 456, and b = b1b0 implies that b = 786 * 10^3 + 932.
• (If one of the numbers has fewer digits than the other, we can pad the shorter number with leading zeros to equalize their lengths.)
• c = a * b = c2 * 10^6 + c1 * 10^3 + c0, where
• c2 = a1 * b1 = 123 * 786 = 96,678
• c0 = a0 * b0 = 456 * 932 = 424,992
• c1 = (a1 + a0) * (b1 + b0) − (c2 + c0)
     = (123 + 456) * (786 + 932) − (96,678 + 424,992)
     = 579 * 1718 − 521,670
     = 473,052
• C = 96,678 * 10^6 + 473,052 * 10^3 + 424,992 = 97,151,476,992
Multiplication of Large Integers Time Complexity:
• How many digit multiplications does this algorithm make?
• Since multiplication of n-digit numbers requires three multiplications of n/2-digit numbers, the recurrence for the number of multiplications M(n) is
M(n) = 3M(n/2) for n > 1, M(1) = 1,
which solves to M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585.
• How many additions and subtractions?
• Let A(n) be the number of digit additions and subtractions executed by the above algorithm in multiplying two n-digit decimal integers. Besides the 3A(n/2) operations needed to compute the three products of n/2-digit numbers, the above formulas require five additions and one subtraction. Hence, we have the recurrence
A(n) = 3A(n/2) + cn for n > 1, A(1) = 1,
which, by the Master theorem, gives A(n) ∈ Θ(n^(log2 3)).
https://codecrucks.com/large-integer-multiplication-using-divide-and-conquer/
2. Strassen's Matrix Multiplication:
The principal insight of the algorithm lies in the discovery that we can find the product C of two 2 × 2 matrices A and B with just seven multiplications. This is accomplished by using the following formulas:
m1 = (a00 + a11) * (b00 + b11)
m2 = (a10 + a11) * b00
m3 = a00 * (b01 − b11)
m4 = a11 * (b10 − b00)
m5 = (a00 + a01) * b11
m6 = (a10 − a00) * (b00 + b01)
m7 = (a01 − a11) * (b10 + b11)
c00 = m1 + m4 − m5 + m7      c01 = m3 + m5
c10 = m2 + m4                c11 = m1 − m2 + m3 + m6
https://www.youtube.com/watch?v=OSelhO6Qnlc
• Thus, to multiply two 2 × 2 matrices, Strassen’s algorithm makes seven
multiplications and 18 additions/subtractions.
• Let A and B be two n × n matrices where n is a power of 2. (If n is not a power of 2,
matrices can be padded with rows and columns of zeros.) We can divide A, B, and
their product C into four (n/2) × (n/2) submatrices each as follows:
• It is not difficult to verify that one can treat these submatrices as numbers to get the
correct product. For example, C00 can be computed either as A00 ∗ B00 + A01 ∗ B10
or as M1 + M4 − M5 + M7 where M1, M4, M5, and M7 are found by Strassen’s
formulas, with the numbers replaced by the corresponding submatrices. If the seven
products of n/2 × n/2 matrices are computed recursively by the same method, we have
Strassen’s algorithm for matrix multiplication.
• Let us evaluate the asymptotic efficiency of this algorithm. If M(n) is the number of multiplications made by Strassen's algorithm in multiplying two n × n matrices (where n is a power of 2), we get the following recurrence relation for it:
M(n) = 7M(n/2) for n > 1, M(1) = 1,
which solves to M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807.
• The number of additions A(n) made by Strassen's algorithm: To multiply two matrices of order n > 1, the algorithm needs to multiply seven matrices of order n/2 and make 18 additions/subtractions of matrices of size n/2; when n = 1, no additions are made since two numbers are simply multiplied. These observations yield the following recurrence relation:
A(n) = 7A(n/2) + 18(n/2)^2 for n > 1, A(1) = 0,
which, by the Master theorem, gives A(n) ∈ Θ(n^(log2 7)).
Algorithm Strassen (n, a, b, c)
1. if n <= 2
2.    return brute_force(a, b)   // C = a * b by conventional matrix multiplication
3. else
4.    Partition a into four submatrices a00, a01, a10, a11.
5.    Partition b into four submatrices b00, b01, b10, b11.
6.    Strassen (n/2, a00 + a11, b00 + b11, m1)
7.    Strassen (n/2, a10 + a11, b00, m2)
8.    Strassen (n/2, a00, b01 - b11, m3)
9.    Strassen (n/2, a11, b10 - b00, m4)
10.   Strassen (n/2, a00 + a01, b11, m5)
11.   Strassen (n/2, a10 - a00, b00 + b01, m6)
12.   Strassen (n/2, a01 - a11, b10 + b11, m7)
13.   C = | m1+m4-m5+m7    m3+m5       |
14.       | m2+m4          m1-m2+m3+m6 |
15. end if
16. return (C)

The seven products and the four blocks of C, written out for 2 × 2 operands (scalar entries instead of submatrices):
m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
m2 = (a[1][0] + a[1][1]) * b[0][0];
m3 = a[0][0] * (b[0][1] - b[1][1]);
m4 = a[1][1] * (b[1][0] - b[0][0]);
m5 = (a[0][0] + a[0][1]) * b[1][1];
m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);

c[0][0] = m1 + m4 - m5 + m7;
c[0][1] = m3 + m5;
c[1][0] = m2 + m4;
c[1][1] = m1 - m2 + m3 + m6;
Example:
Tutorial:
1. Sort the given list 9, 8, 4, 2, 7, 1, 5, 3 using mergesort and quicksort.
2. Multiply 2345 by 678 using the divide-and-conquer approach.
3. Find the result of A*B using Strassen’s Matrix Multiplication
A= B=