2 Divide and Conquer
Algorithmic Paradigms
•Techniques for the Design of Algorithms:
–General approaches to the construction of efficient solutions to problems.
•Such methods are of interest because:
–They provide templates suited to solving a broad range of diverse
problems which can be precisely analyzed.
–They can be translated into common control and data structures provided
by most high-level languages.
•Subsequent lectures will examine paradigms, such as:
–Divide and Conquer
–Greedy
–Dynamic Programming
–Backtracking, branch and bound, ….
•Although more than one technique may be applicable to a specific problem,
it is often the case that an algorithm constructed by one approach is clearly
superior to equivalent solutions built using alternative techniques.
–The choice of design paradigm is an important aspect of algorithm
analysis.
Divide-and-Conquer Strategy
• Divide and Conquer is a general algorithm design paradigm that has
created such efficient algorithms as Merge Sort, Binary Search, ….
• This method has three distinct steps:
–Divide: If the input size is too large, divide the input into two or
more sub-problems. That is, divide P into sub-problems P1, …, Pk
• If the input size of the problem is small, it is solved directly
–Recur: Use divide and conquer recursively to solve each of the
sub-problems. That is, find the solutions S(P1), …, S(Pk)
–Conquer: Take the solutions to the sub-problems and combine
(“merge”) them into a solution for the original problem.
That is, merge S(P1), …, S(Pk) into S(P)
The Divide and Conquer Strategy
• Implementation: consider the divide-and-conquer strategy when it
splits the input into two sub-problems of the same kind as the
original problem.
• If the input size of the problem is small, it is solved directly.
• If the input size of the problem is large, apply the strategy:
–Divide: divide the input data S into two disjoint subsets
S1 and S2
–Recur: recursively solve the sub-problems associated
with S1 and S2
–Conquer: combine the solutions for S1 and S2 into a
solution for S
General Algorithm
procedure DCS(P)
  if small(P) then
    return S(P)
  else
    divide P into smaller instances P1, P2, …, Pk
    apply DCS to each of these sub-problems
    return combine(DCS(P1), DCS(P2), …, DCS(Pk))
  end if;
end DCS;
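As a concrete instance of this template, here is a minimal Python sketch of Merge Sort
(one of the algorithms named earlier); the function names merge_sort and merge are
illustrative choices, not part of the original pseudocode.

def merge_sort(a):
    # small(P): a list with 0 or 1 elements is already sorted
    if len(a) <= 1:
        return a
    # Divide: split the input into two halves
    mid = len(a) // 2
    # Recur: apply the same strategy to each half
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Conquer: combine the two sorted halves
    return merge(left, right)

def merge(left, right):
    # Merge two sorted lists into one sorted list
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]

# Example: merge_sort([5, 2, 4, 7, 1]) returns [1, 2, 4, 5, 7]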
Complexity:
T(n) = f(n)              if n is small
T(n) = aT(n/b) + g(n)    otherwise, where
• T(n) is the time needed to solve the problem with input of size n
• a is the number of sub-problems solved at each step
• b is the factor by which the input shrinks, so each sub-problem has size n/b
• g(n) is the time for dividing the problem and for combining the solutions
to the sub-problems
• f(n) is the time to compute the answer directly for small inputs
For example, for Merge Sort a = 2, b = 2 and g(n) is proportional to n.
Divide-and-Conquer Technique
[Figure: a problem (instance) of size n is divided into subproblem 1 and
subproblem 2, each of size n/2; the solutions to the two subproblems are
combined into a solution to the original problem.]
In general, this leads to a recursive algorithm with complexity
T(n) = 2T(n/2) + g(n)
Solving Recurrence Relation
• One of the methods for solving recurrence relations is the
substitution method.
–This method repeatedly makes substitutions for each occurrence of
the function T(n) until all such occurrences disappear
1. T(n) = 1               if n = 1
   T(n) = 2T(n/2) + n     if n > 1
2. T(n) = 1               if n = 1
   T(n) = 2T(n/2) + 1     if n > 1
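A worked expansion of the first recurrence, assuming n is a power of 2 (n = 2^k):
T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     = … = 2^k T(n/2^k) + kn
     = nT(1) + n log2 n = n + n log2 n, i.e. T(n) = O(n log n)
The same expansion applied to the second recurrence gives
T(n) = 2^k T(1) + (2^(k-1) + … + 2 + 1) = n + (n - 1) = 2n - 1, i.e. T(n) = O(n)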
Example of Recursion: SUM A[1…n]
•Problem: Write a recursive function to find the sum of the first n
integers A[1…n] and output the sum
–Example: given n = 3, we return sum = A[1] + A[2] + A[3]
in general, we return A[1] + A[2] + … + A[n]
–How can you define the problem in terms of a smaller problem of
the same type?
1 + 2 + … + n = [1 + 2 + … + (n-1)] + n
for n > 1, f(n) = f(n-1) + n
–How does each recursive call diminish the size of the problem? It
reduces by 1 the number of values to be summed.
–What instance of the problem can serve as the base case?
n=1
–As the problem size diminishes, will you reach this base case? Yes,
as long as n is at least 1. Therefore the statement “n >= 1” needs
to be a precondition
Example of Recursion : SUM A[1…n]
Problem: Write a recursive function to find the sum of the first n
integers A[1…n] and output the sum
algorithm LinearSum(A, n)
// Input: an array A with n elements
// Output: The sum of the first n integers in A
if n = 1 then
  return A[0]
else
  return LinearSum(A, n - 1) + A[n - 1]
end algorithm

Example recursion trace for A = [4, 3, 6, 2, 5]:
  LinearSum(A, 5)
    calls LinearSum(A, 4)
      calls LinearSum(A, 3)
        calls LinearSum(A, 2)
          calls LinearSum(A, 1), which returns A[0] = 4
        returns 4 + A[1] = 4 + 3 = 7
      returns 7 + A[2] = 7 + 6 = 13
    returns 13 + A[3] = 13 + 2 = 15
  returns 15 + A[4] = 15 + 5 = 20
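The same algorithm as runnable Python (a sketch; the 0-indexed list and the name
linear_sum are mine, not from the slides):

def linear_sum(A, n):
    # Base case: the sum of the first 1 element is that element
    if n == 1:
        return A[0]
    # Recursive case: sum of the first n-1 elements, plus the n-th element
    return linear_sum(A, n - 1) + A[n - 1]

# Example matching the trace above: linear_sum([4, 3, 6, 2, 5], 5) returns 20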
Binary Recursive Method
• Binary recursion occurs whenever there are two recursive calls for
each non-base case.
Algorithm BinarySum(A, i, n):
//Input: An array A and integers i and n
//Output: The sum of the n integers in A starting at index i
if n = 1 then
  return A[i]
return BinarySum(A, i, n/2) + BinarySum(A, i + n/2, n/2)
end algorithm

Recursion trace for BinarySum(A, 0, 8), showing the (i, n) arguments of each call:
(0, 8)
(0, 4) (4, 4)
(0, 2) (2, 2) (4, 2) (6, 2)
(0, 1) (1, 1) (2, 1) (3, 1) (4, 1) (5, 1) (6, 1) (7, 1)
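A Python sketch of binary recursion; the pseudocode above implicitly assumes n is a
power of 2, so this version splits with a ceiling so that odd sizes are also handled
(that adjustment and the name binary_sum are mine, not from the slides):

def binary_sum(A, i, n):
    # Base case: a range of one element
    if n == 1:
        return A[i]
    # Binary recursion: two recursive calls, one per half of the range
    half = (n + 1) // 2   # ceiling of n/2, so no element is dropped when n is odd
    return binary_sum(A, i, half) + binary_sum(A, i + half, n - half)

# Example: binary_sum([1, 2, 3, 4, 5, 6, 7, 8], 0, 8) returns 36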
Binary search
• Binary Search is an algorithm to find an item in a sorted list.
–a very efficient algorithm for searching in a sorted array
–Limitation: the array must be sorted
• Problem: determine whether a given element K is present in the given list
or not
–Input: Let A = <a1, a2, … an> be a list of elements that are sorted in non-
decreasing order.
–Output: If K is present output its position. Otherwise output “Not
Found”.
• Implementation:
–Pick the pivot item in the middle: split the list into two halves (each of size
about n/2) at position m, so that
A[1], … A[m], … A[n].
–If K = A[m], stop (successful search);
–Otherwise, until the list has shrunk to size 1, narrow the search recursively
to either
the first half of the list, A[1..m-1], if K < A[m], or
the second half of the list, A[m+1..n], if K > A[m]
Example
• Example: Binary Search for 64 in the given list A[] = {5
8 9 13 22 30 34 37 38 41 60 63 65 82 87 90 91}
1. Looking for 64 in this list.
2. Divide the list in two at the
middle position (1+17)/2 = 9.
3. The pivot is A[9] = 38. Is 64 < 38? No.
4. Recurse, looking for 64 in the
sub-list of elements > 38.
5. etc.
• Given 14 elements: A[1:14] = (-15, -6, 0, 7, 9, 23, 54, 82, 101, 112,
125, 131, 142, 151).
–Construct binary search tree and search for (i) 151, (ii) 10
Binary Search Recursive Algorithm
Four questions in designing a recursive algorithm
• How can you define the problem in terms of a smaller
problem of the same type?
Look at the middle of the list. Then recursively search the
top or bottom half, as appropriate.
• How does each recursive call diminish the size of the
problem?
It cuts the size of the list in half (roughly).
• What instance of the problem can serve as the base case?
A list of size 1.
• As the problem size diminishes, will you reach this base
case?
Yes; a list cannot have negative size, so the size eventually
reaches 1.
Binary Search Recursive Algorithm
procedure BSearch(A, low, high, key)
// A is a sorted array; the initial call uses low = 1, high = n
  if low = high then
    if key = A[low] then return low
    else return “Not Found”;
    end if
  else
    mid = (low + high)/2;
    if key > A[mid] then
      return BSearch(A, mid+1, high, key);
    else
      // keep mid in the range: A[mid] may still equal key
      return BSearch(A, low, mid, key);
    end if
  end if
end BSearch
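A runnable Python version of the recursive procedure (a sketch; it uses 0-based
indexing as is usual in Python, and returns None in place of “Not Found”):

def bsearch(A, low, high, key):
    # Base case: the search range has shrunk to a single element
    if low == high:
        return low if A[low] == key else None
    mid = (low + high) // 2
    if key > A[mid]:
        return bsearch(A, mid + 1, high, key)   # key can only be in the upper half
    else:
        return bsearch(A, low, mid, key)        # key, if present, is in A[low..mid]

# Example: bsearch([5, 8, 9, 13, 22, 30, 34, 37, 38], 0, 8, 22) returns 4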
Binary Search Iterative Algorithm
Procedure BinarySearch(A, n, key)
  low ← 1; high ← n;
  while low ≤ high do
    mid ← (low + high)/2
    if key = A[mid] then
      return mid
    else if key < A[mid] then
      high ← mid - 1
    else
      low ← mid + 1
  return “Not Found”
end
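The same loop as runnable Python (a sketch, 0-indexed; the name binary_search is mine):

def binary_search(A, key):
    low, high = 0, len(A) - 1
    while low <= high:
        mid = (low + high) // 2
        if A[mid] == key:
            return mid              # found: return the index
        elif key < A[mid]:
            high = mid - 1          # continue in the lower half
        else:
            low = mid + 1           # continue in the upper half
    return None                     # "Not Found"

# Example from the earlier slide: searching for 64 in
# [5, 8, 9, 13, 22, 30, 34, 37, 38, 41, 60, 63, 65, 82, 87, 90, 91]
# returns None, since 64 is not in the list.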
Binary Search Iterative Algorithm
• Analysis: counting element comparisons, the worst-case recurrence is:
T(n) = 1              if n = 1
T(n) = T(n/2) + 1     if n > 1
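Expanding by substitution (assuming n is a power of 2):
T(n) = T(n/2) + 1 = T(n/4) + 2 = … = T(1) + log2 n = log2 n + 1
so binary search makes O(log n) comparisons in the worst case.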