More On Divide and Conquer

The document discusses the divide-and-conquer algorithm design paradigm. It provides examples of problems that can be solved using this approach, including merge sort, binary search, matrix multiplication, and more. The key steps of divide-and-conquer are to divide the problem into subproblems, solve the subproblems recursively, and combine the subproblem solutions. Analysis techniques like recurrence relations and the master method can be used to determine the runtime of divide-and-conquer algorithms.

More on Divide and Conquer

The divide-and-conquer design paradigm

1. Divide the problem (instance) into subproblems.
2. Conquer the subproblems by solving them recursively.
3. Combine the subproblem solutions.
Example: merge sort

1. Divide: Trivial.
2. Conquer: Recursively sort 2 subarrays.
3. Combine: Linear-time merge.
T(n) = 2 T(n/2) + Θ(n)
(2 = number of subproblems, n/2 = subproblem size, Θ(n) = work dividing and combining)
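As a concrete illustration, here is a minimal Python sketch of merge sort following the three steps above (the names merge_sort and merge are illustrative, not from the slides):

def merge_sort(a):
    """Sort list a by divide and conquer: split, sort the halves, merge."""
    if len(a) <= 1:                  # base case: already sorted
        return a
    mid = len(a) // 2                # divide: trivial split at the midpoint
    left = merge_sort(a[:mid])       # conquer: recursively sort 2 subarrays
    right = merge_sort(a[mid:])
    return merge(left, right)        # combine: linear-time merge

def merge(left, right):
    """Merge two sorted lists into one sorted list in linear time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

print(merge_sort([9, 3, 7, 5, 12, 8, 15]))   # -> [3, 5, 7, 8, 9, 12, 15]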
Master theorem:
T(n) = a T(n/b) + f(n)

CASE 1: If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
CASE 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
CASE 3: If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and a⋅f(n/b) ≤ c⋅f(n) for some constant c < 1,
then T(n) = Θ(f(n)).

Merge sort: a = 2, b = 2 ⇒ n^(log_b a) = n^(log_2 2) = n
⇒ CASE 2 ⇒ T(n) = Θ(n lg n).
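To see how the cases are applied, here is a hedged Python sketch that classifies a recurrence of the form T(n) = a T(n/b) + Θ(n^k), i.e. assuming the driving function is a plain polynomial, which covers every recurrence in these slides (the name master_case is made up for this example):

import math

def master_case(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) by the master method.

    Simplified version: assumes the driving function is a plain
    polynomial Theta(n^k).
    """
    crit = math.log(a, b)                      # critical exponent log_b a
    if math.isclose(k, crit):                  # CASE 2: f matches n^(log_b a)
        return f"CASE 2: T(n) = Theta(n^{k:g} lg n)"
    if k < crit:                               # CASE 1: f grows polynomially slower
        return f"CASE 1: T(n) = Theta(n^{crit:.3g})"
    return f"CASE 3: T(n) = Theta(n^{k:g})"    # regularity holds for polynomial f

print(master_case(2, 2, 1))   # merge sort        -> CASE 2
print(master_case(1, 2, 0))   # binary search     -> CASE 2, i.e. Theta(lg n)
print(master_case(8, 2, 2))   # 8-mult matrix D&C -> CASE 1, Theta(n^3)
print(master_case(7, 2, 2))   # Strassen          -> CASE 1, Theta(n^2.81)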
Binary search

Find an element in a sorted array:


1. Divide: Check middle element.
2. Conquer: Recursively search 1 subarray.
3. Combine: Trivial.
Example: Find 9
3 5 7 8 9 12 15
Recurrence for binary search

T(n) = 1 T(n/2) + Θ(1)
(1 = number of subproblems, n/2 = subproblem size, Θ(1) = work dividing and combining)

a = 1, b = 2 ⇒ n^(log_b a) = n^(log_2 1) = n^0 = 1 ⇒ CASE 2
⇒ T(n) = Θ(lg n).
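A minimal Python sketch of the recursive search (0-based indices; returns the index of the target or -1; the name binary_search is illustrative):

def binary_search(a, target, low=0, high=None):
    """Search sorted list a for target; return its index or -1."""
    if high is None:
        high = len(a) - 1
    if low > high:                      # empty subarray: not found
        return -1
    mid = (low + high) // 2             # divide: check the middle element
    if a[mid] == target:
        return mid
    if target < a[mid]:                 # conquer: search one subarray
        return binary_search(a, target, low, mid - 1)
    return binary_search(a, target, mid + 1, high)

print(binary_search([3, 5, 7, 8, 9, 12, 15], 9))   # -> 4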
Powering a number

Problem: Compute a^n, where n ∈ N.

Naive algorithm: Θ(n) multiplications.

Divide-and-conquer algorithm (recursive squaring):
a^n = a^(n/2) ⋅ a^(n/2)                 if n is even;
a^n = a^((n−1)/2) ⋅ a^((n−1)/2) ⋅ a     if n is odd.
T(n) = T(n/2) + Θ(1) ⇒ T(n) = Θ(lg n).
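A small Python sketch of this recursive-squaring scheme (the name fast_power is made up; Python's built-in pow already does the same job):

def fast_power(a, n):
    """Compute a**n with Theta(lg n) multiplications by recursive squaring."""
    if n == 0:
        return 1
    half = fast_power(a, n // 2)   # one recursive call on a problem of half the size
    if n % 2 == 0:
        return half * half
    return half * half * a

print(fast_power(3, 13))   # -> 1594323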
Fibonacci numbers

Recursive definition:
F_0 = 0,  F_1 = 1,  F_n = F_(n−1) + F_(n−2) for n ≥ 2.

0 1 1 2 3 5 8 13 21 34 …

Naive recursive algorithm: Ω(φ^n) (exponential time),
where φ = (1 + √5)/2 ≈ 1.618 is the golden ratio.
Computing Fibonacci
numbers

Naive recursive squaring:
F_n = φ^n / √5, rounded to the nearest integer.
• Uses recursive squaring, so it takes Θ(lg n) time.
• This method is unreliable, since floating-point
arithmetic is prone to round-off errors.

Bottom-up:
• Compute F_0, F_1, F_2, …, F_n in order, forming
each number by summing the two previous.
• Running time: Θ(n).
Recursive squaring

Theorem: For all n ≥ 1,
[ F_(n+1)  F_n ; F_n  F_(n−1) ] = [ 1 1 ; 1 0 ]^n
(2×2 matrices, written row by row).

Algorithm: Recursive squaring of the matrix [ 1 1 ; 1 0 ].
Time = Θ(lg n).

Proof of theorem (induction on n).
Base (n = 1): [ F_2 F_1 ; F_1 F_0 ] = [ 1 1 ; 1 0 ] = [ 1 1 ; 1 0 ]^1.

Inductive step (n ≥ 2):
[ F_(n+1) F_n ; F_n F_(n−1) ]
  = [ F_n F_(n−1) ; F_(n−1) F_(n−2) ] ⋅ [ 1 1 ; 1 0 ]
  = [ 1 1 ; 1 0 ]^(n−1) ⋅ [ 1 1 ; 1 0 ]       (by the inductive hypothesis)
  = [ 1 1 ; 1 0 ]^n.

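A minimal Python sketch of computing F_n by recursively squaring the 2×2 matrix from the theorem (the helper names mat_mult, mat_pow, and fib are made up for this example):

def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as ((a, b), (c, d))."""
    return ((X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]),
            (X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]))

def mat_pow(X, n):
    """Raise a 2x2 matrix to the n-th power (n >= 1) by recursive squaring."""
    if n == 1:
        return X
    half = mat_pow(X, n // 2)
    sq = mat_mult(half, half)
    return sq if n % 2 == 0 else mat_mult(sq, X)

def fib(n):
    """Return F_n using [[F_(n+1), F_n], [F_n, F_(n-1)]] = [[1, 1], [1, 0]]^n."""
    if n == 0:
        return 0
    return mat_pow(((1, 1), (1, 0)), n)[0][1]

print([fib(i) for i in range(10)])   # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]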
Maximum subarray problem
 i:    1   2    3   4   5    6    7   8   9  10  11  12   13  14  15  16
A[i]: 13  -3  -25  20  -3  -16  -23  18  20  -7  12  -5  -22  15  -4   7

Input: An array A[1..n] of numbers.

Output: A contiguous subarray A[i..j] whose sum is maximum.
Observation: A maximum subarray of A[low..high] either lies entirely in A[low..mid],
lies entirely in A[mid+1..high], or crosses the midpoint mid.

• Subproblem: Find a maximum subarray of A[low..high]. In the original call, low = 1, high = n.
• Divide the subarray into two subarrays: find the midpoint mid and consider the
subarrays A[low..mid] and A[mid+1..high].
• Conquer by finding maximum subarrays of A[low..mid] and A[mid+1..high].
• Combine by finding a maximum subarray that crosses the midpoint, and taking the
best of the three solutions.
FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
    // Find a maximum subarray of the form A[i..mid].
    leftsum = −∞; sum = 0
    for i = mid downto low
        sum = sum + A[i]
        if sum > leftsum
            leftsum = sum
            maxleft = i
    // Find a maximum subarray of the form A[mid+1..j].
    rightsum = −∞; sum = 0
    for j = mid + 1 to high
        sum = sum + A[j]
        if sum > rightsum
            rightsum = sum
            maxright = j
    // Return the indices and the sum of the combined subarray.
    return (maxleft, maxright, leftsum + rightsum)
FIND-MAXIMUM-SUBARRAY(A, low, high)
    if high == low
        return (low, high, A[low])   // base case: only one element
    else
        mid = ⌊(low + high)/2⌋
        (leftlow, lefthigh, leftsum) = FIND-MAXIMUM-SUBARRAY(A, low, mid)
        (rightlow, righthigh, rightsum) = FIND-MAXIMUM-SUBARRAY(A, mid + 1, high)
        (crosslow, crosshigh, crosssum) = FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
        if leftsum >= rightsum and leftsum >= crosssum
            return (leftlow, lefthigh, leftsum)
        elseif rightsum >= leftsum and rightsum >= crosssum
            return (rightlow, righthigh, rightsum)
        else return (crosslow, crosshigh, crosssum)

Initial call: FIND-MAXIMUM-SUBARRAY(A, 1, n)

Recurrence: T(n) = 2 T(n/2) + Θ(n) ⇒ T(n) = Θ(n lg n)  (CASE 2).
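A hedged Python translation of the two procedures (0-based indexing, kept close to the pseudocode rather than optimized):

import math

def find_max_crossing_subarray(a, low, mid, high):
    """Best subarray crossing the midpoint: A[i..mid] + A[mid+1..j]."""
    left_sum, s, max_left = -math.inf, 0, mid
    for i in range(mid, low - 1, -1):
        s += a[i]
        if s > left_sum:
            left_sum, max_left = s, i
    right_sum, s, max_right = -math.inf, 0, mid + 1
    for j in range(mid + 1, high + 1):
        s += a[j]
        if s > right_sum:
            right_sum, max_right = s, j
    return (max_left, max_right, left_sum + right_sum)

def find_maximum_subarray(a, low, high):
    """Divide-and-conquer maximum subarray; returns (start, end, sum)."""
    if low == high:
        return (low, high, a[low])                         # base case
    mid = (low + high) // 2
    left = find_maximum_subarray(a, low, mid)              # conquer left half
    right = find_maximum_subarray(a, mid + 1, high)        # conquer right half
    cross = find_max_crossing_subarray(a, low, mid, high)  # combine
    return max(left, right, cross, key=lambda t: t[2])

A = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(find_maximum_subarray(A, 0, len(A) - 1))   # -> (7, 10, 43)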
Matrix multiplication

Input: n×n matrices A = [a_ij] and B = [b_ij].
Output: C = [c_ij] = A ⋅ B, also n×n, where c_ij = Σ_k a_ik ⋅ b_kj.
Standard algorithm

for i ← 1 to n
    do for j ← 1 to n
        do c_ij ← 0
           for k ← 1 to n
               do c_ij ← c_ij + a_ik ⋅ b_kj

Running time = Θ(n^3)
Divide-and-conquer algorithm

IDEA:
View each n×n matrix as a 2×2 matrix of (n/2)×(n/2) submatrices:

C = [ r  s ]    A = [ a  b ]    B = [ e  f ]        C = A ⋅ B
    [ t  u ]        [ c  d ]        [ g  h ]

r = ae + bg    s = af + bh
t = ce + dg    u = cf + dh

8 recursive mults of (n/2)×(n/2) submatrices
4 adds of (n/2)×(n/2) submatrices
Analysis of D&C algorithm

T(n) = 8 T(n/2) + Θ(n^2)
(8 = number of subproblems, n/2 = subproblem size, Θ(n^2) = work dividing and combining)

a = 8, b = 2 ⇒ n^(log_b a) = n^(log_2 8) = n^3 ⇒ CASE 1 ⇒ T(n) = Θ(n^3).

No better than the ordinary algorithm.
Strassen’s idea
• Multiply 2×2 matrices with only 7 recursive mults.

P1 = a ⋅ (f − h)           r = P5 + P4 − P2 + P6
P2 = (a + b) ⋅ h           s = P1 + P2
P3 = (c + d) ⋅ e           t = P3 + P4
P4 = d ⋅ (g − e)           u = P5 + P1 − P3 − P7
P5 = (a + d) ⋅ (e + h)
P6 = (b − d) ⋅ (g + h)     7 mults, 18 adds/subs.
P7 = (a − c) ⋅ (e + f)     Note: no reliance on commutativity of multiplication!
Strassen’s idea (check, for r):
r = P5 + P4 − P2 + P6
  = (a + d)(e + h) + d(g − e) − (a + b)h + (b − d)(g + h)
  = ae + ah + de + dh + dg − de − ah − bh + bg + bh − dg − dh
  = ae + bg  ✓
Strassen’s algorithm

1. Divide: Partition A and B into (n/2)×(n/2) submatrices.
   Form terms to be multiplied using + and −.
2. Conquer: Perform 7 multiplications of (n/2)×(n/2) submatrices recursively.
3. Combine: Form C using + and − on (n/2)×(n/2) submatrices.

T(n) = 7 T(n/2) + Θ(n^2)
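A minimal NumPy sketch of Strassen's algorithm, under the simplifying assumptions that the matrices are square with n a power of 2 and that small cases fall back to ordinary multiplication (the function name strassen and the cutoff value are illustrative):

import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices A and B (n a power of 2) with 7 recursive mults."""
    n = A.shape[0]
    if n <= cutoff:                      # small case: ordinary multiplication
        return A @ B
    m = n // 2                           # divide into (n/2) x (n/2) submatrices
    a, b, c, d = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    e, f, g, h = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    P1 = strassen(a, f - h)              # conquer: 7 recursive products
    P2 = strassen(a + b, h)
    P3 = strassen(c + d, e)
    P4 = strassen(d, g - e)
    P5 = strassen(a + d, e + h)
    P6 = strassen(b - d, g + h)
    P7 = strassen(a - c, e + f)
    r = P5 + P4 - P2 + P6                # combine with + and - only
    s = P1 + P2
    t = P3 + P4
    u = P5 + P1 - P3 - P7
    return np.block([[r, s], [t, u]])

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))   # -> True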
Analysis of Strassen
T(n) = 7 T(n/2) + Θ(n^2)
a = 7, b = 2 ⇒ n^(log_b a) = n^(lg 7) ≈ n^2.81 ⇒ CASE 1 ⇒ T(n) = Θ(n^(lg 7)).

The number 2.81 may not seem much smaller than 3, but because the difference
is in the exponent, the impact on running time is significant. In fact,
Strassen’s algorithm beats the ordinary algorithm on today’s machines
for n ≥ 30 or so.

Best to date (of theoretical interest only): Θ(n^2.376…).


VLSI layout
Problem: Embed a complete binary tree
with n leaves in a grid using minimal area.

Height: H(n) = H(n/2) + Θ(1) = Θ(lg n)
Width:  W(n) = 2 W(n/2) + Θ(1) = Θ(n)

Area = Θ(n lg n)
H-tree embedding

Side length: L(n) = 2 L(n/4) + Θ(1) = Θ(√n)

Area = L(n)^2 = Θ(n)
Conclusion

• Divide and conquer is just one of several powerful techniques
  for algorithm design.
• Divide-and-conquer algorithms can be analyzed using recurrences
  and the master method (so practice this math).
• Divide and conquer can lead to more efficient algorithms.
