
Chapter 2

Divide and Conquer

In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and each sub-problem is solved independently. If we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no further division is possible. These "atomic" (smallest possible) sub-problems are solved directly, and their solutions are finally merged to obtain the solution of the original problem.
Inspired by emperors and colonizers.
It has three steps:
1. Divide the problem into smaller sub-problems.
2. Conquer by solving the sub-problems from (1).
3. Combine the results from (2).
Example: Binary Search: Search for x in sorted array A.

Examples: Binary Search, Merge Sort, Quick Sort, matrix multiplication, selection, convex hulls, etc.
Binary Search
Let T(n) denote the worst-case time to binary search in an array of length n.
The recurrence is T(n) = T(n/2) + O(1).
T(n) = O(log n).
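A minimal Python sketch of this search (the function and variable names are my own, not from the notes): each comparison discards half of the remaining range, which is exactly the T(n) = T(n/2) + O(1) recurrence above.

def binary_search(A, x):
    """Return an index i with A[i] == x in sorted list A, or -1 if x is absent."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # divide: look at the middle element
        if A[mid] == x:
            return mid
        elif A[mid] < x:
            lo = mid + 1              # conquer: continue in the right half
        else:
            hi = mid - 1              # conquer: continue in the left half
    return -1

print(binary_search([2, 5, 7, 11, 13, 17], 11))   # prints 3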
Merge Sort
Sort an unordered array of numbers A.
Let T(n) denote the worst-case time to merge sort an array of length n.
The recurrence is T(n) = 2T(n/2) + O(n).
T(n) = O(n log n).
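A compact Python sketch of merge sort (my own illustration, not code from the notes): the two recursive calls and the linear-time merge give the T(n) = 2T(n/2) + O(n) recurrence above.

def merge_sort(A):
    """Return a sorted copy of list A in O(n log n) worst-case time."""
    if len(A) <= 1:                    # atomic sub-problem: already sorted
        return A
    mid = len(A) // 2
    left = merge_sort(A[:mid])         # divide: sort the two halves recursively
    right = merge_sort(A[mid:])
    merged, i, j = [], 0, 0            # combine: merge the sorted halves in O(n)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # prints [3, 9, 10, 27, 38, 43, 82]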
Merge Sort: Illustration

Multiplying Numbers
We want to multiply two n-bit numbers. The cost is the number of elementary bit operations. The grade-school method has Θ(n^2) cost:
n^2 multiplications, n^2/2 additions, plus some carries.
Doesn't hardware provide multiply? It is fast, optimized, and free. So why bother?
• True for numbers that fit in one computer word. But what if the numbers are very large?
• Cryptography (encryption, digital signatures) uses big-number "keys," typically 256 to 1024 bits long!
• n^2 multiplication is too slow for such large numbers.
• Karatsuba's (1962) divide-and-conquer scheme multiplies two n-bit numbers in O(n^1.59) steps.

Karatsuba’s Algorithm
Let X and Y be two n-bit numbers. Write X = a b and Y = c d (concatenation of halves), where a, b, c, d are n/2-bit numbers; a and c are the high-order halves, b and d the low-order halves. (Assume n = 2^k.) Hence,
XY = (a·2^(n/2) + b)(c·2^(n/2) + d) = ac·2^n + (ad + bc)·2^(n/2) + bd
Example: X = 4729, Y = 1326 (splitting the decimal digits, so 10^2 plays the role of 2^(n/2)):
• a = 47, b = 29 and c = 13, d = 26.
• ac = 47 × 13 = 611; ad = 47 × 26 = 1222; bc = 29 × 13 = 377; and bd = 29 × 26 = 754.
• XY = 611·10^4 + (1222 + 377)·10^2 + 754 = 6110000 + 159900 + 754 = 6270654.
This is D&C: solve 4 sub-problems, each of size n/2; then perform O(n)-cost shifts to multiply the terms by 2^n and 2^(n/2). We can write the recurrence as T(n) = 4T(n/2) + O(n). But this solves to T(n) = O(n^2)!
• XY = ac·2^n + (ad + bc)·2^(n/2) + bd.
• Note that (a − b)(c − d) = (ac + bd) − (ad + bc).
• So it suffices to solve 3 sub-problems: ac, bd, and (a − b)(c − d).
We can get all the terms needed for XY by addition and subtraction! The recurrence for this algorithm is T(n) = 3T(n/2) + O(n), which solves to T(n) = O(n^(log_2 3)) ≈ O(n^1.59).
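A short Python sketch of Karatsuba's scheme (my own illustration, not code from the notes; it splits on decimal digits rather than bits, as in the worked example above, which does not change the asymptotics):

def karatsuba(x, y):
    """Multiply integers x and y using three recursive multiplications."""
    if x < 0 or y < 0:                       # signs can arise from the (a-b)(c-d) term
        sign = -1 if (x < 0) != (y < 0) else 1
        return sign * karatsuba(abs(x), abs(y))
    if x < 10 or y < 10:                     # base case: a single-digit operand
        return x * y
    m = max(len(str(x)), len(str(y))) // 2   # split point (half the digits)
    a, b = divmod(x, 10 ** m)                # x = a*10^m + b
    c, d = divmod(y, 10 ** m)                # y = c*10^m + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_plus_bc = ac + bd - karatsuba(a - b, c - d)   # (a-b)(c-d) = ac + bd - (ad+bc)
    return ac * 10 ** (2 * m) + ad_plus_bc * 10 ** m + bd

print(karatsuba(4729, 1326))   # prints 6270654, matching the worked example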
The recursion solution: review
T(n) = 2T(n/2) + cn, with T(1) = 1.
By term expansion:
T(n) = 2T(n/2) + cn = 4T(n/4) + 2cn = · · · = 2^i T(n/2^i) + i·cn.
Set i = log_2 n and use T(1) = 1. We get T(n) = n + cn log n = O(n log n).
The Tree View
T(n) = 2T(n/2) + cn, with T(1) = 1.
Number of leaves = n; number of levels = log n.
The work per level is O(n), so the total is O(n log n).

Solving By Induction
Recurrence: T(n) = 2T(n/2) + cn.
Base case: T(1) = 1.
Claim: T(n) = cn log n + cn.
T(n) = 2T(n/2) + cn
     = 2 (c(n/2) log(n/2) + c(n/2)) + cn
     = cn log(n/2) + cn + cn
     = cn (log n − 1 + 1) + cn
     = cn log n + cn
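As a quick numeric check of the claim (my own addition; taking c = 1 so that the base case T(1) = 1 matches the closed form exactly), the recurrence can be evaluated directly:

import math

def T(n):
    # T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two (the c = 1 case)
    return 1 if n == 1 else 2 * T(n // 2) + n

for n in [2, 16, 256, 4096]:
    closed_form = n * math.log2(n) + n    # cn log n + cn with c = 1
    print(n, T(n), closed_form)           # the two values agree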
More Examples
T(n) = 4T(n/2) + cn, T(1) = 1.
• Stops when n/2^i = 1, i.e., i = log n.
• The recurrence solves to T(n) = O(n^2).
By Term Expansion
T(n) = 4T(n/2) + cn = 16T(n/4) + 2cn + cn = · · · = 4^i T(n/2^i) + cn(2^i − 1).
Terminates when 2^i = n, or i = log n.
• 4^i = 2^i × 2^i = n × n = n^2.
• T(n) = n^2 + cn(n − 1) ≤ n^2 + cn^2 = O(n^2).
Exercise
1. T(n) = 2T(n/4) + √n, T(1) = 1.

Master Method
• The number of children multiplies by a factor of a at each level.
• The number of leaves is a^(log_b n) = n^(log_b a).
• Verify by taking logarithms on both sides.
• From the recursion tree, for T(n) = a T(n/b) + O(n^d) we get:
  – if d < log_b a, the leaves dominate and T(n) = Θ(n^(log_b a));
  – if d = log_b a, every level contributes comparable work and T(n) = Θ(n^d log n);
  – if d > log_b a, the root dominates and T(n) = Θ(n^d).
Applying Master Method
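As a sketch of how to apply the method (my own illustration, not from the notes, using the simplified form for recurrences T(n) = a·T(n/b) + Θ(n^d)), the following Python function reports the asymptotic solution for the recurrences seen in this chapter:

import math

def master_method(a, b, d):
    """Solve T(n) = a*T(n/b) + Theta(n^d) asymptotically (a >= 1, b > 1, d >= 0)."""
    crit = math.log(a, b)                 # critical exponent log_b(a)
    if math.isclose(d, crit):
        return f"Theta(n^{d} log n)"      # every level does comparable work
    if d > crit:
        return f"Theta(n^{d})"            # the root dominates
    return f"Theta(n^{crit:.2f})"         # the leaves dominate

print(master_method(1, 2, 0))   # binary search: Theta(n^0 log n) = Theta(log n)
print(master_method(2, 2, 1))   # merge sort:    Theta(n^1 log n)
print(master_method(3, 2, 1))   # Karatsuba:     Theta(n^1.58)
print(master_method(4, 2, 1))   # four sub-problems of size n/2: Theta(n^2.00)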

Exercise
• Work out the analysis of matrix multiplication, Quick Sort, linear-time selection, and convex hulls.
