CSE 521

Algorithms:
Divide and Conquer



Larry Ruzzo


Thanks to Paul Beame, Kevin Wayne for some slides



algorithm design paradigms: divide and conquer

Outline:
  General Idea
  Review of Merge Sort
  Why does it work?
    Importance of balance
    Importance of super-linear growth
  Some interesting applications
    Closest points
    Integer Multiplication
  Finding & Solving Recurrences

algorithm design techniques

Divide & Conquer:
  Reduce problem to one or more sub-problems of the same type.
  Typically, each sub-problem is at most a constant fraction of the size of the original problem.
  Sub-problems are typically disjoint.
  Often gives significant, usually polynomial, speedup.

Examples: Binary Search, Mergesort, Quicksort (roughly), Strassen's Algorithm, …

merge sort

[figure: array A is split into halves, the halves are sorted recursively as U and L, and the results are merged into C]

MS(A: array[1..n]) returns array[1..n] {
  If(n=1) return A;
  New U: array[1..n/2] = MS(A[1..n/2]);
  New L: array[1..n/2] = MS(A[n/2+1..n]);
  Return(Merge(U,L));
}

Merge(U,L: array[1..n]) {
  New C: array[1..2n];
  a=1; b=1;
  For i = 1 to 2n
    C[i] = "smaller of U[a], L[b] and correspondingly a++ or b++";
  Return C;
}
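
For concreteness, the same algorithm as runnable Python; a minimal sketch that also handles the cases the quoted merge step glosses over (odd-length input, one half exhausted before the other):

def merge_sort(a):
    # sort a list by recursively sorting each half and merging
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    u = merge_sort(a[:mid])
    l = merge_sort(a[mid:])
    return merge(u, l)

def merge(u, l):
    # merge two sorted lists: repeatedly take the smaller front element
    c, i, j = [], 0, 0
    while i < len(u) and j < len(l):
        if u[i] <= l[j]:
            c.append(u[i]); i += 1
        else:
            c.append(l[j]); j += 1
    c.extend(u[i:])   # one list is exhausted; append the rest of the other
    c.extend(l[j:])
    return c

# merge_sort([5, 2, 4, 7, 1, 3, 2, 6]) == [1, 2, 2, 3, 4, 5, 6, 7]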

why balanced subdivision?

Alternative "divide & conquer" algorithm:
  Sort first n-1 elements
  Sort last 1
  Merge them

T(n) = T(n-1) + T(1) + 3n for n ≥ 2
T(1) = 0
Solution: 3n + 3(n-1) + 3(n-2) + … = Θ(n²)

divide & conquer – the key idea

Suppose we've already invented DumbSort, taking time n².

Try just one level of divide & conquer:
  DumbSort(first n/2 elements)
  DumbSort(last n/2 elements)
  Merge results

Time: 2·(n/2)² + n = n²/2 + n ≪ n²

Almost twice as fast! (D&C in a nutshell.)

d&c approach, cont.

Moral 1: "two halves are better than a whole"
  Two problems of half size are better than one full-size problem, even given the O(n) overhead of recombining, since the base algorithm has super-linear complexity.

Moral 2: "If a little's good, then more's better"
  Two levels of D&C would be almost 4 times faster, 3 levels almost 8, etc., even though the overhead is growing. Best is usually full recursion down to some small constant size (balancing "work" vs "overhead").

In the limit: you've just rediscovered mergesort!

d&c approach, cont.

Moral 3: unbalanced division is less good.

(.1n)² + (.9n)² + n = .82n² + n
  The 18% savings compounds significantly if you carry the recursion to more levels, actually giving O(n log n), but with a bigger constant. So it's worth doing if you can't get a 50-50 split, but balanced is better if you can. This is intuitively why Quicksort with a random splitter is good – badly unbalanced splits are rare, and not instantly fatal.

(1)² + (n-1)² + n = n² - 2n + 2 + n
  Little improvement here.

mergesort (review)

Mergesort: (recursively) sort 2 half-lists, then merge results.

T(n) = 2T(n/2) + cn, n ≥ 2
T(1) = 0
Solution: Θ(n log n) (details later)

[recursion tree: log n levels, O(n) work per level]

A Divide & Conquer Example:
Closest Pair of Points


closest pair of points: non-geometric version

Given n points and arbitrary distances between them, find the closest pair. (E.g., think of distance as airfare – definitely not Euclidean distance!)

[figure: a few labeled points, "… and all the rest of the (n choose 2) edges …"]

Must look at all (n choose 2) pairwise distances, else any one you didn't check might be the shortest.

Also true for Euclidean distance in 1-2 dimensions?

closest pair of points: 1-dimensional version

Given n points on the real line, find the closest pair.

Closest pair is adjacent in the ordered list, so:
  Time O(n log n) to sort, if needed
  Plus O(n) to scan adjacent pairs

Key point: no need to calculate distances between all pairs: exploit geometry + ordering.
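
In code, the whole 1-D algorithm is sort-then-scan; a minimal Python sketch (assuming at least two points):

def closest_pair_1d(points):
    pts = sorted(points)   # O(n log n), if not already sorted
    # the closest pair must be adjacent in sorted order: O(n) scan
    return min(pts[i+1] - pts[i] for i in range(len(pts) - 1))

# closest_pair_1d([7.0, 1.5, 9.0, 2.0]) == 0.5   (the pair 1.5, 2.0)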

closest pair of points: 2-dimensional version

Closest pair. Given n points in the plane, find a pair with smallest Euclidean distance between them.

Fundamental geometric primitive:
  Graphics, computer vision, geographic information systems, molecular modeling, air traffic control.
  Special case of nearest neighbor, Euclidean MST, Voronoi. (Fast closest pair inspired fast algorithms for these problems.)

Brute force. Check all pairs of points p and q with Θ(n²) comparisons.

1-D version. O(n log n) easy if points are on a line.

Assumption. No two points have the same x coordinate. (Just to simplify the presentation.)

closest pair of points: 2d, Euclidean distance – 1st try

Divide. Sub-divide region into 4 quadrants.

Obstacle. Impossible to ensure n/4 points in each piece.

closest pair of points

Algorithm.
  Divide: draw vertical line L with ≈ n/2 points on each side.
  Conquer: find closest pair on each side, recursively.
  Combine: find closest pair with one point in each side. (seems like Θ(n²)?)
  Return best of 3 solutions.

[figure: closest pair on the left side has distance 21, on the right side 12; the best crossing pair, at distance 8, is what the combine step must find]

closest pair of points

Find closest pair with one point in each side, assuming that pair's distance < δ, where δ = min(12, 21) is the best single-side distance found recursively.

Observation: suffices to consider points within δ of line L.

Almost the one-D problem again: Sort the points in the 2δ-strip by their y coordinate. Only check points within 8 positions of each other in the sorted list!

[figure: the 2δ-strip around L, with the strip points labeled 1-7 in y order]

closest pair of points

Def. Let s_i have the i-th smallest y-coordinate among points in the 2δ-width strip.

Claim. If |i − j| > 8, then the distance between s_i and s_j is > δ.

Pf. No two points lie in the same ½δ-by-½δ box: each box lies entirely on one side of L, where every pair is ≥ δ apart, yet the box's diagonal is only

  √((½δ)² + (½δ)²) = (√2/2)·δ ≈ 0.7·δ < δ

so there are ≤ 8 boxes (two rows of four across the strip) within +δ of y(s_i), hence at most 8 later strip points can be within δ of s_i.

[figure: the 2δ strip cut into ½δ-by-½δ boxes, with sample points s_i and s_j marked]

closest pair algorithm

Closest-Pair(p1, …, pn) {
  if(n <= 1) return ∞

  Compute separation line L such that half the points
  are on one side and half on the other side.

  δ1 = Closest-Pair(left half)
  δ2 = Closest-Pair(right half)
  δ = min(δ1, δ2)

  Delete all points further than δ from separation line L

  Sort remaining points p[1]…p[m] by y-coordinate.

  for i = 1..m
    k = 1
    while i+k <= m && p[i+k].y < p[i].y + δ
      δ = min(δ, distance between p[i] and p[i+k]);
      k++;

  return δ.
}
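
A runnable Python sketch of this algorithm, assuming points are (x, y) tuples with distinct x coordinates (per the earlier assumption). It sorts the strip by y inside every call, so it is the slower version whose comparison count is analyzed next; the refinement described below instead returns y-sorted lists and merges them.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_pair(points):
    return _closest(sorted(points))   # sort by x once at the top

def _closest(pts):                    # pts is sorted by x
    n = len(pts)
    if n <= 1:
        return math.inf
    if n <= 3:                        # tiny case: brute force
        return min(dist(p, q) for i, p in enumerate(pts) for q in pts[i+1:])
    mid = n // 2
    x_mid = pts[mid][0]               # separation line L
    delta = min(_closest(pts[:mid]), _closest(pts[mid:]))
    # keep only points within delta of L, sorted by y-coordinate
    strip = sorted((p for p in pts if abs(p[0] - x_mid) < delta),
                   key=lambda p: p[1])
    # compare each strip point to the later ones within delta in y
    for i, p in enumerate(strip):
        for q in strip[i+1:]:
            if q[1] - p[1] >= delta:
                break
            delta = min(delta, dist(p, q))
    return delta

# closest_pair([(0, 0), (3, 4), (1, 1), (5, 5)]) == math.sqrt(2)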

closest pair of points: analysis

Analysis, I: Let D(n) be the number of pairwise distance calculations in the Closest-Pair Algorithm when run on n ≥ 1 points.

  D(n) = 0 for n = 1;  D(n) ≤ 2·D(n/2) + 7n for n > 1   ⇒   D(n) = O(n log n)

BUT – that's only the number of distance calculations.

What if we counted comparisons?

closest pair of points: analysis

Analysis, II: Let C(n) be the number of comparisons between coordinates/distances in the Closest-Pair Algorithm when run on n ≥ 1 points.

  C(n) = 0 for n = 1;  C(n) ≤ 2·C(n/2) + k·n·log n for n > 1, for some constant k   ⇒   C(n) = O(n log² n)

Q. Can we achieve O(n log n)?

A. Yes. Don't sort points from scratch each time:
  Sort by x at top level only.
  Each recursive call returns δ and the list of all points sorted by y.
  Sort by merging two pre-sorted lists.

T(n) ≤ 2T(n/2) + O(n)  ⇒  T(n) = O(n log n)

is it worth the effort?

Code is longer & more complex, and O(n log n) vs O(n²) may hide 10x in the constant. How many points before it wins?

Speedup:

  n            n² / (10 n log₂ n)
  10                 0.3
  100                1.5
  1,000             10
  10,000            75
  100,000          602
  1,000,000      5,017
  10,000,000    43,004
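
The table is easy to regenerate; a throwaway Python script under the same assumed 10x constant-factor handicap:

import math

# speedup = n^2 / (10 n log2 n), i.e. n / (10 log2 n),
# with the slide's assumed 10x handicap for the n log n code
for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,}  {n / (10 * math.log2(n)):,.1f}")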

Going From Code to Recurrence

going from code to recurrence

Carefully define what you're counting, and write it down!
  "Let C(n) be the number of comparisons between sort keys used by MergeSort when sorting a list of length n ≥ 1"

In code, clearly separate base case from recursive case, highlight recursive calls, and operations being counted.

Write Recurrence(s)

merge sort

MS(A: array[1..n]) returns array[1..n] {
  If(n=1) return A;                         // Base Case
  New L: array[1..n/2] = MS(A[1..n/2]);     // Recursive calls
  New R: array[1..n/2] = MS(A[n/2+1..n]);   // Recursive calls
  Return(Merge(L,R));                       // One recursive level
}

Merge(A,B: array[1..n]) {
  New C: array[1..2n];
  a=1; b=1;
  For i = 1 to 2n {
    C[i] = "smaller of A[a], B[b] and a++ or b++";   // Operations being counted
  }
  Return C;
}

the recurrence

C(n) = 0                     if n = 1   (base case)
C(n) = 2C(n/2) + (n − 1)     if n > 1   (recursive calls, plus merge compares)

One compare per element added to the merged list, except the last.

Total time: proportional to C(n) (loops, copying data, parameter passing, etc.)

going from code to recurrence

Carefully define what you're counting, and write it down!
  "Let D(n) be the number of pairwise distance calculations in the Closest-Pair Algorithm when run on n ≥ 1 points"

In code, clearly separate base case from recursive case, highlight recursive calls, and operations being counted.

Write Recurrence(s)

closest pair algorithm (basic operations: distance calcs)

Closest-Pair(p1, …, pn) {
  if(n <= 1) return ∞                        // Base Case: 0

  Compute separation line L such that half the points
  are on one side and half on the other side.

  δ1 = Closest-Pair(left half)               // Recursive calls (2): 2D(n/2)
  δ2 = Closest-Pair(right half)
  δ = min(δ1, δ2)

  Delete all points further than δ from separation line L

  Sort remaining points p[1]…p[m] by y-coordinate.

  for i = 1..m                               // Basic operations at this
    k = 1                                    // recursive level: 7n
    while i+k <= m && p[i+k].y < p[i].y + δ
      δ = min(δ, distance between p[i] and p[i+k]);
      k++;

  return δ.
}

closest pair of points: analysis

Analysis, I: Let D(n) be the number of pairwise distance calculations in the Closest-Pair Algorithm when run on n ≥ 1 points.

  D(n) = 0 for n = 1;  D(n) ≤ 2·D(n/2) + 7n for n > 1   ⇒   D(n) = O(n log n)

BUT – that's only the number of distance calculations.

What if we counted comparisons?

going from code to recurrence

Carefully define what you're counting, and write it down!
  "Let C(n) be the number of comparisons between coordinates/distances in the Closest-Pair Algorithm when run on n ≥ 1 points"

In code, clearly separate base case from recursive case, highlight recursive calls, and operations being counted.

Write Recurrence(s)

closest pair algorithm (basic operations: comparisons)

Closest-Pair(p1, …, pn) {
  if(n <= 1) return ∞                        // Base Case: 0

  Compute separation line L such that half the points
  are on one side and half on the other side.   // k1·n log n

  δ1 = Closest-Pair(left half)               // Recursive calls (2): 2C(n/2)
  δ2 = Closest-Pair(right half)
  δ = min(δ1, δ2)                            // 1

  Delete all points further than δ from separation line L   // k2·n

  Sort remaining points p[1]…p[m] by y-coordinate.   // k3·n log n

  for i = 1..m                               // Basic operations at this
    k = 1                                    // recursive level: 7n
    while i+k <= m && p[i+k].y < p[i].y + δ
      δ = min(δ, distance between p[i] and p[i+k]);
      k++;

  return δ.
}

closest pair of points: analysis

Analysis, II: Let C(n) be the number of comparisons of coordinates/distances in the Closest-Pair Algorithm when run on n ≥ 1 points.

  C(n) = 0 for n = 1;  C(n) ≤ 2·C(n/2) + k4·n·log n for n > 1   ⇒   C(n) = O(n log² n)

for some k4 ≤ k1 + k2 + k3 + 7.

Q. Can we achieve time O(n log n)?

A. Yes. Don't sort points from scratch each time:
  Sort by x at top level only.
  Each recursive call returns δ and the list of all points sorted by y.
  Sort by merging two pre-sorted lists.

T(n) ≤ 2T(n/2) + O(n)  ⇒  T(n) = O(n log n)



Integer Multiplication


integer arithmetic

Add. Given two n-bit integers a and b, compute a + b.
  O(n) bit operations.

Multiply. Given two n-bit integers a and b, compute a × b.
  The "grade school" method: Θ(n²) bit operations.

[figure: worked examples in binary – column addition of two 8-bit numbers, and the grade-school multiplication tableau with one shifted partial product per bit]

divide & conquer multiplication: warmup

To multiply two 2-digit integers:
  Multiply four 1-digit integers.
  Add, shift some 2-digit integers to obtain result.

x = 10·x1 + x0
y = 10·y1 + y0
xy = (10·x1 + x0)·(10·y1 + y0)
   = 100·x1·y1 + 10·(x1·y0 + x0·y1) + x0·y0

Same idea works for long integers – can split them into 4 half-sized ints.

[worked example: 45 × 32 = 1440, from the partial products x0·y0 = 10, x0·y1 = 15, x1·y0 = 08, x1·y1 = 12]


divide & conquer multiplication: warmup

To multiply two n-bit integers:
  Multiply four ½n-bit integers.
  Add two ½n-bit integers, and shift to obtain result.

x = 2^(n/2)·x1 + x0
y = 2^(n/2)·y1 + y0
xy = (2^(n/2)·x1 + x0)·(2^(n/2)·y1 + y0)
   = 2^n·x1·y1 + 2^(n/2)·(x1·y0 + x0·y1) + x0·y0

T(n) = 4T(n/2) + Θ(n)  ⇒  T(n) = Θ(n²)
       [4 recursive calls, plus Θ(n) to add and shift; assumes n is a power of 2]

[figure: the binary worked example again, split into the four half-width partial products x0·y0, x0·y1, x1·y0, x1·y1]

key trick: 2 multiplies for the price of 1:

x = 2^(n/2)·x1 + x0
y = 2^(n/2)·y1 + y0
xy = 2^n·x1·y1 + 2^(n/2)·(x1·y0 + x0·y1) + x0·y0

α = x1 + x0
β = y1 + y0
αβ = (x1 + x0)·(y1 + y0)
   = x1·y1 + (x1·y0 + x0·y1) + x0·y0
so (x1·y0 + x0·y1) = αβ − x1·y1 − x0·y0

(Well, ok, "4 multiplies for the price of 3" is more accurate…)


Karatsuba multiplication

To multiply two n-bit integers:
  Add two ½n-bit integers.
  Multiply three ½n-bit integers.
  Add, subtract, and shift ½n-bit integers to obtain result.

x = 2^(n/2)·x1 + x0
y = 2^(n/2)·y1 + y0
xy = 2^n·x1·y1 + 2^(n/2)·(x1·y0 + x0·y1) + x0·y0
   = 2^n·A + 2^(n/2)·(B − A − C) + C

where only three distinct ½n-bit products appear: A = x1·y1, B = (x1+x0)·(y1+y0), C = x0·y0.

Theorem. [Karatsuba-Ofman, 1962] Can multiply two n-digit integers in O(n^1.585) bit operations.

T(n) ≤ T(⌊n/2⌋) + T(⌈n/2⌉) + T(1 + ⌈n/2⌉) + Θ(n)
       [3 recursive calls, plus Θ(n) to add, subtract, shift]

Sloppy version: T(n) ≤ 3T(n/2) + O(n)
  ⇒ T(n) = O(n^{log₂3}) = O(n^1.585)


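
For concreteness, a runnable Python sketch of Karatsuba on nonnegative integers, with the three half-size products labeled A, B, C as above. The cutoff of 64 is an arbitrary illustration choice; real implementations tune where to fall back on the hardware (or built-in) multiply.

def karatsuba(x, y, cutoff=64):
    # base case: small enough that the built-in multiply is fine
    if x < cutoff or y < cutoff:
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)   # x = 2^half * x1 + x0
    y1, y0 = y >> half, y & ((1 << half) - 1)   # y = 2^half * y1 + y0
    A = karatsuba(x1, y1, cutoff)               # x1*y1
    C = karatsuba(x0, y0, cutoff)               # x0*y0
    B = karatsuba(x1 + x0, y1 + y0, cutoff)     # (x1+x0)*(y1+y0)
    mid = B - A - C                             # = x1*y0 + x0*y1
    return (A << (2 * half)) + (mid << half) + C

# sanity check against the built-in multiply:
# karatsuba(1234567890123456789, 9876543210987654321) \
#     == 1234567890123456789 * 9876543210987654321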

multiplication – the bottom line

Naïve:      Θ(n²)
Karatsuba:  Θ(n^1.59…)

Amusing exercise: generalize Karatsuba to do 5 size-n/3 subproblems → Θ(n^1.46…)

Best known: Θ(n log n loglog n)
  "Fast Fourier Transform"
  but mostly unused in practice (unless you need really big numbers – a billion digits of π, say)

High precision arithmetic IS important for crypto.

Another D&C Example: Multiplying Polynomials

Similar ideas apply to polynomial multiplication.

We'll describe the basic ideas by multiplying polynomials rather than integers.

In fact, it's somewhat simpler: no carries!

Notes on Polynomials

These are just formal sequences of coefficients, so when we show something multiplied by x^k it just means shifted k places to the left – basically no work.

Usual polynomial multiplication:

        3x² + 2x + 2
      ×  x² - 3x + 1
      --------------
        3x² + 2x + 2
  -9x³ - 6x² - 6x
  3x⁴ + 2x³ + 2x²
  -----------------------
  3x⁴ - 7x³ - x² - 4x + 2

Polynomial Multiplication

Given: degree m-1 polynomials P and Q
  P = a_0 + a_1·x + a_2·x² + … + a_{m-2}·x^{m-2} + a_{m-1}·x^{m-1}
  Q = b_0 + b_1·x + b_2·x² + … + b_{m-2}·x^{m-2} + b_{m-1}·x^{m-1}

Compute: the degree 2m-2 polynomial P·Q
  P·Q = a_0·b_0 + (a_0·b_1 + a_1·b_0)·x + (a_0·b_2 + a_1·b_1 + a_2·b_0)·x² + … + (a_{m-2}·b_{m-1} + a_{m-1}·b_{m-2})·x^{2m-3} + a_{m-1}·b_{m-1}·x^{2m-2}

Obvious algorithm: compute all a_i·b_j and collect terms – Θ(m²) time.

Naïve Divide and Conquer

Assume m = 2k:

  P = (a_0 + a_1·x + … + a_{k-1}·x^{k-1}) + (a_k + a_{k+1}·x + … + a_{m-1}·x^{k-1})·x^k
    = P0 + P1·x^k
  Q = Q0 + Q1·x^k

  P·Q = (P0 + P1·x^k)·(Q0 + Q1·x^k)
      = P0·Q0 + (P1·Q0 + P0·Q1)·x^k + P1·Q1·x^{2k}

4 sub-problems of size k = m/2 plus linear combining:
  T(m) = 4T(m/2) + cm
  Solution: T(m) = O(m²)

Karatsuba's Algorithm

A better way to compute the terms:

Compute
  P0·Q0
  P1·Q1
  (P0+P1)·(Q0+Q1), which is P0·Q0 + P1·Q0 + P0·Q1 + P1·Q1

Then
  P0·Q1 + P1·Q0 = (P0+P1)·(Q0+Q1) − P0·Q0 − P1·Q1

3 sub-problems of size m/2 plus O(m) work:
  T(m) = 3T(m/2) + cm
  T(m) = O(m^α) where α = log₂3 = 1.585…

Karatsuba: Details

[figure: P splits into coefficient halves Pone | Pzero, Q into Qone | Qzero; the result R is assembled from Prod1, Mid, and Prod2 at offsets 0, m/2, and m of a (2m-1)-vector]

PolyMul(P, Q):
  // P, Q are length m = 2k vectors, with P[i], Q[i] being
  // the coefficient of x^i in polynomials P, Q respectively.
  if (m==1) return (P[0]*Q[0]);
  Let Pzero be elements 0..k-1 of P; Pone be elements k..m-1
  Qzero, Qone: similar
  Prod1 = PolyMul(Pzero, Qzero);   // result is a (2k-1)-vector
  Prod2 = PolyMul(Pone, Qone);     // ditto
  Pzo = Pzero + Pone;              // add corresponding elements
  Qzo = Qzero + Qone;              // ditto
  Prod3 = PolyMul(Pzo, Qzo);       // another (2k-1)-vector
  Mid = Prod3 – Prod1 – Prod2;     // subtract corresponding elements
  R = Prod1 + Shift(Mid, m/2) + Shift(Prod2, m)   // a (2m-1)-vector
  Return(R);
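
The same routine as a runnable Python sketch, a direct transcription onto coefficient lists (assuming, as the slide does, that both inputs have length m, a power of 2):

def poly_mul(P, Q):
    # P, Q are coefficient lists; P[i] is the coefficient of x^i
    m = len(P)
    if m == 1:
        return [P[0] * Q[0]]
    k = m // 2
    Pzero, Pone = P[:k], P[k:]          # P = Pzero + Pone * x^k
    Qzero, Qone = Q[:k], Q[k:]
    prod1 = poly_mul(Pzero, Qzero)
    prod2 = poly_mul(Pone, Qone)
    pzo = [a + b for a, b in zip(Pzero, Pone)]
    qzo = [a + b for a, b in zip(Qzero, Qone)]
    prod3 = poly_mul(pzo, qzo)
    mid = [c3 - c1 - c2 for c1, c2, c3 in zip(prod1, prod2, prod3)]
    # assemble R = prod1 + mid * x^k + prod2 * x^m, a (2m-1)-vector
    R = [0] * (2 * m - 1)
    for i, c in enumerate(prod1): R[i] += c
    for i, c in enumerate(mid):   R[i + k] += c
    for i, c in enumerate(prod2): R[i + m] += c
    return R

# (x + 1)(x + 2) = x^2 + 3x + 2:  poly_mul([1, 1], [2, 1]) == [2, 3, 1]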

Multiplication – The Bottom Line

Polynomials
  Naïve:      Θ(n²)
  Karatsuba:  Θ(n^1.585…)
  Best known: Θ(n log n) – "Fast Fourier Transform"

Integers
  Similar, but some ugly details re: carries, etc. give Θ(n log n loglog n), but mostly unused in practice.

d & c summary

Idea:
  "Two halves are better than a whole" – if the base algorithm has super-linear complexity.
  "If a little's good, then more's better" – repeat above, recursively.

Applications: Many. Binary Search, Merge Sort, (Quicksort), Closest points, Integer multiply, …

Recurrences

Above: where they come from, how to find them.
Next: how to solve them.

mergesort (review)

Mergesort: (recursively) sort 2 half-lists, then merge results.

T(n) = 2T(n/2) + cn, n ≥ 2
T(1) = 0
Solution: Θ(n log n) (details now)

[recursion tree: log n levels, O(n) work per level]

Solve:  T(1) = c;  T(n) = 2T(n/2) + cn

Level   Num          Size           Work
0       1 = 2^0      n              c·n
1       2 = 2^1      n/2            2·c·n/2
2       4 = 2^2      n/4            4·c·n/4
…       …            …              …
i       2^i          n/2^i          2^i·c·n/2^i
…       …            …              …
k-1     2^(k-1)      n/2^(k-1)      2^(k-1)·c·n/2^(k-1)
k       2^k          n/2^k = 1      2^k·T(1)

n = 2^k;  k = log₂n

Total Work: c·n·(1 + log₂n)   (add the last column)

Solve:  T(1) = c;  T(n) = 4T(n/2) + cn

Level   Num          Size           Work
0       1 = 4^0      n              c·n
1       4 = 4^1      n/2            4·c·n/2
2       16 = 4^2     n/4            16·c·n/4
…       …            …              …
i       4^i          n/2^i          4^i·c·n/2^i
…       …            …              …
k-1     4^(k-1)      n/2^(k-1)      4^(k-1)·c·n/2^(k-1)
k       4^k          n/2^k = 1      4^k·T(1)

n = 2^k;  k = log₂n;  note 4^k = (2²)^k = (2^k)² = n²

Total Work: T(n) = Σ_{i=0}^{k} 4^i·c·n/2^i = O(n²)

Solve:  T(1) = c;  T(n) = 3T(n/2) + cn

Level   Num          Size           Work
0       1 = 3^0      n              c·n
1       3 = 3^1      n/2            3·c·n/2
2       9 = 3^2      n/4            9·c·n/4
…       …            …              …
i       3^i          n/2^i          3^i·c·n/2^i
…       …            …              …
k-1     3^(k-1)      n/2^(k-1)      3^(k-1)·c·n/2^(k-1)
k       3^k          n/2^k = 1      3^k·T(1)

n = 2^k;  k = log₂n

Total Work: T(n) = Σ_{i=0}^{k} 3^i·c·n/2^i

a useful identity

Theorem: 1 + x + x² + x³ + … + x^k = (x^{k+1} − 1)/(x − 1)

Proof:
  y       = 1 + x + x² + x³ + … + x^k
  x·y     =     x + x² + x³ + … + x^k + x^{k+1}
  x·y − y = x^{k+1} − 1
  y·(x−1) = x^{k+1} − 1
  y       = (x^{k+1} − 1)/(x − 1)

Solve: T(1) = c, T(n) = 3T(n/2) + cn (cont.)

T(n) = Σ_{i=0}^{k} 3^i·c·n/2^i
     = c·n·Σ_{i=0}^{k} 3^i/2^i
     = c·n·Σ_{i=0}^{k} (3/2)^i
     = c·n·((3/2)^{k+1} − 1) / ((3/2) − 1)

Solve: T(1) = c, T(n) = 3T(n/2) + cn (cont.)

c·n·((3/2)^{k+1} − 1) / ((3/2) − 1)
   = 2·c·n·((3/2)^{k+1} − 1)
   < 2·c·n·(3/2)^{k+1}
   = 3·c·n·(3/2)^k
   = 3·c·n·(3^k / 2^k)

Solve: T(1) = c, T(n) = 3T(n/2) + cn (cont.)

3·c·n·(3^k / 2^k) = 3·c·n·(3^{log₂n} / 2^{log₂n})
                  = 3·c·n·(3^{log₂n} / n)
                  = 3·c·3^{log₂n}
                  = 3·c·n^{log₂3}
                  = O(n^{1.585…})

(using the identity a^{log_b n} = (b^{log_b a})^{log_b n} = (b^{log_b n})^{log_b a} = n^{log_b a})
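
A quick numeric sanity check of this solution; a throwaway Python sketch that evaluates the recurrence directly and watches T(n)/n^{log₂3} settle toward a constant:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n, c=1):
    # T(1) = c;  T(n) = 3*T(n/2) + c*n, for n a power of 2
    return c if n == 1 else 3 * T(n // 2, c) + c * n

# The derivation above gives T(n) = 3c*n^(log2 3) - 2c*n exactly,
# so the printed ratio approaches 3.
for k in (4, 8, 12, 16, 20):
    n = 2 ** k
    print(n, T(n) / n ** math.log2(3))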

divide and conquer – master recurrence

T(n) = a·T(n/b) + c·n^k for n > b. Then:

  a > b^k  ⇒  T(n) = Θ(n^{log_b a})   [many subproblems → leaves dominate]

  a < b^k  ⇒  T(n) = Θ(n^k)           [few subproblems → top level dominates]

  a = b^k  ⇒  T(n) = Θ(n^k·log n)     [balanced → all log n levels contribute]

Fine print: a ≥ 1; b > 1; c, d, k ≥ 0; T(1) = d; n = b^t for some t > 0; a, b, k, t integers. True even if it is ⌈n/b⌉ instead of n/b.
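
As a quick check, the recurrences solved on the earlier slides fall straight out of these three cases:

  Mergesort:            T(n) = 2T(n/2) + cn:  a=2, b=2, k=1, a = b^k  ⇒  Θ(n log n)
  Naïve 4-way multiply: T(n) = 4T(n/2) + cn:  a=4, b=2, k=1, a > b^k  ⇒  Θ(n^{log₂4}) = Θ(n²)
  Karatsuba:            T(n) = 3T(n/2) + cn:  a=3, b=2, k=1, a > b^k  ⇒  Θ(n^{log₂3}) ≈ Θ(n^{1.585})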

master recurrence: proof sketch

Expanding the recurrence as in the earlier examples gives

  T(n) = n^h·(d + c·S)

where h = log_b(a) (the tree height) and S = Σ_{j=1}^{log_b n} x^j, with x = b^k/a.

If c = 0 the sum S is irrelevant, and T(n) = O(n^h): all the work happens in the base cases, of which there are n^h, one for each leaf in the recursion tree.

If c > 0, then the sum matters, and splits into 3 cases (like the previous slide):

  if x < 1, then S < x/(1−x) = O(1). [S is just the first log n terms of the infinite series with that sum]

  if x = 1, then S = log_b(n) = O(log n). [all terms in the sum are 1 and there are that many terms]

  if x > 1, then S = (x^{1+log_b(n)} − x)/(x − 1). After some algebra, n^h·S = O(n^k).

another d&c example: fast exponentiation

Power(a,n)
  Input: integer n and number a
  Output: a^n

Obvious algorithm: n−1 multiplications.

Observation: if n is even, n = 2m, then a^n = a^m·a^m

divide & conquer algorithm

Power(a,n)
  if n = 0 then return(1)
  if n = 1 then return(a)
  x ← Power(a,⌊n/2⌋)
  x ← x·x
  if n is odd then
    x ← a·x
  return(x)
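
The same algorithm as runnable Python, a direct transcription:

def power(a, n):
    # compute a**n with O(log n) multiplies by repeated squaring
    if n == 0:
        return 1
    if n == 1:
        return a
    x = power(a, n // 2)   # a^(n//2)
    x = x * x              # a^(2*(n//2))
    if n % 2 == 1:         # n odd: one extra factor of a
        x = a * x
    return x

# power(3, 10) == 59049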

analysis

Let M(n) be the number of multiplies.

Worst-case recurrence:
  M(n) = 0 for n ≤ 1
  M(n) = M(⌊n/2⌋) + 2 for n > 1

By the master theorem: M(n) = O(log n)   (a=1, b=2, k=0)

More precise analysis:
  M(n) = ⌊log₂n⌋ + (# of 1's in n's binary representation) − 1

Time is O(M(n)) if the numbers fit in a word; else it also depends on their length and the multiplication algorithm.

a practical application - RSA

Instead of a^n we want a^n mod N. Since
  a^(i+j) mod N = ((a^i mod N)·(a^j mod N)) mod N
the same algorithm applies, with each x·y replaced by ((x mod N)·(y mod N)) mod N.

In the RSA cryptosystem (widely used for security):
  need a^n mod N where a, n, N each typically have 1024 bits
  Power: at most 2048 multiplies of 1024-bit numbers – relatively easy for modern machines
  Naïve algorithm: 2^1024 multiplies
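
The modular variant in the same style; note that Python's built-in pow(a, n, N) computes exactly this:

def mod_power(a, n, N):
    # compute a^n mod N, reducing mod N after every multiply
    if n == 0:
        return 1 % N
    if n == 1:
        return a % N
    x = mod_power(a, n // 2, N)
    x = (x * x) % N
    if n % 2 == 1:
        x = (a * x) % N
    return x

# mod_power(3, 10, 1000) == 49   (59049 mod 1000)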

d & c summary

Idea:
  "Two halves are better than a whole" – if the base algorithm has super-linear complexity.
  "If a little's good, then more's better" – repeat above, recursively.

Analysis: recursion tree or Master Recurrence.

Applications: Many. Binary Search, Merge Sort, (Quicksort), Closest points, Integer multiply, exponentiation, …
