Lecture 3

This lecture discusses recurrence relations and their use in analyzing algorithms. Recurrence relations model problems that can be broken down into smaller subproblems. The lecture covers solving recurrence relations by guessing solutions and proving them by induction, backsubstitution to find patterns, recursion trees to visualize the process, and the Master Theorem for instant asymptotic bounds. Examples are given of recurrences for matrix multiplication, polygon triangulation, and sorting algorithms such as mergesort.

Recurrence Relations

Many algorithms, particularly divide-and-conquer algorithms, have time complexities which are naturally modeled by recurrence relations.
A recurrence relation is an equation which is defined in terms of itself.
Why are recurrences good things?
1. Many natural functions are easily expressed as recurrences (see the sketch after this list):
   a_n = a_{n-1} + 1, a_1 = 1  →  a_n = n  (polynomial)
   a_n = 2a_{n-1}, a_1 = 1  →  a_n = 2^{n-1}  (exponential)
   a_n = n · a_{n-1}, a_1 = 1  →  a_n = n!  (weird function)

2. It is often easy to find a recurrence as the solution of a counting problem. Solving the recurrence can be done for many special cases as we will see, although it is somewhat of an art.
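
As a quick sanity check, here is a minimal Python sketch (my addition, not part of the original notes) that iterates each of the three recurrences above and compares the values against the claimed closed forms:

    import math

    def check(step, closed, n_max=10):
        """Iterate a_n = step(n, a_{n-1}) from a_1 = 1 and compare to closed(n)."""
        a = 1
        for n in range(2, n_max + 1):
            a = step(n, a)
            assert a == closed(n), (n, a, closed(n))

    check(lambda n, a: a + 1, lambda n: n)                  # a_n = n
    check(lambda n, a: 2 * a, lambda n: 2 ** (n - 1))       # a_n = 2^{n-1}
    check(lambda n, a: n * a, lambda n: math.factorial(n))  # a_n = n!
    print("all three closed forms hold up to n = 10")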

Recursion is Mathematical Induction!

In both, we have general and boundary conditions, with the general condition breaking the problem into smaller and smaller pieces.
The initial or boundary conditions terminate the recursion.
As we will see, induction provides a useful tool to solve recurrences: guess a solution and prove it by induction.
T_n = 2T_{n-1} + 1, T_0 = 0

  n   | 0  1  2  3  4   5   6   7
  T_n | 0  1  3  7  15  31  63  127

Guess what the solution is?

Prove T_n = 2^n − 1 by induction:
1. Show that the basis is true: T_0 = 2^0 − 1 = 0.
2. Now assume it is true for T_{n-1}.
3. Using this assumption, show:
   T_n = 2T_{n-1} + 1 = 2(2^{n-1} − 1) + 1 = 2^n − 1

Solving Recurrences

No general procedure for solving recurrence relations is known, which is why it is an art. My approach is:

Realize that linear, finite history, constant coefficient recurrences can always be solved.

Check out any combinatorics or differential equations book for a procedure.
Consider a_n = 2a_{n-1} + 2a_{n-2} + 1, a_1 = 1, a_2 = 1.
It has history = 2, degree = 1, and coefficients of 2 and 2. Thus it can be solved mechanically! Proceed:

Find the characteristic equation, e.g. α² − 2α − 2 = 0.

Solve to get the roots α = 1 ± √3, which appear in the exponents.

Take care of repeated roots and inhomogeneous parts.

Find the constants to finish the job:

  a_n = −1/3 − (1 + √3)(1 − √3)^n / 3 + (√3 − 1)(1 + √3)^n / 3

Systems like Mathematica and Maple have packages for doing this.
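
Before reaching for Mathematica, a short Python sketch (my own check, not from the notes) can confirm the closed form against the recurrence:

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def a_rec(n):
        # a_n = 2 a_{n-1} + 2 a_{n-2} + 1, with a_1 = a_2 = 1
        return 1 if n <= 2 else 2 * a_rec(n - 1) + 2 * a_rec(n - 2) + 1

    def a_closed(n):
        r = math.sqrt(3.0)
        return -1/3 - (1 + r) * (1 - r)**n / 3 + (r - 1) * (1 + r)**n / 3

    for n in range(1, 15):
        assert abs(a_rec(n) - a_closed(n)) < 1e-6 * a_rec(n)
    print("the closed form matches the recurrence for n = 1..14")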

Guess a solution and prove by induction

To guess the solution, play around with small values for insight.
Note that you can do inductive proofs with big-O notation; just be sure you use it correctly.
Example: Show that T(n) ≤ c·n lg n for large enough c and n.
Assume that it is true for ⌊n/2⌋. Then

  T(n) ≤ 2c⌊n/2⌋ lg(⌊n/2⌋) + n
       ≤ c·n lg(n/2) + n        (dropping the floors only makes it bigger)
       = c·n (lg n − 1) + n     (log of a quotient: lg(n/2) = lg n − lg 2 = lg n − 1)
       = c·n lg n − c·n + n
       ≤ c·n lg n               (whenever c ≥ 1)

Starting with basis cases T(2) = 4 and T(3) = 5 lets us complete the proof for c ≥ 2.
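
As an empirical companion to the induction (my addition, assuming the recurrence behind this proof is mergesort's T(n) = 2T(⌊n/2⌋) + n with the basis cases just stated), the bound with c = 2 can be checked numerically:

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = 2 T(floor(n/2)) + n, with basis T(2) = 4 and T(3) = 5
        if n == 2:
            return 4
        if n == 3:
            return 5
        return 2 * T(n // 2) + n

    for n in range(4, 2000):
        assert T(n) <= 2 * n * math.log2(n)   # the claimed c n lg n bound, c = 2
    print("T(n) <= 2 n lg n holds for n = 4..1999")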

Try backsubstituting until you know what is going on

Also known as the iteration method. Plug the recurrence back into itself until you see a pattern.
Example: T(n) = 3T(⌊n/4⌋) + n. Try backsubstituting:

  T(n) = n + 3(⌊n/4⌋ + 3T(⌊n/16⌋))
       = n + 3⌊n/4⌋ + 9(⌊n/16⌋ + 3T(⌊n/64⌋))
       = n + 3⌊n/4⌋ + 9⌊n/16⌋ + 27T(⌊n/64⌋)

The pattern of (3/4)^i · n terms should now be obvious.


Although there are only log₄ n terms before we get down to T(1), it doesn't hurt to sum the series out to infinity, since it is a decreasing geometric series:

  T(n) ≤ n · Σ_{i=0}^{∞} (3/4)^i + Θ(n^{log₄ 3} · T(1))

  T(n) = 4n + o(n) = O(n)
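
A quick empirical check (my own sketch; the basis T(n) = 1 for n < 4 is an assumption, since the notes leave it unstated) shows the ratio T(n)/n staying below the predicted constant 4:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = 3 T(floor(n/4)) + n, with assumed basis T(n) = 1 for n < 4
        return 1 if n < 4 else 3 * T(n // 4) + n

    for n in [10, 100, 10_000, 1_000_000]:
        print(n, T(n), T(n) / n)   # the ratio approaches but never exceeds 4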

Recursion Trees

Drawing a picture of the backsubstitution process gives you an idea of what is going on.
We must keep track of two things: (1) the size of the remaining argument to the recurrence, and (2) the additive stuff to be accumulated during this call.
Example: T(n) = 2T(n/2) + n²

[Figure: recursion tree for T(n) = 2T(n/2) + n². The root contributes n²; its two T(n/2) children contribute (n/2)² each, a level total of n²/2; the four T(n/4) grandchildren contribute (n/4)² each, a level total of n²/4; and so on.]

The remaining arguments are on the left, the additive terms on the right.
Although this tree has height lg n, the total sum at each level decreases geometrically, so:

  T(n) ≤ n² · Σ_{i=0}^{∞} (1/2)^i = 2n², so T(n) = Θ(n²)

The recursion tree framework made this much easier to see than with algebraic backsubstitution.
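
To make the geometric decrease concrete, here is a small sketch (my addition) that prints the per-level totals of this recursion tree:

    n = 1024  # any power of two works for this illustration
    level_totals = []
    nodes, size = 1, n
    while size >= 1:
        level_totals.append(nodes * size**2)   # 2^i nodes, each contributing (n/2^i)^2
        nodes, size = nodes * 2, size // 2

    print(level_totals[:5])          # n^2, n^2/2, n^2/4, ... halving at each level
    print(sum(level_totals) / n**2)  # close to 2, so T(n) = Theta(n^2)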

See if you can use the Master Theorem to provide an instant asymptotic solution

The Master Theorem: Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence

  T(n) = aT(n/b) + f(n)

where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows:
1. If f(n) = O(n^{log_b a − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} lg n).
3. If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
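
When f(n) is a simple polynomial n^k, the three cases reduce to comparing k with the critical exponent log_b a. The following Python sketch mechanizes that comparison (my own illustration; the name master_bound is made up, not a standard library function):

    import math

    def master_bound(a, b, k):
        """Solve T(n) = a T(n/b) + n^k asymptotically via the Master Theorem.

        Handles only f(n) = n^k, where case 3's regularity condition
        a (n/b)^k <= c n^k holds automatically (take c = a / b^k < 1).
        """
        crit = math.log(a, b)             # the critical exponent log_b a
        if abs(k - crit) < 1e-9:          # case 2: all levels contribute equally
            return f"Theta(n^{k} lg n)"
        if k < crit:                      # case 1: the leaves dominate
            return f"Theta(n^{crit:g})"
        return f"Theta(n^{k})"            # case 3: the root's f(n) dominates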

Examples of the Master Theorem

Which case of the Master Theorem applies?

T(n) = 4T(n/2) + n
Reading from the equation, a = 4, b = 2, and f(n) = n.
Is n = O(n^{log₂ 4 − ε}) = O(n^{2 − ε})?
Yes, for any 0 < ε ≤ 1, so case 1 applies and T(n) = Θ(n²).

T(n) = 4T(n/2) + n²
Reading from the equation, a = 4, b = 2, and f(n) = n².
Is n² = O(n^{2 − ε})?
No, if ε > 0, but it is true if ε = 0, so case 2 applies and T(n) = Θ(n² lg n).

T(n) = 4T(n/2) + n³
Reading from the equation, a = 4, b = 2, and f(n) = n³.
Is n³ = Ω(n^{log₂ 4 + ε}) = Ω(n^{2 + ε})?
Yes, for any 0 < ε ≤ 1, so case 3 might apply.
Is 4(n/2)³ ≤ c·n³?
Yes, since 4(n/2)³ = n³/2, so any c ≥ 1/2 works; a constant c < 1 satisfying the regularity condition exists, and case 3 applies: T(n) = Θ(n³).
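
Assuming the master_bound sketch from the previous section is in scope, it reproduces all three answers:

    print(master_bound(4, 2, 1))   # Theta(n^2)      (case 1)
    print(master_bound(4, 2, 2))   # Theta(n^2 lg n) (case 2)
    print(master_bound(4, 2, 3))   # Theta(n^3)      (case 3)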

Why should the Master Theorem be true?

Consider T(n) = aT(n/b) + f(n).

Suppose f(n) is small enough

Say f(n) = 0, i.e. T(n) = aT(n/b).
Then we have a recursion tree where the only contribution is at the leaves.
There will be log_b n levels, with a^l leaves at level l, so

  T(n) = a^{log_b n} = n^{log_b a}   (Theorem 2.9 in CLR)

So long as f(n) is small enough that it is dwarfed by this, we have case 1 of the Master Theorem!
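
The identity a^{log_b n} = n^{log_b a} used above is easy to confirm numerically (a quick sketch of my own):

    import math

    a, b, n = 7, 2, 4096
    print(a ** math.log(n, b))   # 7^(log_2 4096) = 7^12
    print(n ** math.log(a, b))   # 4096^(log_2 7), the same number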

Suppose f(n) is large enough

If we draw the recursion tree for T(n) = aT(n/b) + f(n):

[Figure: recursion tree with f(n) contributed at the root, f(n/b) at each of its a children, and so on down the levels.]

If f(n) is a big enough function, the one top call can be bigger than the sum of all the little calls.
Example: f(n) = n³ > (n/3)³ + (n/3)³ + (n/3)³ = n³/9. In fact, with a subproblems of size n/3 this holds unless a ≥ 27!
In case 3 of the Master Theorem, the additive term
dominates.
In case 2, both parts contribute equally, which is why
the log pops up. It is (usually) what we want to have
happen in a divide and conquer algorithm.

Famous Algorithms and their Recurrences
Matrix Multiplication

The standard matrix multiplication algorithm for two n × n matrices is O(n³).

[Figure: a worked example of a 3 × 3 matrix product.]

Strassen discovered a divide-and-conquer algorithm which takes T(n) = 7T(n/2) + O(n²) time.
Since O(n^{lg 7}) dwarfs O(n²), case 1 of the Master Theorem applies and T(n) = O(n^{2.81}).
This has been "improved" by more and more complicated recurrences until the current best is O(n^{2.38}).
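
To make the seven-multiplication structure concrete, here is a compact Strassen sketch in Python (my own illustration, not from the notes; it uses numpy, assumes n is a power of two, and the cutoff of 64 is arbitrary):

    import numpy as np

    def strassen(A, B):
        """Multiply square matrices A and B; n is assumed to be a power of two."""
        n = A.shape[0]
        if n <= 64:                       # small case: fall back to the classical method
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        # Strassen's seven half-size products (the classical method needs eight)
        M1 = strassen(A11 + A22, B11 + B22)
        M2 = strassen(A21 + A22, B11)
        M3 = strassen(A11, B12 - B22)
        M4 = strassen(A22, B21 - B11)
        M5 = strassen(A11 + A12, B22)
        M6 = strassen(A21 - A11, B11 + B12)
        M7 = strassen(A12 - A22, B21 + B22)
        # Reassemble the four quadrants with O(n^2) additions
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4,           M1 - M2 + M3 + M6]])

Each call makes seven half-size recursive calls plus a constant number of O(n²) matrix additions, which is exactly the recurrence T(n) = 7T(n/2) + O(n²) above.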

Polygon Triangulation

Given a polygon in the plane, add diagonals so that each face is a triangle. None of the diagonals are allowed to cross.

Triangulation is an important first step in many geometric algorithms.

The simplest algorithm might be to try each pair of points and check if they see each other. If so, add the diagonal and recur on both halves, for a total of O(n³).
However, Chazelle gave an algorithm which runs in T(n) = 2T(n/2) + O(√n) time. Since √n = n^{1/2} = O(n^{1−ε}), by case 1 of the Master Theorem, Chazelle's algorithm is linear, i.e. T(n) = O(n).

Sorting

The classic divide-and-conquer recurrence is mergesort's T(n) = 2T(n/2) + O(n), which divides the data into equal-sized halves and spends linear time merging the halves after they are sorted.
Since n = Θ(n^{log₂ 2}) = Θ(n), but n ≠ O(n^{1−ε}) for any ε > 0, case 2 of the Master Theorem applies and T(n) = O(n log n).
In case 2, the divide and merge steps balance out perfectly, as we usually hope for from a divide-and-conquer algorithm.

Mergesort Animations

[Figures: mergesort animations, reproduced from Sedgewick.]

Approaches to Algorithm Design

Incremental

The job is partly done: do a little more, and repeat until done.
A good example of this approach is insertion sort, sketched below.
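
A minimal sketch (my addition, not from the notes); after each outer iteration the prefix a[:i+1] is sorted, so the job really is "partly done":

    def insertion_sort(a):
        """Sort the list a in place; after iteration i, a[:i+1] is sorted."""
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:   # shift larger elements one slot right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key                 # drop the new element into place
        return a

    print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]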

Divide-and-Conquer

A recursive technique:
Divide the problem into subproblems of the same kind.
For subproblems that are really small (trivial), solve them directly; otherwise solve them recursively. (conquer)
Combine the subproblem solutions to solve the whole thing. (combine)
A good example of this approach is mergesort, sketched below.
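
And a matching mergesort sketch (my own illustration), with the divide, conquer, and combine steps marked:

    def mergesort(a):
        if len(a) <= 1:               # trivial subproblem: solve directly
            return a
        mid = len(a) // 2
        left = mergesort(a[:mid])     # divide, then conquer each half recursively
        right = mergesort(a[mid:])
        merged, i, j = [], 0, 0       # combine: merge sorted halves in linear time
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(mergesort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]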
