Algorithm Analysis

Analysis of algorithms

● Issues:
• correctness
• time efficiency
• space efficiency
• optimality

● Approaches:
• empirical analysis – less useful
• theoretical analysis – most important

Source: A. Levitin, "Introduction to the Design & Analysis of Algorithms," 3rd ed., Ch. 2. ©2012 Pearson Education, Inc. Upper Saddle River, NJ. All Rights Reserved.
Empirical analysis of time efficiency
● Select a specific sample of inputs

● Use a physical unit of time (e.g., milliseconds) or count the actual number of basic-operation executions

● Analyze the empirical data

● Problem – empirical analysis is inefficient:
• Must implement the algorithm
• Must run it on many data sets to see the effects of scaling
• Hard to see patterns in the measured data

Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size.

● Basic operation: the operation that contributes most
towards the running time of the algorithm

T(n) ≈ c_op C(n)

where T(n) is the running time, c_op is the execution time of the
basic operation, C(n) is the number of times the basic operation
is executed, and n is the input size.

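A quick numeric illustration of the formula (numbers chosen here for illustration, not from the slides): if c_op ≈ 10 ns and C(n) = n(n-1)/2, then for n = 10,000 we get T(n) ≈ 10^-8 s · 5·10^7 ≈ 0.5 s; doubling n multiplies C(n), and hence T(n), by roughly 4.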
Input size and basic operation examples

● Searching for a key in a list of n items
  Input size measure: number of the list's items, i.e. n
  Basic operation: key comparison

● Multiplication of two matrices
  Input size measure: matrix dimensions or total number of elements
  Basic operation: multiplication of two numbers

● Checking primality of a given integer n
  Input size measure: 'size' = number of digits (in binary representation)
  Basic operation: division

● Typical graph problem
  Input size measure: #vertices and/or edges
  Basic operation: visiting a vertex or traversing an edge

Counting Operations
Consider counting steps in FindMax (see the counting sketch after this slide)
● Problems:
• Hard to analyze
• May not need precise information
  – Precise details are less relevant than the order of growth
  – May not know the times (or relative times) of the steps
  – Only gives results to within constant factors (constants become relevant later)
  – aC(n) ≤ T(n) ≤ bC(n)

● More interested in growth rates (i.e. Big O), but

● careful analysis can compare algorithms with the same growth rate
• if needed
• Knuth examples

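A minimal sketch of what "counting steps" means for FindMax (Python used for illustration; the counter tracks key comparisons, the usual basic operation):

def find_max(a):
    # returns (maximum element, number of comparisons performed)
    comparisons = 0
    current_max = a[0]
    for x in a[1:]:
        comparisons += 1          # one basic operation per loop iteration
        if x > current_max:
            current_max = x
    return current_max, comparisons

# For a list of n items this always performs n - 1 comparisons,
# so C(n) = n - 1 regardless of the data.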
Best-case, average-case, worst-case
For a given input size, how does the algorithm perform
on different datasets of that size?
For datasets of size n, identify the datasets that give:
● Worst case: Cworst(n) – maximum over all inputs of size n
● Best case: Cbest(n) – minimum over all inputs of size n

● Average case: Cavg(n) – "average" over inputs of size n
• Typical input, NOT the average of worst and best case
• Analysis requires knowing the distribution of all possible inputs of size n
• Can consider ALL POSSIBLE input sets of size n and average over all of them

● Some algorithms are the same for all three (i.e. every-case performance)
Example: Sequential search

● Worst case
● Best case
● Average case: depends on assumptions about the input:
the proportion of found vs. not-found keys

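For reference, a minimal sequential-search sketch (Python, illustrative):

def sequential_search(a, key):
    # returns the index of key in a, or -1 if key is not present
    for i, x in enumerate(a):
        if x == key:              # basic operation: key comparison
            return i
    return -1

# Worst case: key is last or absent, C_worst(n) = n comparisons.
# Best case: key is first, C_best(n) = 1 comparison.
# Average case: depends on the probability that the key is present and where;
# e.g. for a successful search with every position equally likely,
# about (n + 1)/2 comparisons.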
Example: Find maximum

● Worst case:
● Best case:
● Average case:
● All case:

Critical factor for analysis: Growth rate

● Most important: Order of growth as n→∞


• What is the growth rate of time as input size increases?
• How does time increase as input size increases?

Growth rate: critical for performance

Focus: asymptotic order of growth

Main concern: which function describes the behavior;
less concerned with constants
Asymptotic order of growth
Critical factor for problem size n:
- IS NOT the exact number of basic ops executed for a given n
- IS how the number of basic ops GROWS as n increases

- Constant factors and additive constants do not change the growth RATE
- The rate matters most for large input sizes, so ignore small sizes
- Informally: 5n^2 and 100n^2 + 1000 both grow as n^2
- Defined formally later

Call this: Asymptotic Order of Growth

Basic asymptotic efficiency classes
1         constant                Best case
log n     logarithmic             Divide, ignore part
n         linear                  Examine each item; online/stream algorithms
n log n   n-log-n (linearithmic)  Divide, use all parts
n^2       quadratic               Nested loops
n^3       cubic                   Nested loops
n^k                               Examine all k-tuples
2^n       exponential             All subsets
n!        factorial               All permutations


Order of growth: Sets of functions
Approach: group functions based on their growth rates

Example: Θ(n^2) is the set of functions whose growth rate is n^2
(i.e. group all functions with n^2 growth rate)

These are all in Θ(n^2):
- f(n) = 100n^2 + 1000
- f(n) = n^2 + 1
- f(n) = 0.001n^2 + 1000000

- They are also in Θ(10n^2), but we rarely say that

Order of growth: Upper, tight, lower bounds

More formally:
- Θ(g(n)): class of functions f(n) that grow at the same rate as g(n)

Upper, tight, and lower bounds on performance:

● O(g(n)): class of functions f(n) that grow no faster than g(n)
• [i.e. f is bounded above by a constant multiple of g; an algorithm with cost f is at least as fast as one with cost g]

● Θ(g(n)): class of functions f(n) that grow at the same rate as g(n)

● Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)
• [i.e. f is bounded below by a constant multiple of g; an algorithm with cost f is at most as fast as one with cost g]

t(n) ∈ O(g(n)) iff t(n) ≤ c g(n) for all n ≥ n0

t(n) = 10n^3 is in O(n^3) and in O(n^5). What c and n0? More later.
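One possible answer to "What c and n0?" (an illustration; many choices work): for 10n^3 ∈ O(n^3), take c = 10 and n0 = 1, since 10n^3 ≤ 10·n^3 for all n ≥ 1; the same c = 10 and n0 = 1 also work for O(n^5), since n^3 ≤ n^5 whenever n ≥ 1.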
t(n) ∈ Ω(g(n)) iff t(n) ≥ c g(n) for all n ≥ n0

t(n) = 10n^3 is in Ω(n^2) and in Ω(n^3)

t(n) ∈ Θ(g(n)) iff t(n) ∈ O(g(n)) and t(n) ∈ Ω(g(n))

t(n) = 10n^3 is in Θ(n^3) but NOT in Θ(n^2) or Θ(n^4)

Terminology

Informally: f(n) is O(g) means f(n) ∈ O(g)

Example: 10n is O(n^2)

Informally: f(n) = O(g) means f(n) ∈ O(g)

Example: 10n = O(n^2)

Use similar terms for Big Omega and Big Theta

Big O, Ω, Θ : Informal Definitions

f(n) is in O(g(n)) if the order of growth of f(n) ≤ the order of growth
of g(n) (within a constant multiple and for large n).

f(n) is in Ω(g(n)) if the order of growth of f(n) ≥ the order of growth
of g(n) (within a constant multiple and for large n).

f(n) is in Θ(g(n)) if the order of growth of f(n) = the order of growth
of g(n) (within constant multiples and for large n).

Big O, Ω, Θ : Formal Definitions

f(n) ∈ O(g(n)) iff there exist a positive constant c and a
non-negative integer n0 such that
    f(n) ≤ c g(n) for every n ≥ n0

f(n) ∈ Ω(g(n)) iff there exist a positive constant c and a
non-negative integer n0 such that
    f(n) ≥ c g(n) for every n ≥ n0

f(n) ∈ Θ(g(n)) iff there exist positive constants c1 and c2 and a
non-negative integer n0 such that
    c1 g(n) ≤ f(n) ≤ c2 g(n) for every n ≥ n0

Big O, Ω, Θ: Why Constant c

What is gained by the constant c:
    f(n) ≤ c g(n) for every n ≥ n0

What do you think?

Allows c g(n) to exceed f(n)
Allows us to ignore leading constants in f(n)

For example, 0.1n^2, n^2, and 10n^2
all have the same growth rate: n^2
Meaning: when the input size doubles, the time increases roughly 4x

Using the Definition: Big O

Informal Definition: f(n) is in O(g(n)) if the order of growth of
f(n) ≤ the order of growth of g(n) (within a constant multiple).

Definition: f(n) ∈ O(g(n)) iff there exist a positive constant c
and a non-negative integer n0 such that
    f(n) ≤ c g(n) for every n ≥ n0

Examples:
● 10n is O(n^2)
• [Can choose c and n0. Solve for 2 different c's]

● 5n + 20 is O(n) [Solve for c and n0; a worked choice follows below]


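One worked choice of constants (illustrative; many others work): for 5n + 20 ∈ O(n), note 5n + 20 ≤ 5n + 20n = 25n for all n ≥ 1, so c = 25 and n0 = 1 satisfy the definition; alternatively c = 6 and n0 = 20, since 5n + 20 ≤ 6n once n ≥ 20. For 10n ∈ O(n^2), 10n ≤ 10n^2 for n ≥ 1 (c = 10, n0 = 1), or 10n ≤ n^2 for n ≥ 10 (c = 1, n0 = 10).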
Using the Definition: Big Omega

Informal Definition: f(n) is in Ω(g(n)) if the order of growth of
f(n) ≥ the order of growth of g(n) (within a constant multiple).

Definition: f(n) ∈ Ω(g(n)) iff there exist a positive constant c
and a non-negative integer n0 such that
    f(n) ≥ c g(n) for every n ≥ n0

Examples:
● 10n^2 is Ω(n)

● 5n + 20 is Ω(n)
Using the Definition: Theta

Informal Definition: f(n) is in Θ(g(n)) if the order of growth of
f(n) = the order of growth of g(n) (within a constant multiple).

Definition: f(n) ∈ Θ(g(n)) iff there exist positive constants c1
and c2 and a non-negative integer n0 such that
    c1 g(n) ≤ f(n) ≤ c2 g(n) for every n ≥ n0

Examples:
● 10n^2 is Θ(n^2)

● 5n + 20 is Θ(n)
Some properties of asymptotic order of growth

● f(n) ∈ O(f(n))

● f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))

● If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n))

  Note the similarity with a ≤ b

● If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then
  f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})
  Also O(g1(n) + g2(n))? (See the note below.)

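A concrete reading of the last property (an illustration, not from the slides): if an algorithm first sorts its input in O(n log n) time and then scans it in O(n) time, the total is O(max{n log n, n}) = O(n log n). And since max{g1, g2} ≤ g1 + g2 ≤ 2 max{g1, g2} for non-negative functions, O(max{g1(n), g2(n)}) and O(g1(n) + g2(n)) describe the same class.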
Establishing order of growth using limits

lim n→∞ T(n)/g(n) =
  0      : order of growth of T(n) < order of growth of g(n)
  c > 0  : order of growth of T(n) = order of growth of g(n)
  ∞      : order of growth of T(n) > order of growth of g(n)

Examples:
• 10n vs. n^2

• n(n+1)/2 vs. n^2

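Working both examples with the limit rule (computations added here for reference): lim n→∞ 10n/n^2 = lim 10/n = 0, so 10n has a smaller order of growth than n^2. lim n→∞ [n(n+1)/2]/n^2 = lim (1/2 + 1/(2n)) = 1/2 > 0, so n(n+1)/2 and n^2 have the same order of growth, i.e. n(n+1)/2 ∈ Θ(n^2).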
Big O, Little o. Big and little Omega

lim n→∞ T(n)/g(n) =
  0      : order of growth of T(n) < order of growth of g(n)
  c > 0  : order of growth of T(n) = order of growth of g(n)
  ∞      : order of growth of T(n) > order of growth of g(n)

Examples:
• 10n vs. n^2
• 10n is O(n^2) and o(n^2)
• 10n is O(n) but not o(n)

Big O: upper bound or the same
Little o: upper bound and not the same

L’Hôpital’s rule and Stirling’s formula

L’Hôpital’s rule: If lim n→∞ f(n) = lim n→∞ g(n) = ∞ and
the derivatives f′, g′ exist, then

    lim n→∞ f(n)/g(n) = lim n→∞ f′(n)/g′(n)

Example: log n vs. n

Stirling’s formula: n! ≈ (2πn)^(1/2) (n/e)^n

Example: 2^n vs. n!

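Working the two examples (computations added here, using the natural log for simplicity): for log n vs. n, L'Hôpital gives lim n→∞ (ln n)/n = lim (1/n)/1 = 0, so log n grows more slowly than n. For 2^n vs. n!, Stirling's formula gives n! ≈ (2πn)^(1/2) (n/e)^n, and 2^n/(n/e)^n = (2e/n)^n → 0 once n > 2e ≈ 5.4, so n! grows faster than 2^n.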
Orders of growth of some important functions

● All polynomials of the same degree k belong to the same class:
  a_k n^k + a_{k-1} n^{k-1} + … + a_0 ∈ Θ(n^k) [Why?]

● All logarithmic functions log_a n belong to the same class
  Θ(log n), no matter what the logarithm's base a > 1 is [Why?]

● Exponential functions a^n have different orders of growth for
  different a's
  • Frequently we just say "exponential time," ignoring the base

● order log n < order n^α (α > 0) < order a^n < order n! < order n^n
  • Order n log n ?

Basic asymptotic efficiency classes
1         constant                Best case
log n     logarithmic             Divide, ignore part
n         linear                  Examine each item; online/stream algorithms
n log n   n-log-n (linearithmic)  Divide, use all parts
n^2       quadratic               Nested loops
n^3       cubic                   Nested loops
n^k                               Examine all k-tuples
2^n       exponential             All subsets
n!        factorial               All permutations


Polynomial=Efficient. Exponential=Not Efficient

● In general, we say that
  • functions with polynomial growth are efficient
  • functions with exponential growth are NOT efficient

● But, what about n^100 vs 2^(0.2 log n)?
  • Yes, the exponential form is faster for all reasonable n
  • But such algorithms don't happen in practice

● In practice, polynomial algorithms are faster than exponential ones

Time efficiency of nonrecursive algorithms
General Plan for Analysis
● Decide on a parameter n indicating input size

● Identify the algorithm's basic operation

● Determine the worst, average, and best cases for input of size n

● Set up a sum for the number of times the basic operation is executed

● Simplify the sum using standard formulas and rules (see Appendix A)

Useful summation formulas and rules
Σ_{l≤i≤u} 1 = 1 + 1 + ⋯ + 1 = u - l + 1
In particular, Σ_{1≤i≤n} 1 = n - 1 + 1 = n ∈ Θ(n)

Σ_{1≤i≤n} i = 1 + 2 + ⋯ + n = n(n+1)/2 ≈ n^2/2 ∈ Θ(n^2)

Σ_{1≤i≤n} i^2 = 1^2 + 2^2 + ⋯ + n^2 = n(n+1)(2n+1)/6 ≈ n^3/3 ∈ Θ(n^3)

Σ_{0≤i≤n} a^i = 1 + a + ⋯ + a^n = (a^(n+1) - 1)/(a - 1) for any a ≠ 1
In particular, Σ_{0≤i≤n} 2^i = 2^0 + 2^1 + ⋯ + 2^n = 2^(n+1) - 1 ∈ Θ(2^n)

Σ(a_i ± b_i) = Σa_i ± Σb_i      Σc·a_i = c·Σa_i      Σ_{l≤i≤u} a_i = Σ_{l≤i≤m} a_i + Σ_{m+1≤i≤u} a_i

Example: Sequential search

● Worst case: Omega, Theta, O?


● Best case: Omega, Theta, O?
● Average case: Omega, Theta, O?

Example 1: Maximum element

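The algorithm on this slide appeared as a figure. It is the same left-to-right scan as the FindMax sketch shown earlier: the basic operation (comparison with the current maximum) executes C(n) = Σ_{1≤i≤n-1} 1 = n - 1 times, so the maximum-element algorithm is in Θ(n) in every case (matching Levitin's analysis).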
Example 2: Element uniqueness problem

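The algorithm here was also shown as a figure; a sketch of the standard brute-force version (Python, for illustration) and its worst-case count:

def unique_elements(a):
    # returns True if all elements of a are distinct (brute force)
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:      # basic operation: element comparison
                return False
    return True

# Worst case (all elements distinct): the comparison runs
# (n-1) + (n-2) + ... + 1 = n(n-1)/2 times, so C_worst(n) ∈ Θ(n^2).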
Example 3: Matrix multiplication

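Again the algorithm was a figure; a sketch of the definition-based n-by-n matrix multiplication (Python, illustrative):

def matrix_multiply(A, B):
    # C = A * B for two n-by-n matrices, computed by the definition
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]   # basic operation: multiplication
    return C

# The multiplication executes n * n * n = n^3 times, so C(n) = n^3 ∈ Θ(n^3).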
Example 4: Gaussian elimination
Algorithm GaussianElimination(A[0..n-1, 0..n])
//Implements Gaussian elimination of an n-by-(n+1) matrix A
for i ← 0 to n - 2 do
    for j ← i + 1 to n - 1 do
        for k ← i to n do
            A[j, k] ← A[j, k] - A[i, k] * A[j, i] / A[i, i]

Find the efficiency class and a constant factor improvement.

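The triple loop runs Θ(n^3) times. One constant-factor improvement is hoisting the ratio A[j,i]/A[i,i] out of the innermost loop, since it does not depend on k; a Python sketch of that variant (illustrative; assumes nonzero pivots, as the slide's simplified pseudocode does):

def gaussian_elimination(A):
    # forward elimination on an n-by-(n+1) augmented matrix A (list of lists)
    n = len(A)
    for i in range(n - 1):
        for j in range(i + 1, n):
            temp = A[j][i] / A[i][i]          # computed once per row j, not per k
            for k in range(i, n + 1):
                A[j][k] = A[j][k] - A[i][k] * temp
    return A

# The innermost iteration now does one multiplication and one subtraction
# instead of also a division, saving work by a constant factor while the
# efficiency class remains Θ(n^3).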
Example 5: Counting binary digits

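The algorithm was shown as a figure. For reference, the standard loop that counts the binary digits of a positive integer n looks like this (Python sketch):

def binary_digits(n):
    # counts the digits in the binary representation of n >= 1
    count = 1
    while n > 1:
        count += 1
        n = n // 2      # basic operation: the division by 2 (with its comparison)
    return count

# The loop body executes floor(log2 n) times, so the operation count is Θ(log n).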
Plan for Analysis of Recursive Algorithms
● Decide on a parameter indicating an input’s size.

● Identify the algorithm’s basic operation.

● Check whether the number of times the basic op. is executed


may vary on different inputs of the same size. (If it may, the
worst, average, and best cases must be investigated
separately.)

● Set up a recurrence relation with an appropriate initial


condition expressing the number of times the basic op. is
executed.

● Solve the recurrence (ie find a closed form or, at the very
least, establish its solution’s order of growth) by backward
substitutions or another method.
Recurrences

Terminology (used interchangeably):
- Recurrence
- Recurrence Relation
- Recurrence Equation

Express T(n) in terms of T(smaller n)
- Example: M(n) = M(n-1) + 1, M(0) = 0

Solve: find a closed form
- i.e. T(n) in terms of n alone (not in terms of T)

Learn: 1. how to describe an algorithm with a recurrence
       2. how to solve a recurrence (i.e. find a closed form)
Example 1: Recursive evaluation of n!
Definition: n! = 1 ⋅ 2 ⋅ … ⋅ (n-1) ⋅ n for n ≥ 1 and 0! = 1

Recursive definition of n!: F(n) = F(n-1) ⋅ n for n ≥ 1 and F(0) = 1

Size: n
Basic operation: multiplication
Recurrence relation: M(n) = M(n-1) + 1, M(0) = 0
Solving the recurrence for M(n)

Recurrence: M(n) = M(n-1) + 1,


Initial Condition: M(0) = 0

Solution Methods:
- Guess?
- Forward Substitution?
- Backward Substitution?
- General Methods?
Guess closed form: ?
How to check a possible solution?
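One of the listed methods, backward substitution, worked out for this recurrence (a sketch):

M(n) = M(n-1) + 1
     = [M(n-2) + 1] + 1 = M(n-2) + 2
     = [M(n-3) + 1] + 2 = M(n-3) + 3
     = …
     = M(n-i) + i
     = …
     = M(0) + n = 0 + n = n

So the guessed closed form is M(n) = n, which the next slide checks by substitution.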
Check a possible solution with Substitution

Recurrence: M(n) = M(n-1) + 1
Initial Condition: M(0) = 0
Possible solution: M(n) = n. Substitute:

n = 0:       M(0) = n = 0  (matches the initial condition)

n = k > 0:   M(k) = M(k-1) + 1
             k    = (k-1) + 1
                  = k  (Correct)

Or: M(k) = M(k-1) + 1 = (k-1) + 1 = k. (Correct)

N.B. This is actually a hand-wavy proof by induction
Recurrences, Initial Conditions, Solutions

M(n) = M(n-1) + 1, M(0) = 1

M(n) = ??

A Recurrence has an infinite number of solutions


- Initial conditions eliminate all but 1

(Anyone know what other kind of equation is similar?)


(Sort of similar to y = x + 1, then add x=3)

Example 2: The Tower of Hanoi Puzzle

Recurrence for number of moves:

Solving recurrence for number of moves

M(n) = 2M(n-1) + 1, M(1) = 1

Guess closed form?


(What happens to number of moves when …)

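Backward substitution works here too (a sketch of the derivation):

M(n) = 2M(n-1) + 1
     = 2[2M(n-2) + 1] + 1 = 2^2 M(n-2) + 2 + 1
     = 2^2 [2M(n-3) + 1] + 2 + 1 = 2^3 M(n-3) + 2^2 + 2 + 1
     = …
     = 2^i M(n-i) + 2^(i-1) + … + 2 + 1
     = …
     = 2^(n-1) M(1) + 2^(n-2) + … + 2 + 1 = 2^(n-1) + 2^(n-2) + … + 1 = 2^n - 1

So M(n) = 2^n - 1: adding one disk roughly doubles the number of moves.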
Tree of calls for the Tower of Hanoi Puzzle

M(n) = 2M(n-1) + 1, M(1) = 1


What happens when we add another disk?
Another level of the tree?
What proportion of the nodes are leaves?
Example 3: Counting #bits

M(n) = M(?) ?,
M(?) = ?

Guess closed form?

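Filling in the blanks for reference (this matches Levitin's analysis of the recursive bit-counting algorithm, written in the slide's M(n) notation): M(n) = M(⌊n/2⌋) + 1 for n > 1, with M(1) = 0, and its solution is M(n) = ⌊log2 n⌋ ∈ Θ(log n). For powers of two, n = 2^k, backward substitution gives M(2^k) = M(2^(k-1)) + 1 = … = M(1) + k = k = log2 n.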
General Methods
Problems with Forward and Backward Substitution:
- the solution must be proved
- the solution may be difficult to find

General Methods:
- apply to certain classes of recurrences
- a solution is always possible within the class

Two Common General Methods:
1. Master Method
2. Create and solve a Characteristic Equation

Fibonacci numbers
The Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, …
The Fibonacci recurrence:
F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1

General 2nd-order linear homogeneous recurrence with
constant coefficients:
    aX(n) + bX(n-1) + cX(n-2) = 0

Key idea: Assume X(n) = r^n for some value r.
Steps: Change the recurrence into an equation. Solve for r.
Solving aX(n) + bX(n-1) + cX(n-2) = 0

● Set up the characteristic equation (quadratic)
    ar^2 + br + c = 0

● Solve to obtain roots r1 and r2

● General solution to the recurrence:
  if r1 and r2 are two distinct real roots: X(n) = α r1^n + β r2^n
  if r1 = r2 = r (a repeated real root):    X(n) = α r^n + β n r^n

● Particular solution can be found by using the initial conditions

Application to the Fibonacci numbers

F(n) = F(n-1) + F(n-2) or F(n) - F(n-1) - F(n-2) = 0

Characteristic equation:

Roots of the characteristic equation:

General solution to the recurrence:

Particular solution for F(0) =0, F(1)=1:

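Filling the blanks in, as worked out in Levitin's text (the standard Binet formula): the characteristic equation is r^2 - r - 1 = 0, with roots r1 = (1 + √5)/2 = φ ≈ 1.618 and r2 = (1 - √5)/2 = 1 - φ ≈ -0.618. The general solution is F(n) = α φ^n + β (1-φ)^n, and the initial conditions F(0) = 0, F(1) = 1 give α = 1/√5 and β = -1/√5, so F(n) = (φ^n - (1-φ)^n)/√5 ∈ Θ(φ^n).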
Computing Fibonacci numbers
1. Definition-based recursive algorithm

2. Nonrecursive definition-based algorithm

3. Explicit formula algorithm

4. Logarithmic algorithm based on formula:

    ( F(n-1)  F(n)   )     ( 0  1 )^n
    ( F(n)    F(n+1) )  =  ( 1  1 )

for n ≥ 1, assuming an efficient way of computing matrix powers.

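A sketch of option 4, the logarithmic-time algorithm, using repeated squaring of the 2x2 matrix (Python, illustrative; the function names are placeholders):

def fib(n):
    # computes F(n) via powers of the matrix [[0, 1], [1, 1]]
    def mat_mult(X, Y):
        return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
                [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]
    def mat_pow(M, k):
        # repeated squaring: Θ(log k) matrix multiplications
        result = [[1, 0], [0, 1]]             # identity matrix
        while k > 0:
            if k % 2 == 1:
                result = mat_mult(result, M)
            M = mat_mult(M, M)
            k //= 2
        return result
    if n == 0:
        return 0
    P = mat_pow([[0, 1], [1, 1]], n)
    return P[0][1]                            # the F(n) entry of the power

# fib(10) == 55; the number of 2x2 multiplications grows as Θ(log n).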
