Chap 2: Analysis of Algorithm Efficiency

Analysis of Algorithms
Analysis of algorithms means to
investigate an algorithm’s efficiency with
respect to resources: running time and
memory space.
Time efficiency: how fast an algorithm runs.
Space efficiency: the space an algorithm requires.
Analysis Framework
Measuring an input’s size
Measuring running time
Orders of growth (of the algorithm’s
efficiency function)
Worst-case, best-case, and average-case
efficiency
Measuring Input Sizes
Efficiency is defined as a function of input size.
Input size depends on the problem.
Example 1: what is the input size of the problem of sorting n numbers?
Example 2: what is the input size of adding two n-by-n matrices?
Units for Measuring Running Time
Measure the running time using standard units of time, such as seconds or minutes?
Depends on the speed of the computer.
Count the number of times each of an algorithm's operations is executed?
Difficult and unnecessary.
Count the number of times the algorithm's basic operation is executed.
Basic operation: the most important operation of the algorithm, the operation contributing the most to the total running time.
For example, the basic operation is usually the most time-consuming operation in the algorithm's innermost loop.
Theoretical Analysis of Time Efficiency
Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input
size. Assuming C(n) = (1/2)n(n-1),
how much longer will the algorithm run if we double the input size?
Orders of growth:
• consider only the leading term of a formula
• ignore the constant coefficient.
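To answer the doubling question for the assumed count C(n) = (1/2)n(n-1): C(2n)/C(n) = 2n(2n-1)/(n(n-1)), which approaches 4 for large n, so the algorithm runs about four times longer. A quick numerical check (illustrative sketch):

```python
def C(n):
    # Assumed basic-operation count C(n) = (1/2) n (n - 1)
    return n * (n - 1) // 2

# The ratio of counts when the input size doubles approaches 4
for n in (10, 100, 1000):
    print(n, C(2 * n) / C(n))
```

Note that the constant factor 1/2 cancels in the ratio, which is why orders of growth ignore it.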
[Figure: growth of log(n), n, n log(n), n*n, and n*n*n for n = 1..8]
Worst-Case, Best-Case, and Average-Case Efficiency
Algorithm efficiency depends on the input size n.
For some algorithms, efficiency also depends on the type of input.
Example: Sequential Search
Problem: Given a list of n elements and a search key K, find an element equal to K, if any.
Algorithm: Scan the list and compare its successive elements with K until either a matching element is found (successful search) or the list is exhausted (unsuccessful search).
Given a sequential search problem with an input of size n, what kind of input would make the running time the longest?
How many key comparisons?
Sequential Search Algorithm
ALGORITHM SequentialSearch(A[0..n-1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n-1] and a search key K
//Output: Returns the index of the first element of A that matches K or -1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n //A[i] = K
    return i
else
    return -1
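The pseudocode translates directly into Python (a sketch):

```python
def sequential_search(A, K):
    """Return the index of the first element of A equal to K, or -1."""
    i = 0
    while i < len(A) and A[i] != K:   # basic operation: key comparison
        i += 1
    return i if i < len(A) else -1
```

The worst case (key absent, or in the last position) makes n key comparisons; the best case (key in the first position) makes one.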
Worst-Case, Best-Case, and Average-Case Efficiency
Worst-case efficiency
Efficiency (# of times the basic operation will be executed) for the worst-case input of size n.
The algorithm runs the longest among all possible inputs of size n.
Best-case efficiency
Efficiency (# of times the basic operation will be executed) for the best-case input of size n.
The algorithm runs the fastest among all possible inputs of size n.
Average-case efficiency
Efficiency (# of times the basic operation will be executed) for a typical/random input of size n.
NOT the average of the worst and best cases.
How to find the average-case efficiency?
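One standard model (an assumption, not given on the slide): a successful sequential search is equally likely to stop at any of the n positions, so the average number of key comparisons is (1 + 2 + ... + n)/n = (n + 1)/2. A sketch that checks this by averaging over all successful keys:

```python
def comparisons(A, K):
    """Number of key comparisons sequential search makes on A for key K."""
    count = 0
    for x in A:
        count += 1
        if x == K:
            break
    return count

n = 100
A = list(range(n))
# Average over all n successful positions equals (n + 1) / 2
avg = sum(comparisons(A, k) for k in A) / n
```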
Summary of the Analysis Framework
Both time and space efficiencies are measured as functions of input
size.
Time efficiency is measured by counting the number of basic
operations executed in the algorithm. The space efficiency is
measured by the number of extra memory units consumed.
The framework's primary interest lies in the order of growth of the algorithm's running time (space) as its input size goes to infinity.
The efficiencies of some algorithms may differ significantly for inputs of the same size. For these algorithms, we need to distinguish between the worst-case, best-case, and average-case efficiencies.
Asymptotic Growth Rate
Three notations used to compare orders of growth
of an algorithm’s basic operation count
O(g(n)): class of functions f(n) that grow no faster than g(n)
Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)
Θ(g(n)): class of functions f(n) that grow at the same rate as g(n)
O-notation
[Figure: t(n) bounded above by c·g(n) for all n ≥ n0]
Formal definition
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0
Ω-notation
[Figure: t(n) bounded below by c·g(n) for all n ≥ n0]
Formal definition
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0
Θ-notation
[Figure: t(n) bounded both above and below by constant multiples of g(n) for all n ≥ n0]
Formal definition
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0
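As an illustrative check of the Θ definition (the constants below are chosen here, not taken from the slides): t(n) = n(n-1)/2 is in Θ(n^2), with c2 = 1/4, c1 = 1/2, and n0 = 2:

```python
def t(n):
    return n * (n - 1) / 2

# Check (1/4) g(n) <= t(n) <= (1/2) g(n) with g(n) = n^2 for all n >= n0 = 2
bounds_hold = all(0.25 * n**2 <= t(n) <= 0.5 * n**2 for n in range(2, 1000))
```

Any larger c1 or smaller positive c2 would also work; the definition only requires that some such constants exist.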
≥ : Ω(g(n)), functions that grow at least as fast as g(n)
= : Θ(g(n)), functions that grow at the same rate as g(n)
≤ : O(g(n)), functions that grow no faster than g(n)
Theorem
If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
The analogous assertions are true for the Ω-notation and Θ-notation.
Using Limits for Comparing Orders of
Growth
Examples:
• 10n vs. 2n^2
• n(n+1)/2 vs. n^2
Then, by L'Hôpital's rule,
lim_{n→∞} f(n)/g(n) = lim_{n→∞} f'(n)/g'(n)
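Worked out for the two examples above (the limit definition suffices; L'Hôpital's rule is not even needed here):

```latex
\lim_{n\to\infty}\frac{10n}{2n^2}
  = \lim_{n\to\infty}\frac{5}{n} = 0
  \quad\Rightarrow\quad 10n \text{ grows slower than } 2n^2 ,
\qquad
\lim_{n\to\infty}\frac{n(n+1)/2}{n^2}
  = \frac{1}{2}
  \quad\Rightarrow\quad n(n+1)/2 \in \Theta(n^2).
```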
Summary of How to Establish Orders of Growth
of an Algorithm’s Basic Operation Count
[Table: basic asymptotic efficiency classes. The time efficiencies of a large number of algorithms fall into a few classes such as 1, log n, n, n log n, n^2, n^3, 2^n, and n!]
Time Efficiency of Nonrecursive Algorithms
Check whether the number of times the basic operation is executed depends
only on the input size n. If it also depends on the type of input, investigate
worst, average, and best case efficiency separately.
Set up a summation for C(n) reflecting the number of times the algorithm's basic operation is executed.
Example: Element uniqueness problem
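The slide's algorithm is not reproduced here; the standard brute-force version (a sketch) compares each pair of elements at most once, so the basic operation (element comparison) executes C_worst(n) = n(n-1)/2 times:

```python
def unique_elements(A):
    """Return True if all elements of A are distinct (brute force)."""
    n = len(A)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if A[i] == A[j]:   # basic operation: comparison
                return False
    return True
```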
Example: Matrix multiplication
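Again the slide's version is not shown; the standard definition-based algorithm (a sketch) uses three nested loops, so the basic operation (one scalar multiplication) executes exactly n^3 times:

```python
def mat_mult(A, B):
    """Multiply two n-by-n matrices given as lists of rows."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # basic operation
    return C
```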
Example: Find the number of binary digits in the binary representation of a positive decimal integer
Algorithm Binary(n)
//Input: A positive decimal integer n (stored in binary form in the computer)
//Output: The number of binary digits in n's binary representation
count ← 0
while n ≥ 1 do //or while n > 0 do
    count ← count + 1
    n ← ⌊n/2⌋
return count

C(n) ∈ Θ(⌊log2 n⌋ + 1)
Example: Find the number of binary digits in the binary representation of a positive decimal integer
Algorithm Binary(n)
//Input: A positive decimal integer n (stored in binary form in the computer)
//Output: The number of binary digits in n's binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count
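Either variant can be checked against Python's `int.bit_length()`, which returns the same quantity, ⌊log2 n⌋ + 1, for positive n (a sketch of the first variant):

```python
def binary(n):
    """Count the binary digits of a positive integer n."""
    count = 0
    while n >= 1:
        count += 1
        n //= 2   # integer halving, i.e. floor(n / 2)
    return count
```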
Mathematical Analysis of
Recursive Algorithms
Recursive evaluation of n!
Recursive solution to the Towers of
Hanoi puzzle
Recursive solution to the number of
binary digits problem
Example: Recursive evaluation of n! (1)
Iterative definition
F(n) = 1                                       if n = 0
F(n) = n * (n-1) * (n-2) * ... * 3 * 2 * 1     if n > 0
Recursive definition
F(n) = 1              if n = 0
F(n) = n * F(n-1)     if n > 0
Algorithm F(n)
if n = 0
    return 1 //base case
else
    return F(n-1) * n //general case
Example: Recursive evaluation of n! (2)
Two recurrences
The one for the factorial function value F(n):
F(n) = F(n-1) * n for every n > 0
F(0) = 1
The one for the number of multiplications M(n) to compute n!:
M(n) = M(n-1) + 1 for every n > 0
M(0) = 0
M(n) ∈ Θ(n)
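The recurrence for M(n) solves by backward substitution:

```latex
M(n) = M(n-1) + 1
     = [M(n-2) + 1] + 1 = M(n-2) + 2
     = \cdots = M(n-i) + i
     = \cdots = M(0) + n = n
```

So M(n) = n, confirming M(n) ∈ Θ(n).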
Steps in Mathematical Analysis of
Recursive Algorithms
Decide on parameter n indicating input size
The Towers of Hanoi Puzzle
Recurrence relations
Denote the total number of moves by M(n):
M(n) = 2M(n-1) + 1 for every n > 1
M(1) = 1
M(n) ∈ Θ(2^n)
Succinctness vs. efficiency
Be careful with recursive algorithms: their succinctness may mask their inefficiency.
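The closed form M(n) = 2^n - 1 can be verified by recording moves directly (a sketch; the peg names are illustrative):

```python
def hanoi(n, src, aux, dst, moves):
    """Move n disks from src to dst using aux, appending each move."""
    if n == 1:
        moves.append((src, dst))
        return
    hanoi(n - 1, src, dst, aux, moves)   # move n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # move n-1 disks on top of it

moves = []
hanoi(5, 'A', 'B', 'C', moves)
# len(moves) == 2**5 - 1 == 31
```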
Example: Find the number of binary digits in the binary representation of a positive decimal integer (a recursive solution)
Algorithm BinRec(n)
//Input: A positive decimal integer n (stored in binary form in the computer)
//Output: The number of binary digits in n's binary representation
if n = 1 //The binary representation of n contains only one bit.
    return 1
else //The binary representation of n contains more than one bit.
    return BinRec(⌊n/2⌋) + 1
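As with the iterative version, this can be sketched in Python (integer division // stands in for ⌊n/2⌋):

```python
def bin_rec(n):
    """Recursively count the binary digits of a positive integer n."""
    if n == 1:
        return 1
    # One addition per level: A(n) = A(floor(n/2)) + 1
    return bin_rec(n // 2) + 1
```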
Smoothness Rule
Let f(n) be a nonnegative function defined on the set of natural numbers. f(n) is called smooth if it is eventually nondecreasing and f(2n) ∈ Θ(f(n)).
Functions that do not grow too fast, including log n, n, n log n, and n^α where α ≥ 0, are smooth.
Smoothness rule
Let T(n) be an eventually nondecreasing function and f(n) be a smooth function. If
T(n) ∈ Θ(f(n)) for values of n that are powers of b, where b ≥ 2, then
T(n) ∈ Θ(f(n)) for any n.
Important Recurrence Types
Decrease-by-one recurrences
A decrease-by-one algorithm solves a problem by exploiting a
relationship between a given instance of size n and a smaller size n – 1.
Example: n!
The recurrence equation for investigating the time efficiency of such
algorithms typically has the form
T(n) = T(n-1) + f(n)
Decrease-by-a-constant-factor recurrences
A decrease-by-a-constant-factor algorithm solves a problem by dividing its
given instance of size n into several smaller instances of size n/b,
solving each of them recursively, and then, if necessary, combining the
solutions to the smaller instances into a solution to the given instance.
Example: binary search.
The recurrence equation for investigating the time efficiency of such
algorithms typically has the form
T(n) = aT(n/b) + f (n)
Decrease-by-one Recurrences
One (constant-time) operation reduces the problem size by one:
T(n) = T(n-1) + c,  T(1) = d
Solution: T(n) = (n-1)c + d ∈ Θ(n) (linear)
Decrease-by-a-constant-factor recurrences – The Master Theorem
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^k), k ≥ 0
1. If a < b^k, then T(n) ∈ Θ(n^k)
2. If a = b^k, then T(n) ∈ Θ(n^k log n)
3. If a > b^k, then T(n) ∈ Θ(n^(log_b a))
4. Examples:
1. T(n) = T(n/2) + 1        Θ(log n)
2. T(n) = 2T(n/2) + n       Θ(n log n)
3. T(n) = 3T(n/2) + n       Θ(n^(log_2 3))
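The simplified Master Theorem (for f(n) ∈ Θ(n^k), compare a with b^k) can be encoded directly (a sketch; it returns the Θ-class as a string):

```python
def master_theorem(a, b, k):
    """Solve T(n) = a*T(n/b) + Theta(n^k), with a >= 1, b >= 2, k >= 0."""
    if a < b ** k:
        return f"Theta(n^{k})"
    if a == b ** k:
        return f"Theta(n^{k} log n)"
    return f"Theta(n^log_{b}({a}))"   # exponent is log_b(a)

print(master_theorem(1, 2, 0))   # T(n) = T(n/2) + 1
print(master_theorem(2, 2, 1))   # T(n) = 2T(n/2) + n
print(master_theorem(3, 2, 1))   # T(n) = 3T(n/2) + n
```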
Homework 2