Algorithms and Data Structures
Lecture slides: Asymptotic notations and growth
rate of functions, Brassard Chap. 3
Lecturer: Michel Toulouse
Hanoi University of Science & Technology
[email protected]

Topics outline
1. Introduction : The principle of invariance
2. Asymptotic notations : definitions, examples, properties
I Big O-notation
I Ω-notation
I Θ-notation
3. Cookbook for asymptotic notations : Limit rule
4. Exercises
Running time
In the previous lecture, the running time of an algorithm was obtained
by figuring out the worst case instance and then counting the number
of times the most frequent basic operation is executed
We did agree this was not an exact measurement of the execution time
of an algorithm because
1. we count as “basic” pseudo-code operations that may take quite
different numbers of machine operations
2. we don’t count all the basic operations of an algorithm but only
the one that is executed most often
From running time to growth rate to ”orders”
Here we move even further in abstracting measurements from real
execution time
The number of basic operations executed in terms of n (the input size)
is called the ”growth rate” of an algorithm
The growth rates of different algorithms are of the same ”order” if they
are bounded by a common function, up to a multiplicative constant
In other words, two algorithms for which the growth rate differs only by
a multiplicative constant are considered to be in the same ”order”
The Principle of Invariance
We know that if we use two different computers to execute an
algorithm, one may run faster than the other
The principle of invariance states that the time used to execute the
algorithm on two different computers will not differ by more than some
multiplicative constant
For example, if the constant happens to be 2, then we know that the
slower execution will not take more than 2 times the time needed for
the other execution. If the fastest execution takes 1 second, the other
one takes at most 2 seconds
The Principle of Invariance : Formally
Assume the fastest execution takes t1 (n) seconds for a problem
instance of size n and the slowest execution takes t2 (n) seconds.
The principle of invariance is based on the following observation :
given these two running times, there always exist positive constants c
and d such that t1(n) ≤ c·t2(n) and t2(n) ≤ d·t1(n) for n
sufficiently large.
The running time of either execution is bounded by a constant multiple
of the running time of the other, which one we should call the first is
irrelevant !
Principle of Invariance : “orders”
One of the most important implications of the principle of invariance is
that we don’t need to use any particular time unit to express the
efficiency of an algorithm.
For example, if an algorithm takes t1(n) seconds, then using a constant
b, c or d we can say that it takes b·t1(n) µs, c·t1(n) ns or d·t1(n) years.
Rather, we choose to express the time taken by an algorithm to within a
multiplicative constant, i.e. a time in the order of t(n) for a given
function t, if there are a positive constant c and an implementation of
the algorithm capable of solving every instance of size n in no more
than c·t(n) seconds (the use of seconds here is completely arbitrary).
Frequent “orders”
Some orders occur so frequently that we give them a name.
I Logarithmic algorithm : If an algorithm never takes more than
c log n basic operations (therefore the algorithm takes a time in
the order of log n or logarithmic time).
I Linear algorithm : If an algorithm never takes more than cn basic
operations (therefore the algorithm takes a time in the order of n
or linear time).
I Quadratic algorithm : If an algorithm never takes more than cn2
basic operations (therefore the algorithm takes a time in the order
of n2 or quadratic time).
I Cubic, polynomial or exponential algorithms : For algorithms that
can take a time in the order of n^3, n^k or c^n (a C sketch
illustrating the logarithmic and linear orders follows).
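As an illustration (added here, not part of the original slides), the following minimal C sketch shows one routine of each of the first two orders; the function names and the example data are ours.

#include <stdio.h>

/* Linear time : the loop body executes exactly n times, so the
   running time is in the order of n. */
long sum_array(const int a[], int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Logarithmic time : each iteration halves the remaining interval,
   so at most about log2(n) iterations are executed.
   Assumes a[] is sorted in increasing order. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; /* key not present */
}

int main(void) {
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("sum = %ld, index of 7 = %d\n", sum_array(a, 6), binary_search(a, 6, 7));
    return 0;
}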
O-Notation : An Asymptotic Upper Bound I
Definition (Big Oh notation)
Let g (n) be a function from N to R. Denote
O(g(n)) = {f(n) : there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c g(n) for all n ≥ n0}
the set of functions defined on natural numbers which are bounded
above by a positive real multiple of g (n) for sufficiently large n.
O-Notation : An Asymptotic Upper Bound II
Example : Let f(n) = 27n^2 + (355/113)n + 12 be the number of basic
operations performed by an algorithm in the worst case, given an
input of size n. We would like to find a simple function g(n) such that
f(n) ∈ O(g(n)).
We can guess g(n) = n^2. Thus,

f(n) = 27n^2 + (355/113)n + 12
     ≤ 27n^2 + (355/113)n^2 + 12n^2
     = (42 + 16/113) n^2 = (42 + 16/113) g(n)

So instead of saying that an algorithm takes 27n^2 + (355/113)n + 12
elementary operations to solve an instance of size n, we can say that
the time of the algorithm is in the order of n^2, or write that the
algorithm is in O(n^2).
O-Notation : An Asymptotic Upper Bound III
Terminologies : Let f and g be non-negative valued functions
N → R≥0 :
1. We say that f (n) is in the order of g (n) if f (n) ∈ O(g (n)).
2. We say that “f (n) is big-O of g (n)”. For convenience, we also
write f (n) = O(g (n)).
3. As n increases, f (n) grows no faster than g (n). In other words,
g (n) is an asymptotic upper bound of f (n).
Graphic Example of O-notation
I f (n) ∈ O(g (n)) if there are constants c and n0 such that
0 ≤ f (n) ≤ c g (n) for all n0 ≤ n
[Figure : the curve f(n) lies below c·g(n) to the right of n0]
How to find g (n)
Given that we have a function f that gives the exact number of
elementary operations performed by an algorithm in the worst case, we
need to find :
1. the simplest and slowest-growing function g such that
f (n) ∈ O(g (n))
2. prove the relation f (n) ∈ O(g (n)) is true, i.e. show ∃ c and n0
such that f (n) ≤ cg (n) for all n ≥ n0 .
Strategy for finding g (n)
Assume f(n) = 3n^2 + 2n :
I Throw away the multiplicative constants : 3n^2 + 2n is replaced by
n^2 + n
I Also, 2^{n+1} = 2 × 2^n can be replaced by 2^n.
I If you have logs, throw away the bases, since the log properties say
that for any two bases a and b, log_b n = c × log_a n for some
multiplicative constant c.
I Once f(n) has been simplified, the fastest growing term in f(n) is
your g(n).
I f(n) = 3n^2 + 2n = O(n^2).
OK, this is a way to find g(n), but it is not a proof that
f(n) ∈ O(g(n)) ; a quick numeric check, sketched below, can at least
build confidence in the guess.
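A numeric sanity check (our addition, not from the slides) can be coded directly : for the guessed c and n0, evaluate f(n) and c·g(n) over a range of n and verify that the bound holds at every sampled point. Passing such a check is not a proof, but a failing check exposes a bad guess. The function f below is the example from this slide; the choices of c and of the sampling range are ours.

#include <stdio.h>

/* f(n) = 3n^2 + 2n, the example above; guessed bound g(n) = n^2. */
static double f(double n) { return 3.0 * n * n + 2.0 * n; }
static double g(double n) { return n * n; }

int main(void) {
    double c = 5.0;   /* sum of the positive coefficients of f */
    int n0 = 1;
    int ok = 1;
    for (long n = n0; n <= 1000000; n *= 2) {   /* spot-check n0, 2n0, 4n0, ... */
        if (f((double)n) > c * g((double)n)) {
            printf("bound fails at n = %ld\n", n);
            ok = 0;
        }
    }
    if (ok)
        printf("f(n) <= %.1f * g(n) held at all sampled n >= %d\n", c, n0);
    return 0;
}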
Prove that f (n) ∈ O(g (n))
Often the easiest way to prove that f (n) ∈ O(g (n)) is to take c to be
the sum of the positive coefficients of f (n).
Example : Prove 5n2 + 3n + 20 ∈ O(n2 )
I We pick c = 5 + 3 + 20 = 28. Then if n ≥ n0 = 1,
5 n2 + 3 n + 20 ≤ 5 n2 + 3 n2 + 20 n2 = 28 n2 ,
thus 5n2 + 3n + 20 ∈ O(n2 ).
I We can also guess other values for c and then find n0 that work.
Prove that f (n) ∈ O(g (n))
Another way is to fix c = 1 and find for which n0 we have f(n) ≤ g(n)
for all n ≥ n0

Example : Show that (1/2)n^2 + 3n ∈ O(n^2)

Proof : The dominant term is (1/2)n^2, so g(n) = n^2. Therefore we need
to find c and n0 such that

0 ≤ (1/2)n^2 + 3n ≤ c n^2 for all n ≥ n0.

Since we decided to fix c = 1, we have

(1/2)n^2 + 3n ≤ n^2 ⇔ 3n ≤ (1/2)n^2 ⇔ 6 ≤ n

Thus, we pick n0 = 6.
We have just shown that if c = 1 and n0 = 6, then
0 ≤ (1/2)n^2 + 3n ≤ c n^2 for all n ≥ n0, i.e. (1/2)n^2 + 3n ∈ O(n^2).
Some properties for Big O notation
1. Reflexivity : f (n) ∈ O(f (n)).
2. Scalar rule : Let f be a non-negative valued function defined on
N and c be a positive constant. Then
O(c f(n)) = O(f(n)).
Example : 6n^2 ∈ O(n^2)
3. Maximum rule : Let f , g be non-negative functions. Then
O(f(n) + g(n)) = O(max{f(n), g(n)}).
Exercises I
Given the following algorithm written in pseudo code :
t := 0;
for i := 1 to n do
for j := 1 to n do
t := t + i + j;
return t.
1. Which instruction can be used as elementary operation ?
2. Express the running time of this algorithm in terms of the number
of times your selected elementary operation is executed ?
3. Give (without proof) a big O estimate for the running time of the
algorithm.
4. What is computed and returned by this algorithm ?
Exercise solutions I
t := 0;
for i := 1 to n do
for j := 1 to n do
t := t + i + j;
return t.
1. Which instruction can be used as elementary operation ? Answer :
t := t + i + j
2. Express the running time of this algorithm in terms of the number
of times your selected elementary operation is executed ? Answer :
n2
3. Give (without proof) a big O estimate for the running time of the
algorithm. O(n2 )
Exercise solutions I
t := 0;
for i := 1 to n do
for j := 1 to n do
t := t + i + j;
return t.
What is computed and returned by this algorithm ?
The variable t accumulates the sum of i + j over all pairs (i, j) with
1 ≤ i, j ≤ n.
The contribution of the i terms is
1·n + 2·n + 3·n + · · · + (n−1)·n + n·n = n(1 + 2 + 3 + · · · + (n−1) + n)
= n · Σ_{i=1}^{n} i = n · n(n+1)/2 = (n^3 + n^2)/2
The contribution of the j terms is likewise
n(1 + 2 + 3 + · · · + (n−1) + n) = (n^3 + n^2)/2
Answer : t = 2 · (n^3 + n^2)/2 = n^3 + n^2
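As an added sanity check (not part of the original solution), the loop can be executed directly and compared against the closed form n^3 + n^2 for a few small values of n :

#include <stdio.h>

int main(void) {
    for (int n = 1; n <= 6; n++) {
        long t = 0;
        /* the algorithm from the exercise */
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                t = t + i + j;
        long closed = (long)n * n * n + (long)n * n;   /* n^3 + n^2 */
        printf("n = %d : loop gives t = %ld, n^3 + n^2 = %ld\n", n, t, closed);
    }
    return 0;
}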
Exercises I
1. Find g (n) for each of the following functions fi (n) such that
fi (n) = O(g (n)).
I f1 = 3n log_2 n + 9n − 3 log_2 n − 3
I f2 = 2n^2 + n log_3 n − 15
I f3 = 100n + (n + 1)(n + 5)/2 + n^{3/2}
I f4 = 1,000n^2 + 2n^3 + 36n log n + 2^{n+1}
      = 1,000n^2 + 2n^3 + 36n log n + 2 · 2^n
Exercises II
2. Which of the following statements are true ?
I n^2 ∈ O(n^3)
I 2^n ∈ O(3^n)
I 3^n ∈ O(2^n)
I n log n ∈ O(n^{3/2})
I n^{3/2} ∈ O(n log n)
I 2^{n+1} ∈ O(2^n)
I O(2^{n+1}) = O(2^n)
I O(2^n) = O(3^n)
Exercises III
3. Give an upper bound on the worst-case asymptotic time
complexity of the following function used to find the Kth smallest
integer in an unordered array of integers. Justify your result. You
do not need to find the closed form of summations.
int selectkth( int A[ ], int k, int n )
{
    int i, j, mini, tmp ;
    for ( i = 0 ; i < k ; i++ ) {
        mini = i ;
        for ( j = i + 1 ; j < n ; j++ )
            if ( A[j] < A[mini] )
                mini = j ;
        tmp = A[i] ;
        A[i] = A[mini] ;
        A[mini] = tmp ;
    }
    return A[k-1] ;
}
Exercises IV
4. How does n lg n compare with n^{1+ε} for 0 < ε < 1 ?
Answer : Note that n lg n = n × (lg n) and n^{1+ε} = n × n^ε.
The growth rate of lg n is slower than that of n^ε for any value of ε > 0 :
eventually n^ε catches up with lg n for some value of n > n0,
depending on how small ε is.
Therefore, n lg n ∈ O(n^{1+ε}) for 0 < ε.
Exercises V
5. Find the appropriate ”Big-Oh” relationship between the functions
n log n and 5n and find the constants c and n0
Answer : 5n ∈ O(n log n). Looking for c and n0 such that
0 ≤ 5n ≤ c n log n :

5n ≤ c n log n  ⇔  5 ≤ c log n

With c = 1, this requires 5 ≤ log n, i.e. n ≥ 32 (since 2^5 = 32).
So for c = 1 and n0 = 32 we have 5n ≤ c n log n for all n ≥ n0.
Exercises VI
6. Give the polynomial expression describing the running time of the
code below. Provide the asymptotic time complexity of this code
using the ”Big-Oh” notation.
for (i = 0; i < n; i++) {
    for (j = 0; j < 2*n; j++)
        sum = sum + A[i] * A[j];
    for (j = 0; j < n*n; j++)
        sum = sum + A[i] + A[j];
}
Answer : The first inner loop performs 2n2 operations while the
second one performs n3 operations. This iterative procedure
performs 2n2 + n3 operations. 2n2 + n3 ∈ O(n3 )
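An instrumented version of the same loops (our sketch, not part of the slides) confirms the count empirically by tallying how many times the elementary operation would execute :

#include <stdio.h>

int main(void) {
    for (int n = 1; n <= 5; n++) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < 2 * n; j++)   /* first inner loop : 2n iterations */
                count++;
            for (int j = 0; j < n * n; j++)   /* second inner loop : n^2 iterations */
                count++;
        }
        long formula = 2L * n * n + (long)n * n * n;   /* 2n^2 + n^3 */
        printf("n = %d : counted %ld operations, formula gives %ld\n", n, count, formula);
    }
    return 0;
}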
Properties of O-Notation
These are properties we have been using informally. Now we state
them explicitly and prove them.
Constant factors can be ignored in O-notation calculations.
Theorem 1 : For all k > 0, kf (n) is O(f (n)).
Proof : For n > 0, kf (n) ≤ cf (n) for c = k.
Example : f (n) = 3n2 implies that 3n2 ∈ O(n2 ).
Example : Any constant function is O(1) : if f(n) = C, then
f(n) ≤ C · 1 for all n > 0.
Remark : 2 is not a constant factor in 2^n. The functions 2^n and 3^n have
different growth rates, so don’t throw away the 2 or 3 in these cases.
Theorem 2 : If f (n) is O(h(n)) and g (n) is O(h(n)), then
f (n) + g (n) is O(h(n)).
Proof : By definition,
I f (n) is O(h(n)) means that for n > n1 , f (n) ≤ c1 h(n) and
I g (n) is O(h(n)) means that for n > n2 , g (n) ≤ c2 h(n)
where c1 , c2 , n1 , and n2 are positive constants.
We want two positive constants c3 and n3 such that
f (n) + g (n) ≤ c3 h(n) for all n > n3 .
We compute c3 by adding the two inequalities :
f (n) ≤ c1 h(n)
g (n) ≤ c2 h(n)
f (n) + g (n) ≤ (c1 + c2 )h(n)
So c3 = c1 + c2 , which is positive since c1 and c2 are.
What should n3 be ? We need f (n) ≤ c1 h(n) and g (n) ≤ c2 h(n) at the
same time, so we can only consider values of n such that n > n1 and
n > n2 at the same time. If n > max{n1 , n2 }, then n > n1 and n > n2
at the same time, so choose n3 = max{n1 , n2 }, which is positive since
n1 and n2 are.
Then, for n > n3 , we have f (n) + g (n) ≤ c3 h(n), where c3 = c1 + c2
and n3 = max{n1 , n2 }. Therefore, f (n) + g (n) is O(h(n)).
Corollary to Theorem 2 : If f (n) is O(g (n)), then f (n) + g (n) is
O(g (n)).
Proof : Since g (n) is O(g (n)), we can put h(n) = g (n) in Theorem 2
to get this result.
Example : f (n) = n + n2 log2 n − logb n is O(n2 log2 n).
If h(n) grows faster than g (n) and g (n) grows faster than f (n), then
h(n) grows faster than f (n).
Theorem 3 : If f (n) is O(g (n)) and g (n) is O(h(n)), then f (n) is
O(h(n)).
Proof : By the definition of O-notation, for n > n1 , f (n) ≤ c1 g (n) and
for n > n2 , g (n) ≤ c2 h(n). Therefore, for n > max{n1 , n2 },
f (n) ≤ c3 h(n), where c3 = c1 · c2 .
Example : If you calculate that f (n) is O( something awful ) and read
in a book that something awful is O(n2 ), then you can conclude that
f (n) is O(n2 ).
The product of upper bounds for functions gives an upper bound for
the product of functions.
Theorem 4 : If f (n) is O(h(n)) and g (n) is O(l(n)), then f (n) · g (n)
is O(h(n) · l(n)).
Proof of Theorem 4 :
By definition, f (n) ≤ c1 h(n) for all n > n1 and g (n) ≤ c2 l(n) for all
n > n2 , for some positive constants c1 , c2 , n1 and n2 .
Then for n > n3
f (n) · g (n) ≤ (c1 h(n))(c2 l(n))
= c3 h(n)l(n)
where n3 = max{n1 , n2 } and c3 = c1 · c2 .
Example : If you have a running time that is the product of two awful
looking functions f (n) and g (n), it may be quite difficult to multiply
them out, but you don’t have to work so hard if you happen to know
what O-notation these functions have.
Exponentials grow faster than powers.
Theorem 5 : n^k is O(b^n), for all b > 1 and k ≥ 0, and b^n is NOT
O(n^k).
Proof : Consider n^k / b^n, where b and k are constants.

lim_{n→∞} n^k / b^n = lim_{n→∞} k! / (b^n (log_e b)^k) = 0 < 1

by k applications of l’Hopital’s rule.
This means that for n > N (some integer), n^k / b^n < 1, i.e. n^k ≤ c b^n
for c = 1.
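Example (a concrete instance added for illustration) : with k = 3 and b = 2, the theorem says n^3 ∈ O(2^n). Indeed n^3 ≤ 2^n already holds for all n ≥ 10, since 10^3 = 1000 ≤ 1024 = 2^10 and, for n ≥ 10, 2^n grows by a larger factor than n^3 from one n to the next.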
Logarithms grow more slowly than powers.
Theorem 6 : log_b n is O(n^k) for any positive integers b and k.
Proof : By Theorem 5, for n > N, n^k ≤ b^n.
Taking logarithms (base b) we get : for n > N, k log_b n ≤ n, i.e. for
n > N, log_b n ≤ c n, for c = 1/k.
Generalization : (log_b n)^r is O(n^k) for any integer r and any positive
integers k and b.
All logarithm functions grow at the same rate.
Theorem 7 : logb n is O(logc n)
Proof : Property 5 of logs says logb x = (logb c) · (logc x).
The sum of the r th powers of the first n numbers grows at the same
rate as the (r + 1)st power of n.
Theorem 8 : f(n) = Σ_{k=1}^{n} k^r is O(n^{r+1}).
Proof :

Σ_{k=1}^{n} k^r = 1^r + 2^r + · · · + n^r
              ≤ n^r + n^r + · · · + n^r
              = n · n^r
              = n^{r+1}

so for n > 0, f(n) ≤ c n^{r+1} where c = 1.
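Example (added for concreteness) : with r = 1 the theorem states that 1 + 2 + · · · + n is O(n^2) ; indeed the exact value n(n + 1)/2 satisfies n(n + 1)/2 ≤ n^2 for all n ≥ 1, so c = 1 and n0 = 1 work.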
Big Omega (Ω) : An Asymptotic Lower Bound
Given a non-negative valued function g (n). Denote
Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that
f(n) ≥ c g(n) for all n ≥ n0}
Definition
Let f and g be non-negative valued functions N → R≥0 :
1. We say that f (n) is in omega of g (n) if f (n) ∈ Ω(g (n)).
2. As n increases, f (n) grows no slower than g (n). In other words,
g (n) is an asymptotic lower bound of f (n).
Graphic Example of Ω-notation
I f (n) = Ω(g (n)) if there are constants c and n0 such that
0 ≤ c g (n) ≤ f (n) for all n ≥ n0 .
[Figure : the curve f(n) lies above c·g(n) to the right of n0]
Big Omega : Examples
1. f (n) = 3n2 + n + 12 is Ω(n2 ) and also Ω(n), but not Ω(n3 ).
2. n3 − 4n2 ∈ Ω(n2 ).
Proof : Let c = 1. Then we must have
c n^2 ≤ n^3 − 4n^2, i.e. (dividing by n^2) 1 ≤ n − 4,
which is true when n ≥ 5, therefore n0 = 5 ; so, for n ≥ 5,
0 ≤ n^2 ≤ n^2(n − 4) = n^3 − 4n^2
Ω Proofs : How to choose c and n0
To prove that f (n) ∈ Ω(g (n)), we must find positive values of c and
n0 that make c · g (n) ≤ f (n) for all n > n0 .
I You can assume that c < 1, pick a n0 such that f (n) is larger
than c · g (n) and then find the exact constant c for n0 , OR
I Choose c to be some positive constant less than the multiplicative
constant of the fastest growing term of f (n), then find n0 that
works with the chosen c.
Example 1
For this example we assume that c < 1 and find an appropriate n0
Show that (n log n − 2 n + 13) ∈ Ω(n log n)
Proof : We need to show that there exist positive constants c and n0
such that
0 ≤ c n log n ≤ n log n − 2 n + 13 for all n ≥ n0 .
Since n log n − 2 n ≤ n log n − 2 n + 13,
we will instead show that
c n log n ≤ n log n − 2 n,
Example 1 (continued)
c n log n ≤ n log n − 2n
c ≤ (n log n)/(n log n) − (2n)/(n log n)
c ≤ 1 − 2/(log n)

so, c ≤ 1 − 2/(log n) when n > 1.
If n ≥ 8, then 2/(log n) ≤ 2/3, and picking c = 1/3 suffices.
Thus if c = 1/3 and n0 = 8, then for all n ≥ n0 , we have
0 ≤ c n log n ≤ n log n − 2 n ≤ n log n − 2 n + 13.
Thus (n log n − 2 n + 13) ∈ Ω(n log n).
Example 2
For this example we select c to be smaller than the constant of the
fastest growing term in the expression describing the running time.
Prove that f (n) = 3n2 − 2n − 7 ∈ Ω(n2 ).
Proof : The fastest growing term of f (n) is 3n2 . Try c = 1, since
1 < 3.
Then
1 · n2 ≤ 3n2 − 2n − 7 for all n > n0
is true only if (subtracting n2 from both sides)
0 ≤ 2n2 − 2n − 7 for all n > n0
is also true.
Choosing n0 = 3, the inequality above holds for any n ≥ 3.
An Asymptotic Tight Bound Θ-notation
Let g (n) be a non-negative valued function. Denote
Θ(g (n)) = {f (n) : there exist positive constants c1 , c2 and n0 s.t.
c1 · g (n) ≤ f (n) ≤ c2 · g (n) for all n ≥ n0 }
Definition
Let f and g be non-negative valued functions N → R≥0 :
1. We say that f (n) is in the theta of g (n) if f (n) ∈ Θ(g (n)).
2. As n increases, f (n) grows at the same rate as g (n). In other
words, g (n) is an asymptotic tight bound of f (n).
Graphic Example of Θ-notation
I f (n) = Θ(g (n)) if there are constants c1 , c2 and n0 such that
0 ≤ c1 g (n) ≤ f (n) ≤ c2 g (n) for all n0 ≤ n
[Figure : the curve f(n) lies between c1·g(n) and c2·g(n) to the right of n0]
Θ-notation : Example
Prove that n2 − 5n + 7 ∈ Θ(n2 ).
Proof : Let c1 = 1/2, c2 = 1, and n0 = 10. Then, for n ≥ 10,
(1/2)n^2 ≥ 5n and −5n + 7 ≤ 0. Thus,

0 ≤ (1/2)n^2 ≤ n^2 − (1/2)n^2 ≤ n^2 − 5n ≤ n^2 − 5n + 7 ≤ n^2
If f (n) is Θ(g (n)), then
I f (n) is “sandwiched” between c1 g (n) and c2 g (n) for sufficiently
large n ;
I g (n) is an asymptotically tight bound for f (n) ;
Big Theta Proofs
The following theorem shows us that proving f (n) ∈ Θ(g (n)) is
nothing new :
I Theorem : f (n) ∈ Θ(g (n)) if and only if f (n) ∈ O(g (n)) and
f (n) ∈ Ω(g (n)).
I Thus, we just apply the previous two strategies.
Example
Show that (1/2)n^2 − 3n ∈ Θ(n^2)
Proof :
I Find positive constants c1, c2, and n0 such that
0 ≤ c1 n^2 ≤ (1/2)n^2 − 3n ≤ c2 n^2 for all n ≥ n0
I Dividing by n^2, we get 0 ≤ c1 ≤ 1/2 − 3/n ≤ c2
I c1 ≤ 1/2 − 3/n holds for n ≥ 10 and c1 = 1/5
I 1/2 − 3/n ≤ c2 holds for n ≥ 10 and c2 = 1.
I Thus, if c1 = 1/5, c2 = 1, and n0 = 10, then for all n ≥ n0,
0 ≤ c1 n^2 ≤ (1/2)n^2 − 3n ≤ c2 n^2.
Thus we have shown that (1/2)n^2 − 3n ∈ Θ(n^2).
Cookbook for asymptotic notations
Theorem (Limit rule)
Given non-negative valued functions f and g : N → R≥0. Then the
following statements are true :
1. if 0 < lim_{n→∞} f(n)/g(n) = L < ∞, then f(n) ∈ Θ(g(n)) and
consequently f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)).
2. if lim_{n→∞} f(n)/g(n) = 0, then f(n) ∈ O(g(n)).
3. if lim_{n→∞} f(n)/g(n) = +∞, then f(n) ∉ O(g(n)) but g(n) ∈ O(f(n))
and f(n) ∈ Ω(g(n)).
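Worked example (added) : to compare f(n) = n log n with g(n) = n^2, compute lim_{n→∞} (n log n)/n^2 = lim_{n→∞} (log n)/n = 0, so by rule 2, n log n ∈ O(n^2). Similarly lim_{n→∞} (3n^2 + 2n)/n^2 = 3, a finite nonzero limit, so 3n^2 + 2n ∈ Θ(n^2) by rule 1.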
Exercises
1. Prove that f (n) = n3 + 20n + 1 ∈ O(n3 )
2. Prove that f(n) = n^3 + 20n + 1 ∉ O(n^2)
3. Prove that f (n) = n3 + 20n + 1 ∈ O(n4 ).
4. Prove f (n) = n3 + 20n ∈ Ω(n2 ).
5. Prove f(n) = (1/2)n^2 − 3n ∈ Ω(n^2).
6. Prove that f (n) = 5n2 − 7n ∈ Θ(n2 ).
7. Prove that f (n) = 23n3 − 10n2 log n + 7n + 6 ∈ Θ(n3 ).
8. Find the appropriate Ω relationship between the functions n3 and
3n3 − 2n2 + 2 and find the constants c and n0 .
Exercises (continue)
9. Consider the following iterative procedure :
for (i = 0; i < n; i++) {
    for (j = 0; j < 2*n; j++)
        sum = sum + A[i] * A[j];
    for (j = 0; j < n*n; j++)
        sum = sum + A[i] + A[j];
}
9.1 Give a function f describing the computing time of this procedure
in terms of the input size n.
9.2 Bound above the running time of this code using the ”Big-Oh”
notation. Prove your result.
9.3 Give a lower bound on the running time of this code using the “Ω”
notation. Prove your result. Then argue, based on your two
previous results, about an exact time complexity of f
Exercises (continue)
10. To illustrate how the asymptotic notation can be used to rank the
efficiency of algorithms, use the relation ⊂ and = to put the
orders of the following functions into a sequence.
n^2, 1, n^{3/2}, 2^n, log n, n^n, 3^n, n, n^3, n log n, √n, log log n, n!
11. Similar to previous question, order the following growth rate
functions
n!, (n + 1)!, 2^n, 2^{n+1}, 2^{2n}, n^n, n^{√n}, n^{log n}.
Exercises with solutions
Prove that f(n) = n^3 + 20n + 1 ∈ O(n^3)
By the definition of big O, we must show that there exist constants c and
n0 such that f(n) ≤ c n^3 for all values of n ≥ n0.

n^3 + 20n + 1 ≤ c n^3
n^3/n^3 + 20n/n^3 + 1/n^3 ≤ c
1 + 20/n^2 + 1/n^3 ≤ c

Therefore f(n) ≤ c n^3 for n0 = 1 and c = 22
We could have found this solution by simply adding the coefficients of
all the terms in f(n).
Larger values of n0 will reduce the value of c. For example, if we select
n0 = 5,
1 + 20/25 + 1/125 ≤ c will work for any c ≥ 1.808
Exercises with solutions
Prove that f(n) = n^3 + 20n + 1 ∉ O(n^2)

n^3 + 20n + 1 ≤ c n^2
n^3/n^2 + 20n/n^2 + 1/n^2 ≤ c
n + 20/n + 1/n^2 ≤ c

Here we cannot find an n0 and a c such that f(n) ≤ c n^2 for all
n ≥ n0 : the left-hand side grows without bound as n increases.
Exercises with solutions
Prove that f(n) = n^3 + 20n + 1 ∈ O(n^4). By the definition of big O,
we must show that there exist constants c and n0 such that f(n) ≤ c n^4
for all values of n ≥ n0.

n^3 + 20n + 1 ≤ c n^4
n^3/n^4 + 20n/n^4 + 1/n^4 ≤ c
1/n + 20/n^3 + 1/n^4 ≤ c

This condition holds for n0 = 1 : 1 + 20 + 1 ≤ c, i.e. any c ≥ 22.
Exercises with solutions
Prove f(n) = n^3 + 20n ∈ Ω(n^2).
By the definition of big Omega, f(n) ∈ Ω(n^2) if f(n) ≥ c n^2 for all
n ≥ n0 and some constant c.

c n^2 ≤ n^3 + 20n
c ≤ n + 20/n

For n0 = 5, the value of the right-hand side is 9 ; if we set c ≤ 9 then
f(n) ≥ c n^2 for all n ≥ 5.
Since there is no negative term in f(n), a smaller constant works even
more simply : setting c = 2, f(n) ≥ c n^2 already holds for all
n ≥ n0 = 1.
Exercises with solutions
Prove f(n) = (1/2)n^2 − 3n ∈ Ω(n^2).
By the definition of big Omega, f(n) ∈ Ω(n^2) if f(n) ≥ c n^2 for all
n ≥ n0 and some constant c.

c n^2 ≤ (1/2)n^2 − 3n
c ≤ 1/2 − 3/n

For n ≥ n0 = 12, the value of the right-hand side is no smaller than
1/4, therefore we can set c = 1/4. Then (1/4)n^2 ≤ (1/2)n^2 − 3n
for all values of n ≥ 12
Exercises with solutions
Prove that f(n) = 5n^2 − 7n ∈ Θ(n^2).
By the definition of Theta, f(n) ∈ Θ(n^2) if there exist c1, c2 and n0
such that c1 n^2 ≤ 5n^2 − 7n ≤ c2 n^2 for all n ≥ n0.

c1 n^2 ≤ 5n^2 − 7n
c1 ≤ 5 − 7/n

which is true for c1 = 1 and n0 = 2

5n^2 − 7n ≤ c2 n^2
5 − 7/n ≤ c2

which is true for c2 = 5 and n0 = 2
Exercises with solutions
Prove that f(n) = 23n^3 − 10n^2 log n + 7n + 6 ∈ Θ(n^3).
By the definition of Theta, f(n) ∈ Θ(n^3) if there exist c1, c2 and n0
such that c1 n^3 ≤ 23n^3 − 10n^2 log n + 7n + 6 ≤ c2 n^3 for all n ≥ n0.
We first prove that f(n) ∈ Ω(n^3) :

c1 n^3 ≤ 23n^3 − 10n^2 log n
c1 ≤ 23 − (10 log n)/n

where (10 log n)/n decreases as n increases : (10 log n)/n = 5 for n = 2,
3.75 for n = 8, and 2.5 for n = 16. Therefore, for c1 = 1 and n0 = 2,

c1 n^3 ≤ 23n^3 − 10n^2 log n + 7n + 6
Exercises with solutions
Prove that f(n) = 23n^3 − 10n^2 log n + 7n + 6 ∈ Θ(n^3).
Next we prove that f(n) ∈ O(n^3) :

23n^3 + 7n + 6 ≤ c2 n^3
23 + 7/n^2 + 6/n^3 ≤ c2

where for n0 = 2, 23 + 7/n^2 + 6/n^3 = 23 + 1.75 + 0.75 = 25.5.
Therefore, for n0 = 2 and c2 = 26, 23n^3 − 10n^2 log n + 7n + 6 ≤ 26n^3
Exercises with solutions I
1. Find the appropriate Ω relationship between the functions n3 and
3n3 − 2n2 + 2 and find the constants c and n0
Answer : 3n^3 − 2n^2 + 2 ∈ Ω(n^3). Looking for c and n0 such that
0 ≤ c n^3 ≤ 3n^3 − 2n^2 + 2. Choose c = 1, a positive constant less
than the multiplicative constant of the fastest growing term of
3n^3 − 2n^2 + 2 ; then n0 = 1 works, since
3n^3 − 2n^2 + 2 − n^3 = 2n^3 − 2n^2 + 2 ≥ 0 for all n ≥ 1.
2. Consider the following iterative procedure :
for (i = 0; i < n; i++) {
    for (j = 0; j < 2*n; j++)
        sum = sum + A[i] * A[j];
    for (j = 0; j < n*n; j++)
        sum = sum + A[i] + A[j];
}
2.1 Give a polynomial expression describing the computing time of this
procedure.
Answer : f (n) = n(2n + n2 ) = 2n2 + n3
Exercises with solutions II
2.2 Bound above the asymptotic time complexity of this code using the
”Big-Oh” notation. Prove your result.
Answer : Dominant term is n3 . Defining the constant c for n3 as
the sum of the constants in the polynomial expression, then
2n^2 + n^3 ≤ 3n^3 for c = 3 and all values of n ≥ n0 = 1, therefore
f(n) ∈ O(n^3)
2.3 Give a lower bound on the asymptotic time complexity of this code
using the “Ω” notation. Prove your result. Then argue, based on
your two previous results, about an exact bound for f
Answer : For the lower bound, since the two terms in f (n) are
positive, choosing c = 1, we have n3 ≤ 2n2 + n3 for all n ≥ n0 = 1,
therefore f (n) ∈ Ω(n3 ) ⇒ f (n) ∈ Θ(n3 ), given that f (n) ∈ O(n3 )
as well
Exercises with solutions III
3. To illustrate how the asymptotic notation can be used to rank the
efficiency of algorithms, use the relation ⊂ and = to put the
orders of the following functions into a sequence.
n^2, 1, n^{3/2}, 2^n, log n, n^n, 3^n, n, n^3, n log n, √n, log log n, n!

Answer : 1 < log log n < log n < √n < n < n log n < n^{3/2} < n^2 <
n^3 < 2^n < 3^n < n! < n^n
n!, (n + 1)!, 2^n, 2^{n+1}, 2^{2n}, n^n, n^{√n}, n^{log n}

Answer : n^{log n} < n^{√n} < 2^n = 2^{n+1} < 2^{2n} < n! < (n + 1)! < n^n
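To get an empirical feel for these rankings, the small C program below (our addition, not part of the slides) tabulates a few of the functions from question 10 for some sample values of n ; the asymptotic ordering is already visible for moderate n. Compile with -lm for the math library.

#include <stdio.h>
#include <math.h>

int main(void) {
    int ns[] = {4, 16, 64, 256};
    printf("%8s %10s %10s %12s %12s %14s %14s\n",
           "n", "log2 n", "sqrt n", "n log2 n", "n^2", "n^3", "2^n");
    for (int k = 0; k < 4; k++) {
        double n = ns[k];
        printf("%8.0f %10.2f %10.2f %12.2f %12.0f %14.0f %14.3e\n",
               n, log2(n), sqrt(n), n * log2(n), n * n, n * n * n, pow(2.0, n));
    }
    return 0;
}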