Chapter 3
Efficiency Analysis of Algorithms
Time and space efficiency
- Time efficiency: measures the time the algorithm needs to finish (e.g. is it fast or slow?).
- Space efficiency: measures the resources the algorithm needs (e.g. how much memory it uses).
Size of input affects the algorithm
Larger inputs make an algorithm take longer (e.g. an algorithm might analyse 2KB of data much faster than 2GB of data).
Note that how the input size is measured varies depending on the problem and its type:
Arrays: the input size is the number of elements.
Polynomial equations: e.g. x^2 + 4x + 2. The input size is the degree; in this case it is 2.
Matrices: given a 4x3 matrix, the input size is the number of elements, in this case 4x3 = 12.
Graphs (e.g. trees): the input size is the number of vertices (nodes) and edges.
How to measure the runtime of an algorithm?
Wall-clock time depends on the hardware. The same algorithm might take:
- a small low-budget IoT device (e.g. a Raspberry Pi): 2 minutes to finish.
- a normal computer: 1 minute to finish.
- a super high-performance computer: 10 seconds to finish.
Therefore, we instead measure the time by counting how many times the algorithm executes a certain instruction (the basic operation).
For example:
The basic operation for searching is comparison.
The basic operations for polynomial algorithms are multiplication and addition.
Of course, this only gives an estimate of the time the algorithm requires to finish.
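As a sketch of this counting idea, the snippet below evaluates the polynomial x^2 + 4x + 2 term by term while tallying the basic operations (multiplications and additions). The `eval_poly` helper is hypothetical, written only to illustrate the counting; it is not from the slides.

```python
# Count basic operations (multiplications and additions) while
# evaluating a polynomial naively, term by term.

def eval_poly(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + coeffs[2]*x^2 + ...
    Returns (value, multiplications, additions)."""
    mults = adds = 0
    value = 0
    for power, c in enumerate(coeffs):
        term = c
        for _ in range(power):   # c * x * x * ... (power times)
            term *= x
            mults += 1
        value += term
        adds += 1
    return value, mults, adds

# x^2 + 4x + 2 at x = 3: value 23, using 3 multiplications and 3 additions.
value, mults, adds = eval_poly([2, 4, 1], 3)
```

The counts depend only on the degree (the input size), not on the machine the code runs on, which is exactly why counting basic operations is a portable measure.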
Example: measure the runtime of algorithms
Fun 1:
    a ← 1
    a ← 2*a + 2
    b ← a*b + 3 + b
Fun 2:
    arr ← [1, 2, 3]
    for i ← 0 to 2:
        if (arr[i] = 3)
            return true
    return false
To unify the determination of the basic operation for this course, the basic operation is:
- The most used operation in the algorithm (you can choose multiplication and addition together).
- The one executed in the innermost loop.
- The one that shapes the algorithm (the algorithm cannot run without it), e.g. comparison in a searching algorithm, multiplication in a factorial algorithm.
Quiz: determine the basic operation in the following examples
Fun 1:
    for i ← 0 to n:
        sum ← i * 2 / 2 + 5
Fun 2:
    for i ← 0 to n:
        sum ← i * 2
        for j ← 0 to n:
            if (i >= 0)
                sum ← i + 2 * 2 + 5
Answer:
Fun 1: multiplication (because it is the operation the problem is based on).
Fun 2: comparison (because it is the operation executed in the innermost loop).
Worst-case, best-case, and average-case efficiencies
Consider searching for a key in an array of n elements. If we consider that the basic operation in this case is the comparison, then:
• Best case: the key is the first element in the array. The number of operations is just 1.
• Worst case: the key is at the end of the array or not in the array at all (n operations).
• Average case: the key is somewhere in between, and the count depends on whether the key is in the array or not.
  ◦ Assumption #1: the key is in the array (somewhere in the middle) → (n+1)/2 comparisons on average.
  ◦ Assumption #2: the key is not in the array → similar to the worst case → n comparisons.
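The three cases above can be demonstrated with a sequential search instrumented to count its comparisons. This is an illustrative sketch (the `sequential_search` helper is hypothetical, not from the slides):

```python
# Sequential search that also returns the number of comparisons
# (the basic operation) it performed.

def sequential_search(arr, key):
    """Return (index or None, number_of_comparisons)."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1          # one basic operation per element examined
        if value == key:
            return i, comparisons
    return None, comparisons

arr = [7, 3, 9, 1, 5]
# Best case: key is the first element -> 1 comparison.
# Worst case: key is last or absent  -> n = 5 comparisons.
```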
Order of growth
We can denote the algorithm's operation count by f(n), where n is the input size.
Example: given f(n) = 2n + 1 and g(n) = n^2 + 4, find the number of basic operations if the input size is 5 (i.e. n = 5).
f(5) = 2(5) + 1 = 10 + 1 = 11 operations
g(5) = (5)^2 + 4 = 25 + 4 = 29 operations
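A quick runnable check of the two counts, which also shows how the gap between a linear and a quadratic function widens as n grows (a sketch using the f and g defined in the example):

```python
# Operation-count functions from the example above.

def f(n):
    return 2 * n + 1      # linear growth

def g(n):
    return n ** 2 + 4     # quadratic growth

# At n = 5 the counts are close (11 vs 29), but at n = 1000
# the quadratic function is already about 500 times larger.
print(f(5), g(5))
print(f(1000), g(1000))
```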
Order of growth
Consider the following table of (approximate) operation counts for common growth functions, given n as the input size:

n        log2(n)   n        n*log2(n)   n^2      n^3      2^n        n!
10       3.3       10       3.3*10^1    10^2     10^3     10^3       3.6*10^6
10^2     6.6       10^2     6.6*10^2    10^4     10^6     1.3*10^30  9.3*10^157
10^3     10        10^3     1.0*10^4    10^6     10^9
10^4     13        10^4     1.3*10^5    10^8     10^12
10^5     17        10^5     1.7*10^6    10^10    10^15
10^6     20        10^6     2.0*10^7    10^12    10^18

The table is based on table 2.1 from [1] (please refer to the reference slide).
Order of growth
We notice that the logarithm grows slowly regardless of its base.
Going from n to n^2 and n^3, the values grow much faster: the exponent doubles when squaring and triples when cubing.
The exponential function 2^n eventually exceeds even the cubic function.
The factorial grows faster still (it appears in some recursive problems).
Therefore, an algorithm that requires a large number of operations can only be used with small input sizes; otherwise, the program will take too long to finish.
Asymptotic notations and basic efficiency classes
“Big-Oh notation describes an upper bound. In other words, big-Oh notation states a claim
about the greatest amount of some resource (usually time) that is required by an algorithm
for some class of inputs of size n (typically the worst such input, the average of all possible
inputs, or the best such input)” [3].
For T(n) a non-negatively valued function, T(n) is in set O(g(n)) if there exist two positive
constants c and n0 such that T(n)<=cg(n) for all n>n0.
Examples:
1. g(n) = n, then O(g(n)) = O(n).
2. g(n) = 2n+5n, then O(g(n)) = O(2n+5n) = O(7n) = O(n).
Find the big-Oh (upper bound) of the following algorithm: T(n) = 2n + 3.
Answer: By applying the definition, we need to find g(n), c and n0 such that T(n) <= c*g(n) for all n > n0.
Assume g(n) = n, c = 5 and n0 = 1. Then 2n + 3 <= 5n for all n >= 1, so T(n) ∈ O(n).
Note: if you try other functions for g(n), you will find that T(n) ∈ O(n), T(n) ∈ O(n^2) and T(n) ∈ O(n^3).
In summary, anything of order equal to or higher than n works. So which one do we pick? Always pick the closest (tightest) one; in this case it is O(n).
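A small empirical sanity check (a sketch, not a proof) that the constants c = 5 and n0 = 1 chosen above really bound T(n) = 2n + 3:

```python
# Check T(n) <= c * g(n) for a large range of n, with g(n) = n, c = 5.

def T(n):
    return 2 * n + 3

c = 5
# Holds for every n >= 1 (equality at n = 1, strict above it).
ok = all(T(n) <= c * n for n in range(1, 10_000))
print(ok)
```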
“The lower bound for an algorithm is denoted by the symbol Ω, pronounced “big-Omega” or just “Omega”” [3].
For T(n) a non-negatively valued function, T(n) is in set Ω(g(n)) if there exist two positive
constants c and n0 such that T(n)≥cg(n) for all n>n0.
Where T(n) is the running time and g(n) is the number of instructions for a given input size n.
Examples:
1. g(n) = n, then Ω(g(n)) = Ω(n).
2. g(n) = 2n+5n, then Ω(g(n)) = Ω(2n+5n) = Ω(7n) = Ω(n).
Find the Omega (lower bound) of the following algorithm: T(n) = 2n^2 + 3.
Answer: By applying the definition, we need to find g(n), c and n0 such that T(n) >= c*g(n) for all n > n0.
Assume g(n) = n + 1, c = 1 and n0 = 1. Then 2n^2 + 3 >= n + 1 for all n >= 1, so T(n) ∈ Ω(n).
Note: if you try other functions for g(n), you will find that T(n) ∈ Ω(n), T(n) ∈ Ω(n^2) and T(n) ∈ Ω(log(n)).
In summary, anything of order equal to or lower than n^2 works. So which one do we pick? Always pick the closest (tightest) one; in this case it is Ω(n^2).
“The definitions for big-Oh and Ω give us ways to describe the upper bound for an
algorithm (if we can find an equation for the maximum number of instructions of a
particular class of inputs of size n) and the lower bound for an algorithm (if we can find
an equation for the minimum cost for a particular class of inputs of size n). When the
upper and lower bounds are the same within a constant factor, we indicate this by using
θ (big-Theta) notation”[3].
Find the big-Theta (lower bound and upper bound) of the following algorithm:
T(n) = 2n + 3.
Answer: By applying the definition, we need to find a g(n) such that T(n) ∈ O(g(n)) and T(n) ∈ Ω(g(n)) for the given T(n).
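The argument can be worked out explicitly from the two definitions (the constants below are one possible choice, not the only one):

```latex
% Show T(n) = 2n + 3 is in \Theta(n).
% Upper bound: with c_2 = 5 and n_0 = 1, 2n + 3 \le 5n, so T(n) \in O(n).
% Lower bound: with c_1 = 2 and n_0 = 1, 2n + 3 \ge 2n, so T(n) \in \Omega(n).
\[
  2n \;\le\; 2n + 3 \;\le\; 5n \quad \text{for all } n \ge 1
  \;\Longrightarrow\; T(n) \in \Theta(n).
\]
```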
For given algorithms f(n) and g(n), the limit of f(n)/g(n) as n → ∞ can be used to compare their orders of growth as follows:
- if the limit is 0, f grows slower than g;
- if the limit is a positive constant c, f and g have the same order of growth;
- if the limit is ∞, f grows faster than g.
When the limit has the indeterminate form ∞/∞, L'Hopital's rule applies: differentiate the numerator and the denominator. In the example we apply the differentiation twice; you can apply it as many times as you need until you find the answer.
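The two applications of L'Hopital's rule can be illustrated by comparing f(n) = n^2 with g(n) = 2^n (a standard example chosen here for illustration, not necessarily the one on the original slide):

```latex
\[
\lim_{n\to\infty} \frac{n^2}{2^n}
  = \lim_{n\to\infty} \frac{2n}{2^n \ln 2}
  = \lim_{n\to\infty} \frac{2}{2^n (\ln 2)^2}
  = 0,
\]
% so n^2 grows slower than 2^n.
```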
Mathematical analysis of non-recursive algorithms
Example 1: Given the following algorithm to find the index of the minimum value in an array, find the complexity, assuming the basic operation is comparison.
def find_min(arr[0 .. n-1]):
    minIndex ← 0
    for i ← 0 to n-1 do
        if (arr[i] < arr[minIndex])
            minIndex ← i
    end for
    return minIndex
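A runnable version of the pseudocode above, instrumented to count the basic operation (the comparison `arr[i] < arr[minIndex]`):

```python
# find_min with a comparison counter: the comparison runs exactly once
# per loop iteration, so the count equals n, i.e. Theta(n).

def find_min(arr):
    min_index = 0
    comparisons = 0
    for i in range(len(arr)):      # i = 0 .. n-1, as in the pseudocode
        comparisons += 1
        if arr[i] < arr[min_index]:
            min_index = i
    return min_index, comparisons

index, count = find_min([4, 2, 9, 1, 7])
# index of the minimum is 3; count is n = 5.
```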
Example 2: Given the following algorithm, find the time complexity, assuming the basic operations are addition and multiplication.
def example_2(arr[0 .. n-1]):
    sum ← 0
    average ← 0
    n ← length(arr)
    for i ← 0 to n-1 do
        arr[i] ← arr[i] * 2 + 3
    end for
    average ← sum / n
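The loop body performs one multiplication and one addition per iteration, so the count of basic operations is:

```latex
\[
C(n) \;=\; \sum_{i=0}^{n-1} 2 \;=\; 2n \;\in\; \Theta(n).
\]
```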
Example 3: Given the following algorithm, find the time complexity, assuming addition is the basic operation.
def example_3(arr[0 .. n-1][0 .. n-1]):
    sum ← 0
    for i ← 0 to n-1 do
        for j ← 0 to n-1 do
            sum ← sum + arr[i][j]
        end for
    end for
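Assuming example_3 sums every element of the n×n array with a fully nested double loop (an assumption, since the loop body is not shown in full on the slide), the addition inside the inner loop executes:

```latex
\[
C(n) \;=\; \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} 1
      \;=\; \sum_{i=0}^{n-1} n
      \;=\; n^2 \;\in\; \Theta(n^2).
\]
```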
Example 4: Given the following algorithm, find the time complexity, assuming addition is the basic operation.
def example_4(arr[0 .. n-1][0 .. n-1]):
    sum ← 0
    for i ← 0 to n-1 do
        for j ← i+1 to n-1 do
            sum ← sum + arr[i][j]
        end for
    end for
Answer:
∑_{i=0}^{n-1} ∑_{j=i+1}^{n-1} 1
  = ∑_{i=0}^{n-1} ((n-1) - (i+1) + 1)
  = ∑_{i=0}^{n-1} (n - 1 - i)
  = ∑_{i=0}^{n-1} (n-1) - ∑_{i=0}^{n-1} i
  = (n-1)(n) - (n-1)(n)/2
  = (n^2 - n)/2 ∈ Θ(n^2)
Mathematical analysis of recursive algorithm
Example 1: Consider the following code for calculating the factorial, find the time
complexity assuming the basic operation is multiplication.
def factorial(n):
    if n == 0
        return 1
    else
        return n * factorial(n-1)
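A runnable version of the factorial pseudocode above, instrumented to count the basic operation (multiplication):

```python
# Factorial with a multiplication counter: one multiplication per
# recursive call, so the count equals n, matching T(n) = n below.

def factorial(n):
    """Return (n!, number_of_multiplications)."""
    if n == 0:
        return 1, 0
    value, mults = factorial(n - 1)
    return n * value, mults + 1

value, mults = factorial(5)
# 5! = 120, computed with 5 multiplications.
```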
Example 1 – Solution
Step 1: We convert the pseudocode into an equation to understand it more easily:
    F(n) = 1               if n = 0
    F(n) = n * F(n-1)      if n > 0
Step 2: We construct a recurrence relation T(n) that counts the basic operations (one multiplication per recursive call):
    T(n) = T(n-1) + 1, with T(0) = 0
Step 3: We apply backward substitution and look for a pattern:
    k = 1: T(n) = T(n-1) + 1
    k = 2:      = T(n-2) + 1 + 1
    k = 3:      = T(n-3) + 1 + 1 + 1
    k = 4:      = T(n-4) + 1 + 1 + 1 + 1
    ...
    for k:      = T(n-k) + k
Letting k = n gives T(n) = T(0) + n = n, so T(n) ∈ Θ(n).
Example 2: Consider the following Fibonacci function; find the time complexity, assuming the basic operations are multiplication and addition.
Example 2 – Solution
Step 1: We convert the pseudocode into an equation:
    F(n) = n                  if n = 0 or n = 1
    F(n) = F(n-1) + F(n-2)    if n > 1
Step 2: We construct a recurrence relation as a function T(n) that calculate the number of basic operations.
T(n) = T(n-1) + T(n-2)+1 (+1 because of the addition operation in the middle)
T(n) = T(n-1) + T(n-2)+1 (assume T(n-2) ≃ T(n-1), Note this only works for this problem. please refer to [6]). Then,
T(n) = 2T(n-1) +1
T(0) = T(1) = 0
Step 3: We apply either backward substitution or forward substitution (only one of them is enough, no need
for both).
Let n = 0: T(0) = 0
Let n = 1: T(1) = 0
Let n = 2: T(2) = 2T(1) + 1 = 2*0 + 1 = 1  = 2^1 - 1
Let n = 3: T(3) = 2T(2) + 1 = 2*1 + 1 = 3  = 2^2 - 1
Let n = 4: T(4) = 2T(3) + 1 = 2*3 + 1 = 7  = 2^3 - 1
Let n = 5: T(5) = 2T(4) + 1 = 2*7 + 1 = 15 = 2^4 - 1
...
T(n) = 2^(n-1) - 1 ∈ Θ(2^n)
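The closed form can be checked programmatically against the simplified recurrence T(n) = 2T(n-1) + 1 with T(0) = T(1) = 0 (a sketch; this verifies the simplified recurrence from Step 2, not the exact Fibonacci operation count):

```python
# Evaluate the recurrence directly and compare it with the
# closed form 2^(n-1) - 1 derived above.

def T(n):
    if n <= 1:
        return 0
    return 2 * T(n - 1) + 1

for n in range(2, 15):
    assert T(n) == 2 ** (n - 1) - 1
```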
References and acknowledgment
- The slide content is based on Anany Levitin, Introduction to The Design & Analysis of Algorithms, 2nd edition, unless stated or cited otherwise.
[1] Anany Levitin, Introduction to The Design & Analysis of Algorithms, 2nd edition.
[2] Jeff Erickson, Algorithms, http://jeffe.cs.illinois.edu/teaching/algorithms/.
[3] Lower Bounds and Θ Notation, OpenDSA Data Structures and Algorithms Modules Collection, OpenDSA Project, https://opendsa-server.cs.vt.edu/ODSA/Books/Everything/html/AnalLower.html
[4] Exact source uncertain; it was a YouTube video, most likely from one of these recommended channels:
- Computer_IT_ICT Engineering Department: LJIET, https://www.youtube.com/playlist?list=PLO14KY9mobCIsDALaKmjGTKacneOxgq26
- Jenny's Lectures CS IT, Data Structures and Algorithms, https://www.youtube.com/playlist?list=PLdo5W4Nhv31bbKJzrsKfMpo_grxuLl8LU
- Abdul Bari, Algorithms, https://www.youtube.com/playlist?list=PLDN4rrl48XKpZkf03iYFl-O29szjTrs_O
[5] Abdul Bari, 1.8.1 Asymptotic Notations Big Oh - Omega - Theta #1, https://youtu.be/A03oI0znAoc?feature=shared
[6] Emily Marshall, Computational Complexity of Fibonacci Sequence, https://www.baeldung.com/cs/fibonacci-computational-complexity
The OpenDSA Project is under the MIT Licence (see: https://opendsa-server.cs.vt.edu/ODSA/Books/Everything/html/index.html and https://opensource.org/license/mit/)