
Chapter 2
Efficiency Analysis of Algorithms
Time and space efficiency

When we design an algorithm we consider:

- Time efficiency: a measure of the time the algorithm needs to finish (e.g. is it fast or slow?).

- Space efficiency: a measure of the resources the algorithm needs (e.g. how much memory it uses).
Size of input affects the algorithm

Larger input affects the algorithm (e.g. an algorithm might analyse 2 KB of data faster than 2 GB of data).

Note that what counts as "size" varies with the problem and its type:

Arrays: the input size is the number of elements.

Polynomial equations: e.g. x^2 + 4x + 2. The input size is the degree; in this case it is 2.

Matrices: given a 4x3 matrix, the input size is the number of elements, in this case 4 x 3 = 12.

Graphs (e.g. trees): the input size is the number of vertices (nodes) and edges.
How to measure the runtime of an algorithm?

One possible option is to measure the running time in seconds.

Now, consider a simple program running on:

a small low-budget IoT device (Raspberry Pi): 2 minutes to finish.

a normal computer: 1 minute to finish.

a super high-performance computer: 10 seconds to finish.

How to measure the runtime of an algorithm?

Many factors can affect the program, including:

- The programming language used.
- The computer that runs the program.
- The algorithm that the program implements.

Therefore, we can instead measure the time by counting how many times the algorithm executes a certain instruction (its basic operation).

For example:
The basic operation for searching is comparison.
The basic operations for polynomial algorithms are multiplication and addition.
Of course, this only gives an estimate of the time the algorithm requires to finish.
Example: measure the runtime of algorithms

How many basic operations are in the following algorithms?

Fun1:
a ← 1
a ← 2*a + 2
b ← a*b + 3 + b

Fun2:
arr ← [1,2,3]
for i ← 0 to 2:
    if (arr[i] = 3)
        return true
return false

For Fun1, the basic operations are multiplication and addition; there are 5 basic operations. For Fun2, the basic operation is comparison; there are 3 basic operations.
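The count for Fun2 can be checked with a short runnable sketch (the `fun2` name and the returned comparison count are illustrative additions, not part of the original pseudocode):

```python
def fun2(arr=(1, 2, 3)):
    """Linear scan for the value 3; the basic operation is comparison."""
    comparisons = 0
    for x in arr:
        comparisons += 1          # one comparison per loop iteration
        if x == 3:
            return True, comparisons
    return False, comparisons

# The scan finds 3 at the last position, after 3 comparisons.
```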
How to determine the basic operation?

It can be difficult to determine the basic operation, because the choice depends on the objective you want to achieve. Therefore, you can choose either:

- Only one operation (the most used one in the algorithm)

If the problem is searching for an element, the basic operation is comparison.
If the problem involves arithmetic, the basic operations are all the arithmetic operations.

- Some of the operations, or all of them

It is possible to choose the basic operation based on what you want to optimize. For example, if your algorithm does comparisons and arithmetic operations, and you want to speed it up by reducing both, you might choose both of them as basic operations, or just some of them.
How to determine the basic operation? (This course only)

To unify the choice of basic operation for this course, the basic operation is:

The most used operation in the algorithm (you can choose multiplication and addition together).

The one executed in the innermost loop.

The one that shapes the algorithm (the algorithm cannot run without it), e.g. comparison in a searching algorithm, multiplication in a factorial algorithm.

Let's take some examples:


Quiz: determine the basic operation in the following examples

Example A:
for i ← 0 to n:
    sum ← i * 2 / 2 + 5

Answer: multiplication, division and addition (the problem needs the calculation to work).

Example B:
for i ← 0 to n:
    sum ← i * 2
    for j ← 0 to n:
        if (i >= 0)
            sum ← i + 2 * 2 + 5

Answer: multiplication and addition (because they are in the innermost loop).

Example C:
def fact(n):
    if (n == 0)
        return 1
    else
        return n * fact(n-1)

Answer: multiplication (because it is what the problem is based on).

Example D:
n ← 0
while (n < 10):
    if (n % 2 == 0)
        max ← n
    n ← n + 1

Answer: comparison (because the problem is about searching for even numbers).
Worst-case, best-case, and average-case efficiencies

Q1: Measure the time of this algorithm.

Description: search for a key in a list.
Input: two parameters: list, an array of size n, and a key value to be searched for.
Output: the index of the key in the list, or -1 if the key is not in the list.

def SequentialSearch(list[0 … n-1], key):
    for index ← 0 to n-1 do
        if (list[index] = key)
            return index
    end for
    return -1
Worst-case, best-case, and average-case efficiencies

Answer:
If we consider that the basic operation in this case is the comparison, then:
• Best case: the key is the first element in the array. The number of operations is just 1.
• Worst case: the key is at the end of the array, or not in the array at all (n operations).
• Average case: the element is somewhere in between, and the count depends on whether the element is in the array or not.
  ◦ Assumption #1: the key is in the array (somewhere in the middle) → (n+1)/2 comparisons.
  ◦ Assumption #2: the key is not in the array → similar to the worst case → n comparisons.
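The best and worst cases can be observed by instrumenting the search with a comparison counter (a sketch; the function name and the extra returned count are my own additions):

```python
def sequential_search(lst, key):
    """Return (index of key or -1, number of comparisons performed)."""
    comparisons = 0
    for index in range(len(lst)):
        comparisons += 1                  # the basic operation
        if lst[index] == key:
            return index, comparisons
    return -1, comparisons

# Best case: key is first -> 1 comparison; worst case: key absent -> n comparisons.
```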
Order of growth

We can denote the algorithm by f(n), where n is the input size.

For example, given two algorithms f(n) and g(n) as follows:

f(n) = 2n + 1, g(n) = n^2 + 4

Find the number of basic operations if the input size is 5 (i.e. n = 5):

f(5) = 2(5) + 1 = 10 + 1 = 11 operations

g(5) = (5)^2 + 4 = 25 + 4 = 29 operations
Order of growth

Consider the table of algorithm runtime operation counts for a given input size n. [Table omitted; it lists, for each input size, the number of operations of the common efficiency classes such as log n, n, n log n, n^2, n^3, 2^n and n!.]

The table is based on Table 2.1 from [1] (please refer to the reference slide).
Order of growth

Now, consider the graph of the number of runtime operations against the input size, based on the table above.

We notice that log grows slowly regardless of the base.

Going from n to its square and cube, growth accelerates sharply: the exponent doubles when squaring and triples when cubing.

The exponential function exceeds even the cubic function.

Factorials are far higher still (relevant for recursion problems).

Therefore, an algorithm that needs more time (requires a large number of operations) can only be used with small input sizes; otherwise, the program will take a very long time to finish.
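A small sketch (my own reconstruction, not the exact table from [1]) makes the ordering of these classes concrete:

```python
import math

def growth_row(n):
    """Operation counts of the common efficiency classes for input size n."""
    return {
        "log n": math.log2(n),
        "n": n,
        "n log n": n * math.log2(n),
        "n^2": n ** 2,
        "n^3": n ** 3,
        "2^n": 2 ** n,
        "n!": math.factorial(n),
    }

row = growth_row(20)
# For n = 20 the values already span from about 4.3 (log n) to ~2.4e18 (n!).
```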
Asymptotic notations and basic efficiency classes

There are three main asymptotic notations:

- Big-Oh (O)
- Big-Omega (Ω)
- Big-Theta (Θ)

Content from and based on [3].


Big-Oh

“Big-Oh notation describes an upper bound. In other words, big-Oh notation states a claim about the greatest amount of some resource (usually time) that is required by an algorithm for some class of inputs of size n (typically the worst such input, the average of all possible inputs, or the best such input)” [3].

For T(n) a non-negatively valued function, T(n) is in the set O(g(n)) if there exist two positive constants c and n0 such that T(n) ≤ c·g(n) for all n > n0.

Examples:
1. g(n) = n, then O(g(n)) = O(n).
2. g(n) = 2n + 5n, then O(g(n)) = O(2n + 5n) = O(7n) = O(n).

Content from and based on [3].
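The definition can be sanity-checked numerically (this is only a spot check over a finite range, not a proof; the constants c = 7 and n0 = 1 are my own choice):

```python
# Check that T(n) = 2n + 5n is in O(n): exhibit c, n0 with T(n) <= c*g(n) for n > n0.
def T(n):
    return 2 * n + 5 * n   # = 7n

def g(n):
    return n

c, n0 = 7, 1
assert all(T(n) <= c * g(n) for n in range(n0 + 1, 10_000))
```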


Big-Oh - Example

Find the Big-Oh (upper bound) of the following algorithm: T(n) = 2n + 3.

Answer: By applying the definition, we need to find g(n), c and n0 such that T(n) ≤ c·g(n) for all n > n0.
Assume g(n) = 5n^2 + 1 and n = 1, c = 1. Then:

- T(n) ≤ c·g(n)
- 2n + 3 ≤ 5n^2 + 1
- 2(1) + 3 ≤ 5(1)^2 + 1
- 5 ≤ 6 (true). Then T(n) ∈ O(g(n)) for all n ≥ 1.
- To find the upper bound of g(n): O(g(n)) = O(5n^2 + 1) = O(n^2). So T(n) ∈ O(n^2).

Note: if you try other candidates for g(n), you will find that T(n) ∈ O(n), T(n) ∈ O(n^2) and T(n) ∈ O(n^3) — in summary, anything of order equal to or higher than O(n). So which one to pick? Always pick the closest (tightest) one; in this case it should be O(n).

Content from and based on [5].


Big-Oh - Example

Since Big-Oh is the upper bound, the bound must be of equal or higher order:
1 < log(n) < n < n log(n) < n^2 < n^3 < n!

n ∈ O(n^2)
n(n+1) ∈ O(n^3)
(n^2)/2 ∈ O(n^2)

But not:

n^3 ∉ O(n^2)
n^4(n+1) ∉ O(n^2)
2^n ∉ O(n^2)

Content from and based on [3,5].


Big-Omega

“Omega, or ‘big-Omega’, is the lower bound for an algorithm and is denoted by the symbol Ω, pronounced ‘big-Omega’ or just ‘Omega’” [3].

For T(n) a non-negatively valued function, T(n) is in the set Ω(g(n)) if there exist two positive constants c and n0 such that T(n) ≥ c·g(n) for all n > n0.
Here T(n) is the running time and g(n) is the number of instructions for a given input size n.

Examples:
1. g(n) = n, then Ω(g(n)) = Ω(n).
2. g(n) = 2n + 5n, then Ω(g(n)) = Ω(2n + 5n) = Ω(7n) = Ω(n).

Content from and based on [3].


Big-Omega - Example

Find the Big-Omega (lower bound) of the following algorithm: T(n) = 2n^2 + 3.

Answer: By applying the definition, we need to find g(n), c and n0 such that T(n) ≥ c·g(n) for all n > n0.
Assume g(n) = n + 1 and n = 1, c = 1. Then:

- T(n) ≥ c·g(n)
- 2n^2 + 3 ≥ n + 1
- 2(1)^2 + 3 ≥ 1 + 1
- 5 ≥ 2 (true). Then T(n) ∈ Ω(g(n)) for all n ≥ 1.
- To find the lower bound of g(n): Ω(g(n)) = Ω(n + 1) = Ω(n). So T(n) ∈ Ω(n).

Note: if you try other candidates for g(n), you will find that T(n) ∈ Ω(n), T(n) ∈ Ω(n^2) and T(n) ∈ Ω(log(n)) — in summary, anything of order equal to or lower than Ω(n^2). So which one to pick? Always pick the closest (tightest) one; in this case it should be Ω(n^2).

Content from and based on [5].


Big-Omega

Since Big-Omega is the lower bound, the bound must be of equal or lower order:
1 < log(n) < n < n log(n) < n^2 < n^3 < n!

n^2 ∈ Ω(n)
n^2(n+1) ∈ Ω(n^2)
(n^2)/2 ∈ Ω(n^2)

But not:

n^2 ∉ Ω(n^3)
n^2(n+1) ∉ Ω(n^4)
(n^2)/2 ∉ Ω(2^n)

Content from and based on [3,5].


Big-Theta

“The definitions for big-Oh and Ω give us ways to describe the upper bound for an algorithm (if we can find an equation for the maximum number of instructions of a particular class of inputs of size n) and the lower bound for an algorithm (if we can find an equation for the minimum cost for a particular class of inputs of size n). When the upper and lower bounds are the same within a constant factor, we indicate this by using Θ (big-Theta) notation” [3].

An algorithm is said to be Θ(h(n)) if it is in O(h(n)) and in Ω(h(n)). Note that Θ is symmetric: if f(n) is Θ(g(n)), then g(n) is Θ(f(n)).

Content from and based on [3].


Big-Theta - Example

Find the Big-Theta (lower and upper bound) of the following algorithm:
T(n) = 2n + 3.

Answer: By applying the definition, we need to show that O(T(n)) = Ω(T(n)) for the given T(n):

O(T(n)) = O(2n + 3) = O(n)

Ω(T(n)) = Ω(2n + 3) = Ω(n)

Therefore T(n) ∈ Θ(n).

Content from and based on [5].


Big-Theta

Since Big-Theta is a tight bound, it must be of the same order:
1 < log(n) < n < n log(n) < n^2 < n^3 < n!

n^2 ∈ Θ(n^2)
n^2(n+1) ∈ Θ(n^3)
(n^2)/2 ∈ Θ(n^2)

But not:

n^2 ∉ Θ(n^3)
n^2(n+1) ∉ Θ(n^4)
(n^2)/2 ∉ Θ(2^n)

Content from and based on [3,5].


Asymptotic notations and basic efficiency classes - Summary

Let f(n) = 2n^2 + 5. Then:

- Big-Oh (O): f(n) ∈ O(n^2), O(n^3), O(2^n), but not O(log(n)) or O(n).
- Big-Omega (Ω): f(n) ∈ Ω(log(n)), Ω(n), Ω(n^2), but not Ω(n^3) or Ω(2^n).
- Big-Theta (Θ): f(n) ∈ Θ(n^2).

Content from and based on [3].


Comparing algorithms using orders of growth

For given algorithms f(n) and g(n), the limit can be used to compare their orders of growth as follows:

lim_{n→∞} f(n)/g(n) =
  0 : f(n) has a lower growth order than g(n) → f(n) = O(g(n))
  a positive number : f(n) has the same growth order as g(n) → f(n) = Θ(g(n))
  ∞ : f(n) has a higher growth order than g(n) → f(n) = Ω(g(n))

Find the function that has the higher order of growth:

1. f(n) = 5n^2, g(n) = n^2.
2. f(n) = n - 1, g(n) = n(n+2).
3. f(n) = 2^n, g(n) = n^2.

Content from and based on [1, 3].
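A rough numerical version of this test evaluates the ratio f(n)/g(n) at one large n (a heuristic sketch only; the thresholds and the helper name `compare_growth` are my own assumptions, and the real test is the limit itself):

```python
def compare_growth(f, g, n=10**7):
    """Classify f against g by sampling the ratio f(n)/g(n) at a large n."""
    ratio = f(n) / g(n)
    if ratio < 1e-6:
        return "f = O(g)"        # ratio heading to 0
    if ratio > 1e6:
        return "f = Omega(g)"    # ratio heading to infinity
    return "f = Theta(g)"        # ratio near a positive constant

print(compare_growth(lambda n: 5 * n**2, lambda n: n**2))      # same order
print(compare_growth(lambda n: n - 1, lambda n: n * (n + 2)))  # lower order
```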


Comparing algorithms using orders of growth – Examples 1 and 2

Find the function that has the higher order of growth.

1. f(n) = 5n^2, g(n) = n^2:

lim_{n→∞} f(n)/g(n) = lim_{n→∞} 5n^2 / n^2 = lim_{n→∞} 5 = 5, so f(n) = Θ(g(n)).

2. f(n) = n - 1, g(n) = n(n+2):

lim_{n→∞} f(n)/g(n) = lim_{n→∞} (n-1) / (n(n+2)) = lim_{n→∞} (n-1) / (n^2+2n) = lim_{n→∞} 1 / (2n+2) = 1/∞ = 0, so f(n) = O(g(n)).

Take the derivative of the numerator and denominator when you cannot simplify further; this is L'Hôpital's rule.
Comparing algorithms using orders of growth – Example 3

Find the function that has the higher order of growth.

3. f(n) = 2^n, g(n) = n^2:

lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2^n / n^2 = lim_{n→∞} 2^n ln(2) / (2n) = lim_{n→∞} 2^n (ln(2))^2 / 2 = ∞, so f(n) = Ω(g(n)).

We applied the derivative (L'Hôpital's rule) twice here; you can apply it as many times as needed until you can evaluate the limit.
Mathematical analysis of non-recursive algorithm

Example 1: Given the following algorithm to find the index of the minimum value in an array, find the complexity, assuming the basic operation is comparison.

def find_min(arr[0..n-1]):
    minIndex ← 0
    for i ← 0 to n-1 do
        if (arr[i] < arr[minIndex])
            minIndex ← i
    end for
    return minIndex

Answer:

Σ_{i=0}^{n-1} 1 = (n-1) - 0 + 1 = n ∈ Θ(n)
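The count can be verified with a runnable version of the pseudocode, instrumented with a comparison counter (the extra returned count is my own addition):

```python
def find_min(arr):
    """Index of the minimum element; also counts comparisons (the basic operation)."""
    min_index = 0
    comparisons = 0
    for i in range(len(arr)):        # mirrors the pseudocode's 0..n-1 loop
        comparisons += 1
        if arr[i] < arr[min_index]:
            min_index = i
    return min_index, comparisons

# n elements -> n comparisons, i.e. Theta(n).
```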
Mathematical analysis of non-recursive algorithm

Example 2: Given the following algorithm, find the time complexity, assuming the basic operations are addition and multiplication.

def example_2(arr[0 .. n-1]):
    sum ← 0
    average ← 0
    n ← length(arr)
    for i ← 0 to n-1 do
        arr[i] ← arr[i] * 2 + 3
    end for
    for j ← 0 to n-1 do
        sum ← sum + arr[j]
    end for
    average ← sum / n

Answer:

Σ_{i=0}^{n-1} 2 + Σ_{j=0}^{n-1} 1 = 2 Σ_{i=0}^{n-1} 1 + Σ_{j=0}^{n-1} 1 = 2n + n = 3n ∈ Θ(n)
Mathematical analysis of non-recursive algorithm

Example 3: Given the following algorithm, find the time complexity, assuming addition is the basic operation.

def example_3(arr[0 .. n-1][0 .. n-1]):
    sum ← 0
    for i ← 0 to n-1 do
        for j ← 0 to n-1 do
            sum ← sum + arr[i][j]
        end for
    end for

Answer:

Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} 1 = Σ_{i=0}^{n-1} n = n Σ_{i=0}^{n-1} 1 = n · n = n^2 ∈ Θ(n^2)
Mathematical analysis of non-recursive algorithm

Example 4: Given the following algorithm, find the time complexity, assuming addition is the basic operation.

def example_4(arr[0 .. n-1][0 .. n-1]):
    sum ← 0
    for i ← 0 to n-1 do
        for j ← i+1 to n-1 do
            sum ← sum + arr[i][j]
        end for
    end for

Answer:

Σ_{i=0}^{n-1} Σ_{j=i+1}^{n-1} 1 = Σ_{i=0}^{n-1} ((n-1) - (i+1) + 1) = Σ_{i=0}^{n-1} (n - i - 1) = Σ_{i=0}^{n-1} (n-1) - Σ_{i=0}^{n-1} i

= (n-1) Σ_{i=0}^{n-1} 1 - Σ_{i=0}^{n-1} i = (n-1)n - (n-1)n/2 = (n^2 - n)/2 ∈ Θ(n^2)
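The closed form for the triangular double loop can be checked empirically (the helper name is my own; it counts the inner-loop iterations of example_4):

```python
def count_example_4_ops(n):
    """Count the additions performed by example_4's triangular double loop."""
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1               # one basic operation per inner iteration
    return count

# Equals (n^2 - n) / 2 for every n, i.e. Theta(n^2).
```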
Mathematical analysis of recursive algorithm

Example 1: Consider the following code for calculating the factorial, find the time
complexity assuming the basic operation is multiplication.

def factorial(n):
if n == 0
return 1
else
return n*factorial(n-1)
Mathematical analysis of recursive algorithm

Example 1 – Solution
Step 1: Convert the pseudocode into an equation to understand it more easily.

factorial(n) = 1, if n = 0
factorial(n) = n × factorial(n-1), if n > 0

Step 2: Using the equation, construct a recurrence relation T(n) that counts the basic operations:

T(n) = 0, if n = 0
T(n) = T(n-1) + 1, if n > 0

Step 3: Apply either backward substitution or forward substitution (only one of them is enough; no need for both).
Mathematical analysis of recursive algorithm

Example 1 – Continued: solution via backward substitution.

Step 3: We expand the recurrence step by step and count the basic operations.
k=1: T(n) = T(n-1) + 1
k=2:      = T(n-2) + 1 + 1
k=3:      = T(n-3) + 1 + 1 + 1
k=4:      = T(n-4) + 1 + 1 + 1 + 1
...
for k:    = T(n-k) + k

Let's set n - k = 0 (the base case), so k = n. Then:

T(n) = T(n-k) + k
T(n) = T(n-n) + n
T(n) = 0 + n, so T(n) ∈ Θ(n).
Mathematical analysis of recursive algorithm

Example 1 – Continued: solution via forward substitution.

Step 3: We apply different inputs and count the basic operations.

Let n=0: T(0) = 0
Let n=1: T(1) = T(0) + 1 = 0 + 1 = 1
Let n=2: T(2) = T(1) + 1 = 1 + 1 = 2
Let n=3: T(3) = T(2) + 1 = 2 + 1 = 3
Let n=4: T(4) = T(3) + 1 = 3 + 1 = 4
Then T(n) = 1 + 1 + 1 + … (n times) = n
T(n) ∈ Θ(n)
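The recurrence can be checked by running the factorial and counting multiplications directly (the function name and the extra returned count are my own additions):

```python
def factorial_count(n):
    """Return (n!, number of multiplications), following the recurrence T(n)."""
    if n == 0:
        return 1, 0                     # base case: T(0) = 0
    value, ops = factorial_count(n - 1)
    return n * value, ops + 1           # T(n) = T(n-1) + 1

# The multiplication count equals n, confirming Theta(n).
```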
Mathematical analysis of recursive algorithm

Example 2: Consider the following Fibonacci function, find the time complexity
assuming the basic operation is multiplication and addition.

fib(n) = fib(n-1) + fib(n-2)


fib(0) = 0 and fib(1) = 1
Mathematical analysis of recursive algorithm

Example 2 – Solution
Step 1: Convert the pseudocode into an equation.

f(n) = n, if n = 0 or n = 1
f(n) = f(n-1) + f(n-2), if n > 1

Step 2: Construct a recurrence relation T(n) that counts the number of basic operations.
T(n) = T(n-1) + T(n-2) + 1 (the +1 accounts for the addition operation in the middle).
Assume T(n-2) ≃ T(n-1); note this simplification only works for this problem, please refer to [6]. Then:

T(n) = 2T(n-1) + 1
T(0) = T(1) = 0

Step 3: Apply either backward substitution or forward substitution (only one of them is enough; no need for both).

Answer from [4,6].


Mathematical analysis of recursive algorithm

Example 2 – Continued: solution via backward substitution.

Step 3: We expand the recurrence step by step and count the basic operations.

k=1: T(n) = 2T(n-1) + 1
k=2:      = 2(2T(n-2) + 1) + 1 = 2^2 T(n-2) + 2 + 1
k=3:      = 2^2 (2T(n-3) + 1) + 2 + 1 = 2^3 T(n-3) + 2^2 + 2 + 1
k=4:      = 2^3 (2T(n-4) + 1) + 2^2 + 2 + 1 = 2^4 T(n-4) + 2^3 + 2^2 + 2 + 1
...
for k:    = 2^k T(n-k) + Σ_{i=0}^{k-1} 2^i = 2^k T(n-k) + 2^k - 1
(using the geometric series Σ_{i=start}^{end} C^i = (C^{end+1} - C^{start}) / (C - 1))

Let's set n - k = 1 (the base case), so k = n - 1 (we choose n - k = 1 to reach T(1), the first case to stop at):

T(n) = 2^{n-1} T(1) + 2^{n-1} - 1
T(n) = 2^{n-1} · 0 + 2^{n-1} - 1 = 2^{n-1} - 1, so T(n) ∈ Θ(2^n).

Answer from [4,6].


Mathematical analysis of recursive algorithm

Example 2 – Continued: solution via forward substitution.

Step 3: We apply different inputs and count the basic operations.

Let n = 0: T(0) = 0
Let n = 1: T(1) = 0
Let n = 2: T(2) = 2T(1) + 1 = 2·0 + 1 = 1 = 2^1 - 1
Let n = 3: T(3) = 2T(2) + 1 = 2·1 + 1 = 3 = 2^2 - 1
Let n = 4: T(4) = 2T(3) + 1 = 2·3 + 1 = 7 = 2^3 - 1
Let n = 5: T(5) = 2T(4) + 1 = 2·7 + 1 = 15 = 2^4 - 1
...
T(n) = 2^{n-1} - 1 ∈ Θ(2^n)
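The closed form for the simplified recurrence can be checked by evaluating it directly (a sketch under the slide's assumption T(n-2) ≃ T(n-1)):

```python
def T(n):
    """Simplified Fibonacci cost recurrence: T(n) = 2T(n-1) + 1, T(0) = T(1) = 0."""
    if n <= 1:
        return 0
    return 2 * T(n - 1) + 1

# Agrees with the closed form 2^(n-1) - 1 for all n >= 1.
```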
References and acknowledgment

- The slide content is based on Anany Levitin, Introduction to The Design & Analysis of Algorithms, 2nd edition, unless stated or cited otherwise.
[1] Anany Levitin, Introduction to The Design & Analysis of Algorithms, 2nd edition.
[2] Jeff Erickson, Algorithms, http://jeffe.cs.illinois.edu/teaching/algorithms/.
[3] Lower Bounds and Θ Notation, OpenDSA Data Structures and Algorithms Modules Collection, OpenDSA Project, https://opendsa-server.cs.vt.edu/ODSA/Books/Everything/html/AnalLower.html
[4] Not sure of the correct reference, but it was from YouTube. I believe it was one of these channels; I highly recommend checking them:
- Computer_IT_ICT Engineering Department: LJIET, https://www.youtube.com/playlist?list=PLO14KY9mobCIsDALaKmjGTKacneOxgq26
- Jenny's Lectures CS IT, Data Structures and Algorithms, https://www.youtube.com/playlist?list=PLdo5W4Nhv31bbKJzrsKfMpo_grxuLl8LU
- Abdul Bari, Algorithms, https://www.youtube.com/playlist?list=PLDN4rrl48XKpZkf03iYFl-O29szjTrs_O
[5] Abdul Bari, 1.8.1 Asymptotic Notations Big Oh - Omega - Theta #1, https://youtu.be/A03oI0znAoc?feature=shared
[6] Emily Marshall, Computational Complexity of Fibonacci Sequence, https://www.baeldung.com/cs/fibonacci-computational-complexity
The OpenDSA Project is under the MIT Licence (see: https://opendsa-server.cs.vt.edu/ODSA/Books/Everything/html/index.html and https://opensource.org/license/mit/)
