Chapter One
Algorithm Analysis

Introduction
• A program is written in order to solve a problem.
• A solution to a problem actually consists of two things:
  – A way to organize the data, and
  – A sequence of steps to solve the problem.
Properties of an algorithm
– Finiteness: The algorithm must complete after a finite number of steps.
– Each step must have a uniquely defined preceding and succeeding step; the first step (start step) and the last step (halt step) must be clearly noted.
– It must not depend on any particular programming language.
Cont...d
– Completeness: It must solve the problem completely.
Algorithm Analysis
• Algorithm analysis refers to the process of determining how much computing time and storage an algorithm requires.
• One has to be able to choose the best algorithm for the problem at hand.
• The two main resources considered are:
  – Running Time
  – Memory Usage
Cont.
Empirical (Experimental) Analysis
⚫ Write a program implementing the algorithm.
⚫ Use a method like System.currentTimeMillis() or clock() to get an accurate measure of the actual running time (a minimal timing sketch follows below).
• The measured running time depends on the operating environment, among other factors.
• Accordingly, we can analyze an algorithm according to the number of operations required, rather than according to an absolute amount of time involved.
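To make the empirical approach concrete, here is a minimal C++ sketch (not from the original notes); the function being timed, countUp, is just an illustrative placeholder for whatever algorithm is under test, and clock() is used as mentioned above.

  #include <ctime>
  #include <iostream>
  using namespace std;

  // Illustrative placeholder for the algorithm under test.
  long long countUp(long long n) {
      long long sum = 0;
      for (long long i = 0; i < n; i++)
          sum += i;
      return sum;
  }

  int main() {
      long long n = 100000000;                 // input size
      clock_t start = clock();                 // CPU time before the run
      long long result = countUp(n);
      clock_t stop = clock();                  // CPU time after the run
      double seconds = double(stop - start) / CLOCKS_PER_SEC;
      cout << "result = " << result << ", time = " << seconds << " s\n";
      return 0;
  }

Running this for several input sizes and comparing the measured times is what the experimental approach amounts to; the limitations listed next explain why this alone is not enough.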
Cont...d
• Limitations of Experiments
– It is necessary to implement the algorithm, which may be
difficult
– Results may not be indicative of the running time on other inputs
not included in the experiment.
– In order to compare two algorithms, the same hardware and
software environments must be used
Cont...d
Theoretical Analysis
– Uses a high-level description of the algorithm instead
of an implementation
– Characterizes running time as a function of the input size, n.
Algorithm Complexity
• Complexity analysis is the systematic study of the cost of a computation.
• The following growth-rate functions are the ones most commonly used in algorithm analysis (a small sketch printing these growth rates follows the list below):
⚫ Constant      1        f(n) = c
⚫ Logarithmic   log n    f(n) = log_b n
⚫ Linear        n        f(n) = n
⚫ N-Log-N       n log n  f(n) = n log_b n
⚫ Quadratic     n^2      f(n) = n^2
⚫ Cubic         n^3      f(n) = n^3
⚫ Exponential   2^n      f(n) = 2^n
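As an illustration (not part of the original notes), the short C++ sketch below prints the value of each of these functions for a few sample input sizes; the choice of base-2 logarithms and of the sample sizes is mine.

  #include <cmath>
  #include <iostream>
  using namespace std;

  int main() {
      // Print each common growth-rate function for a few sample input sizes.
      cout << "n\tlog n\tn\tn log n\tn^2\tn^3\t2^n\n";
      for (double n : {4.0, 16.0, 64.0, 256.0}) {
          cout << n << '\t'
               << log2(n) << '\t'
               << n << '\t'
               << n * log2(n) << '\t'
               << n * n << '\t'
               << n * n * n << '\t'
               << pow(2.0, n) << '\n';   // grows far faster than the others
      }
      return 0;
  }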
Cont...d
• The growth rate of a function is not affected by
  ⚫ constant factors, or
  ⚫ lower-order terms.
• Example: 10^5 n^2 + 10^8 n is a quadratic function.
Notation: O(1) (Constant)
Execution time: independent of the input size, n.
  Example: finding the first element of a list.
Code example:
  int counter = 1;
  cout << "Algorithm " << counter << "\n";

Notation: O(log_x n) (Logarithmic)
Execution time: problem complexity increases slowly as the problem size increases; squaring the problem size only doubles the time.
  Example: throwing away half of the input at each step.
Code example:
  int counter = 1;
  int i = 0;
  for (i = x; i <= n; i = i * x) {   // x must be > 1
      cout << "Algorithm " << counter << "\n";
      counter++;
  }
Notation: O(n log_x n) (Log-linear)
Execution time: problem complexity increases a little faster than n.
Code example:
  int counter = 1;
  for (int i = x; i <= n; i = i * x) {   // x must be > 1
      int j = 1;
      while (j <= n) {
          cout << "Algorithm " << counter << "\n";
          counter++;
          j++;
      }
  }
Notation: O(n^3) (Cubic)
Execution time: cubic increase; practical only for small input size, n.
Code example:
  int counter = 1;
  for (int i = 1; i <= n; i++) {
      for (int j = 1; j <= n; j++) {
          for (int k = 1; k <= n; k++) {
              cout << "Algorithm " << counter << "\n";
              counter++;
          }
      }
  }

Notation: O(2^n) (Exponential)
Execution time: increases too rapidly to be practical; problem complexity grows very fast and is generally unmanageable for any meaningful n.
  Example: finding all subsets of a set of n elements.
Code example:
  int counter = 1;
  int i = 1;
  int j = 1;
  while (i <= n) {
      j = j * 2;      // j becomes 2^n after the loop
      i++;
  }
  for (i = 1; i <= j; i++) {
      cout << "Algorithm " << counter << "\n";
      counter++;
  }
Analysis Rules
1. We assume an arbitrary time unit.
2. Execution of one basic operation (an assignment, a single I/O operation, a single arithmetic or Boolean operation, a comparison, or a return) takes one time unit.
3. Running time of a selection statement (if, switch) is the time for the condition evaluation + the maximum of the running times for the individual clauses in the selection (a small counting example follows below).
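The example below is my own illustration of rule 3 (it is not one of the worked examples in these notes); the function name absValue is chosen only for illustration, and the unit counts follow rule 2.

  // Counting time units for a selection statement (rule 3).
  int absValue(int x)
  {
      int result;          // declaration only: no time unit counted
      if (x < 0)           // 1 unit for the condition evaluation
          result = -x;     // "then" clause: 1 assignment + 1 negation = 2 units
      else
          result = x;      // "else" clause: 1 assignment = 1 unit
      return result;       // 1 unit for the return statement
  }
  // T(n) = 1 (condition) + max(2, 1) (clauses) + 1 (return) = 4 = O(1)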
Cont...d
4. Running time for a loop is equal to the running time for the statements inside the loop * number of iterations.
   For nested loops, analyze inside out; always assume that the loop executes the maximum number of iterations possible (see the nested-loop sketch below).
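The following nested-loop sketch is my own illustration of rule 4 (it does not appear in the original notes); the function name pairs is chosen only for illustration, and the loop is analyzed inside out with the same unit counting used in the worked examples that follow.

  // Counting time units for a nested loop (rule 4), analyzed inside out.
  int pairs(int n)
  {
      int count = 0;                       // 1 unit for the assignment
      for (int i = 0; i < n; i++)          // 1 assignment + (n+1) tests + n increments
          for (int j = 0; j < n; j++)      // per outer iteration: 1 + (n+1) + n units
              count = count + 1;           // 2 units (assignment + addition), n*n times
      return count;                        // 1 unit for the return
  }
  // T(n) = 1 + (1 + (n+1) + n) + n*(1 + (n+1) + n) + 2*n*n + 1
  //      = 4n^2 + 4n + 4 = O(n^2)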
Cont...d
5. Running time of a function call is 1 for setup + the time for
any parameter calculations + the time required for the execution
of the function body.
Example
int count()
{
    int k = 0;
    int n;
    cout << "enter the number";
    cin >> n;
    for (int i = 0; i < n; i++)
        k = k + 1;
    return 0;
}

Time Units to Compute
• 1 for the assignment statement: int k = 0
• 1 for the output statement.
• 1 for the input statement.
• In the for loop:
  – 1 assignment, n+1 tests, and n increments.
  – n loops of 2 units for an assignment and an addition.
• 1 for the return statement.
• T(n) = 1+1+1+(1+n+1+n)+2n+1 = 4n+6 = O(n)
Cont...d
int total(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + 1;
    return sum;
}

Time Units to Compute
• 1 for the assignment statement: int sum = 0
• In the for loop:
  – 1 assignment, n+1 tests, and n increments.
  – n loops of 2 units for an assignment and an addition.
• 1 for the return statement.
• T(n) = 1+(1+n+1+n)+2n+1 = 4n+4 = O(n)
Cont...d
int total(int n)
{
    int sum = 0;
    for (int i = 1; i < n; i++)
        sum = sum + 1;
    return sum;
}

Time Units to Compute
• 1 for the assignment statement: int sum = 0
• In the for loop:
  – 1 assignment, n tests, and n-1 increments.
  – n-1 loops of 2 units for an assignment and an addition.
• 1 for the return statement.
• T(n) = 1+(1+n+n-1)+2(n-1)+1 = 4n = O(n)
Cont...d
void func()
{
    int x = 0;
    int i = 0;
    int j = 1;
    int n;
    cout << "enter an integer";
    cin >> n;
    while (i < n) {
        x++;
        i++;
    }
    while (j < n) {
        j++;
    }
}

Time Units to Compute
• 1 for the first assignment statement: x = 0
• 1 for the second assignment statement: i = 0
• 1 for the third assignment statement: j = 1
• 1 for the output statement.
• 1 for the input statement.
• In the first while loop:
  – n+1 tests
  – n loops of 2 units for the two increment (addition) operations
• In the second while loop:
  – n tests
  – n-1 increments
• T(n) = 5n+5 = O(n)
Cont...d
int sum(int n)
{
    int partialsum = 0;
    for (int i = 1; i <= n; i++)
        partialsum = partialsum + (i * i * i);
    return partialsum;
}

Time Units to Compute
• 1 for the assignment.
• 1 assignment, n+1 tests, and n increments.
• n loops of 4 units for an assignment, an addition, and two multiplications.
• 1 for the return statement.
• T(n) = 1+(1+n+1+n)+4n+1 = 6n+4 = O(n)
Loop Complexity
Problem 1
sum = 0;
for (i = 0; i < n; i++) {
    sum++;                 // the body executes n times, so the running time is O(n)
}

Problem 2
sum = 0;
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum++;             // the inner body executes n*n times, so the running time is O(n^2)
    }
}
Cont...d
Problem 4
sum = 0;
for (i = 0; i < n; i++) {
    for (j = 0; j < i * i; j++) {
        for (k = 0; k < j; k++) {
            sum++;
        }
    }
}
The running time of the operation sum++ is a constant. The innermost loop runs at most n*n times, the middle loop also runs at most n*n times, and the outer loop runs n times; thus the overall complexity would be O(n^5).
Cont...d
Problem 5
sum = 0;
for (i = 0; i < n; i++) {
    for (j = 0; j < i * i; j++) {
        if (j % i == 0) {
            for (k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
Compare this problem with Problem 4. Obviously the innermost loop will run fewer times than in Problem 4, and a refined analysis is possible; however, we are usually content to neglect the if statement and consider its running time to be O(n^2), yielding an overall running time of O(n^5).
Cont...d
Problem 6
sum = 0;
for (i = 0; i < n; i++) {
    sum++;
}
val = 1;
for (j = 0; j < n * n; j++) {
    val = val * myFunction(j);   // myFunction is a placeholder; its cost is given to be O(n log n) in the accompanying analysis
}
This problem contains a function call in the second loop body whose complexity is not a constant; it depends on n and is given to be O(n log n). The second loop runs n*n times, so its complexity would be O(n^2 * n log n) = O(n^3 log n). The first loop has a smaller running time, O(n); we take the maximum and conclude that the overall running time would be O(n^3 log n).
Formal Approach to Analysis
• Analysis can be simplified by using some formal approach in bookkeeping.
For Loops: Formally
for (i = 1; i <= N; i++) {
    sum = sum + i;
}

∑_{i=1}^{N} 1 = N

Suppose we count the number of additions that are done. There is 1 addition per iteration of the loop, hence N additions in total.
Nested Loops: Formally
Nested for loops translate into multiple summations, one for
each for loop.
for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= M; j++) {
        sum = sum + i + j;
    }
}

∑_{i=1}^{N} ∑_{j=1}^{M} 2 = ∑_{i=1}^{N} 2M = 2MN

Count the number of additions: the body performs 2 additions per iteration, and the outer summation is for the outer for loop.
Consecutive Statements: Formally
Running times of separate blocks of code are added.
for (int i = 1; i <= N; i++) {
    sum = sum + i;
}
for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= N; j++) {
        sum = sum + i + j;
    }
}

∑_{i=1}^{N} 1 + ∑_{i=1}^{N} ∑_{j=1}^{N} 2 = N + 2N^2 = O(N^2)

T_best(n), T_avg(n), and T_worst(n) are used to denote the best, the average and the worst case running time of the algorithm respectively.
Cont.
• Worst Case (Tworst): the amount of time the algorithm takes on the worst possible set of inputs.
• Best Case (Tbest): the amount of time the algorithm takes on the smallest possible set of inputs.
• We are interested in the worst-case time, since it provides a bound for all inputs.
Asymptotic Analysis
• Asymptotic analysis is concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.
• There are five notations used to describe a running time function. These are:
  – Big-Oh Notation (O)
  – Big-Omega Notation (Ω)
  – Theta Notation (Θ)
  – Little-o Notation (o)
  – Little-Omega Notation (ω)
The Big-Oh Notation
• Big-Oh notation is a way of comparing algorithms and is used for computing the complexity of algorithms, i.e., the amount of time that it takes for a computer program to run.
• It is only concerned with what happens for very large values of n. Therefore, only the largest term in the expression (function) is needed.
Cont.
• For example, if the number of operations in an algorithm is n^2 - n, then n is insignificant compared to n^2 for large values of n. Hence the n term is ignored. Of course, for small values of n, it may be important. However, Big-Oh is mainly concerned with large values of n.
• Formal Definition: f(n) is O(g(n)) if there exist c, k ∊ ℛ+ such that for all n ≥ k, f(n) ≤ c·g(n) (a small numerical check of this definition follows below).
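As a quick illustration of the formal definition (my own sketch, not part of the notes), the program below numerically spot-checks that f(n) = 3n^2 + 4n + 1 stays below c·g(n) = 8n^2 for every tested n ≥ k = 1; the constants c = 8 and k = 1 match the worked example given a few slides later.

  #include <iostream>
  using namespace std;

  int main() {
      // Check f(n) <= c*g(n) for all n >= k, with f(n) = 3n^2 + 4n + 1,
      // g(n) = n^2, c = 8, and k = 1 (a finite spot-check, not a proof).
      const long long c = 8, k = 1;
      bool holds = true;
      for (long long n = k; n <= 1000000; n++) {
          long long f = 3 * n * n + 4 * n + 1;
          long long g = n * n;
          if (f > c * g) {           // the inequality f(n) <= c*g(n) fails
              holds = false;
              cout << "fails at n = " << n << "\n";
              break;
          }
      }
      if (holds)
          cout << "f(n) <= 8*n^2 held for all tested n >= 1\n";
      return 0;
  }

A finite check like this is only a sanity test; the actual proof is the algebraic argument in the worked example later in this chapter.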
Cont.
• Demonstrating that a function f(n) is big-O of a function g(n) requires that we find specific constants c and k for which the inequality holds (and show that the inequality does in fact hold).
• Big-O expresses an upper bound on the growth rate of a function, for sufficiently large values of n.
• An upper bound is the best algorithmic solution that has been found for a problem.
Cont...d
• Examples: The following points are facts that can be used for Big-Oh problems:
  – 1 ≤ n for all n ≥ 1
  – n ≤ n^2 for all n ≥ 1
  – 2^n ≤ n! for all n ≥ 4
  – log_2 n ≤ n for all n ≥ 1
Cont...d
• Example: f(n) = 3n^2 + 4n + 1. Show that f(n) = O(n^2).
  – 4n ≤ 4n^2 for all n ≥ 1, and 1 ≤ n^2 for all n ≥ 1.
  – Therefore, 3n^2 + 4n + 1 ≤ 3n^2 + 4n^2 + n^2 = 8n^2 for all n ≥ 1.
  – So, taking c = 8 and k = 1, we have f(n) ≤ c·n^2 for all n ≥ k; hence f(n) is O(n^2).
Cont.
• Here is the growth rate of some important functions. This uses logarithms to base 2, but these are simply proportional to logarithms in other bases.
Cont.
•It shows the importance of good algorithm design, because an
asymptotically slow algorithm is beaten in the long run by an asymptotically
faster algorithm, even if the constant factor for the asymptotically faster
algorithm is worse
Cont.
•It is considered poor taste, in general, to say “ f(n) ≤ O(g(n)),”
since the big-Oh already denotes the “less-than or-equal-to” concept.
• Although common, it is not fully correct to say “ f(n) = O(g(n))”
since there is no way to make sense of the statement “O(g(n)) = f(n).”
•It is completely wrong to say “ f(n) ≥ O(g(n))” or “ f(n) > O(g(n))”
since the g(n) in the big-Oh expresses an upper bound on f(n).
• It is best to say, “ f(n) is O(g(n)).”
• It is also correct to say, “ f(n) ∈ O(g(n)),”
for the big-Oh notation is, technically speaking, denoting a whole
collection of functions.
Big-O Theorems
• For all the following theorems, assume that f(n) is a function of n and that k is an arbitrary constant.
Theorem 1: k is O(1).
Theorem 2: A polynomial is O(the term containing the highest power of n).
• Each of the following functions is big-O of its successors: k, log_b n, n, n log_b n, n^2, n to higher powers, 2^n, 3^n, larger constants to the n-th power, n!, n^n.
• Example: f(n) = 3n log_b n + 4 log_b n + 2 is O(n log_b n), and n^2 is O(2^n).
Big-Omega Notation
• Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound.
• Formal Definition: A function f(n) is Ω(g(n)) if there exist constants c and k ∊ ℛ+ such that f(n) ≥ c·g(n) for all n ≥ k.
• f(n) is Ω(g(n)) means that f(n) is greater than or equal to some constant multiple of g(n) for all values of n greater than or equal to some k.
• In simple terms, f(n) is Ω(g(n)) means that the growth rate of f(n) is greater than or equal to that of g(n).
• Example: If f(n) = n^2, then f(n) is Ω(n).
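As a worked step of my own (not in the original notes), the definition can be checked directly for this example; the choice of c = 1 and k = 1 is mine.

% Showing f(n) = n^2 is Omega(n): find c, k > 0 with n^2 >= c*n for all n >= k.
\[
  n^2 \;=\; n \cdot n \;\ge\; 1 \cdot n \qquad \text{for all } n \ge 1,
\]
so with $c = 1$ and $k = 1$ we have $f(n) \ge c \cdot g(n)$ for all $n \ge k$, hence $n^2$ is $\Omega(n)$.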
Theta Notation
• A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be sandwiched between c1·g(n) and c2·g(n), for sufficiently large values of n.
• Formal Definition: A function f(n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)). In other words, there exist constants c1, c2, and k > 0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ k.
• If f(n) is Θ(g(n)), then g(n) is an asymptotically tight bound for f(n).
• In simple terms, f(n) is Θ(g(n)) means that f(n) and g(n) have the same rate of growth.
Cont.
• Example: If f(n) = 2n^2, then:
  f(n) is O(n^4)
  f(n) is O(n^3)
  f(n) is O(n^2)
All these are technically correct, but the last expression is the best and tightest one. Since 2n^2 and n^2 have the same growth rate, it can be written as f(n) is Θ(n^2).
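To spell out why 2n^2 is Θ(n^2) (my own worked step; the constants are my choice, and any c1 ≤ 2 ≤ c2 would work), the sandwich inequality from the formal definition can be written explicitly:

% Sandwiching f(n) = 2n^2 between constant multiples of g(n) = n^2.
\[
  \underbrace{1 \cdot n^2}_{c_1 g(n)} \;\le\; \underbrace{2n^2}_{f(n)} \;\le\; \underbrace{3 \cdot n^2}_{c_2 g(n)}
  \qquad \text{for all } n \ge k = 1,
\]
so with $c_1 = 1$, $c_2 = 3$, and $k = 1$ the definition of $\Theta(n^2)$ is satisfied.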
Little-oh Notation
• Big-Oh notation may or may not be asymptotically tight; for example, 2n^2 is O(n^2), which is tight, but 2n^2 is also O(n^3), which is not tight.
• f(n) is o(g(n)) means that for all c > 0 there exists some k > 0 such that f(n) < c·g(n) for all n ≥ k. Informally, f(n) is o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity.
• Example: f(n) = 3n + 4 is o(n^2).
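Again as my own worked step (the bound 7n and the choice of k below are mine), the little-o definition can be verified for this example:

% Showing 3n + 4 is o(n^2): for every c > 0 there must exist k with 3n + 4 < c*n^2 for all n >= k.
\[
  3n + 4 \;\le\; 7n \quad \text{for all } n \ge 1,
  \qquad\text{and}\qquad
  7n \;<\; c\,n^2 \quad \text{whenever } n > \tfrac{7}{c},
\]
so for any $c > 0$, choosing $k = \max\!\left(1, \tfrac{7}{c} + 1\right)$ gives $3n + 4 < c\,n^2$ for all $n \ge k$.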
Relational Properties of Asymptotic Notations
• Transitivity
  • if f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) is Θ(h(n)),
  • if f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) is O(h(n)),
  • if f(n) is Ω(g(n)) and g(n) is Ω(h(n)) then f(n) is Ω(h(n)),
  • if f(n) is o(g(n)) and g(n) is o(h(n)) then f(n) is o(h(n)), and
  • if f(n) is ω(g(n)) and g(n) is ω(h(n)) then f(n) is ω(h(n)).
Cont.
• Symmetry
  • f(n) is Θ(g(n)) if and only if g(n) is Θ(f(n)).
• Transpose symmetry
  • f(n) is O(g(n)) if and only if g(n) is Ω(f(n)),
  • f(n) is o(g(n)) if and only if g(n) is ω(f(n)).
• Reflexivity
  • f(n) is Θ(f(n)),
  • f(n) is O(f(n)),
  • f(n) is Ω(f(n)).
Ex.
1. Suppose we have hardware capable of executing 10^6 instructions per second. How long would it take to execute an algorithm whose complexity function is T(N) = 2N^2, on an input of size N = 10^8?
2. An algorithm takes 0.5 ms for input size 100. How long will it take for input size 500 if the running time is the following (assume low-order terms are negligible)?
   a. linear
   b. O(N log N)
   c. quadratic
   d. cubic
Cont.
3.An algorithm takes 0.5 ms for input size 100. How large a
problem can be solved in 1 min if the running time is the
following (assume low-order terms are negligible)?
a. linear
b. O(N logN)
c. quadratic
d. cubic
Cont.
4. Order the following functions by growth rate: N, √N, N^1.5, N^2, N log N, N log log N, N log^2 N, N log(N^2), 2/N, 2^N, 2^(N/2), 37, N^2 log N, N^3. Indicate which functions grow at the same rate.
5. Prove that for any constant k, log^k N = o(N).
6. Suppose T1(N) = O(f (N)) and T2(N) = O(f (N)). Which of the
following are true?
a. T1(N) + T2(N) = O(f (N))
b. T1(N) − T2(N) = o(f (N))
c. T1(N)/T2(N) = O(1)
d. T1(N) = O(T2(N))
Cont.
7. For each of the following six program fragments:
a. Give an analysis of the running time (Big-Oh will do).
b. Implement the code in the language of your choice,
and give the running time for several values
of N.
c. Compare your analysis with the actual running times.