2 Algorithm Analysis
CHAPTER 2
ANALYSIS OF ALGORITHMS
MAIN CONTENTS
1 Algorithms
2 Types of algorithms
3 Time complexity
4 Space complexity
5 Asymptotic notations
1. Algorithms
Algorithms (cont.)
The result should be produced after completion of the job.
Algorithms - Criteria
• Definiteness: Each step must be clear, unambiguous, and precisely defined.
• Finiteness: The algorithm must terminate after a finite number of steps, and each step must finish in a finite amount of time.
• Effectiveness: Each step of the algorithm must be feasible; every instruction must be elementary.
2. Types of algorithms
Types of algorithms:
Divide and conquer
Types of algorithms:
Greedy Method
Types of algorithms:
Backtracking
Types of algorithms:
Branch and Bound
Types of algorithms:
Dynamic programming
• Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming.
• The idea is simply to store the results of sub-problems so that we do not have to re-compute them when needed later.
• This simple optimization often reduces time complexities from exponential to polynomial.
Types of algorithms:
Deterministic or non-deterministic algorithms
Types of algorithms:
Serial, parallel, or distributed algorithms
• A sequential (or serial) algorithm is executed sequentially: once through, from start to finish, without any other processing executing concurrently.
• A parallel algorithm can execute several instructions simultaneously on different processing devices and then combine the individual outputs to produce the final result.
• Distributed algorithms are designed to run on multiple processors without tight centralized control.
Algorithm Development Life Cycle: 4 phases
1. Designing
2. Writing
3. Testing/Implement
4. Analyzing
Algorithm Development Life Cycle:
Designing phase
• Algorithm design refers to a method or process of solving a problem.
• In this phase, an appropriate algorithm design technique and data structure are chosen.
• Your design techniques are the algorithms you use.
Note: Multiple algorithms can solve the same problem, but not all of them solve it efficiently; a suitable algorithm design method should be chosen.
Example: Design an algorithm to calculate n!
One of the design techniques that can solve this problem is divide and conquer.
Algorithm Development Life Cycle:
Writing phase
Writing an algorithm for solving a problem offers these advantages:
• Promotes effective communication between team members
• Enables analysis of the problem at hand
• Acts as a blueprint for coding
• Assists in debugging
• Becomes part of the software documentation for future reference during the maintenance phase
The algorithm is written using an English-like language, pseudocode, or a flowchart.
Algorithm Development Life Cycle:
Writing phase
The steps of an algorithm can be classified as: Input, Initialization, Assignment, Decision, Repetition, and Output steps.
Algorithm Development Life Cycle:
Writing phase
Example: Algorithm of Linear search
Linear search starts from the beginning of the array until the
desired key is found or the end of the array is reached.
The steps of this algorithm are annotated as: Input, Initialization, Repetition, Sequential, Selection/Decision, and Output steps.
Algorithm Development Life Cycle:
Testing/Implement phase
Algorithm Development Life Cycle:
Analyzing phase
• Algorithms are analyzed to compute objective criteria for different input sizes before actual implementation.
• This lets us decide which algorithm is better, using performance measures such as time and space.
• Suppose M is an algorithm and n is the size of the input data. The complexity of the algorithm M is the function f(n) that gives the running time and/or storage space requirement of the algorithm in terms of the size n of the input data.
Complexity of the algorithm
Analysis of algorithms is divided into: Apriori Analysis (Theoretical) and Posteriori Analysis (Empirical).
Theoretical / Apriori Analysis
Empirical/Posteriori Analysis
Computational Complexity
3. Time complexity
The factors affecting the execution time include: programmer skills, compiler options, hardware characteristics, input size, the algorithm used, and so on.
3. Time complexity
The rules for computing running time:
1. Sequence: Add the times of the individual statements; asymptotically, the maximum is the one that counts.
2. Alternative structures: Time for testing the condition plus the maximum time taken by any of the alternative paths.
3. Loops: The execution time of a loop is at most the execution time of the statements of the body (including the condition tests) multiplied by the number of iterations.
4. Nested loops: Analyze them inside out.
3. Time complexity
The rules for computing running time:
5. Subprograms: Analyze them as separate algorithms
and substitute the time wherever necessary.
6. Recursive Subprograms: Generally, the running time
can be expressed as a recurrence relation. The solution
of the recurrence relation yields the expression for the
growth rate of execution time.
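For example (a standard illustration, not taken from the slides), a recursive factorial routine does a constant amount c of work per call, giving the recurrence

```latex
T(n) = T(n-1) + c, \qquad T(1) = c
\;\Longrightarrow\; T(n) = c\,n = O(n)
```

so the growth rate of its execution time is linear.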
Time taken by a program P means:
T (P) = compile time + run time
3. Time complexity
While measuring the time complexity of an algorithm, we concentrate on the frequency count of the key statements (the important statements).
3. Time complexity
The frequency count of all key statements:
1 for (i = 0; i < n; i++)
2 {
3     x = a[i] / 2;
4     a[i] = x + 1;
5 }
Analysis:
• Two key statements are performed, in lines 3 and 4.
• They are repeated n times (by the for loop).
• Time complexity: f(n) = 2n.
3. Time complexity
Time complexity is considered in three cases: Best-case, Average-case, and Worst-case.
4. Space complexity
The components of space needed by a program:
• Instruction space: Space needed to store the executable version of the program; it is fixed.
• Data space: Space needed to store all constants and variable values.
• Run-time stack space: Space needed to store the information required to resume suspended functions.
4. Space complexity
The total space needed by a program is the sum of these components.
5. Asymptotic Complexity
• For both measures, we are interested in the algorithm's asymptotic complexity.
• This asks: as n (the number of input items) goes to infinity, what happens to the algorithm's performance?
• To illustrate this, consider f(n) = n^2 + 100n + log10(n) + 1000. As the value of n increases, the importance of each term shifts, until for large n only the n^2 term is significant.
Big-O Notation
Definition: Let f(n) and g(n) be functions, where n ∈ Z+ is a positive integer.
We write f(n) = O(g(n)) (read as "f of n is big-oh of g of n") if and only if there exist c > 0 and n0 > 0 (c ∈ R+, n0 ∈ Z+) satisfying
0 ≤ f(n) ≤ c*g(n) for all n ≥ n0.
(Figure: the curve c*g(n) lies above f(n) for n ≥ n0, illustrating f(n) = O(g(n)).)
Big-O Notation (continued)
• Although the definition of big-O is correct, it lacks important information.
• While c and n0 exist, the definition does not tell us how to calculate them or what to do if multiple candidates exist (and they often do).
Example 1: Prove that 2n + 10 = O(n).
Solution: Find a value of c ∈ R+ and a value of n0 ∈ Z+ satisfying
0 ≤ 2n + 10 ≤ c*n for all n ≥ n0.
We have: 2n + 10 ≤ c*n ⇔ 10 ≤ (c-2)*n ⇔ n ≥ 10/(c-2) (constraint: c > 2).
Select c = 3 and n0 = 10, or we can select c = 4 and n0 = 5.
Conclusion: 2n + 10 = O(n).
Big-O Notation (continued)
Example 2: Prove that f(n) = n^2 + 2n + 1 is O(n^2).
Solution:
Find a value of c ∈ R+ and a value of n0 ∈ Z+ satisfying
0 ≤ n^2 + 2n + 1 ≤ c*n^2 for all n ≥ n0.
We have, for all n ≥ 1:
2n ≤ 2n^2
1 ≤ n^2
Thus:
n^2 + 2n + 1 ≤ n^2 + 2n^2 + n^2 = 4n^2, for all n ≥ 1.
We can choose c = 4 and n0 = 1.
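A quick sanity check on the chosen constants (our own remark): since n^2 + 2n + 1 = (n+1)^2, the bound with c = 4 is equivalent to

```latex
(n+1)^2 \le (2n)^2 = 4n^2 \quad \text{for all } n \ge 1
```

which holds because n + 1 ≤ 2n whenever n ≥ 1.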
Properties of Big-O Notation
Properties of Big-O Notation (continued)
Example of Big-O Notation
• Consider f(n) = n^2 + 100n + log10(n) + 1000. As the value of n increases, the importance of each term shifts.
Ω Notations
• Big-O only gives us an upper bound on a function.
• So if we ignore constant factors and let n get big enough, some function will never be bigger than some other function.
• This can give us too much freedom.
• For instance, selection sort is O(n^3), since n^2 is O(n^3) - but O(n^2) is a more meaningful upper bound.
• We need a lower bound, a function that always grows more slowly than f(n), and a tight bound, a function that grows at about the same rate as f(n).
Ω Notations (continued)
• Big-Ω is for lower bounds what big-O is for upper bounds.
Definition: Let f(n) and g(n) be functions, where n is a positive integer. We write f(n) = Ω(g(n)) if and only if g(n) = O(f(n)). We say "f of n is omega of g of n."
• Equivalently, f(n) = Ω(g(n)) if and only if there exist c > 0 and n0 > 0 satisfying f(n) ≥ c*g(n) for all n ≥ n0.
Ω Notations (continued)
(Figure: f(n) lies above c*g(n) for n ≥ n0, illustrating f(n) = Ω(g(n)).)
Ω Notations (continued)
Θ Notations
Finally, theta notation combines upper bounds with lower bounds to get a tight bound.
Definition: Let f(n) and g(n) be functions, where n is a positive integer. We write f(n) = Θ(g(n)) if and only if g(n) = O(f(n)) and g(n) = Ω(f(n)). We say "f of n is theta of g of n."
Equivalently, f(n) = Θ(g(n)) if and only if there exist c1 > 0, c2 > 0 and n0 > 0 satisfying c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.
Θ Notations
(Figure: f(n) lies between c1*g(n) and c2*g(n) for n ≥ n0, illustrating f(n) = Θ(g(n)).)
O, Ω and Θ Notations
Growth Functions of Algorithms
Examples of Complexities
• Since we examine algorithms in terms of their time and space complexity, we can classify them this way, too.
• This is illustrated in the next figure.
Fig. 2.4 Classes of algorithms and their execution times on a computer executing 1 million operations per second (1 sec = 10^6 μsec = 10^3 msec)
Examples of Complexities (continued)
"Polynomial" time vs. "Non-deterministic Polynomial" time
NP problems: NP, NP-Hard, NP-Complete
Summary
An algorithm is a finite sequence of instructions/steps, each of which is elementary, that must be followed to solve a problem.
The space complexity of a program is the amount of memory it needs to run to completion.
The time complexity of a program is the amount of computer time it needs to run to completion.
Asymptotic notations are commonly used in performance analysis to characterize the complexity of an algorithm.
The big-O notation is the formal method of expressing the upper bound of an algorithm's running time.
Summary
Rule 1: O(c·f(n)) = O(f(n))
Rule 2: O(O(f(n))) = O(f(n))
Rule 3: O(f(n)·g(n)) = f(n)·O(g(n))
Rule 4: O(f(n))·O(g(n)) = O(f(n)·g(n))
Rule 5: O(f(n) + g(n)) = O(max(f(n), g(n)))
Rule 6: If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)). [Transitivity Rule]
Rule 7: f(n) = O(g(n)) iff g(n) = Ω(f(n)). [Transpose Symmetry Rule]