Week 2: Complexity Analysis
T(n) ≈ cop · C(n)
where T(n) is the running time, cop is the execution time of the basic operation, and C(n) is the number of times the basic operation is executed.
2
Input size and basic operation examples

Problem: checking primality of a given integer n
  Input size: n's size = number of digits
  Basic operation: division

Problem: typical graph problem
  Input size: # vertices and/or edges
  Basic operation: visiting a vertex or traversing an edge
3
Empirical analysis of time efficiency
• Select a specific (typical) sample of inputs
4
Best-case, average-case, worst-case
5
Best-case, average-case, worst-case
For some algorithms, efficiency depends on form of input:
• Worst case: Cworst(n) – maximum over inputs of size n
• Best case: Cbest(n) – minimum over inputs of size n
• Average case: Cavg(n) – “average” over inputs of size n
– Number of times the basic operation will be executed on typical input
– NOT the average of worst and best case
– Expected number of basic operations, considered as a random variable under some assumption about the probability distribution of all possible inputs
6
Example: Sequential search
• Worst case ?
• Best case ?
• Average case ?
7
Example: Sequential search
• Average case
• The standard assumptions are that
(a) the probability of a successful search is equal to p (0 ≤ p ≤ 1), and
(b) the probability of the first match occurring in the ith position of the list is the same for every i.
• Under these assumptions, Cavg(n) = (p/n)(1 + 2 + … + n) + n(1 − p) = p(n + 1)/2 + n(1 − p).
8
Sequential Search Variation
• Consider a variation of sequential search
that scans a list to return the number of
occurrences of a given search key in the
list.
• Does its efficiency differ from the
efficiency of classic sequential search?
9
Types of formulas for basic operation’s count
• Exact formula
e.g., C(n) = n(n-1)/2
11
Values of some important functions as n → ∞
12
Asymptotic notation
• Big O (Big Oh) notation, also known as Landau notation or asymptotic notation
• A mathematical notation used to describe the
asymptotic behavior of functions
• Characterizes a function's behavior for very large
(or very small) inputs in a simple but rigorous way
that enables comparison to other functions
13
Asymptotic notation
• More precisely, the symbol O is used to describe
an asymptotic upper bound for the magnitude of
a function in terms of another, usually simpler,
function
• In computer science, useful in the analysis of the
complexity of algorithms.
14
Formal Definition of Big Oh
Definition: f(n) is in O(g(n))
if order of growth of f(n) ≤ order of growth of g(n)
(within constant multiple),
i.e., if there exist positive constant c and non-
negative integer n0 such that
f(n) ≤ c g(n) for every n ≥ n0
Examples:
• 10n is O(n) and also O(n²)
• 5n+20 is O(n)
15
Which running time is better?
16
Simplification
• Big-O notation lets us focus on the big picture.
• When faced with a complicated function like 3n² + 4n + 5, we just replace it with O(f(n)), where f(n) is as simple as possible.
• In this particular example we’d use O(n²), because the quadratic portion of the sum dominates the rest.
17
Simplification
Simplification Rules
1. Multiplicative constants can be omitted: 14n² becomes n².
2. nᵃ dominates nᵇ if a > b: for instance, n² dominates n.
3. Any exponential dominates any polynomial: 3ⁿ dominates n⁵ (it even dominates 2ⁿ).
4. Likewise, any polynomial dominates any logarithm: n dominates (log n)³.
This also means, for example, that n² dominates n log n.
18
Common plots of O(T(n))
19
Common plots of O(T(n))
Source: medium.com
20
Basic asymptotic efficiency classes
1        constant
log n    logarithmic
n        linear
n log n  linearithmic
n²       quadratic
n³       cubic
2ⁿ       exponential
n!       factorial
21
Basic asymptotic efficiency classes
Remember a search algorithm?
22
A Hierarchy of Growth Rates
c < log n < n < n log n < n² < n³ < 2ⁿ < 3ⁿ < n! < nⁿ
23
General Rules
• If T1(n) = O(f(n)) and T2(n) = O(g(n)), then T1(n) + T2(n) = O(max(f(n), g(n)))
Example:
Algorithm A:
• Step 1: Run algorithm A1, which takes T1(n) = O(n³) time
• Step 2: Run algorithm A2, which takes T2(n) = O(n²) time
T(n) = T1(n) + T2(n) = O(n³) + O(n²) = O(max(n³, n²)) = O(n³)
24
Exercise: Look for the term highest in the hierarchy!
2. 4n log n + n + 2ⁿ → O(2ⁿ)
3. nⁿ + n¹⁰⁰ → O(nⁿ)
c < log n < n < n log n < n² < n³ < 2ⁿ < 3ⁿ < n! < nⁿ
25
The Model used for Complexity Analysis
• To analyze algorithms in the formal framework, we need a model of computation.
• Example: a function that initializes a sum and then loops for i = 1 to n, performing one multiplication, one addition, and one assignment per iteration.
• Time units to compute:
- 1 for the initial assignment.
- 1 for the loop-variable assignment, n + 1 for tests, and n for increments.
- n iterations of 3 units each: 1 for assignment, 1 for addition, 1 for multiplication.
- 1 for the return statement.
• Total: 1 + (1 + n + 1 + n) + 3n + 1 = 5n + 4 = O(n)
28
General Rules
• Loops
– The running time of a “for” loop is at most the running time
of the statements inside the “for” loop (including tests)
times the number of iterations
for(i =1; i<=n; i++)
sum=sum+i;
• The above example is 2n = O(n).
• We have the same for this loop:
for(i=n; i>=1; i--)
sum=sum+i;
29
General Rules (cont.)
• Nested loops
– The total running time of a statement inside a group of nested
loops is the running time of the statement multiplied by the
product of the sizes of all the loops
for(i=1; i<=n; i++)
for(j=1; j<=m; j++)
sum=sum+i+j;
• The above example is 3mn = O(mn).
30
General Rules (cont.)
• 4pmn=O(pmn)
31
General Rules (cont.)
• Consecutive statements
– These just add, and the maximum is the
one that counts
32
General Rules (cont.)
• If (test) s1 else s2
– The running time is never more than the running time
of the test plus the largest of the running times of s1
and s2.
34
Comparison of two solutions - Example
36
First Solution
The algorithm will just iterate through all the values in the
array and keep track of the smallest integer and place it in a
variable named curMin.
37
CompareSmallestNumber
int CompareSmallestNumber(int array[], int n) {
    int i, curMin;
    // set smallest value to first item in array
    curMin = array[0];
    // iterate through array to find smallest value
    for (i = 0; i < n; i++) {
        if (array[i] < curMin)
            curMin = array[i];
    }
    // return smallest value in the array
    return curMin;
}
38
Second Solution
The algorithm will compare each value in the array to all of
the other numbers in the array. If any number in the array is
less than or equal to all of the other numbers in the array
then we know that it is the smallest number in the array.
39
CompareToAllNumbers
int CompareToAllNumbers(int array[], int n) {
    bool isMin;
    int i, j;
    // iterate through each element in array and compare to the others
    for (i = 0; i < n; i++) {
        isMin = true;
        for (j = 0; j < n; j++) {
            if (array[i] > array[j])
                isMin = false;
        }
        if (isMin == true)
            break;
    }
    // return smallest value in the array
    return array[i];
}
40
Big-Oh analysis of the algorithms
• Each solution uses a different algorithm.
• In our examples, the input is the array that is passed into the different functions. But the input could also be the number of elements in a linked list, the nodes in a tree, or whatever data structure you are dealing with.
41
Big-Oh analysis of the algorithms
• CompareSmallestNumber examines each element once: O(n).
• CompareToAllNumbers compares every element against every other element: O(n²) in the worst case.
42
Big-Oh analysis measures efficiency
If n = 10,000, CompareSmallestNumber will check 10,000 elements, whereas CompareToAllNumbers may perform on the order of 10,000² = 100,000,000 comparisons in the worst case.
43
Summary so far - Quick observations in
determining Big-Oh:
44
Constant O(1): Linear O(n):
45
Quadratic O(n²): Cubic O(n³):
46
More Cases to consider:
Let us consider the following example:
for (i = n; i > 0; i /= 2)
    printf("Hello World ");
If n = 64, how many times do you think ”Hello World” will be printed?
In other words, how many iterations do we have?
47
More Cases to consider:
In a more general manner, if n is any integer, how many times can we divide it by 2?
• Iteration 1: i = n
• Iteration 2: i = n/2
• Iteration 3: i = n/4 = n/2²
• Iteration 4: i = n/8 = n/2³
• …
• After k + 1 iterations, we have i = n/2ᵏ
If iteration k + 1 is the last one, the loop condition fails on the next test, so we should have i < 1:
n/2ᵏ < 1
The total number of iterations before leaving the loop can be obtained as follows:
n < 2ᵏ ⇔ k > log₂ n
The number of iterations is approximately log₂ n, so we can say here that the complexity is O(log₂ n) = O(log n).
48
More Cases to consider:
• In summary, for the following example:
for (i = n; i > 0; i /= 2)
    printf("Hello World ");
the number of iterations is about log₂ n, i.e., the loop is O(log n).
49
More Examples:
for (i = n; i > 0; i /= 3)
    printf("Hello World ");
51
More Examples:
for (i = 0; i < n; i++)
    for (j = 0; j < i; j++)
        m += j;
52
Mathematical Analysis of
Recursive Algorithms
53
Mathematical Analysis of Recursive
Algorithms: Factorial
• EXAMPLE 1
Compute the factorial function F(n) = n! for an
arbitrary nonnegative integer n.
• Since n! = 1 · 2 · … · (n − 1) · n = (n − 1)! · n for n ≥ 1, and 0! = 1 by definition, we can compute F(n) = F(n − 1) × n with the following recursive algorithm.
54
Mathematical Analysis of Recursive
Algorithms: Factorial
55
Mathematical Analysis of Recursive
Algorithms: Factorial
• Input size: n
• The basic operation of the algorithm is
multiplication: number of executions we
denote M(n)
56
Mathematical Analysis of Recursive Algorithms: Factorial
Recurrence relation (recurrence) for the number of multiplications:
M(n) = M(n − 1) + 1 for n > 0
M(0) = 0 (initial condition)
57
Mathematical Analysis of Recursive Algorithms: Factorial
Pattern:
M(n) = M(n − 1) + 1 = [M(n − 2) + 1] + 1 = M(n − 2) + 2 = … = M(n − i) + i = … = M(0) + n = n
59
Mathematical Analysis of Recursive
Algorithms
60
Example: Fibonacci Numbers
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …
Fn = Fn−1 + Fn−2 for n ≥ 2, with F0 = 0 and F1 = 1
61
Fibonacci
• But what is the precise value of F100, or of
F200?
• Fibonacci himself would surely have wanted to
know such things.
• To answer, we need an algorithm for
computing the nth Fibonacci number
62
Solution 1 for Fibonacci:
An exponential algorithm
• One idea is to slavishly implement the
recursive definition of Fn.
63
Fibonacci:
An exponential algorithm
• For larger values of n, there are two recursive
invocations of fib1, taking time T(n−1) and
T(n−2), respectively, plus three computer steps
(checks on the value of n and a final addition).
T(n) = T(n − 1) + T(n − 2) + 3 for n > 1.
65
Fibonacci:
An exponential algorithm
• To compute F200, the fib1 algorithm executes T(200) ≥ F200 ≥ 2¹³⁸ elementary computer steps.
66
Fibonacci:
A polynomial algorithm
• Problem with recursive algorithm:
Many computations are repeated!
67
Fibonacci:
A polynomial algorithm
• A more sensible scheme would store the
intermediate results—the values F0, F1, . . . ,
Fn−1— as soon as they become known.
68
Fibonacci:
A polynomial algorithm
• It is correct since it directly uses the definition of
Fn
• How long does it take?
– The inner loop consists of a single computer step and
is executed n − 1 times.
– Therefore the number of computer steps used by fib2
is linear in n.
– From exponential we are down to polynomial, a huge
breakthrough in running time.
• The right algorithm makes all the difference!
69
Fibonacci Algorithm Implementation
• Implement recursive Fibonacci and test it for numbers between 1 and 50. Record the elapsed time.
• Implement polynomial Fibonacci and test it for numbers between 1 and 50. Record the elapsed time.
70
Mathematical Analysis of Recursive
Algorithms: Sum of First n Cubes
• Consider the following recursive algorithm for computing the sum of the first n cubes: S(n) = 1³ + 2³ + … + n³.
71
Mathematical Analysis of Recursive
Algorithms: Sum of First n Cubes
a. Set up and solve a recurrence relation for
the number of times the algorithm’s basic
operation is executed.
b. Implement a nonrecursive algorithm for the
same task
c. How does this algorithm compare with the
straightforward nonrecursive algorithm for
computing this sum?
72
Important Note
• Although ignored in Big-Oh notation, constants are very important!
• Programmers and algorithm developers are very
interested in constants and ready to spend nights in
order to make an algorithm run faster by a factor of
2.
• But understanding algorithms would be impossible
without the simplicity afforded by big-O notation
73
Overview
• Both time and space efficiencies are measured as
functions of the algorithm’s input size.
• Time efficiency: Measured by counting the number of
times the algorithm’s basic operation is executed.
• Space efficiency: Measured by counting the number of
extra memory units consumed by the algorithm.
• The efficiencies of some algorithms may differ
significantly for inputs of the same size. So,
distinguishing between the worst-case, average-case,
and best-case efficiencies may be required!
74