
Algorithm Analysis

Outline
• Introduction to Algorithms
• Algorithm Analysis
• Rules for estimating running time
• Growth rates (Big O, Omega, Theta)
• Worst-Case, Best-Case, Average-Case
Algorithm
• An algorithm is a set of instructions to be followed to
solve a problem.
– There can be more than one solution (more than one
algorithm) to solve a given problem.
– An algorithm can be implemented using different
programming languages on different platforms.

• An algorithm must be correct: it should correctly solve the problem.

• Once we have a correct algorithm for a problem, we have to
determine the efficiency of that algorithm.
Algorithmic Performance
There are two aspects of algorithmic performance:
• Time
• Instructions take time.
• How fast does the algorithm perform?
• What affects its runtime?
• Space
• Data structures take space
• What kind of data structures can be used?
• How does choice of data structure affect the runtime?

• We will focus on time:
– How to estimate the time required for an algorithm
– How to reduce the time required
Analysis of Algorithms
• Analysis of Algorithms is the area of computer science that
provides tools to analyze the efficiency of different methods of
solutions.
• How do we compare the time efficiency of two algorithms that
solve the same problem?
Naïve approach: implement these algorithms in a programming
language (e.g., C++) and run them to compare their time
requirements. Comparing the programs (instead of the algorithms)
raises several difficulties:
– How are the algorithms coded?
• Comparing running times means comparing the implementations.
• We should not compare implementations, because they are sensitive to programming
style that may cloud the issue of which algorithm is inherently more efficient.
– What computer should we use?
• We should compare the efficiency of the algorithms independently of a particular
computer.
– What data should the program use?
• Any analysis must be independent of specific data.
Analysis of Algorithms
• When we analyze algorithms, we should employ
mathematical techniques that analyze algorithms
independently of specific implementations,
computers, or data.

• To analyze algorithms:
– First, we count the number of significant
operations in a particular solution to assess its
efficiency.
– Then, we will express the efficiency of algorithms using
growth functions.

The Execution Time of Algorithms
• Each operation in an algorithm (or a program) has a cost.
→ Each operation takes a certain amount of time.

count = count + 1; → takes a certain amount of time, but that time is constant

A sequence of operations:

count = count + 1; Cost: c1
sum = sum + count; Cost: c2

→ Total Cost = c1 + c2

The Execution Time of Algorithms (cont.)
Example: Simple If-Statement
Cost Times
if (n < 0) c1 1
absval = -n c2 1
else
absval = n; c3 1

Total Cost ≤ c1 + max(c2, c3)

The Execution Time of Algorithms (cont.)
Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5


→ The time required for this algorithm is proportional to n
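
To see this growth concretely, the loop can be instrumented with an operation counter. The following is a small sketch of my own (not from the original slides; the helper name countOps and the test sizes are illustrative):

#include <iostream>

// Count the significant operations executed by the simple loop above.
long long countOps(int n) {
    long long ops = 0;
    int i = 1;   ++ops;          // i = 1;     runs once      (c1)
    int sum = 0; ++ops;          // sum = 0;   runs once      (c2)
    while (++ops, i <= n) {      // loop test  runs n+1 times (c3)
        i = i + 1;     ++ops;    //            runs n times   (c4)
        sum = sum + i; ++ops;    //            runs n times   (c5)
    }
    return ops;                  // total: 3n + 3
}

int main() {
    int sizes[] = {10, 100, 1000};
    for (int n : sizes)          // prints 33, 303, 3003
        std::cout << "n = " << n << " -> ops = " << countOps(n) << '\n';
}

The printed counts 33, 303, 3003 grow linearly in n, matching
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5.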
The Execution Time of Algorithms (cont.)
Example: Nested Loop
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i + 1; c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5+n*n*c6+n*n*c7+n*c8
→ The time required for this algorithm is proportional to n²
General Rules for Estimation
• Loops: The running time of a loop is at most the running time
of the statements inside of that loop times the number of
iterations.
• Nested Loops: The running time of a statement in the innermost
loop is the running time of that statement multiplied by the
product of the sizes of all the loops.
• Consecutive Statements: Just add the running times of those
consecutive statements.
• If/Else: Never more than the running time of the test plus the
larger of the running times of the two branches S1 and S2.
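
These rules can be applied line by line. The fragment below is an illustrative sketch of my own (the function name estimateExample and its parameters are invented for the example):

// Applying the estimation rules to one code fragment.
int estimateExample(int n, const int a[]) {
    int sum = 0;                         // O(1)

    for (int i = 0; i < n; i++)          // Loops: n iterations of O(1) work -> O(n)
        sum += a[i];

    for (int i = 0; i < n; i++)          // Nested loops: the innermost statement
        for (int j = 0; j < n; j++)      // runs n*n times -> O(n^2)
            sum += a[i] * a[j];

    if (sum > 0) {                       // If/Else: the test plus the larger branch
        for (int i = 0; i < n; i++)      // larger branch: O(n)
            sum -= a[i];
    } else {
        sum = 0;                         // smaller branch: O(1)
    }

    return sum;                          // Consecutive statements:
}                                        // O(1) + O(n) + O(n^2) + O(n) = O(n^2)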
Algorithm Growth Rates
• We measure an algorithm’s time requirement as a function of the
problem size.
– Problem size depends on the application: e.g. number of elements in a list for a
sorting algorithm.
• So, for instance, we say that (if the problem size is n)
– Algorithm A requires 5*n² time units to solve a problem of size n.
– Algorithm B requires 7*n time units to solve a problem of size n.
• The most important thing to learn is how quickly the algorithm’s
time requirement grows as a function of the problem size.
– Algorithm A requires time proportional to n².
– Algorithm B requires time proportional to n.
• An algorithm’s proportional time requirement is known as its
growth rate.
• We can compare the efficiency of two algorithms by comparing
their growth rates.
Algorithm Growth Rates (cont.)

[Figure: time requirements as a function of the problem size n]

Common Growth Rates

Function     Growth Rate Name

c            Constant
log N        Logarithmic
log² N       Log-squared
N            Linear
N log N      Log-linear
N²           Quadratic
N³           Cubic
2^N          Exponential

Running Times for Small Inputs

[Figure: running time vs. input size n for small inputs]

Running Times for Large Inputs

[Figure: running time vs. input size n for large inputs]
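
Since the original charts do not survive text extraction, the behavior they showed can be reproduced numerically. A minimal sketch (my own addition):

#include <iostream>
#include <cmath>

// Tabulate common growth-rate functions for a few input sizes.
int main() {
    std::cout << "n\tlog2 n\tn log2 n\tn^2\tn^3\n";
    double sizes[] = {2, 8, 32, 128, 1024};
    for (double n : sizes) {
        double lg = std::log2(n);
        std::cout << n << '\t' << lg << '\t' << n * lg << '\t'
                  << n * n << '\t' << n * n * n << '\n';
    }
}

For small n the functions stay close together; as n grows, n² and n³ pull away rapidly, which is exactly the point the two charts make.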
Big O Notation

• If Algorithm A requires time proportional to g(n), Algorithm A is
said to be order g(n), and it is denoted as O(g(n)).
• The function g(n) is called the algorithm’s growth-rate function.
• Since the capital O is used in the notation, this notation is called
the Big O notation.
• If Algorithm A requires time proportional to n², it is O(n²).
• If Algorithm A requires time proportional to n, it is O(n).
Definition of Big O Notation
Definition:
Algorithm A is order g(n) – denoted as O(g(n)) –
if constants k and n0 exist such that A requires
no more than k*g(n) time units to solve a problem
of size n ≥ n0. → f(n) ≤ k*g(n) for all n ≥ n0

• The requirement of n ≥ n0 in the definition of O(g(n)) formalizes
the notion of sufficiently large problems.
– In general, many values of k and n0 can satisfy this definition.
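
A claimed pair (k, n0) can be spot-checked numerically. This is a sketch of my own (checkBigO is an invented helper; it probes finitely many values of n, so it can refute a claim but not prove one):

#include <iostream>

// Spot-check f(n) <= k * g(n) at doubling sizes from n0 up to nMax.
bool checkBigO(double (*f)(double), double (*g)(double),
               double k, int n0, int nMax = 1000000) {
    for (int n = (n0 > 0 ? n0 : 1); n <= nMax; n *= 2)
        if (f(n) > k * g(n)) return false;
    return true;
}

double f(double n) { return n * n - 3 * n + 10; }  // f(n) used on the next slide
double g(double n) { return n * n; }               // g(n) = n^2

int main() {
    // k = 3 and n0 = 2 witness f(n) = n^2 - 3n + 10 being O(n^2).
    std::cout << std::boolalpha << checkBigO(f, g, 3.0, 2) << '\n';  // true
}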
Big O Notation
• Suppose an algorithm requires f(n) = n² - 3*n + 10 seconds to solve a
problem of size n. If constants k and n0 exist such that
k*n² > n² - 3*n + 10 for all n ≥ n0,
then the algorithm is order n² (in fact, k is 3 and n0 is 2):
3*n² > n² - 3*n + 10 for all n ≥ 2.
Thus, the algorithm requires no more than k*n² time units for n ≥ n0,
so it is O(n²).
Big O Notation
• Show 2^x + 17 is O(2^x)

• 2^x + 17 ≤ 2^x + 2^x = 2*2^x for x ≥ 5 (since 17 ≤ 2^5 = 32)

• Hence k = 2 and n0 = 5
Big O Notation
• Show 2^x + 17 is O(3^x)

• 2^x + 17 ≤ k*3^x
• Easy to see that the right-hand side grows faster than the left-hand side → k = 1
• However, when x is small, the 17 still dominates → skip over the
smaller values of x by using n0 = 3 (check: 2^3 + 17 = 25 ≤ 27 = 3^3)
• Hence k = 1 and n0 = 3
Definition of Omega
Definition:
Algorithm A is omega g(n) – denoted as Ω(g(n)) –
if constants k and n0 exist such that A requires
at least k*g(n) time units to solve a problem
of size n ≥ n0. → f(n) ≥ k*g(n) for all n ≥ n0
Definition of Theta
Definition:
Algorithm A is theta g(n) – denoted as Θ(g(n)) –
if constants k1, k2 and n0 exist such that
k1*g(n) ≤ f(n) ≤ k2*g(n) for all n ≥ n0
(Drop the equalities and you get little-o and little-omega (ω) notations,
respectively; e.g., f(n) is o(g(n)) if f(n) < k*g(n) for every constant k > 0.)
Example
• Show f(n) = 7n² + 1 is Θ(n²)

• You need to show f(n) is O(n²) and f(n) is Ω(n²)

• f(n) is O(n²) because 7n² + 1 ≤ 7n² + n² ∀n ≥ 1 → k2 = 8, n0 = 1
• f(n) is Ω(n²) because 7n² + 1 ≥ 7n² ∀n ≥ 0 → k1 = 7, n0 = 0
• Pick the larger n0 to satisfy both conditions → k1 = 7,
k2 = 8, n0 = 1
A Comparison of Growth-Rate Functions

[Figure: a comparison of the common growth-rate functions]
Growth-Rate Functions
O(1)        Time requirement is constant, and it is independent of the problem’s size.
O(log₂n)    Time requirement for a logarithmic algorithm increases slowly
            as the problem size increases.
O(n)        Time requirement for a linear algorithm increases directly with the size
            of the problem.
O(n*log₂n)  Time requirement for an n*log₂n algorithm increases more rapidly than
            that of a linear algorithm.
O(n²)       Time requirement for a quadratic algorithm increases rapidly with the
            size of the problem.
O(n³)       Time requirement for a cubic algorithm increases more rapidly with the
            size of the problem than the time requirement for a quadratic algorithm.
O(2^n)      As the size of the problem increases, the time requirement for an
            exponential algorithm increases too rapidly to be practical.
Properties of Growth-Rate Functions
1. We can ignore low-order terms in an algorithm’s growth-rate
function.
– If an algorithm is O(n³+4n²+3n), it is also O(n³).
– We use only the highest-order term as the algorithm’s growth-rate function.

2. We can ignore a multiplicative constant in the highest-order term
of an algorithm’s growth-rate function.
– If an algorithm is O(5n³), it is also O(n³).

3. O(f(n)) + O(g(n)) = O(f(n)+g(n))
– We can combine growth-rate functions.
– If an algorithm is O(n³) + O(4n²), it is also O(n³+4n²) → So, it is O(n³).
– Similar rules hold for multiplication.
Growth-Rate Functions – Example 1
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}

T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
     = (c3+c4+c5)*n + (c1+c2+c3)
     = a*n + b
→ So, the growth-rate function for this algorithm is O(n)
Growth-Rate Functions – Example 2

for (i=1; i<=n; i++)
    for (j=1; j<=i; j++)
        for (k=1; k<=j; k++)
            x = x + 1;

T(n) = \sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{k=1}^{j} 1

→ So, the growth-rate function for this algorithm is O(n³)
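
For reference, the triple sum has a closed form; a short derivation (my addition, using only standard summation identities):

\[
T(n) = \sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{k=1}^{j} 1
     = \sum_{i=1}^{n}\sum_{j=1}^{i} j
     = \sum_{i=1}^{n} \frac{i(i+1)}{2}
     = \frac{1}{2}\left( \frac{n(n+1)(2n+1)}{6} + \frac{n(n+1)}{2} \right)
     = \frac{n(n+1)(n+2)}{6},
\]

which is Θ(n³), confirming the O(n³) growth rate.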
Growth-Rate Functions – Example 3
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
     = (c5+c6+c7)*n² + (c3+c4+c5+c8)*n + (c1+c2+c3)
     = a*n² + b*n + c
→ So, the growth-rate function for this algorithm is O(n²)
Worst-Case, Best-Case, Average-Case
• An algorithm can require different times to solve different
problems of the same size.
– E.g., searching for an item in a list of n elements using sequential search.
→ Cost: 1, 2, ..., n
• Worst-Case Analysis – The maximum amount of time that an
algorithm requires to solve a problem of size n.
– This gives an upper bound for the time complexity of an algorithm.
– Normally, we try to find the worst-case behavior of an algorithm.
• Best-Case Analysis – The minimum amount of time that an
algorithm requires to solve a problem of size n.
– The best-case behavior of an algorithm is NOT so useful.
• Average-Case Analysis – The average amount of time that an
algorithm requires to solve a problem of size n.
– Sometimes, it is difficult to find the average-case behavior of an algorithm.
– We have to look at all possible data organizations of a given size n and the
distribution probabilities of these organizations.
– Worst-case analysis is more common than average-case analysis.
Sequential Search
int sequentialSearch(const int a[], int item, int n){
    int i;                       // declared before the loop so it can be
                                 // examined after the loop terminates
    for (i = 0; i < n && a[i] != item; i++)
        ;                        // empty body: the loop header does the search
    if (i == n)
        return -1;
    return i;
}
Unsuccessful Search: → O(n)

Successful Search:
Best-Case: item is in the first location of the array → O(1)
Worst-Case: item is in the last location of the array → O(n)
Average-Case: The number of key comparisons is 1, 2, ..., n:

\frac{1}{n}\sum_{i=1}^{n} i = \frac{(n^2+n)/2}{n} = \frac{n+1}{2} → O(n)
Binary Search
We can do binary search if the array is sorted:

int binarySearch(int a[], int size, int x) {
    int low = 0;
    int high = size - 1;
    int mid;                     // mid will be the index of
                                 // the target when it’s found
    while (low <= high) {
        mid = (low + high) / 2;  // note: low + high may overflow for very large arrays
        if (a[mid] < x)
            low = mid + 1;
        else if (a[mid] > x)
            high = mid - 1;
        else
            return mid;
    }
    return -1;
}
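
A brief usage sketch (my addition; the array contents are arbitrary):

#include <iostream>

int binarySearch(int a[], int size, int x);   // the function defined above

int main() {
    int a[] = {2, 5, 8, 12, 16, 23, 38};      // must already be sorted
    std::cout << binarySearch(a, 7, 23) << '\n';  // prints 5 (index of 23)
    std::cout << binarySearch(a, 7, 7)  << '\n';  // prints -1 (not found)
}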
Binary Search – Analysis
• For an unsuccessful search:
– The number of iterations in the loop is ⌊log₂n⌋ + 1
→ O(log₂n)
• For a successful search:
– Best-Case: The number of iterations is 1. → O(1)
– Worst-Case: The number of iterations is ⌊log₂n⌋ + 1 → O(log₂n)
– Average-Case: The avg. # of iterations < log₂n → O(log₂n)

index:           0 1 2 3 4 5 6   ← an array of size 7
# of iterations: 3 2 3 1 3 2 3
The average # of iterations = 17/7 ≈ 2.43 < log₂7

For a proof of the general case see:
http://raider.mountunion.edu/~cindricbb/sp06/340/avgbinsearch.html
How much better is O(log₂n)?

n           O(log₂n)
16 4
64 6
256 8
1024 10
16,384 14
131,072 17
262,144 18
524,288 19
1,048,576 20
1,073,741,824 30

