2 - Algorithm Efficiency and Big-O

The document discusses algorithm efficiency, focusing on Big-O notation to express performance as a function of input size. It covers various growth rates, including linear, quadratic, and logarithmic, and emphasizes the importance of analyzing the worst-case scenario for algorithm performance. Additionally, it outlines the theoretical analysis of algorithms, limitations of experimental studies, and the use of pseudocode for algorithm description.

Algorithm Efficiency and Big-O

Running Time

 Most algorithms transform input objects into output objects.
 The running time of an algorithm typically grows with the input size.
 Average case time is often difficult to determine.
 We focus on the worst-case running time.
  Easier to analyze
  Crucial to applications such as games, finance and robotics

[Figure: best-case, average-case, and worst-case running time versus input size (1000-4000)]
Experimental Studies

 Write a program implementing the algorithm
 Run the program with inputs of varying size and composition, noting the time needed
 Plot the results

[Figure: scatter plot of measured time (ms) versus input size]

© 2014 Goodrich, Tamassia, Goldwasser, Analysis of Algorithms
Limitations of Experiments

 It is necessary to implement the algorithm, which may be difficult
 Results may not be indicative of the running time on other inputs not included in the experiment
 In order to compare two algorithms, the same hardware and software environments must be used
Theoretical Analysis

 Uses a high-level description of the algorithm instead of an implementation
 Characterizes running time as a function of the input size, n
 Takes into account all possible inputs
 Allows us to evaluate the speed of an algorithm independent of the hardware/software environment
Pseudocode

 High-level description of an algorithm
 More structured than English prose
 Less detailed than a program
 Preferred notation for describing algorithms
 Hides program design issues
Pseudocode Details

 Control flow
  if … then … [else …]
  while … do …
  repeat … until …
  for … do …
  Indentation replaces braces
 Method declaration
  Algorithm method (arg [, arg…])
   Input …
   Output …
 Method call
  method (arg [, arg…])
 Return value
  return expression
 Expressions:
  ← Assignment
  = Equality testing
  n² Superscripts and other mathematical formatting allowed
The Random Access Machine (RAM) Model

A RAM consists of
 A CPU
 A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
 Memory cells are numbered, and accessing any cell in memory takes unit time

[Figure: CPU connected to a bank of numbered memory cells (0, 1, 2, …)]
Seven Important Functions

❑ Seven functions that often appear in algorithm analysis:
◼ Constant ≈ 1
◼ Logarithmic ≈ log n
◼ Linear ≈ n
◼ N-Log-N ≈ n log n
◼ Quadratic ≈ n²
◼ Cubic ≈ n³
◼ Exponential ≈ 2ⁿ

❑ In a log-log chart, the slope of the line corresponds to the growth rate

[Figure: log-log plot of T(n) versus n for the cubic, quadratic, and linear functions]
Log-log chart/plot

 y = x^a
 log y = a log x
 Slope is a
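This slope relationship can be verified numerically; the sketch below (the class and method names are my own, not from the slides) estimates the slope of log y versus log x for y = x^a from two sample points:

```java
public class LogLogSlope {
    // For y = x^a, the slope of log y versus log x equals a,
    // regardless of which two sample points are chosen.
    public static double slope(double a, double x1, double x2) {
        double y1 = Math.pow(x1, a);
        double y2 = Math.pow(x2, a);
        return (Math.log(y2) - Math.log(y1)) / (Math.log(x2) - Math.log(x1));
    }

    public static void main(String[] args) {
        System.out.println(slope(2.0, 10, 1000)); // slope of a quadratic: 2.0
        System.out.println(slope(3.0, 2, 1024));  // slope of a cubic: 3.0
    }
}
```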
Functions Graphed Using "Normal" Scale          (Slide by Matt Stallmann, included with permission.)

[Figure: plots of g(n) = 1, g(n) = lg n, g(n) = n, g(n) = n lg n, g(n) = n², g(n) = n³, and g(n) = 2ⁿ on a linear scale]
Primitive Operations

 Basic computations performed by an algorithm
 Identifiable in pseudocode
 Largely independent from the programming language
 Exact definition not important (we will see why later)
 Assumed to take a constant amount of time in the RAM model
 Examples:
  Evaluating an expression
  Assigning a value to a variable
  Indexing into an array
  Calling a method
  Returning from a method
Algorithm Efficiency and Big-O
 Getting a precise measure of the performance of
an algorithm is difficult
 Big-O notation expresses the performance of an
algorithm as a function of the number of items to be
processed
 This permits algorithms to be compared for
efficiency
 For more than a certain number of data items, some
problems cannot be solved by any computer
Linear Growth Rate
 If processing time increases in proportion to the number of inputs n, the algorithm grows at a linear rate

public static int search(int[] x, int target) {
    for (int i = 0; i < x.length; i++) {
        if (x[i] == target)
            return i;
    }
    return -1; // target not found
}
Linear Growth Rate (cont.)
• If the target is not present, the for loop will execute x.length times
• If the target is present, the for loop will execute (on average) (x.length + 1)/2 times
• Therefore, the total execution time is directly proportional to x.length
• This is described as a growth rate of order n, or O(n)
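These counts can be confirmed by instrumenting the loop; the sketch below (the class name and counting helper are mine, not part of the original search method) returns how many times the loop body runs:

```java
public class LinearSearchCount {
    // Same loop as search, but returns the number of times
    // the loop body executes instead of the index.
    public static int countIterations(int[] x, int target) {
        int count = 0;
        for (int i = 0; i < x.length; i++) {
            count++;
            if (x[i] == target) {
                return count; // found: stop counting here
            }
        }
        return count; // target not found: loop ran x.length times
    }

    public static void main(String[] args) {
        int[] x = {3, 1, 4, 1, 5, 9, 2, 6};
        System.out.println(countIterations(x, 42)); // absent: 8 = x.length
        System.out.println(countIterations(x, 3));  // first element: 1
    }
}
```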
n x m Growth Rate
 Processing time can be dependent on two different inputs

public static boolean areDifferent(int[] x, int[] y) {
    for (int i = 0; i < x.length; i++) {
        if (search(y, x[i]) != -1)
            return false;
    }
    return true;
}
n x m Growth Rate (cont.)
• The for loop will execute x.length times
• But it will call search, which will execute y.length times
• The total execution time is proportional to (x.length * y.length)
• The growth rate has an order of n x m, or O(n x m)
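Inlining search into areDifferent and counting element comparisons makes the n x m behavior visible (an illustrative sketch of my own; the class and method names are not from the slides):

```java
public class NxMCount {
    // Count element comparisons made by an areDifferent-style scan:
    // for each x[i], linearly search y. When the arrays share no
    // elements (the worst case), the count is x.length * y.length.
    public static long countComparisons(int[] x, int[] y) {
        long count = 0;
        for (int i = 0; i < x.length; i++) {
            for (int j = 0; j < y.length; j++) {
                count++;
                if (y[j] == x[i]) {
                    return count; // early exit: arrays are not different
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countComparisons(new int[]{1, 2, 3},
                                            new int[]{4, 5, 6, 7})); // 12 = 3 * 4
    }
}
```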
Quadratic Growth Rate
 If processing time is proportional to the square of the number of inputs n, the algorithm grows at a quadratic rate

public static boolean areUnique(int[] x) {
    for (int i = 0; i < x.length; i++) {
        for (int j = 0; j < x.length; j++) {
            if (i != j && x[i] == x[j])
                return false;
        }
    }
    return true;
}
Quadratic Growth Rate (cont.)
• The for loop with i as index will execute x.length times
• The for loop with j as index will execute x.length times
• The total number of times the inner loop will execute is (x.length)²
• The growth rate has an order of n², or O(n²)
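Counting the inner-loop iterations of a full scan (no duplicates present, so areUnique never exits early) confirms the (x.length)² figure; this instrumented sketch and its names are my own:

```java
public class QuadraticCount {
    // Count inner-loop iterations of an areUnique-style double scan.
    // With no duplicates there is no early exit, so the inner body
    // runs exactly x.length * x.length times.
    public static long countIterations(int[] x) {
        long count = 0;
        for (int i = 0; i < x.length; i++) {
            for (int j = 0; j < x.length; j++) {
                count++; // runs even when i == j
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(new int[]{1, 2, 3, 4})); // 16 = 4^2
    }
}
```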
Big-O Notation
 The O() in the previous examples can be thought of as an abbreviation of "order of magnitude"
 A simple way to determine the big-O notation of an algorithm is to look at the loops and to see whether the loops are nested
 Assuming a loop body consists only of simple statements,
  a single loop is O(n)
  a pair of nested loops is O(n²)
  a nested pair of loops inside another is O(n³)
  and so on . . .
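These nesting rules can be checked by counting how often the innermost statement runs (a sketch of my own; the class and method names are illustrative):

```java
public class LoopOrders {
    // Count executions of the innermost statement at each nesting depth.
    public static long singleLoop(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) ops++;
        return ops; // n
    }

    public static long doubleLoop(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ops++;
        return ops; // n^2
    }

    public static long tripleLoop(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++) ops++;
        return ops; // n^3
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(singleLoop(n)); // 10
        System.out.println(doubleLoop(n)); // 100
        System.out.println(tripleLoop(n)); // 1000
    }
}
```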
Big-O Notation (cont.)
 You must also examine the number of times a loop is executed

for (i = 1; i < x.length; i *= 2) {
    // Do something with x[i]
}

 The loop body will execute k − 1 times, with i having the following values:
1, 2, 4, 8, 16, . . ., 2^(k−1)
until 2^k is greater than x.length
 Since 2^(k−1) ≤ x.length < 2^k and log₂(2^k) is k, we know that k − 1 ≤ log₂(x.length) < k
 Thus we say the loop is O(log n) (in analyzing algorithms, we use logarithms to the base 2)
 Logarithmic functions grow slowly as the number of data items n increases
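The logarithmic count can be confirmed by instrumenting the doubling loop (a sketch of my own; DoublingLoop is an illustrative name):

```java
public class DoublingLoop {
    // Count iterations of: for (i = 1; i < n; i *= 2) { ... }
    // The result is the number of powers of 2 below n,
    // i.e. ceil(log2(n)) for n > 1.
    public static int countIterations(int n) {
        int count = 0;
        for (int i = 1; i < n; i *= 2) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(8));    // 3  (i = 1, 2, 4)
        System.out.println(countIterations(1000)); // 10 (2^9 < 1000 <= 2^10)
        System.out.println(countIterations(1024)); // 10
    }
}
```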
Formal Definition of Big-O
 Consider the following program structure:

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        Simple Statement
    }
}
for (int i = 0; i < n; i++) {
    Simple Statement 1
    Simple Statement 2
    Simple Statement 3
    Simple Statement 4
    Simple Statement 5
}
Simple Statement 6
Simple Statement 7
...
Simple Statement 30
Formal Definition of Big-O (cont.)
 In the structure above:
  The nested loop executes a Simple Statement n² times
  The second loop executes 5 Simple Statements n times (5n)
  Finally, 25 Simple Statements (6 through 30) are executed
 We can conclude that the relationship between processing time and n (the number of data items processed) is:

T(n) = n² + 5n + 25
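Executing that structure with an operation counter reproduces the polynomial exactly (an instrumented sketch of my own; StatementCount is an illustrative name):

```java
public class StatementCount {
    // Count simple-statement executions for the slide's structure:
    // a doubly nested loop (n^2), a single loop with 5 statements (5n),
    // and 25 straight-line statements.
    public static long count(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                ops++;       // Simple Statement
            }
        }
        for (int i = 0; i < n; i++) {
            ops += 5;        // Simple Statements 1-5
        }
        ops += 25;           // Simple Statements 6-30
        return ops;
    }

    public static void main(String[] args) {
        int n = 100;
        System.out.println(count(n));                   // 10525
        System.out.println((long) n * n + 5L * n + 25); // 10525
    }
}
```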
Formal Definition of Big-O (cont.)
 In terms of T(n), we write
T(n) = O(f(n))
if there exist
 two constants, n₀ and c, greater than zero, and
 a function, f(n),
 such that for all n > n₀, T(n) ≤ c·f(n)
 In other words, as n gets sufficiently large (larger than n₀), there is some constant c for which the processing time will always be less than or equal to c·f(n)
 c·f(n) is an upper bound on performance
Formal Definition of Big-O (cont.)
 The growth rate of f(n) will be determined by the fastest growing term, which is the one with the largest exponent
 In the example, an algorithm of
O(n² + 5n + 25)
is more simply expressed as
O(n²)
 In general, it is safe to ignore all constants and to drop the lower-order terms when determining the order of magnitude
Big-O Example 1
 Given T(n) = n² + 5n + 25, show that this is O(n²)
 Find constants n₀ and c so that, for all n > n₀, cn² > n² + 5n + 25
 Find the point where cn² = n² + 5n + 25
 Let n = n₀, and solve for c:
c = 1 + 5/n₀ + 25/n₀²
 When n₀ is 5, c is 1 + 5/5 + 25/25 = 3
 So, 3n² > n² + 5n + 25 for all n > 5
 Other values of n₀ and c also work
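The bound can be spot-checked numerically (a sketch of my own; the class name is illustrative). Note the inequality is strict only for n > 5; at n = 5 the two sides are equal:

```java
public class BigOExample1 {
    // Check that 3n^2 > n^2 + 5n + 25 (strictly) for a given n.
    public static boolean boundHolds(int n) {
        return 3L * n * n > (long) n * n + 5L * n + 25;
    }

    public static void main(String[] args) {
        System.out.println(boundHolds(5)); // false: 75 = 75 (equality)
        System.out.println(boundHolds(6)); // true:  108 > 91
    }
}
```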
Big-O Example 1 (cont.)

[Figure: graph comparing 3n² with n² + 5n + 25 (not reproduced)]
Big-O Example 2
 Consider the following loop

for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
        3 simple statements
    }
}

 T(n) = 3(n – 1) + 3(n – 2) + … + 3
 Factoring out the 3,
T(n) = 3(n – 1 + n – 2 + n – 3 + … + 1)
 1 + 2 + … + (n – 1) = (n × (n – 1))/2
Big-O Example 2 (cont.)
 Therefore T(n) = 1.5n² – 1.5n
 When n = 0, the polynomial has the value 0
 For values of n > 1, 1.5n² > 1.5n² – 1.5n
 Therefore T(n) is O(n²) when n₀ is 1 and c is 1.5
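Running the loop with an operation counter matches this closed form (a sketch of my own; class and method names are illustrative):

```java
public class BigOExample2 {
    // Count the simple statements executed by:
    //   for (i = 0; i < n; i++) for (j = i + 1; j < n; j++) { 3 statements }
    public static long countOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                ops += 3;
            }
        }
        return ops;
    }

    // Closed form: 3 * n(n-1)/2 = 1.5n^2 - 1.5n
    public static long closedForm(int n) {
        return 3L * n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(countOps(10));   // 135
        System.out.println(closedForm(10)); // 135
    }
}
```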
Big-O Example 2 (cont.)

[Figure: graph comparing 1.5n² with 1.5n² – 1.5n (not reproduced)]

Symbols Used in Quantifying Performance

[Table not reproduced]

Common Growth Rates

[Table not reproduced]

Different Growth Rates

[Figure not reproduced]

Effects of Different Growth Rates

[Figure not reproduced]
Algorithms with Exponential and Factorial Growth Rates
 Algorithms with exponential and factorial growth rates have an effective practical limit on the size of the problem they can be used to solve
 With an O(2ⁿ) algorithm, if 100 inputs takes an hour then,
  101 inputs will take 2 hours
  105 inputs will take 32 hours
  114 inputs will take 16,384 hours (almost 2 years!)
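The arithmetic above follows because each extra input doubles the time: n inputs take 2^(n − 100) hours (a small sketch of my own; the helper name is illustrative):

```java
public class ExponentialGrowth {
    // Under O(2^n), if 100 inputs take 1 hour, then n inputs take
    // 2^(n - 100) hours (valid here for 100 <= n < 163, to avoid
    // overflowing a long).
    public static long hoursFor(int n) {
        return 1L << (n - 100);
    }

    public static void main(String[] args) {
        System.out.println(hoursFor(101)); // 2
        System.out.println(hoursFor(105)); // 32
        System.out.println(hoursFor(114)); // 16384, almost 2 years
    }
}
```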


Algorithms with Exponential and Factorial Growth Rates (cont.)
 Encryption algorithms take advantage of this characteristic
 Some cryptographic algorithms can be broken in O(2ⁿ) time, where n is the number of bits in the key
 A key length of 40 is considered breakable by a modern computer,
 but a key length of 100 bits will take a billion-billion (10¹⁸) times longer than a key length of 40
