Algorithm Analysis

Objectives
• Understand why algorithm analysis is important
• Be able to use Big-O to describe execution time
• Know the common Big-O categories
• Understand Big-O timing for common operations on lists and dictionaries in Python
• Know how to measure (benchmark) simple Python programs
Compare two programs for adding:
1 + 2 + 3 + 4 + ... + n

Which program is better?


What to compare
• You can compare memory size, readability, self-documentation, speed, or other things
• What we will measure is computing resources: speed and memory
• Speed will be our most important criterion when comparing two programs
• Why not memory size?
Measure an algorithm to solve a problem
• We are interested in various computing problems, and in measuring the speed of one algorithm A vs. another algorithm B for solving the same problem
• One way to do this is to actually write both and 'benchmark' their speed by timing them
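One way to sketch this benchmarking idea (our helper, not from the slides; `time.perf_counter` is used instead of `time.time` because it has higher resolution for short intervals):

```python
import time

def benchmark(fn, *args, repeats=5):
    """Run fn(*args) several times; return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

def sum_loop(n):
    # algorithm A's core: sum 1 + 2 + ... + n with a loop
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print("best of 5: %10.7f seconds" % benchmark(sum_loop, 100000))
```

Taking the best of several runs reduces noise from other processes sharing the machine.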
Algorithm A

import time

# sum 1 + 2 + 3 up to n
# with timer
def sum_of_n(n):                # algorithm A
    start = time.time()         # system time in seconds
    the_sum = 0
    for i in range(1, n + 1):
        the_sum += i
    stop = time.time()          # stop time
    return the_sum, stop - start

print("times for n = ", 100000)

for times in range(5):
    print("Sum is %d required %10.7f seconds" % sum_of_n(100000))
Algorithm A
times for n = 100000
Sum is 50005000 required 0.0011640 seconds
Sum is 50005000 required 0.0011241 seconds
Sum is 50005000 required 0.0010769 seconds
Sum is 50005000 required 0.0010211 seconds
Sum is 50005000 required 0.0010970 seconds

times for n = 10,000, 100,000, 1,000,000, 100,000,000


Sum up to 10000 required 0.0019689 seconds
Sum up to 100000 required 0.0175431 seconds
Sum up to 1000000 required 0.1773720 seconds
Sum up to 100000000 required 11.6861229 seconds
Algorithm A & B
Algorithm A:
times for n = 10,000, 100,000, 1,000,000, 100,000,000
Sum up to 10000 required 0.0019689 seconds
Sum up to 100000 required 0.0175431 seconds
Sum up to 1000000 required 0.1773720 seconds
Sum up to 100000000 required 11.6861229 seconds

Algorithm B:
times for n = 10,000, 100,000, 1,000,000, 100,000,000
Sum up to 10000 required 0.0000012 seconds
Sum up to 100000 required 0.0000010 seconds
Sum up to 1000000 required 0.0000010 seconds
Sum up to 100000000 required 0.0000041 seconds
Summation symbol
• The sum 1 + 2 + 3 + ... + n can be written with the summation symbol and has the closed form n(n+1)/2, which can be computed directly; this gives algorithm B


Algorithm B:
times for n = 10,000, 100,000, 1,000,000, 100,000,000
Sum up to 10000 required 0.0000012 seconds
Sum up to 100000 required 0.0000010 seconds
Sum up to 1000000 required 0.0000010 seconds
Sum up to 100000000 required 0.0000041 seconds
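The slides show algorithm B's timings but not its code; a likely sketch, using the closed form n(n+1)/2 with the same timing pattern as algorithm A:

```python
import time

def sum_of_n_2(n):  # algorithm B
    start = time.time()
    the_sum = (n * (n + 1)) // 2   # closed form: no loop, so time is flat in n
    stop = time.time()
    return the_sum, stop - start

print("Sum is %d required %10.7f seconds" % sum_of_n_2(100000))
```

Because there is no loop, the running time barely changes as n grows, which matches the timings above.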

• Time will vary wildly depending on computer speed, memory, language, compiler
• Can we somehow characterize algorithm speed independent of these factors?
Big-O (Order of magnitude)
• Approach: break the algorithm into the number of steps involved and examine that
• We could count the number of assignments
• So in Algorithm A we have:

    the_sum = 0               # 1 assignment to the_sum
    for i in range(1, n + 1): # n assignments to i
        the_sum += i          # n assignments to the_sum

• Total of 1 + 2n assignments
• We denote this total 1 + 2n as a special function T(n), where n is known as the size of the problem
• In algorithm A, T(n) = 1 + 2n
• As n gets really big, the first term (1) is insignificant compared to the second term 2n
• T(n) = 1 + 2n
• Since we are really interested in how algorithms 'scale' to really big problems, we will focus on the term that overshadows the others
• So the time complexity for Algorithm A is O(n)
• Consider if we had the following formula for an algorithm:

  T(n) = 9023 + 11n + n² + 2n³

• As n grows big (1000, 100000, 100000000), of the four terms, 2n³ will start to dominate the others. This term is called the order of magnitude term. It can be symbolized by the order of magnitude function O(f(n)), where f(n) is the largest term in the T(n) expression
• T(n) = 9023 + 11n + n² + 2n³
• So for this T(n), since 2n³ is the dominant term, O(f(n)) is O(n³)
• We also drop the constant multiplier 2 in the term 2n³
• We say f(n) = n³, or simply that this algorithm is O(n³), or we say "this algorithm is Big-O of n cubed"
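A quick numeric check of the dominant-term claim (our illustration): evaluating the share of T(n) = 9023 + 11n + n² + 2n³ contributed by 2n³ shows it taking over as n grows:

```python
def T(n):
    return 9023 + 11 * n + n ** 2 + 2 * n ** 3

for n in (10, 1000, 100000):
    share = 2 * n ** 3 / T(n)   # fraction of T(n) due to the 2n^3 term
    print(f"n = {n:>6}: 2n^3 is {share:.1%} of T(n)")
```

At n = 10 the cubic term is well under half of T(n); by n = 1000 it is already over 99%.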
• Note that some algorithms' running time depends on the data, so they may have a different Big-O characteristic for the worst time, the average time, and the best time
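Linear search is a standard example of this (our illustration, not from the slides): it is O(1) in the best case, when the target is the first item, but O(n) in the worst case, when the target is absent:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, value in enumerate(items):  # up to n comparisons
        if value == target:
            return i                   # best case: found at index 0
    return -1                          # worst case: scanned the whole list

data = [7, 3, 9, 1, 4]
print(linear_search(data, 7))   # prints 0  (best case)
print(linear_search(data, 8))   # prints -1 (worst case)
```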
Basic Big-O's

f(n)       Name
1          Constant
log n      Logarithmic
n          Linear
n log n    Log Linear
n²         Quadratic
n³         Cubic
2ⁿ         Exponential
n!         Factorial
©2014 Brad Miller and David Ranum
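Evaluating each category at a modest n makes the ordering concrete (a sketch using the standard math module):

```python
import math

n = 16
rows = [
    ("constant",    1),
    ("logarithmic", math.log2(n)),
    ("linear",      n),
    ("log linear",  n * math.log2(n)),
    ("quadratic",   n ** 2),
    ("cubic",       n ** 3),
    ("exponential", 2 ** n),
    ("factorial",   math.factorial(n)),
]
for name, value in rows:
    print(f"{name:<12} {value:>20,.0f}")
```

Even at n = 16 the values span more than thirteen orders of magnitude from constant to factorial.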
Example 1

a = 5
b = 6
c = 10
for i in range(n):
    for j in range(n):
        x = i * i
        y = j * j
        z = i * j
for k in range(n):
    w = a * k + 45
    v = b * b
d = 33
T(n) = 3 + 4n² + 4n + 1
     = 4n² + 4n + 4
is O(n²)
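As a check (our instrumentation), incrementing a counter for every assignment, including the loop variables i, j, and k as in the earlier assignment-counting slide, gives 4n² + 4n + 4:

```python
def count_assignments(n):
    count = 3                  # a = 5; b = 6; c = 10
    for i in range(n):
        count += 1             # assignment to i
        for j in range(n):
            count += 1         # assignment to j
            count += 3         # x, y, z
    for k in range(n):
        count += 1             # assignment to k
        count += 2             # w, v
    count += 1                 # d = 33
    return count

n = 10
print(count_assignments(n), 4 * n ** 2 + 4 * n + 4)  # prints 444 444
```

Either way the 4n² term dominates, so the exact count of the lower-order terms does not change the O(n²) conclusion.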
