
Algorithms Analysis and Design

Chapter 2

Analysis of Algorithms – Part 2


Analysis of recursive algorithms
Example 1

Function RecFunc(n):                      cost      time
    // Base case
    if n <= 0                             c1        1
        return                            c2        1 x t
    RecFunc(n - 1)                        T(n-1)    1 x (1-t)
    for i = 1 to n                        c4        (n+1) x (1-t)
        print(i)                          c5        n x (1-t)

Here t = 1 when the base case is taken (n <= 0) and t = 0 otherwise.

If t=1 (n <= 0): T(n) = c1 + c2 = C   [constant]

If t=0 (n > 0):  T(n) = c1 + T(n-1) + c4(n+1) + c5 n
                      = T(n-1) + an + b   [a and b are constants]

T(n) = C             if n <= 0
T(n) = T(n-1) + n    if n > 0

Remember that we care about the worst case, so we take the maximum possible runtime.

T(n) = T(n-1) + n ,   n > 0
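As a concrete illustration, here is a minimal Python rendering of RecFunc (a sketch, not part of the original slides); each call does constant base-case work, one recursive call on n-1, and a linear loop, matching T(n) = T(n-1) + n:

    def rec_func(n):
        # Base case: constant work (the "C" branch of the recurrence)
        if n <= 0:
            return
        # One recursive call on a problem of size n-1: contributes T(n-1)
        rec_func(n - 1)
        # Linear loop: contributes the "+ n" term
        for i in range(1, n + 1):
            print(i)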


Example 1
 Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n:
    n! = n . (n-1) . (n-2) . ... . 2 . 1 = n . (n-1)!   for n >= 1
    0! = 1   (base case)
We can compute Fact(n) = n . Fact(n-1) with the following recursive algorithm.

ALGORITHM Fact(n)
{                                  cost           time
1   If (n == 0)                    c1             1
2       return 1                   c2             t
    Else
3       return n * Fact(n - 1)     T(n-1) + c3    1 - t
}

T(n-1) is the cost of computing Fact(n-1); c3 is the cost of multiplying Fact(n-1) by n.

If t=1 (n = 0): T(n) = c1 + c2 = d   [constant]
If t=0 (n > 0): T(n) = c1 + T(n-1) + c3 = T(n-1) + c, where c is a constant

T(n) = d             if n = 0
T(n) = T(n-1) + c    if n > 0

Remember that we care about the worst case, so we take the maximum possible runtime.

T(n) = T(n-1) + c ,   n > 0        Recurrence relation
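A direct Python version of Fact (a minimal sketch mirroring the pseudocode above):

    def fact(n):
        # Base case: T(0) = d (constant work)
        if n == 0:
            return 1
        # One recursive call plus one multiplication: T(n) = T(n-1) + c
        return n * fact(n - 1)

    print(fact(5))  # 120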
Example 2
 Fibonacci Numbers Sequence: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...

ALGORITHM Fib(n)
{                                       cost                    time
1   If (n == 1 || n == 2)               c1                      1
2       return 1                        c2                      t
    Else
3       return Fib(n-1) + Fib(n-2)      T(n-1) + T(n-2) + c3    1 - t
}

If t=1 (n = 1 or 2): T(n) = c1 + c2 = d   [constant]
If t=0 (n > 2):      T(n) = c1 + T(n-1) + T(n-2) + c3 = T(n-1) + T(n-2) + c, where c is a constant

T(n) = d                      if n = 1, 2
T(n) = T(n-1) + T(n-2) + c    if n > 2

T(n) = T(n-1) + T(n-2) + c ,   n > 2        Recurrence relation
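In Python (a sketch; note that this naive doubly-recursive version recomputes the same subproblems repeatedly, which is why its running time grows exponentially):

    def fib(n):
        # Base cases: T(1) = T(2) = d
        if n == 1 or n == 2:
            return 1
        # Two recursive calls: T(n) = T(n-1) + T(n-2) + c
        return fib(n - 1) + fib(n - 2)

    print([fib(i) for i in range(1, 10)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34]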


Example 3
 Multiply is a recursive function to perform multiplication of two positive integers m and n:
    6 x 1 = 6
    6 x 2 = 6 + (6 x 1)
    6 x 3 = 6 + (6 x 2) = 6 + [6 + (6 x 1)]

    m * n = m + m * (n-1),   base case is m * 1 = m

multiply(m, n)
{
    if (n == 1)
        return m
    else
        return m + multiply(m, n - 1)
}

T(n) = d             if n = 1
T(n) = T(n-1) + c    if n > 1

T(n) = T(n-1) + c ,   n > 1        Recurrence relation
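The same function in Python (a minimal sketch):

    def multiply(m, n):
        # Base case: m * 1 = m
        if n == 1:
            return m
        # One recursive call plus one addition: T(n) = T(n-1) + c
        return m + multiply(m, n - 1)

    print(multiply(6, 3))  # 18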
Example 4
 Derive the recurrence relation for the following algorithm. Assume T(1) = 1.

Algorithm A(n)
{                         cost      time
1   A(n-1)                T(n-1)    1
2   for i = 1 to n        c2        n + 1
3       statement         c3        n
}

Note: for Algorithm A to function properly, it is crucial to include a defined stopping point.

T(n) = T(n-1) + c2(n+1) + c3 n
T(n) = T(n-1) + cn + d        Recurrence relation
Example 5
 Recursive binary search algorithm

Algorithm BinarySearch(A, start, end, key)
{
    if (start > end)
        return -1
    else
        mid = (start + end) / 2
        if key == A[mid]
            return mid
        else if key < A[mid]
            return BinarySearch(A, start, mid-1, key)
        else
            return BinarySearch(A, mid+1, end, key)
}

Each call pays the cost of solving one subproblem of size n/2 plus a constant cost c for dividing the problem.

T(n) = d             if n = 1
T(n) = T(n/2) + c    if n > 1

T(n) = T(n/2) + c ,   n > 1        Recurrence relation
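A runnable Python version (a sketch; here the array is 0-indexed, unlike the 1-indexed slides):

    def binary_search(a, start, end, key):
        # Base case: empty range, key not found
        if start > end:
            return -1
        mid = (start + end) // 2
        if key == a[mid]:
            return mid
        elif key < a[mid]:
            # Recurse on the left half: one subproblem of size ~n/2
            return binary_search(a, start, mid - 1, key)
        else:
            # Recurse on the right half
            return binary_search(a, mid + 1, end, key)

    a = [2, 5, 8, 12, 16, 23, 38]
    print(binary_search(a, 0, len(a) - 1, 23))  # 5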
Exercise
Consider the following recurrence relation:

[Recurrence relation shown as an image in the original slides.]

Write a pseudocode for an algorithm that solves a problem with the given recurrence relation.

ALGORITHM MyAlg(A, n)
{

}
Solving recurrence equations
Recurrence relation
 A recurrence relation is an equation that recursively defines a sequence where the next term is a
function of the previous terms.
• Example: Fibonacci series Fib(n) = Fib(n-1) + Fib(n-2)

 When an algorithm contains a recursive call to itself, we can often describe its running time
by a recurrence equation or recurrence, which describes the overall running time on a
problem of size n in terms of the running time on smaller inputs.

• Recall the recursive algorithms that were explained in the lecture

Recurrence equation of running time:

    factorial(n)          T(n) = T(n-1) + c             n > 0
    Fibonacci sequence    T(n) = T(n-1) + T(n-2) + c    n > 2
    Multiply(m, n)        T(n) = T(n-1) + c             n > 1
    Binary search         T(n) = T(n/2) + c             n > 1
Solving Recurrence Relations

 Solving a recurrence relation means converting the recursive definition into a closed formula.
 There are different techniques for solving recurrences:
 Substitution method: guess the solution, then use mathematical induction to verify the boundary condition and show that the guess is correct.
 Iteration method: expand the recurrence and express it as a summation of terms of n and the initial condition.
 Recursion tree method: a pictorial representation of the iteration method in the form of a tree, where the nodes are expanded at each level.
 Characteristic equation
 Master method
Iteration method
Example 1

 T(n) = T(n-1) + c ,   n > 0        T(0) = 1   (base case)

Repeat the expansion k times, then derive the general form:

k    T(n)
1    T(n) = T(n-1) + c
2         = [T(n-2) + c] + c   = T(n-2) + 2c       since T(n-1) = T(n-1-1) + c = T(n-2) + c
3         = [T(n-3) + c] + 2c  = T(n-3) + 3c       since T(n-2) = T(n-2-1) + c = T(n-3) + c
4         = [T(n-4) + c] + 3c  = T(n-4) + 4c       since T(n-3) = T(n-3-1) + c = T(n-4) + c
...
k    T(n) = T(n-k) + kc

The expansion stops at the base case T(0) = 1:
    T(n-k) = T(0)  →  n - k = 0  →  n = k

Now substitute k = n:   T(n) = T(n-n) + nc

General form:   T(n) = T(0) + cn = 1 + cn        Order of growth is O(n)
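A quick numeric sanity check of the closed form (a hypothetical check assuming c = 1):

    def t(n, c=1):
        # Direct evaluation of T(n) = T(n-1) + c with T(0) = 1
        return 1 if n == 0 else t(n - 1, c) + c

    # Closed form derived above: T(n) = 1 + c*n
    assert all(t(n) == 1 + n for n in range(100))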
Example 2

 T(n) = T(n/2) + c ,   n > 1        T(1) = 1   (base case)

k    T(n)
1    T(n) = T(n/2) + c
2         = [T(n/4) + c] + c   = T(n/4) + 2c       since T(n/2) = T(n/4) + c
3         = [T(n/8) + c] + 2c  = T(n/8) + 3c       since T(n/4) = T(n/8) + c
4         = [T(n/16) + c] + 3c = T(n/16) + 4c      since T(n/8) = T(n/16) + c
...
k    T(n) = T(n/2^k) + kc

The expansion stops at the base case T(1) = 1:
    n/2^k = 1  →  n = 2^k  →  lg n = lg 2^k = k lg 2  →  k = lg n

Now substitute k = lg n:   T(n) = T(1) + c lg n

General form:   T(n) = 1 + c lg n        Order of growth is O(lg n)
Iteration method
Example 3 (Solution 1)

 T(n) = 2T(n-1) + 1 ,   n > 0        T(0) = 1   (base case)

k    T(n)
1    T(n) = 2T(n-1) + 1
2         = 2[2T(n-2) + 1] + 1          = 4T(n-2) + 2 + 1            since T(n-1) = 2T(n-2) + 1
3         = 4[2T(n-3) + 1] + 2 + 1      = 8T(n-3) + 4 + 2 + 1        since T(n-2) = 2T(n-3) + 1
4         = 8[2T(n-4) + 1] + 4 + 2 + 1  = 16T(n-4) + 8 + 4 + 2 + 1   since T(n-3) = 2T(n-4) + 1
...
k    T(n) = 2^k T(n-k) + [1 + 2 + 4 + 8 + ... + 2^(k-1)]

1 + 2 + 4 + 8 + 16 + ... is a geometric sequence with first term a and common ratio r,
whose sum is S_k = a(r^k - 1)/(r - 1); here a = 1 and r = 2, so the bracket equals 2^k - 1.

The expansion stops at the base case T(0) = 1:
    T(n-k) = T(0)  →  n - k = 0  →  n = k

Now substitute k = n:   T(n) = 2^n T(n-n) + [2^n - 1]
                        T(n) = 2^n T(0) + [2^n - 1]
                             = 2^n + [2^n - 1]        Order of growth is O(2^n), exponential
Iteration method
Example 3 (Solution 2)

 T(n) = 2T(n-1) + 1 ,   n > 0        T(0) = 1   (base case)

k    T(n)
1    T(n) = 2T(n-1) + 1
2         = 2[2T(n-2) + 1] + 1  = 4T(n-2) + 3        since T(n-1) = 2T(n-2) + 1
3         = 4[2T(n-3) + 1] + 3  = 8T(n-3) + 7        since T(n-2) = 2T(n-3) + 1
4         = 8[2T(n-4) + 1] + 7  = 16T(n-4) + 15      since T(n-3) = 2T(n-4) + 1
...
k    T(n) = 2^k T(n-k) + 2^k - 1

The expansion stops at the base case T(0) = 1:
    T(n-k) = T(0)  →  n - k = 0  →  n = k

Now substitute k = n:   T(n) = 2^n T(0) + [2^n - 1]
                             = 2^n + [2^n - 1]        Order of growth is O(2^n), exponential
Example 4 (Solution 1)

 T(n) = T(n/2) + n ,   n > 1        T(1) = 1   (base case)

k    T(n)
1    T(n) = T(n/2) + n
2         = [T(n/4) + n/2] + n               = T(n/4) + n/2 + n                  since T(n/2) = T(n/4) + n/2
3         = [T(n/8) + n/4] + n/2 + n         = T(n/8) + n/4 + n/2 + n            since T(n/4) = T(n/8) + n/4
4         = [T(n/16) + n/8] + n/4 + n/2 + n  = T(n/16) + n/8 + n/4 + n/2 + n     since T(n/8) = T(n/16) + n/8
...
k    T(n) = T(n/2^k) + [n/2^(k-1) + ... + n/8 + n/4 + n/2 + n]

The sum n + n/2 + n/4 + n/8 + ... + 2 is a geometric series equal to 2(n - 1).

The expansion stops at the base case T(1) = 1:
    n/2^k = 1  →  n = 2^k  →  k = log n

T(n) ≈ T(1) + 2(n - 1) ≈ 1 + 2(n - 1)        Order of growth is O(n)
Example 4 (Solution 2)

 T(n) = T(n/2) + n ,   n > 1        T(1) = 1   (base case)

k    T(n)
1    T(n) = T(n/2) + n
2         = [T(n/4) + n/2] + n      = T(n/4) + 3n/2        since T(n/2) = T(n/4) + n/2
3         = [T(n/8) + n/4] + 3n/2   = T(n/8) + 7n/4        since T(n/4) = T(n/8) + n/4
4         = [T(n/16) + n/8] + 7n/4  = T(n/16) + 15n/8      since T(n/8) = T(n/16) + n/8
...
k    T(n) = T(n/2^k) + [(2^k - 1)/2^(k-1)] n

The expansion stops at the base case T(1) = 1:
    n/2^k = 1  →  n = 2^k  →  k = log n

T(n) ≈ T(1) + [(n - 1)/(n/2)] n = 1 + 2(n - 1)        Order of growth is O(n)
Exercise:

 Solve the following recurrence relations using the iteration method.

 T(n) = T(n-1) + n          T(1) = 1        O(n^2)

 T(n) = T(n-1) + 2n         T(0) = 0        O(n^2)

 T(n) = 2T(n/2) + n         T(1) = 1        O(n log n)


Master Method

 The master method provides a bound for recurrences of the form:

    T(n) = a T(n/b) + f(n)

and, in the simplified form used here:

    T(n) = a T(n/b) + c.n^d

where a >= 1, b > 1, and f(n) is a given function which is asymptotically positive.

 This recurrence characterizes a divide-and-conquer algorithm that divides a problem of
size n into a subproblems, each of size n/b, and solves them recursively.

We will explain the parts of this formula in detail later.
Master Method

 There are three cases:

 if a = b^d    T(n) = O(n^d log n)

 if a > b^d    T(n) = O(n^(log_b a))

 if a < b^d    T(n) = O(n^d)
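The case analysis is mechanical, as the sketch below shows (a hypothetical helper, not from the slides, for recurrences of the form T(n) = aT(n/b) + c.n^d):

    import math

    def master(a, b, d):
        # Simplified master theorem: compare a against b^d
        bd = b ** d
        if a == bd:
            return f"O(n^{d} log n)"             # case 1: a = b^d
        elif a > bd:
            return f"O(n^{math.log(a, b):g})"    # case 2: a > b^d -> O(n^(log_b a))
        else:
            return f"O(n^{d})"                   # case 3: a < b^d

    print(master(2, 2, 1))   # O(n^1 log n), i.e. O(n log n)
    print(master(8, 2, 2))   # O(n^3)
    print(master(3, 2, 2))   # O(n^2)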
Examples

 Apply the master method to solve the following recurrences:

 T(n) = T(n/2) + c        (i.e., T(n) = T(n/2) + c.n^0)
   a = 1   b = 2   d = 0
   a ?? b^d :  1 = 2^0 , so we will follow case 1
   T(n) = O(n^d log n)  →  T(n) = O(log n)

 T(n) = T(n/2) + n
   a = 1   b = 2   d = 1
   a ?? b^d :  1 < 2^1 , so we will follow case 3
   T(n) = O(n^d)  →  T(n) = O(n)
Examples

 T(n) = 2T(n/2) + n
   a = 2   b = 2   d = 1
   a ?? b^d :  2 = 2^1 , so we will follow case 1
   T(n) = O(n^d log n)  →  T(n) = O(n log n)

 T(n) = 3T(n/2) + n^2
   a = 3   b = 2   d = 2
   a ?? b^d :  3 < 2^2 , so we will follow case 3
   T(n) = O(n^d)  →  T(n) = O(n^2)

 T(n) = 8T(n/2) + n^2
   a = 8   b = 2   d = 2
   a ?? b^d :  8 > 2^2 , so we will follow case 2
   T(n) = O(n^(log_b a))  →  T(n) = O(n^(log_2 8)) = O(n^3)


Exercises

 Solve the following recurrence relations using Master's theorem.

1) T(n) = 8T(n) - n^2

   The given recurrence relation does not correspond to the general form of Master's theorem,
   so it cannot be solved using Master's theorem.

2) T(n) = 8T(n/2) + 1000 n^2

3) T(n) = 16T(n/4) + n

4) T(n) = 3T(n/3) + sqrt(n)

5) T(n) = 7T(n/3) + n^2

6) T(n) = 64T(n/8) - n^2        (Does not apply because f(n) is not positive)


Types of Analysis

 Three cases of analysis: best, worst, and average case running time

 Worst case running time:
   − constraints on the input, other than size, resulting in the slowest possible running time
   − provides an upper bound on running time
   − an absolute guarantee that the algorithm will not run longer, no matter what the inputs are

 Best case running time:
   − constraints on the input, other than size, resulting in the fastest possible running time
   − provides a lower bound on running time

      Lower Bound <= Running Time <= Upper Bound

 Average case running time:
   − average running time over every possible type of input
   − usually involves probabilities of different types of input
   − provides a prediction about the running time
   − assumes that the input is random

[Figure: running time versus input size (1000-4000); the worst-case curve lies above the average-case curve, which lies above the best-case curve.]
Types of Analysis

In practice we usually analyze the worst case:

 It is usually very hard to compute the average running time (the expected performance
averaged over all possible inputs!).

 The worst case is usually fairly easy to analyze and is often close to the average or real
running time.
Example: Linear Search Algorithm
Search for x in an array A of n items.

[The slide shows a sample array of 7 items whose first element is 10 and last element is 33.]

If x = 33 → 7 comparisons
If x = 10 → 1 comparison

Best case: x is present at the first element.  Time is constant → TB(n) = O(1)

Worst case: x is not present in the array.  Time is linear → Tw(n) = O(n)

Average case:
    Tavg(n) = (1 + 2 + ... + n) / n = [n(n+1)/2] / n = (n+1)/2        Time is linear → Tavg(n) = O(n)

    Recall: 1 + 2 + ... + n = n(n+1)/2
Example: Linear Search Algorithm
Search for x in an array A of n items.
The algorithm should return the index of the found item, and n+1 if not found (assuming that
the first index is 1).

Seq_search(A, n, x)
{                                     cost    time
1.  i = 1                             c1      1
2.  while (i <= n && A[i] != x)       c2      ?
3.      i++                           c3      ?
4.  return i                          c4      1
}

The number of times lines 2 and 3 are executed depends on the data in the array, not just on n.
Example: Linear Search Algorithm
Worst case: x is not present in the array.

Seq_search(A, n, x)
{                                     cost    time
1.  i = 1                             c1      1
2.  while (i <= n && A[i] != x)       c2      n + 1
3.      i++                           c3      n
4.  return i                          c4      1
}

T(n) = c1 + c2(n+1) + c3 n + c4
     = c1 + c2 + c4 + (c2 + c3) n
T(n) = a n + b, where a and b are constants        O(n)
Example: Linear Search Algorithm
Best case: x is found at the first element A[1].

Seq_search(A, n, x)
{                                     cost    time
1.  i = 1                             c1      1
2.  while (i <= n && A[i] != x)       c2      1
3.      i++                           c3      0
4.  return i                          c4      1
}

T(n) = c1 + c2 + c4
     = C   [constant time]        O(1)
Example: Linear Search Algorithm

What about average case analysis?

Seq_search(A, n, x)
{                                     cost    time
1.  i = 1                             c1      1
2.  while (i <= n && A[i] != x)       c2      ?
3.      i++                           c3      ?
4.  return i                          c4      1
}

It is hard to compute the average running time (the expected performance averaged over all
possible inputs!).
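For reference, a Python version of Seq_search (a sketch; Python lists are 0-indexed, so an unsuccessful search returns n rather than n+1):

    def seq_search(a, x):
        # Scan left to right; the worst case examines all n items -> O(n)
        i = 0
        while i < len(a) and a[i] != x:
            i += 1
        return i  # equals len(a) when x is not found

    data = [10, 4, 25, 9, 17, 8, 33]
    print(seq_search(data, 10))  # 0: best case, one comparison
    print(seq_search(data, 99))  # 7: worst case, x not present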
Exercise

Consider the following algorithm that finds the largest element of an array
(assume the first index is 1).

1) What are the worst, best, and average cases?
2) Do a line-by-line analysis for the worst and best cases and derive the time complexity
   (use Big O notation).

Find_Max(A, n)
{                          cost    time
1.  Max = A[1]
2.  for i = 2 to n
3.      If (A[i] > Max)
4.          Max = A[i]
5.  return Max
}

[2, 3, 4, 5, 6, 7, 8, 9] → Max = 9
The worst case is an array sorted from smallest to largest (line 4 executes on every iteration).
What is the best case?  [9, 8, 7, 6, 5, 4, 3, 2] → Max = 9
What is the average case?
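A Python sketch of Find_Max (0-indexed here) that can be instrumented to count how often the assignment on line 4 runs in each case:

    def find_max(a):
        # Assumes a is non-empty
        max_val = a[0]
        for x in a[1:]:
            if x > max_val:
                # Runs on every iteration for an ascending array (worst case)
                max_val = x
        return max_val

    print(find_max([2, 3, 4, 5, 6, 7, 8, 9]))  # 9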


Asymptotic Notations
Asymptotic Analysis

 To compare two algorithms with running times f(n) and g(n), we need a
rough measure that characterizes how fast each function grows.

• Hint: use rate of growth

• Compare functions in the limit, that is, asymptotically!

(i.e., for large values of n)


Order of Growth

• The low-order terms in a function are relatively insignificant for large n:

    n^4 + 100n^2 + 10n + 50  ~  n^4

i.e., we say that n^4 + 100n^2 + 10n + 50 and n^4 have the same rate of growth.
Order of growth

Suppose you have analyzed two algorithms and expressed their run times in terms of
the size of the input:
 Algorithm A: takes 100 n + 1 steps to solve a problem with size n;
 Algorithm B: takes n^2 + n + 1 steps.
(The leading term is the term with the highest exponent.)

The following table shows the run times of these algorithms for different problem sizes:

    Input size n    Algorithm A: 100n + 1 steps    Algorithm B: n^2 + n + 1 steps
    10              1 001                          111
    100             10 001                         10 101
    1 000           100 001                        1 001 001
    10 000          1 000 001                      > 10^9

Which algorithm is better?
Order of growth


 At n = 10, Algorithm A looks bad.
 For Algorithm A, the leading term has a large coefficient, 100, which is why B does better
than A for small n.
 But at n = 100 they are about the same.
 For larger values of n, A is much better.
 Any function that contains an n^2 term will grow faster than a function whose leading term is n.

Algorithm A is better than Algorithm B for sufficiently large n.


[Table: values of some important functions as n grows.]
How to compare algorithms?
 for large problems, we expect an algorithm with a smaller leading term to
be a better algorithm
 but for smaller problems, there may be a crossover point where another
algorithm is better.
 The location of the crossover point depends on the details of the algorithms, the inputs,
and the hardware.

On the graph, as you go to the right, a faster-growing function eventually becomes larger...

[Figure: Visualizing Orders of Growth]


How to compare algorithms?

 If two algorithms have the same leading-order term, it is hard to say which is better;
the answer will depend on the details.
 They are considered equivalent, even if they have different coefficients.

    Algorithm A: T(n) = 1000n + 100
    Algorithm B: T(n) = 2n + 3
    Which algorithm is better?
Order of growth

An order of growth is a set of functions whose growth is considered equivalent.

Examples:
 2n, 100n, and n + 1 belong to the same order of growth, which is written O(n) in "Big-Oh notation".
 All functions with leading term n^2 belong to O(n^2).

 What is the order of growth of n^3 + n^2?
 What about 1000000 n^3 + n^2? What about n^3 + 1000000 n^2?
 What is the order of growth of (n^2 + n) * (n + 1)?
Basic asymptotic efficiency classes

    1          constant
    log n      logarithmic
    n          linear
    n log n    n-log-n or linearithmic
    n^2        quadratic
    n^3        cubic
    2^n        exponential
    n!         factorial

[Table: execution times for algorithms with the given time complexities.]
Asymptotic Notations

 Notations used for representing the simple form of a function or showing the class of a function.

 A simple method for representing the time complexity.

 A way of comparing functions that ignores constant factors and small input sizes.

 Asymptotic Notations:

• O  Big-O notation : upper bound of a function.

• Ω  omega notation : lower bound of a function.

• Θ  theta notation : tight bound of a function.

Recall:
1 < log n < sqrt(n) < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n! < n^n
Big-O notation
Let f and g be nonnegative functions on the positive integers.

 We write f(n) = O(g(n)) and say that:

 f(n) is big oh of g(n) or,
 f(n) is of order at most g(n) or,
 g is an asymptotic upper bound for f.
 O(g(n)): the class of functions f(n) that grow no faster than g(n).

("Asymptotically" because it matters only for large values of n.)

Definition:
f(n) = O(g(n)) iff there exist positive constants c and n0 such that
    f(n) <= c g(n)   for all n >= n0
Big-O notation

Example: f(n) = 3n + 2

    f(n) <= c g(n):   3n + 2 <= 5n   for all n >= 1,   so f(n) = O(n)

We can also say that 3n + 2 <= 5n^2, so f(n) = O(n^2).

1 < log n < sqrt(n) < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n
(everything to the right of n is also an upper bound, but a weaker one)

But when writing Big-O notation we try to find the closest function, so 3n + 2 is O(n).
More Examples ...

Drop constants and lower-order terms. E.g., O(3n^2 + 10n + 10) becomes O(n^2).

• n^4 + 100n^2 + 10n + 50 is O(n^4)
• 10n^3 + 2n^2 is O(n^3)
• n^3 - n^2 is O(n^3)
• n log n + n is O(n log n)
• 2^n + n^2 is O(2^n)
• constants:
  − 10 is O(1)
  − 1273 is O(1)
Example
• Show that 30n + 8 is O(n).

Solution: use the Big-O definition with g(n) = n.

Show that there exist c, n0 such that 30n + 8 <= cn for all n >= n0:

• Let c = 31, n0 = 8.
• Then 30n + 8 <= 31n for all n >= 8.
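A quick numeric spot-check of the bound (a sanity check, not a proof):

    # Verify 30n + 8 <= 31n for n >= 8 over a sample range
    assert all(30 * n + 8 <= 31 * n for n in range(8, 100_000))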
The following table shows some of the orders of growth (using Big-O notation) that
appear most commonly in algorithmic analysis, in increasing order of badness
(n is the problem size).

    Order of growth    Name
    O(1)               Constant
    O(log n)           Logarithmic
    O(n)               Linear
    O(n log n)         Log-linear or linearithmic
    O(n^2)             Quadratic
    O(n^3)             Cubic
    O(n^c), c > 1      Polynomial, sometimes called algebraic. Examples: O(n^2), O(n^3), O(n^4)
    O(c^n)             Exponential, sometimes called geometric. Examples: O(2^n), O(3^n)
    O(n!)              Factorial, sometimes called combinatorial

[Figure: plot of the most common Big-O classes.]
Ω notation
Let f and g be nonnegative functions on the positive integers.
 We write f(n) = Ω(g(n)) and say that:
 f(n) is omega of g(n) or,
 f(n) is of order at least g(n) or,
 g is an asymptotic lower bound for f.
 Ω(g(n)): the class of functions f(n) that grow at least as fast as g(n).

Definition:
f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that
    f(n) >= c g(n)   for all n >= n0
Ω notation
Example: f(n) = 3n + 2

    f(n) >= c g(n):   3n + 2 >= 1.n   for all n >= 1,   so f(n) = Ω(n)

We can also say that 3n + 2 >= log n, so f(n) = Ω(log n).

Is f(n) = Ω(n^2)?  No, this is wrong.

1 < log n < sqrt(n) < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n
(everything to the left of n is also a lower bound, but a weaker one)

But when writing Ω notation we try to find the closest function, so 3n + 2 is Ω(n).
Θ notation
Let f and g be nonnegative functions on the positive integers.

 We write f(n) = Θ(g(n)) and say that:
 f(n) is theta of g(n) or,
 f(n) is of order of g(n) or,
 g is an asymptotic tight bound for f.
 Θ(g(n)): the class of functions f(n) that grow at the same rate as g(n).

A tight bound implies that both the lower and the upper bound for the computational
complexity of an algorithm are the same.

Definition:
f(n) = Θ(g(n)) iff there exist positive constants c1, c2, and n0 such that
    c1 g(n) <= f(n) <= c2 g(n)   for all n >= n0
Θ notation
Example: f(n) = 3n + 2

    c1 g(n) <= f(n) <= c2 g(n):   1.n <= 3n + 2 <= 5n   for all n >= 1,   so f(n) = Θ(n)

Is f(n) = Θ(n^2)?  No, this is wrong.

1 < log n < sqrt(n) < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n
Θ notation
Example: f(n) = n, g(n) = 2^n
Is f(n) = Θ(g(n))?  Is n Θ(2^n)?

1 < log n < sqrt(n) < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n

Solution:

→ Is n <= c1 . 2^n for all n >= n0?  Yes, always → n is O(2^n)

→ Is n >= c2 . 2^n for all n >= n0?  No → n is not Ω(2^n)

f(n) is not Θ(2^n)
Example

f(n) = 60n^2 + 5n + 1,   g(n) = n^2.   Is f(n) = Θ(g(n))?

1 < log n < sqrt(n) < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n

→ Is 60n^2 + 5n + 1 = O(n^2)?
  f(n) <= c1 g(n) for all n >= n0 and some constant c1 > 0:
      60n^2 + 5n + 1 <= c1 . n^2
  yes, for c1 = 66 and n >= 1

→ Is 60n^2 + 5n + 1 = Ω(n^2)?
  f(n) >= c2 g(n) for all n >= n0 and some constant c2 > 0:
      60n^2 + 5n + 1 >= c2 . n^2
  yes, for c2 = 60 and n >= 1

f(n) is Θ(n^2)
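A numeric spot-check of these constants (again a sanity check, not a proof):

    f = lambda n: 60 * n**2 + 5 * n + 1
    # 60 n^2 <= f(n) <= 66 n^2 for all n >= 1
    assert all(60 * n**2 <= f(n) <= 66 * n**2 for n in range(1, 10_000))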
Example

T(n) = 32n^2 + 17n + 32

Represent T(n) using asymptotic notations
(ignore the multiplicative constants and the lower-order terms):

    T(n) = O(n^2)
    T(n) = Ω(n^2)
    T(n) = Θ(n^2)
Summary

O notation: f(n) = O(g(n)) iff there exist positive constants c and n0 such that
    f(n) <= c g(n) for all n >= n0.

Ω notation: f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that
    f(n) >= c g(n) for all n >= n0.

Θ notation: f(n) = Θ(g(n)) iff there exist positive constants c1, c2, and n0 such that
    c1 g(n) <= f(n) <= c2 g(n) for all n >= n0.
Asymptotic Analysis of Algorithms
(Asymptotic → for large n)

Big-oh expressions greatly simplify the analysis of the running time of algorithms:

 all that we get is an upper bound on the running time of the algorithm
 the result does not depend upon the values of the constants
 the result does not depend upon the characteristics of the computer and compiler
actually used to execute the program!
Conventions for Writing Big-Oh Expressions
(Tight bound)

 Ignore the multiplicative constants
   Instead of writing O(3n^2), we simply write O(n^2).
   If the function is constant (e.g. O(1024)), we write O(1).

 Ignore the lower-order terms
   Instead of writing O(n log n + n + n^2), we simply write O(n^2).
Examples

 T(n) = 32n^2 + 17n + 32  →  T(n) = O(n^2)
 n, n+1, n+80, 40n, n + log n is O(n)
 n^2 + 10000000000n is O(n^2)
 3n^2 + 6n + log n + 24.5 is O(n^2)
NOTES

 A problem that has a worst-case polynomial-time algorithm is considered to have a
good algorithm.
 Such problems are called feasible or tractable.

 A problem that does not have a worst-case polynomial-time algorithm is said to be
intractable.
 Examples: NP-complete and NP-hard problems.


Math you need to Review
 Summations
 Logarithms and Exponents
 Proof techniques

• Properties of logarithms:
    log_b(xy)  = log_b x + log_b y
    log_b(x/y) = log_b x - log_b y
    log_b(x^a) = a log_b x
    log_b a    = log_x a / log_x b

• Properties of exponentials:
    a^(b+c)   = a^b . a^c
    a^(bc)    = (a^b)^c
    a^b / a^c = a^(b-c)
    b         = a^(log_a b)
    b^c       = a^(c log_a b)
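These identities are easy to check numerically; a small Python sanity check (with arbitrarily chosen values for x, y, a, b):

    import math

    x, y, a, b = 7.0, 3.0, 2.0, 5.0
    assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
    assert math.isclose(math.log(x ** a, b), a * math.log(x, b))
    assert math.isclose(math.log(a, b), math.log(a, x) / math.log(b, x))  # change of base
    assert math.isclose(b ** math.log(a, b), a)                           # b^(log_b a) = a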
Common summation formulas

    c + c + c + ... + c   (n terms)     = cn
    1 + 2 + 3 + 4 + 5 + ... + n         = n(n+1)/2
    1^2 + 2^2 + 3^2 + ... + n^2         = n(n+1)(2n+1)/6
    1^3 + 2^3 + 3^3 + ... + n^3         = [n(n+1)/2]^2
    1 + 2 + 4 + 8 + 16 + ... + 2^n      = sum_{i=0..n} 2^i = 2^(n+1) - 1
Logarithms and properties

• In algorithm analysis we often use the notation "log n" without specifying the base.

    Binary logarithm:   lg n = log_2 n
    Natural logarithm:  ln n = log_e n
    lg^k n = (lg n)^k
    lg lg n = lg(lg n)

    log x^y = y log x
    log(xy) = log x + log y
    log(x/y) = log x - log y
    a^(log_b x) = x^(log_b a)
    log_b x = log_a x / log_a b
