BİM122 Lecture Note 7
Copyright © McGraw-Hill Education. All rights reserved. No reproduction or distribution without the prior written consent of McGraw-Hill Education.
Section 3.2
Section Summary
(Photo: Donald E. Knuth)
Big-O Notation
Big-O Estimates for Important Functions
Big-Omega and Big-Theta Notation
For many applications, the goal is to select the function g(x) in O(g(x))
as small as possible (up to multiplication by a constant, of course).
Using the Definition of Big-O Notation
Example: Show that f(x) = x² + 2x + 1 is O(x²).
Solution: When x > 1, x² + 2x + 1 ≤ x² + 2x² + x² = 4x². Take C = 4 and k = 1
as witnesses to establish that f(x) is O(x²).
(Would C = 3 and k = 2 also work?)
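The witnesses can be spot-checked numerically; a minimal Python sketch (the function name f is ours, and a finite check is evidence, not a proof):

```python
# Spot-check (not a proof) that C = 4, k = 1 witness x^2 + 2x + 1 being O(x^2).
def f(x):
    return x**2 + 2*x + 1

C, k = 4, 1
assert all(f(x) <= C * x**2 for x in range(k + 1, 1000))

# The alternative witnesses C = 3, k = 2 also pass the check.
assert all(f(x) <= 3 * x**2 for x in range(3, 1000))
```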
Example: Show that n² is not O(n).
Solution: Suppose there are constants C and k for
which n² ≤ Cn, whenever n > k. Then (by dividing
both sides of n² ≤ Cn by n) n ≤ C must hold for
all n > k. A contradiction!
Big-O Estimates for Polynomials
Example: Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ∙∙∙ + a₁x + a₀,
where a₀, a₁, …, aₙ are real numbers with aₙ ≠ 0.
Then f(x) is O(xⁿ). The proof uses the triangle inequality,
an exercise in Section 1.8.
Proof: For x > 1,
|f(x)| = |aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ∙∙∙ + a₁x + a₀|
       ≤ |aₙ|xⁿ + |aₙ₋₁|xⁿ⁻¹ + ∙∙∙ + |a₁|x + |a₀|
       = xⁿ(|aₙ| + |aₙ₋₁|/x + ∙∙∙ + |a₀|/xⁿ)
       ≤ xⁿ(|aₙ| + |aₙ₋₁| + ∙∙∙ + |a₀|).
Take C = |aₙ| + |aₙ₋₁| + ∙∙∙ + |a₀| and k = 1 as witnesses.
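The proof's witnesses C = |aₙ| + ∙∙∙ + |a₀|, k = 1 can be spot-checked for a sample polynomial; the coefficients below are an arbitrary choice of ours:

```python
# Spot-check of C = |a_n| + ... + |a_0|, k = 1 for the sample
# polynomial f(x) = 3x^3 - 5x + 2.
coeffs = [3, 0, -5, 2]                  # a_n, ..., a_0
n = len(coeffs) - 1
C = sum(abs(a) for a in coeffs)         # here C = 3 + 0 + 5 + 2 = 10

def f(x):
    return sum(a * x**(n - i) for i, a in enumerate(coeffs))

# For every x > k = 1 tested, |f(x)| <= C * x^n holds.
assert all(abs(f(x)) <= C * x**n for x in range(2, 500))
```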
Big-O Estimates for Some Important Functions
Example: Use big-O notation to estimate log n!.
Solution: Given that n! ≤ nⁿ (previous slide),
then log(n!) ≤ log(nⁿ) = n∙log(n).
Hence, log(n!) is O(n∙log(n)), taking C = 1 and k = 1.
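The bound log(n!) ≤ n∙log(n) can be checked directly with Python's standard math module (the range of n tested is our choice):

```python
import math

# log(n!) <= log(n^n) = n*log(n) for all n >= 1, so C = 1, k = 1 work.
for n in range(2, 300):
    assert math.log(math.factorial(n)) <= n * math.log(n)
```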
Display of Growth of Functions
(Figure fragment: the slide listed functions in order of increasing growth. Recoverable entries:)
• 8n³ + 17n² + 111 (tied with the one below)
• (1.5)ⁿ
• 2ⁿ (grows faster than the one above, since 2 > 1.5)
• 2ⁿ(n² + 1) (grows faster than the one above because of the n² + 1 factor)
Worst-Case Complexity of Linear Search
Example: Determine the time complexity of the
linear search algorithm.
procedure linear search(x: integer,
a1, a2, …, an: distinct integers)
i := 1
while (i ≤ n and x ≠ ai)
    i := i + 1
if i ≤ n then location := i
else location := 0
return location {location is the subscript of the term that equals x, or is 0 if
x is not found}
Solution: At each pass through the loop, two comparisons are made: i ≤ n and x ≠ ai. In the worst case (x not in the list) the loop runs n times, one more comparison is made when the loop terminates, and one more in the if statement, for a total of 2n + 2 comparisons. Hence, the worst-case complexity of linear search is Θ(n).
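The pseudocode above can be sketched directly in Python (the function name linear_search is ours; Python lists are 0-indexed, so the 1-based subscript ai becomes a[i - 1]):

```python
def linear_search(x, a):
    """Return the 1-based position of x in the list a, or 0 if x is absent,
    mirroring the linear search pseudocode."""
    i = 1
    while i <= len(a) and x != a[i - 1]:
        i += 1
    return i if i <= len(a) else 0
```

For example, linear_search(19, [1, 5, 19, 7]) returns 3, and linear_search(4, [1, 5, 19, 7]) returns 0.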
Worst-Case Complexity of Bubble Sort
Bubble sort makes (n − 1) + (n − 2) + ∙∙∙ + 2 + 1 = n(n − 1)/2 comparisons in the worst case, so its worst-case complexity is Θ(n²), since n(n − 1)/2 = n²/2 − n/2.
Worst-Case Complexity of Insertion Sort
Example: What is the worst-case complexity of
insertion sort in terms of the number of comparisons made?

procedure insertion sort(a1, …, an: real numbers with n ≥ 2)
for j := 2 to n
    i := 1
    while aj > ai
        i := i + 1
    m := aj
    for k := 0 to j − i − 1
        aj−k := aj−k−1
    ai := m

Solution: The total number of comparisons is
2 + 3 + ∙∙∙ + n = n(n + 1)/2 − 1.
Therefore, the complexity is Θ(n²).
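A Python sketch of the pseudocode that also counts the aj > ai comparisons (the function name and the comparison counter are ours); on an already sorted list of n elements it reaches the worst case of 2 + 3 + ∙∙∙ + n = n(n + 1)/2 − 1 comparisons:

```python
def insertion_sort(a):
    """Sort list a in place following the insertion sort pseudocode;
    return the number of aj > ai comparisons made."""
    comparisons = 0
    for j in range(1, len(a)):          # pseudocode j = 2..n
        i = 0
        while True:                     # linear search for insertion point
            comparisons += 1
            if not a[j] > a[i]:         # stops at the latest when i == j
                break
            i += 1
        m = a[j]
        for k in range(j - i):          # shift a[i..j-1] one place right
            a[j - k] = a[j - k - 1]
        a[i] = m
    return comparisons
```

For n = 10 sorted elements the count is 10 ∙ 11 / 2 − 1 = 54, matching the formula above.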
Matrix Multiplication Algorithm
The definition of matrix multiplication can be expressed
as an algorithm: C = AB, where C is the m × n matrix that is
the product of the m × k matrix A and the k × n matrix B.
This algorithm carries out matrix multiplication based on
its definition: each of the mn entries of C is a sum of k products.
For two n × n matrices, this uses O(n³) multiplications and additions.
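The definition-based algorithm can be sketched in Python (the function name is ours); the three nested loops make the O(n³) cost for square matrices visible:

```python
def matrix_multiply(A, B):
    """Compute C = AB from the definition c_ij = sum over q of a_iq * b_qj.
    A is m x k, B is k x n; uses m*k*n multiplications (O(n^3) when square)."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for q in range(k):
                C[i][j] += A[i][q] * B[q][j]
    return C
```

For example, matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]].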
Boolean Product Algorithm
The definition of the Boolean product of zero-one
matrices can also be converted to an algorithm. For two
n × n zero-one matrices, it uses O(n³) bit operations.
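A Python sketch of the Boolean product A ⊙ B (function name ours), where each entry is an OR of ANDs, computed here with the bitwise operators on 0/1 integers:

```python
def boolean_product(A, B):
    """Boolean product of zero-one matrices:
    c_ij = (a_i1 AND b_1j) OR ... OR (a_ik AND b_kj).
    For square n x n matrices this is O(n^3) bit operations."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for q in range(k):
                C[i][j] = C[i][j] | (A[i][q] & B[q][j])
    return C
```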
Algorithmic Paradigms
An algorithmic paradigm is a general approach,
based on a particular concept, for constructing
algorithms to solve a variety of problems.
Greedy algorithms were introduced in an earlier section.
We discuss brute-force algorithms in this section.
We will see divide-and-conquer algorithms and
dynamic programming (Chapter 8), as well as backtracking
and probabilistic algorithms, in later chapters.
There are many other paradigms that you may see in
later courses.
Brute-Force Algorithms
A brute-force algorithm solves a problem in the most
straightforward manner, without taking advantage of
any ideas that could make the algorithm more efficient.
Brute-force algorithms we have previously seen are
sequential search, bubble sort, and insertion sort.
Computing the Closest Pair of
Points by Brute-Force
Example: Construct a brute-force algorithm for
finding the closest pair of points in a set of n points in
the plane and provide a worst-case estimate of the
number of arithmetic operations.
Solution: Recall that the distance between (xi, yi) and
(xj, yj) is √((xj − xi)² + (yj − yi)²). A brute-force algorithm
simply computes the distance between all pairs of
points and picks the pair with the smallest distance.
Note: There is no need to compute the square root, since the square of the
distance between two points is smallest when the distance is smallest.
Algorithm for finding the closest pair in a set of n points.
procedure closest pair((x1, y1), (x2, y2), …, (xn, yn): xi, yi real numbers)
min := ∞
for i := 2 to n
    for j := 1 to i − 1
        if (xj − xi)² + (yj − yi)² < min then
            min := (xj − xi)² + (yj − yi)²
            closest pair := (xi, yi), (xj, yj)
return closest pair
The algorithm loops through n(n − 1)/2 pairs of points, computes the value
(xj − xi)² + (yj − yi)², and compares it with the minimum, etc. So, the algorithm
uses Θ(n²) arithmetic and comparison operations.
We will develop a divide-and-conquer algorithm with O(n∙log n) worst-case complexity in Chapter 8.
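The brute-force closest-pair pseudocode can be sketched in Python (the function name is ours); as noted above, squared distances suffice, so no square root is taken:

```python
def closest_pair(points):
    """Brute-force closest pair: examine all n(n-1)/2 pairs using squared
    distances (no square root needed); Theta(n^2) operations."""
    best = float("inf")
    pair = None
    for i in range(1, len(points)):
        for j in range(i):
            (xi, yi), (xj, yj) = points[i], points[j]
            d2 = (xj - xi) ** 2 + (yj - yi) ** 2
            if d2 < best:
                best = d2
                pair = (points[i], points[j])
    return pair
```

For example, closest_pair([(0, 0), (5, 5), (1, 0), (9, 9)]) returns the pair of points (1, 0) and (0, 0).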
Understanding the Complexity of Algorithms
References
Kenneth H. Rosen, Discrete Mathematics and Its Applications, Seventh Edition, McGraw-Hill Education.