
UNIT 2
ASYMPTOTIC NOTATIONS
The efficiency analysis framework concentrates on the order of growth of
an algorithm’s basic operation count as the principal indicator of the algorithm’s
efficiency. To compare and rank such orders of growth, computer scientists use
three notations: O (big oh), Ω (big omega), and Ɵ (big theta).
In the following discussion, t (n) and g(n) can be any nonnegative
functions defined on the set of natural numbers. In the context we are interested
in, t (n) will be an algorithm’s running time (usually indicated by its basic
operation count C(n)), and g(n) will be some simple function to compare the
count with.

Informal Introduction
Informally, O(g(n)) is the set of all functions with a lower or same order
of growth as g(n) (to within a constant multiple, as n goes to infinity). Thus, to
give a few examples, the following assertions are all true:
n ∈ O(n²), 100n + 5 ∈ O(n²), (1/2)n(n − 1) ∈ O(n²).

Indeed, the first two functions are linear and hence have a lower order of
growth than g(n) = n², while the last one is quadratic and hence has the same
order of growth as n². On the other hand,

n³ ∉ O(n²), 0.00001n³ ∉ O(n²), n⁴ + n + 1 ∉ O(n²).

Indeed, the functions n³ and 0.00001n³ are both cubic and hence have a
higher order of growth than n², and so does the fourth-degree polynomial
n⁴ + n + 1. The second notation, Ω(g(n)), stands for the set of all functions with
a higher or same order of growth as g(n) (to within a constant multiple, as n
goes to infinity). For example,
n³ ∈ Ω(n²), (1/2)n(n − 1) ∈ Ω(n²), but 100n + 5 ∉ Ω(n²).
Finally, Ɵ(g(n)) is the set of all functions that have the same order of
growth as g(n) (to within a constant multiple, as n goes to infinity). Thus, every
quadratic function an² + bn + c with a > 0 is in Ɵ(n²), but so are, among
infinitely many others, n² + sin n and n² + log n. (Can you explain why?)
Hopefully, this informal introduction has made you comfortable with the idea
behind the three asymptotic notations. So now come the formal definitions.

O-notation
DEFINITION: A function t (n) is said to be in O(g(n)), denoted t (n) ∈ O(g(n)),
if t (n) is bounded above by some constant multiple of g(n) for all large n, i.e., if
there exist some positive constant c and some nonnegative integer n0 such that
t (n) ≤ cg(n) for all n ≥ n0.
The definition is illustrated in Figure (1) where, for the sake of visual clarity, n
is extended to be a real number.
As an example, let us formally prove one of the assertions made in the
introduction: 100n + 5 ∈ O(n²). Indeed,
100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n².
Thus, as values of the constants c and n0 required by the definition, we can take
101 and 5, respectively.
Note that the definition gives us a lot of freedom in choosing specific values for
constants c and n0. For example, we could also reason that
100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n
to complete the proof with c = 105 and n0 = 1.
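A particular choice of constants can be spot-checked numerically. The sketch below is a hypothetical helper (not part of the formal definition): it verifies t (n) ≤ cg(n) over a finite range only, so it is a sanity check rather than a proof.

```python
# Check t(n) <= c * g(n) for all n in [n0, upto]. A finite-range sanity
# check of a big-O witness (c, n0); the definition itself quantifies
# over ALL n >= n0, which a loop cannot verify.
def witnesses_big_o(t, g, c, n0, upto=1000):
    return all(t(n) <= c * g(n) for n in range(n0, upto + 1))

t = lambda n: 100 * n + 5
print(witnesses_big_o(t, lambda n: n * n, c=101, n0=5))  # True
print(witnesses_big_o(t, lambda n: n, c=105, n0=1))      # True
```

Both (c, n0) pairs derived above pass the check, illustrating the freedom the definition allows in choosing constants.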

FIGURE (1) Big-oh notation: t (n) ∈ O(g(n)).

Ω-notation
DEFINITION: A function t (n) is said to be in Ω(g(n)), denoted t (n) ∈ Ω(g(n)),
if t (n) is bounded below by some positive constant multiple of g(n) for all large
n, i.e., if there exist some positive constant c and some nonnegative integer n0
such that
t (n) ≥ cg(n) for all n ≥ n0.
The definition is illustrated in Figure 2.2.
Here is an example of the formal proof that n³ ∈ Ω(n²):
n³ ≥ n² for all n ≥ 0, i.e.,
we can select c = 1 and n0 = 0.


Ɵ-notation
DEFINITION: A function t (n) is said to be in Ɵ(g(n)), denoted t (n) ∈ Ɵ(g(n)),
if t (n) is bounded both above and below by some positive constant multiples of
g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some
nonnegative integer n0 such that
c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0.
The definition is illustrated in Figure 2.3.
For example, let us prove that (1/2)n(n − 1) ∈ Ɵ(n²). First, we prove the right
inequality (the upper bound):
(1/2)n(n − 1) = (1/2)n² − (1/2)n ≤ (1/2)n² for all n ≥ 0.
Second, we prove the left inequality (the lower bound):
(1/2)n(n − 1) = (1/2)n² − (1/2)n ≥ (1/2)n² − (1/2)n · (1/2)n (for all n ≥ 2) = (1/4)n².
Hence, we can select c2 = 1/4, c1 = 1/2, and n0 = 2.
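The two-sided bound just derived can likewise be spot-checked numerically. The helper name below is illustrative only, and the check covers a finite range of n, not the whole definition.

```python
# Check c2 * g(n) <= t(n) <= c1 * g(n) for all n in [n0, upto] -- a
# finite-range sanity check of the constants c2 = 1/4, c1 = 1/2, n0 = 2
# chosen in the proof above.
def within_theta_bounds(t, g, c1, c2, n0, upto=1000):
    return all(c2 * g(n) <= t(n) <= c1 * g(n) for n in range(n0, upto + 1))

t = lambda n: n * (n - 1) / 2
print(within_theta_bounds(t, lambda n: n * n, c1=0.5, c2=0.25, n0=2))  # True
```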


Mathematical Analysis of Non-recursive Algorithms:


General Plan for Analysing the Time Efficiency of Non-recursive
Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the
innermost loop.)
3. Check whether the number of times the basic operation is executed depends
only on the size of an input. If it also depends on some additional property, the
worst-case, average-case, and, if necessary, best-case efficiencies have to be
investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation
is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-
form formula for the count or, at the very least, establish its order of growth.

EXAMPLE 1 Consider the problem of finding the value of the largest element
in a list of n numbers. For simplicity, we assume that the list is implemented as
an array. The following is pseudocode of a standard algorithm for solving the
problem.
ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval


The obvious measure of an input’s size here is the number of elements in the
array, i.e., n. The operations that are going to be executed most often are in the
algorithm’s for loop. There are two operations in the loop’s body: the
comparison A[i] > maxval and the assignment maxval ← A[i]. Which of these
two operations should we consider basic? Since the comparison is executed on
each repetition of the loop and the assignment is not, we should consider the
comparison to be the algorithm’s basic operation. Note that the number of
comparisons will be the same for all arrays of size n; therefore, in terms of this
metric, there is no need to distinguish among the worst, average, and best cases
here.
Let us denote C(n) the number of times this comparison is executed and try to
find a formula expressing it as a function of size n. The algorithm makes one
comparison on each execution of the loop, which is repeated for each value of
the loop’s variable i within the bounds 1 and n − 1, inclusive. Therefore, we get
the following sum for C(n):

C(n) = Σ_{i=1}^{n−1} 1.

This is an easy sum to compute because it is nothing other than 1 repeated n − 1
times. Thus,

C(n) = Σ_{i=1}^{n−1} 1 = n − 1 ∈ Ɵ(n).
EXAMPLE 2 Consider the element uniqueness problem: check whether all the
elements in a given array of n elements are distinct. This problem can be solved
by the following straightforward algorithm.
ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct
// and “false” otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true
The natural measure of the input’s size here is again n, the number of elements
in the array. Since the innermost loop contains a single operation (the
comparison of two elements), we should consider it as the algorithm’s basic
operation. Note, however, that the number of element comparisons depends not
only on n but also on whether there are equal elements in the array and, if there
are, which array positions they occupy. We will limit our investigation to the
worst case only. By definition, the worst-case input is an array for which the
number of element comparisons


Cworst(n) is the largest among all arrays of size n. An inspection of the
innermost loop reveals that there are two kinds of worst-case inputs—inputs for
which the algorithm does not exit the loop prematurely: arrays with no equal
elements and arrays in which the last two elements are the only pair of equal
elements. For such inputs, one comparison is made for each repetition of the
innermost loop, i.e., for each value of the loop variable j between its limits i + 1
and n − 1; this is repeated for each value of the outer loop, i.e., for each value of
the loop variable i between its limits 0 and n − 2. Accordingly, we get

Cworst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = n(n − 1)/2 ∈ Ɵ(n²),

where the last equality is obtained by applying summation formula (S2). Note
that this result was perfectly predictable: in the worst case, the algorithm needs
to compare all n(n − 1)/2 distinct pairs of its n elements.

EXAMPLE 3 Given two n × n matrices A and B, find the time efficiency of the
definition-based algorithm for computing their product C = AB. By definition,
C is an n × n matrix whose elements are computed as the scalar (dot) products
of the rows of matrix A and the columns of matrix B.


ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])
//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        C[i, j] ← 0.0
        for k ← 0 to n − 1 do
            C[i, j] ← C[i, j] + A[i, k] ∗ B[k, j]
return C
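The innermost multiplication executes once for every triple (i, j, k), so the multiplication count is n · n · n = n³. A Python rendering of the algorithm, instrumented to count that basic operation, illustrates this (the counter is added for demonstration):

```python
def matrix_multiply(A, B):
    """Definition-based product of two n x n matrices, with a multiplication count."""
    n = len(A)
    mults = 0
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1        # the basic operation: A[i,k] * B[k,j]
    return C, mults

C, m = matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C)  # [[19.0, 22.0], [43.0, 50.0]]
print(m)  # 8 -- n^3 for n = 2
```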

Mathematical Analysis of Recursive Algorithms:


EXAMPLE 1 Compute the factorial function F (n) = n! for an arbitrary non-
negative integer n. Since
n! = 1 · … · (n − 1) · n = (n − 1)! · n for n ≥ 1
and 0! = 1 by definition, we can compute F (n) = F (n − 1) · n with the following
recursive algorithm.
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F (n − 1) ∗ n


For simplicity, we consider n itself as an indicator of this algorithm’s input size
(rather than the number of bits in its binary expansion). The basic operation of
the algorithm is multiplication, whose number of executions we denote M(n).
Since the function F (n) is computed according to the formula
F (n) = F (n − 1) . n for n > 0,
the number of multiplications M(n) needed to compute it must satisfy the
equality

M(n) = M(n − 1) + 1 for n > 0.
Indeed, M(n − 1) multiplications are spent to compute F (n − 1), and one more
multiplication is needed to multiply the result by n. The last equation defines the
sequence M(n) that we need to find. This equation defines M(n) not explicitly,
i.e., as a function of n, but implicitly as a function of its value at another point,
namely n − 1. Such equations are called recurrence relations or, for brevity,
recurrences. Recurrence relations play an important role not only in analysis of
algorithms but also in some areas of applied mathematics. They are usually
studied in detail in courses on discrete mathematics or discrete structures; a very
brief tutorial on them is provided in Appendix B. Our goal now is to solve the
recurrence relation M(n) = M(n − 1) + 1, i.e., to find an explicit formula for
M(n) in terms of n only. Note, however, that there is not one but infinitely many
sequences that satisfy this recurrence. (Can you give examples of, say, two of
them?) To determine a solution uniquely, we need an initial condition that tells
us the value with which the sequence starts. We can obtain this value by
inspecting the condition that makes the algorithm stop its recursive calls:
if n = 0 return 1
This tells us two things. First, since the calls stop when n = 0, the smallest value
of n for which this algorithm is executed and hence M(n) defined is 0. Second,
by inspecting the pseudocode’s exiting line, we can see that when n = 0, the
algorithm performs no multiplications. Therefore, the initial condition we are
after is


As we just showed, M(n) is defined by the recurrence
M(n) = M(n − 1) + 1 for n > 0, with the initial condition M(0) = 0,
and it is this recurrence that we need to solve now. Though it is not difficult to “guess” the solution
here (what sequence starts with 0 when n = 0 and increases by 1 on each step?),
it will be more useful to arrive at it in a systematic fashion. From the several
techniques available for solving recurrence relations, we use what can be called
the method of backward substitutions. The method’s idea (and the reason for the
name) is immediately clear from the way it applies to solving our particular
recurrence:
M(n) = M(n − 1) + 1 substitute M(n − 1) = M(n − 2) + 1
= [M(n − 2) + 1] + 1 = M(n − 2) + 2 substitute M(n − 2) = M(n − 3) + 1
= [M(n − 3) + 1] + 2 = M(n − 3) + 3.
After inspecting the first three lines, we see an emerging pattern, which makes it
possible to predict not only the next line (what would it be?) but also a general
formula for the pattern: M(n) = M(n − i) + i. Strictly speaking, the correctness
of this formula should be proved by mathematical induction, but it is easier to
get to the solution as follows and then verify its correctness. What remains to be
done is to take advantage of the initial condition given. Since it is specified for n
= 0, we have to substitute i = n in the pattern’s formula to get the ultimate result
of our backward substitutions:
M(n) = M(n − 1) + 1 = ... = M(n − i) + i = ... = M(n − n) + n = n.
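The closed-form solution M(n) = n can be confirmed by instrumenting the recursive algorithm itself; the Python sketch below mirrors ALGORITHM F and counts its multiplications:

```python
def factorial(n):
    """Return (n!, number of multiplications performed), mirroring ALGORITHM F."""
    if n == 0:
        return 1, 0               # base case: no multiplications, M(0) = 0
    prev, mults = factorial(n - 1)
    return prev * n, mults + 1    # one multiplication per recursive step

for n in (0, 1, 5, 10):
    value, m = factorial(n)
    print(n, value, m)            # m equals n in every case: M(n) = n
```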


You should not be disappointed after exerting so much effort to get this
“obvious” answer. The benefits of the method illustrated in this simple example
will become clear very soon, when we have to solve more difficult recurrences.
Also, note that the simple iterative algorithm that accumulates the product of n
consecutive integers requires the same number of multiplications, and it does so
without the overhead of time and space used for maintaining the recursion’s
stack. The issue of time efficiency is actually not that important for the problem
of computing n!, however. As we saw in Section 2.1, the function’s values get
so large so fast that we can realistically compute exact values of n! only for very
small n’s. Again, we use this example just as a simple and convenient vehicle to
introduce the standard approach to analysing recursive algorithms.
Generalizing our experience with investigating the recursive algorithm for
computing n!, we can now outline a general plan for investigating recursive
algorithms.
General Plan for Analysing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can vary
on different inputs of the same size; if it can, the worst-case, average-case, and
best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the
number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.
