Characterizing Running Times

The order of growth of the running time of an algorithm, defined in
Chapter 2, gives a simple way to characterize the algorithm’s efficiency
and also allows us to compare it with alternative algorithms. Once the
input size n becomes large enough, merge sort, with its Θ(n lg n) worst-
case running time, beats insertion sort, whose worst-case running time
is Θ(n²). Although we can sometimes determine the exact running time
of an algorithm, as we did for insertion sort in Chapter 2, the extra
precision is rarely worth the effort of computing it. For large enough
inputs, the multiplicative constants and lower-order terms of an exact
running time are dominated by the effects of the input size itself.
When we look at input sizes large enough to make relevant only the
order of growth of the running time, we are studying the asymptotic
efficiency of algorithms. That is, we are concerned with how the running
time of an algorithm increases with the size of the input in the limit, as
the size of the input increases without bound. Usually, an algorithm
that is asymptotically more efficient is the best choice for all but very
small inputs.
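
To see this concretely, here is a small Python sketch (not from the
text) that tabulates two hypothetical cost functions: 2n² for an
insertion-sort-like algorithm and 50n lg n for a merge-sort-like one.
The constants 2 and 50 are assumptions chosen purely for illustration.

    import math

    def t_insertion(n):
        # assumed insertion-sort-like cost model: 2 * n^2
        return 2 * n * n

    def t_merge(n):
        # assumed merge-sort-like cost model: 50 * n * lg n
        return 50 * n * math.log2(n)

    for n in [2, 8, 32, 128, 512, 2048]:
        winner = "merge" if t_merge(n) < t_insertion(n) else "insertion"
        print(f"n={n:4d}  2n^2={t_insertion(n):9.0f}  "
              f"50 n lg n={t_merge(n):9.0f}  faster: {winner}")

With these particular constants the crossover falls between n = 128 and
n = 512. Different constants move the crossover point, but for every
sufficiently large n the Θ(n lg n) algorithm wins.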
This chapter gives several standard methods for simplifying the
asymptotic analysis of algorithms. The next section presents informally
the three most commonly used types of “asymptotic notation,” of
which we have already seen an example in Θ-notation. It also shows one
way to use these asymptotic notations to reason about the worst-case
running time of insertion sort. Then we look at asymptotic notations
more formally and present several notational conventions used
throughout this book. The last section reviews the behavior of
functions that commonly arise when analyzing algorithms.

3.1 O-notation, Ω-notation, and Θ-notation


When we analyzed the worst-case running time of insertion sort in
Chapter 2, we started with the complicated expression

(c₅/2 + c₆/2 + c₇/2)n² + (c₁ + c₂ + c₄ + c₅/2 − c₆/2 − c₇/2 + c₈)n − (c₂ + c₄ + c₅ + c₈).

We then discarded the lower-order terms (c₁ + c₂ + c₄ + c₅/2 − c₆/2 −
c₇/2 + c₈)n and c₂ + c₄ + c₅ + c₈, and we also ignored the coefficient
c₅/2 + c₆/2 + c₇/2 of n². That left just the factor n², which we put into
Θ-notation as Θ(n²). We use this style to characterize running times of
algorithms: discard the lower-order terms and the coefficient of the
leading term, and use a notation that focuses on the rate of growth of
the running time.
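
As a quick numerical illustration of why this simplification is safe,
here is a sketch in which every constant c₁ through c₈ is arbitrarily
set to 1 (an assumption made only for this example). The full
expression divided by its leading term rapidly approaches the leading
coefficient, here 3/2:

    def t(n):
        # the insertion-sort expression with all c_i = 1:
        # (3/2)n^2 + (7/2)n - 4
        return 1.5 * n * n + 3.5 * n - 4

    for n in [10, 100, 1000, 10000]:
        print(f"n={n:6d}  T(n)/n^2 = {t(n) / (n * n):.4f}")

The ratios print as 1.8100, 1.5346, 1.5035, and 1.5003: the lower-order
terms matter less and less as n grows, and only the factor n² survives
in Θ-notation.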
Θ-notation is not the only such “asymptotic notation.” In this
section, we’ll see other forms of asymptotic notation as well. We start
with intuitive looks at these notations, revisiting insertion sort to see
how we can apply them. In the next section, we’ll see the formal
definitions of our asymptotic notations, along with conventions for
using them.
Before we get into specifics, bear in mind that the asymptotic
notations we’ll see are designed so that they characterize functions in
general. It so happens that the functions we are most interested in
denote the running times of algorithms. But asymptotic notation can
apply to functions that characterize some other aspect of algorithms
(the amount of space they use, for example), or even to functions that
have nothing whatsoever to do with algorithms.

O-notation
O-notation characterizes an upper bound on the asymptotic behavior of
a function. In other words, it says that a function grows no faster than a
certain rate, based on the highest-order term. Consider, for example, the
function 7n³ + 100n² − 20n + 6. Its highest-order term is 7n³, and so we
say that this function’s rate of growth is n³. Because this function grows
no faster than n³, we can write that it is O(n³). You might be surprised
that we can also write that the function 7n³ + 100n² − 20n + 6 is O(n⁴).
Why? Because the function grows more slowly than n⁴, we are correct
in saying that it grows no faster. As you might have guessed, this
function is also O(n⁵), O(n⁶), and so on. More generally, it is O(nᶜ) for
any constant c ≥ 3.
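
A numerical sanity check of both claims (a sketch, not part of the
text): the ratio of the function to n³ stays below a fixed constant,
while its ratio to n⁴ drains toward zero, which is why O(n⁴) is also a
true, but looser, bound.

    def f(n):
        return 7 * n**3 + 100 * n**2 - 20 * n + 6

    for n in [10, 100, 1000, 10000]:
        print(f"n={n:6d}  f(n)/n^3 = {f(n) / n**3:8.3f}  "
              f"f(n)/n^4 = {f(n) / n**4:9.6f}")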

Ω-notation
Ω-notation characterizes a lower bound on the asymptotic behavior of a
function. In other words, it says that a function grows at least as fast as
a certain rate, based, as in O-notation, on the highest-order term.
Because the highest-order term in the function 7n³ + 100n² − 20n + 6
grows at least as fast as n³, this function is Ω(n³). This function is also
Ω(n²) and Ω(n). More generally, it is Ω(nᶜ) for any constant c ≤ 3.
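
For a worked lower-bound check (not in the text, but easy to verify):
since 100n² and 6 are positive, 7n³ + 100n² − 20n + 6 ≥ 7n³ − 20n for
all n ≥ 1, and 7n³ − 20n ≥ 6n³ whenever n² ≥ 20, that is, for all
n ≥ 5. So the function is at least 6n³ from n = 5 onward, exactly the
kind of constant-factor lower bound that Ω(n³) asserts.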

Θ-notation
Θ-notation characterizes a tight bound on the asymptotic behavior of a
function. It says that a function grows precisely at a certain rate, based,
once again, on the highest-order term. Put another way, Θ-notation
characterizes the rate of growth of the function to within a constant
factor from above and to within a constant factor from below. These
two constant factors need not be equal.
If you can show that a function is both O(f(n)) and Ω(f(n)) for some
function f(n), then you have shown that the function is Θ(f(n)). (The
next section states this fact as a theorem.) For example, since the
function 7n³ + 100n² − 20n + 6 is both O(n³) and Ω(n³), it is also Θ(n³).
