4 – Order
Once we determine the number of operations
performed as a function of n (T(n) or W(n)),
we are interested in classifying algorithms
according to their order or complexity class.
◦ Algorithms with time complexities such as n, 100n,
and 50n + 10 are called linear-time algorithms
◦ Algorithms with time complexities such as n², 0.5n²,
and n² + 10n + 1 are called quadratic-time algorithms
Any linear-time algorithm is eventually more
efficient than any quadratic-time algorithm
Example: How to compute 1 + 2 + 3 + … + n

    public int sumA(int n) {
        int sum;
        sum = n * (n + 1) / 2;   // closed form: constant number of operations
        return sum;
    }

    # ops: sumA(10) → 1 (constant); sumA(1000) → 1 (constant)

We are only concerned with growth as N grows large.
(Figure: the performance gap between solutions widens as N grows.)
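To make the performance gap concrete, the constant-time formula can be contrasted with a loop-based summation. A minimal sketch (the loop version, named `sumB` here, is hypothetical and not from the slide):

```java
// Sketch contrasting the slide's constant-time sumA with a hypothetical
// linear-time alternative (named sumB here purely for illustration).
public class SumDemo {
    // Constant number of operations, regardless of n (as on the slide).
    public static int sumA(int n) {
        return n * (n + 1) / 2;
    }

    // ~n operations: the gap versus sumA widens as n grows.
    public static int sumB(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumA(10));   // 55
        System.out.println(sumB(10));   // 55 — same answer, more work
    }
}
```

Both return the same value; only the operation count differs, and only that difference matters as N grows large.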
The Dominant Term
In a function such as
    f(n) = 0.1n² + n + 100,
the quadratic term eventually dominates (Table 1.3).
Big-Theta Notation
We “throw away” terms with smaller exponents,
and we state that f(n) is “order of” n², or
    f(n) ∈ Θ(n²)
f(n) is called a quadratic-time algorithm, or Θ(n²).
Common complexity classes, in increasing order of growth:
    Θ(1), Θ(lg n), Θ(n), Θ(n lg n), Θ(n²), Θ(n³), Θ(2ⁿ)
Figure 1.3: Growth rates of
some common complexity
functions.
Practical implications
Assume a machine executing one billion instructions per second (about 1 ns per instruction).

    N           1,000     10,000    100,000   1,000,000   10,000,000
    O(log N)    10 ns     13.2 ns   16.6 ns   19.9 ns     23.2 ns     ← reasonable for most inputs
    O(N)        1 µs      10 µs     100 µs    1 ms        10 ms
    O(N!)       ??        ??        ??        ??          ??
4. The cost of a loop is the cost of the body multiplied by the
maximum number of iterations that the loop makes.
For a loop whose body costs a constant c and runs N times, the total cost
    = ~cN
which is Order of N.
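The rule above can be sketched with a single loop whose body costs a constant (the operation counter here is illustrative, not from the slide):

```java
// A minimal sketch of the loop-cost rule: a constant-cost body
// executed n times performs ~cN operations — Order of N.
public class LinearLoop {
    static long countOps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            count++;   // constant-cost body, executed n times
        }
        return count;  // = n, i.e. ~cN with c = 1
    }

    public static void main(String[] args) {
        System.out.println(countOps(1000));   // prints 1000
    }
}
```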
Example: loops
Cost of the i loop
    = cost of the i loop body × number of i loop iterations
    = cost of the j loop × number of i loop iterations
    = (cost of the j loop body × number of j loop iterations) × number of i loop iterations
    = (constant × N) × N
    = ~cN²
which is Order of N².
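The derivation above corresponds to a doubly nested loop; a minimal sketch (the operation counter is illustrative, not from the slide):

```java
// Applying the loop-cost rule twice: a constant-cost body inside
// a j loop of n iterations, inside an i loop of n iterations,
// performs ~cN^2 operations — Order of N^2.
public class NestedLoops {
    static long countOps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {        // i loop: n iterations
            for (int j = 0; j < n; j++) {    // j loop: n iterations each
                count++;                     // constant-cost body
            }
        }
        return count;                        // = n * n, i.e. ~cN^2
    }

    public static void main(String[] args) {
        System.out.println(countOps(10));    // prints 100
    }
}
```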
Example: sequence

    if (a < b)
    {
        while (k > 1)
        {
            k = k / 2;      // k is halved each pass: c1·log₂n operations
            count++;
        }
    }
    else
    {
        // (else branch not shown on the slide)
    }

The cost of an if/else is the cost of the more expensive branch; here the statement as a whole is Order of N.
A Rigorous Introduction to Order
In order to define order formally, it is useful to
define a similar concept: big-O notation.
Big-O notation is used to establish an “upper
bound” on the growth of a function.
For example, let g(n) = n² + 10n.
We look for a constant c so that g(n)
eventually falls “below” cn².
If we find such a c, we say that g(n) is “big O” of n², or
    n² + 10n ∈ O(n²)
Figure 1.5: The function n² + 10n eventually stays beneath the function 2n².
Big-O notation
Let f(n) and g(n) be functions defined on the nonnegative integers. We say “f(n) is
O(g(n))” if and only if there exist a nonnegative integer n₀ and a constant c > 0
such that for all integers n ≥ n₀ we have f(n) ≤ c·g(n).
Example: f(n) = n² + 2n + 1
    f is O(n²)    (c = 4, n₀ = 1)
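The witnesses c = 4 and n₀ = 1 can be checked empirically. A quick sketch (a spot check over a finite range, not a proof):

```java
// Spot-check the big-O witnesses for f(n) = n^2 + 2n + 1:
// verify f(n) <= 4*n^2 for every n from n0 = 1 up to a large bound.
public class BigOCheck {
    static long f(long n) {
        return n * n + 2 * n + 1;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 1_000_000; n++) {
            if (f(n) > 4 * n * n) {
                System.out.println("bound fails at n = " + n);
                return;
            }
        }
        System.out.println("f(n) <= 4*n^2 holds for 1 <= n <= 1,000,000");
    }
}
```

At n = 1 the bound is tight: f(1) = 4 = 4·1², which is why n₀ = 1 works.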
Why Big-O Notation is not enough
By definition, if algorithm A is O(n²), then A is also O(n³),
O(2ⁿ), etc.
These “larger” big-O bounds are not useful for
comparing A with other algorithms.
Big Theta is defined in a similar way as big O, except
that it provides both an upper (big O) and a lower (big
Omega) bound on the growth of the function.
If a function g(n) is both O(n²) and Ω(n²), then
    g(n) ∈ Θ(n²)
Figure 1.4: Illustrating “big O”, Ω and Θ.
Figure 1.6: The sets O(n²), Ω(n²) and Θ(n²). Some exemplary members are shown.
Properties of Order (pp. 38–39)
If b > 1 and a > 1, then log_a n ∈ Θ(log_b n)
If b > a > 0, then aⁿ ∈ O(bⁿ)
For all a > 0, aⁿ ∈ O(n!)
“… any logarithmic function is eventually better
than any polynomial, any polynomial is eventually
better than any exponential function, and any
exponential is eventually better than the factorial
function.”
Practice: Which of the following are true?
◦ n² + 2n + 1 is O(n log n)
◦ n² + 2n + 1 is O(n²)
◦ n² + 2n + 1 is Ω(n)
◦ n² + 2n + 1 is O(n³)
◦ n² + 2n + 1 is Θ(n²)
◦ n² + 2n + 1 is Θ(n³)
◦ 2ⁿ + 2n + 1 is O(2ⁿ)
◦ 2ⁿ + 2n + 1 is Θ(2ⁿ)
◦ 2ⁿ + 2n + 1 is O(4ⁿ)
◦ 2ⁿ + 2n + 1 is Θ(4ⁿ)
Estimating Complexity Empirically
It is sometimes useful to experiment by running an
algorithm with increasing values of n and measuring
either the time elapsed or number of operations
performed
Plot the measurements with their corresponding
values of n. Observe the shape of the curve.
Compare the ratio of the increase in the number of
operations performed to the ratio of the increase in
values of n.
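The procedure can be sketched in code: count operations for doubling values of n and print the ratio between successive counts. A minimal sketch, using a nested-loop stand-in for whatever algorithm is being measured:

```java
// Empirical complexity estimation: run an operation counter for
// doubling values of n and report the growth ratio. The quadratic
// stand-in algorithm here is illustrative, not from the slide.
public class EmpiricalEstimate {
    // Stand-in for the measured algorithm; counts ~n^2 operations.
    static long opCount(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                ops++;
        return ops;
    }

    public static void main(String[] args) {
        long prev = 0;
        for (int n = 8; n <= 64; n *= 2) {
            long ops = opCount(n);
            if (prev > 0)
                System.out.printf("%d\t%d\t%.3f%n", n, ops, (double) ops / prev);
            else
                System.out.printf("%d\t%d%n", n, ops);
            prev = ops;
        }
    }
}
```

A ratio near 4 for each doubling of n points to quadratic growth; a ratio near 2 would point to linear growth.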
Example

    n     # ops    ratio
    8       123
    16      507    4.122
    32     2043    4.030
    64     8187    4.007

Each time n doubles, the operation count grows by a factor of about 4, which suggests Θ(n²) growth.