
1.4 – Order
• Once we determine the number of operations performed as a function of n (T(n) or W(n)), we are interested in classifying algorithms according to their order, or complexity class.
  ◦ Algorithms with time complexities such as n, 100n, and 50n + 10 are called linear-time algorithms.
  ◦ Algorithms with time complexities such as n², 0.5n², and n² + 10n + 1 are called quadratic-time algorithms.
• Any linear-time algorithm is eventually more efficient than any quadratic-time algorithm.
Example: How to compute 1 + 2 + 3 + … + n

public int sumA(int n) {
    int sum = n * (n + 1) / 2;    // closed form
    return sum;
}
// # ops: sumA(10) → 1, sumA(1000) → 1   (constant)

public int sumB(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum = sum + i;
    }
    return sum;
}
// # ops: sumB(10) → ~10, sumB(1000) → ~1000   (~n)

public int sumC(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= i; j++) {
            sum = sum + 1;
        }
    }
    return sum;
}
// # ops: sumC(10) → ~50, sumC(1000) → ~500,000   (~(1/2)n²)
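All three methods compute the same value at very different costs. A minimal driver sketch (not from the slides; the class name and the innerSteps counter are added here for illustration) that confirms they agree and reproduces the operation counts above:

public class SumDemo {
    static long innerSteps;  // counts executions of sumC's inner loop body

    static int sumA(int n) { return n * (n + 1) / 2; }   // constant

    static int sumB(int n) {                              // ~n
        int sum = 0;
        for (int i = 1; i <= n; i++) sum = sum + i;
        return sum;
    }

    static int sumC(int n) {                              // ~(1/2)n²
        int sum = 0;
        innerSteps = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++) { sum = sum + 1; innerSteps++; }
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[] { 10, 1000 }) {
            System.out.printf("n=%d: A=%d B=%d C=%d, sumC inner steps=%d%n",
                    n, sumA(n), sumB(n), sumC(n), innerSteps);
            // inner steps = n(n+1)/2: 55 for n=10 and 500,500 for n=1000,
            // matching the ~(1/2)n² estimate above
        }
    }
}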
Order
We're really interested in an algorithm's order of growth: how does the algorithm's running time change as N grows large?

[Figure: running times of two algorithms plotted against N; we are only concerned with growth as N grows large, where the performance gap between them widens.]
The Dominant Term
• In a function such as f(n) = 0.1n² + n + 100, the quadratic term eventually dominates (see Table 1.3).
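For instance, at n = 10 the other terms still matter (0.1n² = 10 versus n + 100 = 110), but at n = 1,000 the quadratic term contributes 100,000 versus only 1,100 for the rest, and the gap keeps widening.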
Big-Theta Notation
• We "throw away" the terms with smaller exponents and state that f(n) is "order of" n², or f(n) ∈ Θ(n²).
• An algorithm whose operation count is such an f(n) is called a quadratic-time, or Θ(n²), algorithm.
• Common complexity classes:
  Θ(1), Θ(lg n), Θ(n), Θ(n lg n), Θ(n²), Θ(n³), Θ(2ⁿ)
Figure 1.3: Growth rates of some common complexity functions.
Practical implications
Assume a machine executing one billion instructions per second.

            N = 1,000     10,000      100,000     1,000,000    10,000,000
O(log N)    10 ns         13.2 ns     16.6 ns     19.9 ns      23.3 ns      ← reasonable for most inputs
O(N)        1 μs          10 μs       100 μs      1 ms         10 ms        ← reasonable for most inputs
O(N log N)  10 μs         100 μs      2 ms        20 ms        0.2 sec      ← enables new technologies
O(N²)       1 ms          0.1 sec     10 sec      17 min       28 hours
O(N³)       1 sec         17 min      12 days     32 yrs       32,000 yrs
O(N⁴)       17 min        4 months    3,200 yrs   32 M yrs     3.17e11 yrs
O(2ᴺ)       3.4e284 yrs   ??          ??          ??           ??           ← unreasonable for all inputs
O(N!)       ??            ??          ??          ??           ??           ← unreasonable for all inputs

Age of the universe: 433.6 × 10¹⁵ seconds
Time needed for an exponential algorithm to process 1,000 things: 3.4 × 10²⁸⁴ years
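These entries follow directly from the assumed machine speed. For example, the O(N²) entry for N = 1,000,000: (10⁶)² = 10¹² operations ÷ 10⁹ operations/second = 1,000 seconds ≈ 17 minutes.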
More practical implications
Hardware is getting faster, but not fast enough …

growth rate   problem size solvable in practical time
              1970s                   1980s               1990s                  2000s
1             any                     any                 any                    any
log N         any                     any                 any                    any
N             millions                tens of millions    hundreds of millions   billions
N log N       hundreds of thousands   millions            millions               tens of millions
N²            hundreds                thousands           thousands              tens of thousands
N³            hundred                 hundreds            thousand               thousands
2ᴺ            20                      20s                 20s                    30s

Advances in hardware can't compensate for some large time complexities: a machine 1,000 times faster adds only about 10 to the problem size a 2ᴺ algorithm can handle, since 2¹⁰ ≈ 1,000.
Calculating worst-case complexity

1. All simple statements and primitive operations have constant cost.
2. The cost of a sequence of statements is the sum of the costs of the individual statements.
3. The cost of a selection statement is the cost of its most expensive branch.
4. The cost of a loop is the cost of the body multiplied by the maximum number of iterations that the loop makes.
5. Once we obtain a polynomial describing the number of operations performed in terms of n:
   • we eliminate all terms except the one with the highest exponent, and
   • we ignore any multiplicative constants.
Example: loops

for (int i = 0; i < N; i++)        // N iterations
{
    a = a + i;
    b++;                           // constant-cost body
    System.out.println(a + b);
}

Cost of the loop = cost of the body × number of iterations
                 = constant × N
                 = ~cN
which is Order of N.
Example: loops

for (int i = 0; i < N; i++)        // N iterations
{
    for (int j = 0; j < N; j++)    // N iterations
    {
        System.out.println(i*j);   // constant cost
    }
}

Cost of the i loop = cost of the i loop body × number of i loop iterations
                   = cost of the j loop × number of i loop iterations
                   = (cost of the j loop body × number of j loop iterations) × number of i loop iterations
                   = (constant × N) × N
                   = ~cN²
which is Order of N².
Example: loops

for (int i = 0; i < N; i++)        // N iterations
{
    for (int j = i; j < N; j++)    // N iterations (max)
    {
        System.out.println(i*j);   // constant cost
    }
}

Cost of the i loop = cost of the i loop body × number of i loop iterations
                   = cost of the j loop × number of i loop iterations
                   = (cost of the j loop body × number of j loop iterations) × number of i loop iterations
                   = (constant × N) × N
                   = ~cN²
which is Order of N².
(The exact count of inner-body executions is N + (N−1) + … + 1 = N(N+1)/2 ≈ (1/2)N², which is still Order of N².)
Example: sequence

int count = 0;                     // c1 (constant)
int k = n;

while (k > 1)                      // c2•log₂n: k is halved each iteration
{
    k = k / 2;
    count++;
}

for (int i = 0; i < n; i++)        // c3•n²: n × n nested loop
{
    for (int j = 0; j < n; j++)
    {
        System.out.println(i*j);
    }
}

for (int i = 0; i < n; i++)        // c4•n
{
    count = count + i;
}

T(n) = ~c3•n² + c4•n + c2•log₂n + c1, which is Order of N².
Example: decision

if (a < b)
{
    while (k > 1)                  // c1•log₂n
    {
        k = k / 2;
        count++;
    }
}
else
{
    for (int i = 0; i < n; i++)    // c2•n
    {
        count = count + i;
    }
}

The linear branch is the more expensive one, so this is Order of N.
A Rigorous Introduction to Order
• To define order formally, it is useful to first define a related concept: big O notation.
• Big O notation is used to establish an "upper bound" on the growth of a function.
• For example, let g(n) = n² + 10n. We look for a constant c so that g(n) eventually falls "below" c•n². If we find such a c, we say that g(n) is "big O" of n², or n² + 10n ∈ O(n²).
Figure 1.5: The function n² + 10n eventually stays beneath the function 2n².
Big-O notation
Let f(n) and g(n) be functions defined on the nonnegative integers. We say "f(n) is O(g(n))" if and only if there exist a nonnegative integer n₀ and a constant c > 0 such that for all integers n ≥ n₀ we have f(n) ≤ c•g(n).

Example: f(n) = n² + 2n + 1
f is O(n²)    (c = 4, n₀ = 1)
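To see why these constants work: for all n ≥ 1 we have 2n ≤ 2n² and 1 ≤ n², so
n² + 2n + 1 ≤ n² + 2n² + n² = 4n² = c•n².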
Why Big O Notation is not enough
• By definition, if algorithm a is O(n²), then a is also O(n³), O(2ⁿ), etc.
• These "larger" big O functions are not useful for comparing a with other algorithms.
• Big Theta is defined in a similar way to big O, except that it places both an upper bound (big O) and a lower bound (big Omega) on the growth of the function.
• If a function g(n) is both O(n²) and Ω(n²), then g(n) ∈ Θ(n²).
Figure 1.4: Illustrating "big O", Ω, and Θ.
Figure 1.6: The sets O(n²), Ω(n²), and Θ(n²). Some exemplary members are shown.
Properties of Order (pp. 38-39)
• If a > 1 and b > 1, then log_a n ∈ Θ(log_b n).
• If b > a > 0, then aⁿ ∈ O(bⁿ).
• For all a > 0, aⁿ ∈ O(n!).
"… any logarithmic function is eventually better than any polynomial, any polynomial is eventually better than any exponential function, and any exponential is eventually better than the factorial function."
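The first property holds because of the change-of-base identity: log_a n = (log_b n) / (log_b a), so log_a n is just a constant multiple of log_b n, which is exactly what Θ allows.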
Practice
Which of the following are true?
◦ n² + 2n + 1 is O(n log n)
◦ n² + 2n + 1 is O(n²)
◦ n² + 2n + 1 is Ω(n)
◦ n² + 2n + 1 is O(n³)
◦ n² + 2n + 1 is Θ(n²)
◦ n² + 2n + 1 is Θ(n³)
◦ 2ⁿ + 2n + 1 is O(2ⁿ)
◦ 2ⁿ + 2n + 1 is Θ(2ⁿ)
◦ 2ⁿ + 2n + 1 is O(4ⁿ)
◦ 2ⁿ + 2n + 1 is Θ(4ⁿ)
Estimating Complexity Empirically
• It is sometimes useful to experiment: run an algorithm with increasing values of n and measure either the time elapsed or the number of operations performed.
• Plot the measurements against their corresponding values of n and observe the shape of the curve.
• Compare the ratio of increase in the number of operations performed to the ratio of increase in the values of n (a harness sketch follows the example below).
Example

N       # of operations     Ratio of
        or run time         running times
8       123
16      507                 4.122
32      2043                4.030
64      8187                4.007
128     32763               4.002
256     131067              4.000
512     524283              4.000
1024    2097147             4.000
2048    8388603             4.000

Each time N doubles, the measurement grows by a factor of about 4, so the algorithm apparently grows as ~N² (here the counts fit 2n² − 5 exactly).
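A minimal harness sketch (not from the slides) that produces this kind of table. The underTest method is a hypothetical stand-in for whatever algorithm is being measured, and wall-clock timings will be noisier than pure operation counts (JIT warm-up, other processes):

public class DoublingTest {
    // Hypothetical stand-in for the algorithm under test: a quadratic loop.
    static long underTest(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                ops++;
        return ops;
    }

    public static void main(String[] args) {
        double previous = 0;
        for (int n = 8; n <= 2048; n *= 2) {
            long start = System.nanoTime();
            underTest(n);
            double elapsed = System.nanoTime() - start;
            // A ratio near 4 as n doubles suggests quadratic growth.
            String ratio = previous > 0 ? String.format("%.3f", elapsed / previous) : "";
            System.out.printf("%6d %12.0f ns   %s%n", n, elapsed, ratio);
            previous = elapsed;
        }
    }
}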
