Algorithm Lecture 3: Asymptotic Analysis

The document discusses asymptotic analysis and analytical approaches to determine the time complexity of algorithms. It covers concepts like best case, average case, and worst case scenarios. Growth rates of time complexities are expressed as functions of increasing input size.


Part 2: Asymptotic Analysis

Lecture 3

Analytical Approach: Effect on Time and Space

(Reconstructed from a diagram.) Software quality factors such as functionality, usability, efficiency, reliability, portability, and maintainability all matter, but efficiency is what complexity analysis measures:

- Space complexity has a fixed part (independent of the input) and a variable part that depends on the I/O size.
- Time complexity has a compile-time part (machine dependent) and a run-time part that depends on the I/O size; we express it as the number of steps as a function of the input size (n).
Analytical Approach

Number of Steps vs. Input Size
Analytical Approach: Time Complexity and the Effect of Data Size

Growth rates of the time complexities of algorithms with respect to increasing problem sizes, on a machine running at a speed of 1 GHz (10^9 instructions per second):

Time (sec) = number of instructions f(n) / instructions per second (speed S)

Time = f(n) / S
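To make the relation concrete, here is a minimal Python sketch (not from the slides; the set of growth functions and the sample sizes are illustrative) that tabulates Time = f(n) / S for a few common complexity functions:

```python
import math

# Estimate running time T = f(n) / S for several growth-rate functions,
# assuming a machine executing S = 10**9 instructions per second (1 GHz).
S = 10**9

growth_functions = {
    "log n":   lambda n: math.log2(n),
    "n":       lambda n: n,
    "n log n": lambda n: n * math.log2(n),
    "n^2":     lambda n: n**2,
    "n^3":     lambda n: n**3,
}

for n in (10, 1000, 10**6):
    for name, f in growth_functions.items():
        print(f"n={n:>9,}  f(n)={name:<8} time = {f(n) / S:.3e} sec")
```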
Analytical Approach: Effect of Data Size

Growth rate of the complexity time function, e.g.:

f(n) = n^2 + 100n + log10(n) + 1000

The upper bound concept.
Analytical Approach: Time Complexity

BIG IDEA:
- Focus on the number of steps.
- Ignore machine-dependent constants.
- Look at the growth of the number of steps as n → ∞ (n very large).

"Asymptotic Analysis"
Analytical Approach: "Asymptotic Analysis"

Using asymptotic analysis, we can conclude the best-case, average-case, and worst-case behavior of an algorithm:

- Worst case: g(n) is an upper bound function.
- Best case: g(n) is a lower bound function.
- Average case: between the two.
Analytical Approach: Using Pseudo Code

Problem → Algorithm → express the number of steps as a function of the input size, F(n).

- F(n) is a complexity indicator: it grows with the data size (n) and serves as a performance indicator.
- Focus on large n.
- Consider upper and lower bounds.

"Asymptotic Analysis"

Examples:
F(n) = 2n + 3
F(n) = n^2 + n
F(n) = n log n
F(n) = n^3 + n log n + 10
For an algorithm taking f(n) = n^2 + 8 steps:

Upper bound g(n) = n^2  →  O(n^2)   (Big-Oh, worst case)
Lower bound g(n) = n    →  Ω(n)     (Big-Omega, best case)
                           Θ(n)     (Big-Theta, average case)

"Asymptotic Analysis": look at the growth of f(n) as n → ∞.
Asymptotic Notation (Worst, Best, Average)

Big-Oh (worst case): f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n).

Big-Omega (best case): f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n).

Big-Theta (average case): f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n).
Number of steps and upper bounds (Big-Oh):

Algorithm 1: f(n) = 2n + 3    g(n) = 3n     O(n)
Algorithm 2: f(n) = 4n + 8    g(n) = 5n     O(n)
Algorithm 3: f(n) = n^2 + 8   g(n) = n^2    O(n^2)
Algorithm 4: f(n) = 2n^3      g(n) = n^3    O(n^3)
Algorithm 5:                                O(log n)

"Asymptotic Analysis" of the worst case: look at the growth of f(n) as n → ∞.
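As a sanity check, the following illustrative Python snippet verifies numerically that f(n) <= c * g(n) for each algorithm in the table; the constants c and n0 below are assumptions chosen so that each bound holds, not values from the slides:

```python
# For each algorithm, confirm f(n) <= c * g(n) once n passes a threshold n0.
cases = [
    ("Algorithm 1", lambda n: 2*n + 3, lambda n: 3*n,  1, 3),  # O(n)
    ("Algorithm 2", lambda n: 4*n + 8, lambda n: 5*n,  1, 8),  # O(n)
    ("Algorithm 3", lambda n: n*n + 8, lambda n: n*n,  2, 3),  # O(n^2)
    ("Algorithm 4", lambda n: 2*n**3,  lambda n: n**3, 2, 1),  # O(n^3)
]

for name, f, g, c, n0 in cases:
    ok = all(f(n) <= c * g(n) for n in range(n0, 10_000))
    print(f"{name}: f(n) <= {c}*g(n) for {n0} <= n < 10000 -> {ok}")
```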
"Asymptotic Analysis"

Calculating the running time f(n) → determine the O-notation (upper bound), worst case:

- Use the pseudo code.
- Ignore constants.
- Consider the coding implementation.

Iterative code, e.g. a do-loop: f(n) = n.

Recurrence, e.g. Merge Sort: f(n) = n + 2f(n/2).
Calculating the Running Time f(n)

- Use the source/pseudo code.
- Ignore constants.
- Ignore lower-order terms.
- Explicitly assume either
  - the average case (harder to do), or
  - the worst case (easier).
- Most analysis uses the worst case.

Linear-time loop:

for x = 1 to n
    constant-time operation

Sum: Σ_{x=1}^{n} 1 = 1 + 1 + ... + 1 = n, so f(n) = n.

Note: constant-time means independent of the input size.

2-Nested Loops → Quadratic

for x = 0 to n-1
    for y = 0 to n-1
        constant-time operation

Sum: Σ_{x=0}^{n-1} Σ_{y=0}^{n-1} 1 = Σ_{x=0}^{n-1} n = n · n = n^2, so f(n) = n^2.

3-Nested Loops → Cubic

for x = 0 to n-1
    for y = 0 to n-1
        for z = 0 to n-1
            constant-time operation

f(n) = n^3. The number of nested loops determines the exponent.
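One way to see these counts directly is to instrument the loops. A minimal Python sketch (the helper count_steps is assumed for illustration, not from the slides):

```python
# Count how many times the innermost constant-time operation runs
# for 1, 2, and 3 nested loops, each of size n.
def count_steps(depth: int, n: int) -> int:
    steps = 0
    if depth == 1:
        for _ in range(n):
            steps += 1                    # constant-time operation
    elif depth == 2:
        for _ in range(n):
            for _ in range(n):
                steps += 1                # constant-time operation
    else:
        for _ in range(n):
            for _ in range(n):
                for _ in range(n):
                    steps += 1            # constant-time operation
    return steps

n = 20
print(count_steps(1, n), n)        # n   -> 20
print(count_steps(2, n), n**2)     # n^2 -> 400
print(count_steps(3, n), n**3)     # n^3 -> 8000
```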
Non-Trivial Loops

for x = 1 to n
    for y = 1 to x
        constant-time operation

Sum: Σ_{x=1}^{n} Σ_{y=1}^{x} 1 = Σ_{x=1}^{n} x = n(n+1)/2 = (n^2 + n)/2 ≈ n^2, so f(n) = n^2.

for z = 1 to n
    for y = 1 to z
        for x = 1 to y
            constant-op

Sum: Σ_{z=1}^{n} Σ_{y=1}^{z} Σ_{x=1}^{y} 1 = (n^3 + 3n^2 + 2n)/6 ≈ n^3, so f(n) = n^3.
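A quick illustrative check, in Python, that these closed-form counts are exact:

```python
# Verify that the triangular and tetrahedral loop counts match
# the closed-form formulas from this slide.
def double_loop(n):
    steps = 0
    for x in range(1, n + 1):
        for y in range(1, x + 1):
            steps += 1                    # constant-time operation
    return steps

def triple_loop(n):
    steps = 0
    for z in range(1, n + 1):
        for y in range(1, z + 1):
            for x in range(1, y + 1):
                steps += 1                # constant-op
    return steps

n = 30
assert double_loop(n) == n * (n + 1) // 2            # ~ n^2 / 2
assert triple_loop(n) == (n**3 + 3*n**2 + 2*n) // 6  # ~ n^3 / 6
print("closed forms verified for n =", n)
```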
i = N;            N = 8
while i >= 1
    { print i, i = i / 2 }

Trace: i=8, print 8, i = 8/2 = 4 ........ 1
       i=4, print 4, i = 4/2 = 2 ........ 2
       i=2, print 2, i = 2/2 = 1 ........ 3

How many times can we divide N by 2 until we get below 1? Approached in reverse: how many times can we multiply 1 by 2 until we reach N? That is the value x where 2^x = n, so the loop iterates x times.

Now we just need to solve for x:

2^x = n
log(2^x) = log(n)
x log(2) = log(n)
x = log(n) / log(2)

This code operates in O(log(n)) time.
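A small Python sketch of the halving loop (integer division standing in for the slide's i = i / 2) that compares the iteration count with floor(log2 N) + 1:

```python
import math

# Count iterations of the halving loop and compare with log2(N).
def halving_iterations(N: int) -> int:
    i, count = N, 0
    while i >= 1:
        # print(i)  -- the slide prints i on each pass
        i //= 2
        count += 1
    return count

for N in (8, 1024, 10**6):
    print(N, halving_iterations(N), math.floor(math.log2(N)) + 1)
```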
n        f0(n) = log n    f1(n) = n    f2(n) = n^2    f3(n) = n^3

10       3                10           100            1000

100      7                100          10,000         1,000,000

10^6     20               10^6         10^12          10^18

(logarithms base 2, rounded)

Code patterns with these growth rates:

O(log n):  i = N; while i >= 1 { print i, i = i / 2 }
O(n):      for x = 1 to n: constant
O(n^2):    for x = 0 to n-1: for y = 0 to n-1: constant
O(n^3):    for x = 0 to n-1: for y = 0 to n-1: for z = 0 to n-1: constant
Program Segments

for x = 0 to n-1
    constant-time op                  → n

for y = 0 to n-1
    for z = 0 to n-1
        constant-time op              → n^2

for w = 0 to n-1
    constant-time op                  → n

i = N;
while i >= 1
    { print i, i = i / 2 }            → log n

f(n) = n + n^2 + n + log n = n^2 + 2n + log n, bounded by g(n) = n^2.
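An illustrative instrumented version of this segment in Python (variable names assumed) confirming the total step count:

```python
import math

# One linear loop + one doubly nested loop + another linear loop + a halving
# loop gives f(n) = n + n^2 + n + log n, which is O(n^2).
def segment_steps(n: int) -> int:
    steps = 0
    for x in range(n):                # n steps
        steps += 1
    for y in range(n):                # n^2 steps
        for z in range(n):
            steps += 1
    for w in range(n):                # n steps
        steps += 1
    i = n
    while i >= 1:                     # floor(log2 n) + 1 steps
        steps += 1
        i //= 2
    return steps

n = 1000
expected = n + n**2 + n + (math.floor(math.log2(n)) + 1)
print(segment_steps(n) == expected)   # True
```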
Number of steps and upper bounds (n large):

Algorithm 1: f(n) = 2n + 3    g(n) = 3n     O(n)
Algorithm 2: f(n) = 4n + 8    g(n) = 5n     O(n)
Algorithm 3: f(n) = n^2 + 8   g(n) = n^2    O(n^2)
Algorithm 4: f(n) = 2n^3      g(n) = n^3    O(n^3)
Algorithm 5:                                O(log n)

"Asymptotic Analysis"
Complexity Analysis: The "Big-Oh" Notation

- Big-Oh notation indicates an upper bound: how bad things can get (perhaps things are not nearly so bad in practice).
- We want the lowest possible upper bound.
- Asymptotic analysis is a useful tool to help structure our thinking when n gets large enough.
- Example: for linear search, n^2 is an upper bound, but n is the tightest upper bound.
The "Big-Oh" Notation

Example: 3x^3 + 5x^2 - 9 = O(x^3)

This doesn't mean "3x^3 + 5x^2 - 9 equals the function O(x^3)". It actually means "3x^3 + 5x^2 - 9 is dominated by x^3", and is read as "3x^3 + 5x^2 - 9 is big-Oh of x^3".

In fact, 3x^3 + 5x^2 - 9 is smaller than 5x^3 for large enough values of x.
"Asymptotic Analysis"

Calculating the running time f(n) → determine the O-notation:

- Use the pseudo code.
- Ignore constants.
- Consider the coding implementation.

Case 1: f(n) is a polynomial, e.g. f(n) = 3n^2 + 4n.
Special case: if f(n) is a polynomial of degree d, then f(n) is O(n^d).

Case 2: f(n) is a function of n, e.g. f(n) = n^2 + log n (iterative code, e.g. a do-loop: f(n) = n).

Case 3: f(n) is defined in terms of itself, i.e. a recurrence, e.g. Merge Sort: f(n) = n + 2f(n/2). Solving recurrences: iteration, substitution, recursion trees, the Master theorem. A sketch of Merge Sort follows below.
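As referenced above, here is a hedged Python sketch of Merge Sort (one of several possible implementations); the comments mark where each term of the recurrence f(n) = n + 2f(n/2) comes from:

```python
# Merge sort, annotated with its recurrence: f(n) = n + 2 f(n/2),
# whose solution is O(n log n).
def merge_sort(a):
    if len(a) <= 1:                       # base case: f(1) = constant
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])            # f(n/2)
    right = merge_sort(a[mid:])           # f(n/2)
    # Merging two sorted halves costs ~n comparisons: the "n" term.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7, 3]))     # [1, 2, 3, 5, 7, 9]
```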
Determine O-notation: f(n) a polynomial

- If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
  - drop lower-order terms,
  - drop constant factors.

  3x^3 + 5x^2 - 9 = O(x^3)

- Use the smallest possible class of functions:
  - say "2n is O(n)" instead of "2n is O(n^2)".
- Use the simplest expression of the class:
  - say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)".

7n - 2 = O(n)
Program Segments

for x = 0 to n-1
    constant-time op

for y = 0 to n-1
    for z = 0 to n-1
        constant-time op

for w = 0 to n-1
    constant-time op

i = N;
while i >= 1
    { print i, i = i / 2 }

f(n) = n + n^2 + n + log n = n^2 + 2n + log n = O(n^2)
Which of the expressions below are equivalent to O(n^3)?

- O(3n^3)
- O(n(n^2 + 3))
- O(n^3 - 2)
- O(n^3 + n lg n)
- O(n^3 - n^2 + n)
- O((n^2 + 3)(n + 1))

All of them!
Determine O-notation: f(n) a function of n

2n^2 <= n^3 (for n >= 2), so g(n) = n^3 is an asymptotic upper bound for f(n) = 2n^2 as n increases (g(n) bounds f(n) from above).

f(n) is O(g(n)) if f grows at most as fast as g: c·g(n) is an approximation to f(n), bounding it from above.
Example 1: If f(n) = 3n^2, then f(n) is in O(n^2).

Example 2: f(n) = c1·n^2 + c2·n (the average case).
c1·n^2 + c2·n <= c1·n^2 + c2·n^2 <= (c1 + c2)·n^2 for all n > 1.
So f(n) <= c·n^2 for c = c1 + c2 and n0 = 1.
Therefore, f(n) is in O(n^2) by the definition.

Example 3: f(n) = c. We say this is in O(1).
7n - 2 is O(n)

By definition, we need to find:
- a real constant c > 0
- an integer constant n0 >= 1

such that 7n - 2 <= c·n for every integer n >= n0.

A possible choice is c = 7, n0 = 1:
7n - 2 <= 7n, for n >= 1.
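A quick numeric spot-check of this choice of witnesses, in Python (the upper test limit is arbitrary):

```python
# With c = 7 and n0 = 1, confirm 7n - 2 <= c*n for every tested n >= n0.
c, n0 = 7, 1
print(all(7*n - 2 <= c*n for n in range(n0, 100_000)))   # True
```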
Example

Show that f(x) = 4x^2 - 5x + 3 is O(x^2).

|f(x)| = |4x^2 - 5x + 3| <= |4x^2| + |-5x| + |3|
      <= 4x^2 + 5x + 3,        for all x > 0
      <= 4x^2 + 5x^2 + 3x^2,   for all x > 1
      = 12x^2,                 for all x > 1

We conclude that f(x) is O(x^2), with C = 12 and k = 1 in the definition of big-O.
Example

Show that f(x) = (x + 5) log2(3x^2 + 7) is O(x log2 x).

f(x) = (x + 5) log2(3x^2 + 7)
    <= (x + 5x) log2(3x^2 + 7x^2),  for all x > 1
    = 6x log2(10x^2)
    <= 6x log2(x^3),                for all x > 10   (since 10x^2 <= x^3 when x >= 10)
    = 18x log2 x

We conclude that f(x) is O(x log2 x), with C = 18 and k = 10.
More examples:

20n^3 + 10n log n + 5 = O(n^3):   20n^3 + 10n log n + 5 <= 35n^3, for n >= 1
3 log n + log log n = O(log n):   3 log n + log log n <= 4 log n, for n >= 2
2^100 = O(1):                     2^100 <= 2^100 · 1, for n >= 1
5/n = O(1/n):                     5/n <= 5(1/n), for n >= 1
The "Big-Oh" Notation

Constant time: O(1)

O(1) means that an algorithm takes constant time to run; in other words, performance isn't affected by the size of the problem.

Linear time: O(N)

An algorithm runs in O(N) if the number of operations required to perform a function is directly proportional to the number of items being processed.
Example: a waiting line at a supermarket.
Quadratic time: O(N^2)

An algorithm runs in O(N^2) if the number of operations required to perform a function is proportional to the square of the number of items being processed.
Example: handshakes in a group (everyone shakes hands with everyone else).
Logarithmic time: O(log N) and O(N log N)

The running time of a logarithmic algorithm increases with the log of the problem size. When the size of the input data set increases by a factor of a million, the run time increases only by a factor of log(1,000,000) = 6 (base-10 logs).

Factorial time: O(N!)

This is far worse than even O(N^2) and O(N^3). It is fairly unusual to encounter functions with this kind of behavior.
Relations Between Oh, Omega, and Theta

Big-Omega meaning: for all data sets big enough (i.e., n > n0), the algorithm always executes in more than c·g(n) steps.
Example: f(n) = c1·n^2 + c2·n.

c1·n^2 + c2·n >= c1·n^2 for all n > 1.
So f(n) >= c·n^2 for c = c1 and n0 = 1.
Therefore, f(n) is in Ω(n^2) by the definition.

For Ω we want the greatest lower bound.
When big-Oh and Ω meet, we indicate this by using Θ (big-Theta) notation.

Definition: An algorithm is said to be Θ(h(n)) if it is in O(h(n)) and it is in Ω(h(n)).
Practical Considerations (n = 100)

Number of steps for common Big-O classes at n = 100 (logs base 10):

n!        100!
2^n       2^100
n^3       1,000,000
n^2       10,000     (x100 fewer than n^3)
n log n   200        (x50 fewer than n^2)
n         100        (x2 fewer than n log n)
log n     2
1         1

There is no big difference in running time between Θ(n) and Θ(n log n), but there is an enormous difference between Θ(n^2) and Θ(n log n).
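The slide's values can be recomputed directly; base-10 logarithms are an assumption here, chosen because they match the 200 and 2 entries:

```python
import math

# Recompute the step counts at n = 100 for each growth class.
n = 100
rows = [
    ("n!",      math.factorial(n)),
    ("2^n",     2 ** n),
    ("n^3",     n ** 3),
    ("n^2",     n ** 2),
    ("n log n", n * math.log10(n)),
    ("n",       n),
    ("log n",   math.log10(n)),
    ("1",       1),
]
for name, steps in rows:
    print(f"{name:<8} {float(steps):.3g}")
```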
Remarks

- Most statements in a program do not have much effect on the running time of that program.
- There is little point in cutting in half the running time of a subroutine that accounts for only 1% of the total.
- Focus your attention on the parts of the program that have the most impact.
- The greatest time and space improvements come from a better data structure or algorithm.

"FIRST TUNE THE ALGORITHM, THEN TUNE THE CODE"
What happens when we buy a computer 10 times faster?

Let the old speed be S1 = 10,000 steps/sec and the new speed S2 = 100,000 = 10^5 steps/sec, with a time budget of 1 second.

T(n) = 10n
Old: T(n)/S1 = 10·n1/10,000 = 1 sec  →  n1 = 1000
New: 10·n2/10^5 = 1 sec              →  n2 = 10,000
Input size increase: n2/n1 = 10 times

T(n) = 5n log n
Old: 5·n log n/10,000 = 1  →  n log n = 2000    →  n1 = 250
New: 5·n log n/10^5 = 1    →  n log n = 20,000  →  n2 = 1842
Input size increase: n2/n1 = 1842/250 = 7.37 times

T(n) = 2n^2
Old: 2n^2/10,000 = 1  →  n^2 = 5000    →  n1 = 70
New: 2n^2/10^5 = 1    →  n^2 = 50,000  →  n2 = 223
Input size increase: n2/n1 = 223/70 ≈ 3.16 times
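A Python sketch that approximately reproduces this arithmetic by searching for the largest n solvable within the 1-second budget (speeds taken from the slide; the search helper max_n is an assumption, and rounding can differ by one from the slide's values):

```python
import math

def max_n(T, S, hi=10**7):
    """Largest n with T(n) <= S steps (i.e., 1 second), by simple search."""
    n = 1
    while n < hi and T(n) <= S:
        n += 1
    return n - 1

S1, S2 = 10_000, 100_000          # old and new machine, steps per second
cases = {
    "10 n":      lambda n: 10 * n,
    "5 n log n": lambda n: 5 * n * math.log2(n),
    "2 n^2":     lambda n: 2 * n * n,
}
for name, T in cases.items():
    n1, n2 = max_n(T, S1), max_n(T, S2)
    print(f"T(n) = {name:<9} n1 = {n1:>5}  n2 = {n2:>6}  gain = {n2/n1:.2f}x")
```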
Faster Computer, or Faster Algorithm?

An algorithm with time equation T(n) = 2n^2 does not receive nearly as great an improvement from the faster machine as an algorithm with a linear growth rate: with a machine ten times faster, the improvement in solvable problem size is only the square root of 10 (≈ 3.16).

So instead of buying a faster computer, consider replacing an algorithm with quadratic running time by a new algorithm with n log n running time.
Faster Computer, or Faster Algorithm?

Exercise:

Solutions: 8n, 64n^2, 10n
