Analysis of Algorithms: Big O Notation

The document provides an overview of algorithm analysis, focusing on Big O notation and the measurement of algorithm complexity. It discusses various growth functions and their implications for running time, including best, worst, and average cases. Additionally, it explains the significance of Big O, Big Omega, and Big Theta notations in evaluating algorithm efficiency.

Analysis of Algorithms
Big O Notation

Dr. Mohammed Al-Hubaishi
Key Topics:

► Introduction
► Running time of well-known growth functions.
► Big-O, Big-Omega, and Big-Theta notations.
► Doing a Timing Analysis
► Analyzing Some Simple Programs.

What is Algorithm Analysis?

► How to estimate the running time required for an algorithm.
► The speed of the machine running the program.
► The language in which the program was written.
► The size and organization of the input.
► Techniques that drastically reduce the running time of an algorithm.
► A mathematical framework that more rigorously describes the running time of an algorithm.

Analysis of running time

► An important criterion for the "goodness" of an algorithm is how long it takes to run on inputs of various sizes (its "running time").
► When the algorithm involves recursion, we use a formula called a recurrence equation: an inductive definition that predicts how long the algorithm takes to run on inputs of different sizes.

Measurement of Complexity of an Algorithm

1. Best case is the function which performs the minimum number of steps on input data of n elements.
2. Worst case is the function which performs the maximum number of steps on input data of size n.
3. Average case is the function which performs an average number of steps on input data of n elements.

Measuring Speed

► Speed does not depend on the values stored in an array; it depends on the operations performed on the array.
► We measure algorithm speed in terms of:
► operations relative to input size
► input size n
► growth in operations (time) expressed in terms of n
Here, the input size is the size of the array.

Growth Rate Functions

Running time for small inputs

Generalizing Running Time

Comparing running times of the well-known growth functions as the input grows (log here is the natural logarithm; values are rounded):

n     1    n    log n   n log n   n²      n³        2ⁿ
10    1    10   2       23        100     1000      1.024000e+03
20    1    20   3       60        400     8000      1.048576e+06
30    1    30   3       102       900     27000     1.073742e+09
40    1    40   4       148       1600    64000     1.099512e+12
50    1    50   4       196       2500    125000    1.125900e+15
60    1    60   4       246       3600    216000    1.152922e+18
70    1    70   4       297       4900    343000    1.180592e+21
80    1    80   4       351       6400    512000    1.208926e+24

Constant (1)

► Most instructions of most programs are executed once or at most only a few times.
► If all the instructions of a program have this property, we say that its running time is constant.
► This is the ideal situation in algorithm design.
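As an illustration (a minimal sketch, not from the slides), indexing into an array is a typical constant-time operation:

```python
def get_first(items):
    # Indexing is O(1): the same single step whether the list
    # holds three elements or three million.
    return items[0]

print(get_first([7, 8, 9]))  # → 7
```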

Linear (N)

► When the running time of a program is linear, it generally is the case that a small amount of processing is done on each input element.
► Ex: when N is a million, then so is the running time.
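A minimal sketch (an assumed example, not from the slides) of a linear-time routine:

```python
def total(items):
    # O(n): exactly one addition per element, so the step count
    # grows in direct proportion to len(items).
    s = 0
    for x in items:
        s += x
    return s

print(total([1, 2, 3, 4]))  # → 10
```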

Logarithmic (log N)

► When the running time of a program is logarithmic, the program gets only slightly slower as N grows.
► This running time commonly occurs in programs which solve a big problem by transforming it into a smaller problem, cutting the size by some constant fraction.
► The base of the logarithm changes the constant, but not by much: with base 2, log N is about 10 when N is a thousand, and only twice as great when N is a million.
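Binary search is the classic example of this "cut the size by a constant fraction" pattern; a sketch (assuming a sorted list):

```python
def binary_search(sorted_items, target):
    # Each pass halves the remaining range, so the loop runs
    # at most about log2(n) times: O(log n).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present

print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
```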

N log N

► N log N running time arises in algorithms which solve a problem by breaking it up into smaller subproblems, solving them independently, and then combining the solutions.
► Ex: when N is a million, N log N is perhaps twenty million.
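Merge sort follows exactly this divide-and-combine pattern; a compact sketch (an illustrative example, not from the slides):

```python
def merge_sort(items):
    # log n levels of splitting, with O(n) merging work per level: O(n log n).
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 1, 3]))  # → [1, 2, 3, 4, 5]
```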

Quadratic (N²)

► When the running time of an algorithm is quadratic, it is practical for use only on relatively small problems.
► Quadratic running times typically arise in algorithms which process all pairs of data items (perhaps in a double nested loop).
► Ex: when N is a thousand, the running time is a million.
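A sketch of the all-pairs pattern (a hypothetical example; the function name and task are assumptions for illustration):

```python
def count_pairs_with_sum(items, target):
    # The double nested loop examines every pair once:
    # roughly n * n / 2 comparisons, hence O(n^2).
    n = len(items)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] + items[j] == target:
                count += 1
    return count

print(count_pairs_with_sum([1, 2, 3, 4], 5))  # → 2  (1+4 and 2+3)
```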

Cubic (N³)

► Similarly, an algorithm which processes triples of data items (perhaps in a triple-nested loop) has a cubic running time and is practical for use only on small problems.
► Ex: when N is a hundred, the running time is a million.
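The all-triples version of the previous sketch (again a hypothetical example for illustration):

```python
def count_triples_with_sum(items, target):
    # The triple-nested loop examines every triple once:
    # on the order of n^3 iterations, hence O(n^3).
    n = len(items)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if items[i] + items[j] + items[k] == target:
                    count += 1
    return count

print(count_triples_with_sum([1, 2, 3, 4], 6))  # → 1  (1+2+3)
```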

Big O Notation for Sorting Algorithms

Big O Notation for Graph Algorithms

Graph Algorithm                         Big O Time Complexity
Breadth-First Search (BFS)              O(V + E)
Depth-First Search (DFS)                O(V + E)
Dijkstra's Algorithm                    O(V²) (dense graph) or O(E log V) (sparse graph)
Bellman-Ford Algorithm                  O(VE)
Floyd-Warshall Algorithm                O(V³)
Prim's Minimum Spanning Tree (MST)      O(E log V)
Kruskal's Minimum Spanning Tree (MST)   O(E log V)
Topological Sort                        O(V + E)


Common Plots of the Growth Functions

[Figure: the growth curves plotted together, from fastest- to slowest-growing: O(2ⁿ), O(n³), O(n²), O(n log n), O(n), O(√n), O(log n), O(1)]
Notations in Complexity

Big O, Big Ω, and Big Θ relate the growth of functions.
► Big O notation defines an algorithm's worst-case time: the set of functions that grow slower than, or at the same rate as, the expression.
► Big Ω notation defines the best case of an algorithm's time complexity: the set of functions that grow faster than, or at the same rate as, the expression.
► Big Θ notation defines the average case of an algorithm's time complexity: when a function lies in both O(expression) and Ω(expression), Θ notation is used.

Asymptotic Bounds

► There are three types of bounds: Big O, Big Ω, and Big Θ. They relate the growth of functions.
► The definitions we will use come from mathematics, and we use them to relate the growth of one function to another, simpler one.
► These bounding functions help us evaluate difficult algorithms. The concepts take time to understand, but they are the main ideas used to measure algorithm efficiency.

Three types of Bounds

► Here we have the graph of two real functions: f(x) and g(x).
► The function f(x) is more complex, and we want to measure its growth in terms of the simpler function g(x).


Reference: https://drive.google.com/file/d/1kru9dOYiaSQBqUzsdAhd8SrutU50ybN7/view (page 80)
Real vs Integer functions

► In mathematics, x is used for real variables, with functions f(x) and g(x).
► In algorithms, n is used for integer variables, with functions f(n) and g(n).

f(n) is Big O of g(n):
Bound from Above
We can bound f(n) from above if we can find a constant c such that multiplying it by g(n) bounds f(n) from above from some point n₀ onward.
In this case we say:
f(n) is Big O of g(n)

f(n) ≤ c · g(n) for all n ≥ n₀    (1)

In the graph, n₀ is the minimum value that gives a valid bound, but any greater value will also work; n₀ here is 3.
f(n) is Big Omega of g(n):
Bound from Below
We can bound f(n) from below if we can find a constant c such that multiplying it by g(n) bounds f(n) from below from some point n₀ onward.
In this case we say:
f(n) is Big Omega of g(n)

f(n) ≥ c · g(n) for all n ≥ n₀    (2)

In the graph, n₀ is the minimum value that gives a valid bound, but any greater value will also work; n₀ here is 7.
f(n) is Big Theta of g(n):
Bound from Both Sides
If there are constants such that g(n) bounds f(n) both from below and from above, then we say:
f(n) is Big Theta of g(n)
Of course the two constants are different, so we label them c₁ and c₂:

c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀    (3)

In the graph, n₀ is the minimum value that gives a valid bound, but any greater value will also work; n₀ here is 8.
Mappings for n²

For f(n) = n², all three bounds apply:
⚫ O-notation ("≤"): f(n) = O(n²), f(n) is Big O of n²
⚫ Ω-notation ("≥"): f(n) = Ω(n²), f(n) is Big Omega of n²
⚫ Θ-notation ("="): f(n) = Θ(n²), f(n) is Big Theta of n²
f = O(g) example

► Suppose f(n) = 5n and g(n) = n.
• To show that f = O(g), we have to show the existence of a constant C as given in Definition (1).
• Clearly 5 is such a constant, since f(n) = 5 * g(n).
• We could also choose a larger C, such as 6, because the definition only requires that f(n) <= C * g(n); a larger constant keeps the inequality true.
• Therefore a constant C exists (we only need one), and f = O(g).
• In practice we try to find a single C that works for all n; here any C >= 5 does.
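The existence claim can be checked numerically (a small sketch; the constants 5 and 6 are the ones from the example):

```python
def f(n):
    return 5 * n

def g(n):
    return n

# C = 5 satisfies f(n) <= C * g(n) for every n, and so does any larger C.
assert all(f(n) <= 5 * g(n) for n in range(1, 1000))
assert all(f(n) <= 6 * g(n) for n in range(1, 1000))
print("f = O(g) with C = 5")
```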

Example : Big - Oh
► We use this example to introduce the concept of Big-O.
► Example: f(n) = 100n², g(n) = n⁴.
► The following table and figure show that g(n) grows faster than f(n) when n > 10. We say: f is Big-O of g.

Big O and T function

► You shouldn't mix up complexity (denoted by Big-O) and the T function.
► The T function is the number of steps the algorithm has to go through for a given input. So the value of T(n) is the actual number of steps, whereas O(something) denotes a complexity.
► By the conventional abuse of notation, T(n) = O(f(n)) means that the function T(n) is at most of the same complexity as f(n), which will usually be the simplest possible function of its complexity class.

Analyzing Running Time: T(n)

T(n), the running time of a particular algorithm on input of size n, is taken to be the number of times the instructions in the algorithm are executed.

1. n = read input from user
2. sum = 0
3. i = 0
4. while i < n
5. {   number = read input from user
6.     sum = sum + number
7.     i = i + 1 }
8. mean = sum / n

Statement    Number of times executed
1            1
2            1
3            1
4            n + 1
5            n
6            n
7            n
8            1

Everything in the loop body is executed n times; the loop test is executed n + 1 times. The computing time for this algorithm in terms of input size n is: T(n) = 4n + 5.
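The statement counts in the table can be verified by instrumenting the pseudocode (a sketch that reads its input from a list instead of the user):

```python
def mean_and_steps(numbers):
    steps = 0
    n = len(numbers); steps += 1        # statement 1, executed once
    total = 0;        steps += 1        # statement 2, executed once
    i = 0;            steps += 1        # statement 3, executed once
    while True:
        steps += 1                      # statement 4 (loop test), n + 1 times
        if not (i < n):
            break
        number = numbers[i]; steps += 1 # statement 5, n times
        total += number;     steps += 1 # statement 6, n times
        i += 1;              steps += 1 # statement 7, n times
    mean = total / n; steps += 1        # statement 8, executed once
    return mean, steps

# For n = 4 inputs: T(4) = 4*4 + 5 = 21 executed statements.
print(mean_and_steps([2, 4, 6, 8]))  # → (5.0, 21)
```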
T(n) = O(n) example

In the previous timing analysis, we saw T(n) = 4n + 5, and we concluded intuitively that T(n) = O(n) because the running time grows linearly as n grows. Now, however, we can prove it mathematically:

To show that f(n) = 4n + 5 = O(n), we need to produce a constant C such that:

    f(n) <= C * n for all n >= 1.

If we try C = 4, this doesn't work, because 4n + 5 is never less than or equal to 4n.

Since 5 <= 5n for all n >= 1, we have 4n + 5 <= 4n + 5n = 9n. So C = 9 works for every n >= 1. (At n = 1 it is needed exactly, since f(1) = 9; C can be smaller for greater values of n, e.g. C = 5 suffices once n >= 5. Since the chosen C must work for all n, we use 9.)

Since we have produced a constant C that works for all n, we can conclude:

    T(n) = 4n + 5 = O(n)
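The bound can be checked numerically (a small sketch; note that if the inequality is required at n = 1 the constant must be at least 9, since 4·1 + 5 = 9):

```python
def f(n):
    return 4 * n + 5

# C = 4 fails everywhere, while a slightly larger constant succeeds:
assert not any(f(n) <= 4 * n for n in range(1, 100))
# C = 9 covers every n >= 1 (at n = 1, f(1) = 9 = 9 * 1 exactly),
# and C = 8 works from n = 2 onward.
assert all(f(n) <= 9 * n for n in range(1, 1000))
assert all(f(n) <= 8 * n for n in range(2, 1000))
print("4n + 5 = O(n)")
```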
Upper Bound on n² example

Suppose f(n) = n² + 3n - 1. We want to show that f(n) = O(n²).

    f(n) = n² + 3n - 1
         < n² + 3n        (subtraction makes things smaller, so drop it)
         <= n² + 3n²      (since n <= n² for all integers n >= 1)
         = 4n²

Therefore, with C = 4, we have shown that f(n) = O(n²).
Notice that all we are doing is finding a simple function that is an upper bound on the original function. Because of this, we could also say that f(n) = O(n³), since n³ is an upper bound on n².


Big-Oh of highest degree example

Show: f(n) = 2n⁷ - 6n⁵ + 10n² - 5 = O(n⁷)

    f(n) < 2n⁷ + 6n⁵ + 10n²
         <= 2n⁷ + 6n⁷ + 10n⁷     (since nʲ <= n⁷ for j <= 7)
         = 18n⁷

Thus, with C = 18, we have shown that f(n) = O(n⁷).

Any polynomial is big-Oh of its term of highest degree. We are also ignoring constants. Any polynomial (including a general one) can be manipulated to satisfy the big-Oh definition by doing what we did in this example:
1. Take the absolute value of each coefficient (this can only increase the function).
2. Change the exponent of every term to the highest degree d, using nʲ <= n^d for j <= d (the original function must be less than this too).
3. Add the resulting coefficients to get the constant C; this yields a simple function that is an upper bound on the original one.
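The recipe can be checked numerically (a small sketch):

```python
def f(n):
    return 2 * n**7 - 6 * n**5 + 10 * n**2 - 5

# C = 18 = |2| + |6| + |10| bounds f(n) by 18 * n^7 for all n >= 1.
assert all(f(n) <= 18 * n**7 for n in range(1, 200))
print("f(n) = O(n^7) with C = 18")
```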
For Loop Nested
► If you have a single for loop nested in another for loop, the run time of the inner for loop is O(n), and it occurs once for each iteration of the outer for loop, which is again O(n).
► Each of these is individually O(n) because each takes an amount of time that is linear in the size of the input.
► The larger the input, the longer it takes, on a linear scale: n.
► To work out the math, which in this case is trivial, just multiply the complexity of the inner loop by the complexity of the outer loop: n * n = n². Remember, for each n in the outer loop, you must again do n in the inner loop. To clarify: n times for each n :: O(n * n) == O(n²).
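The n * n count can be demonstrated directly (a minimal sketch):

```python
def nested_loop_steps(n):
    # For each of the n outer iterations the inner loop runs n times,
    # so the innermost statement executes exactly n * n times.
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

print(nested_loop_steps(10))  # → 100
```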

Example: For Loop Nested

► A simple count gives, for N = 0, 1, 2, 3, 4, 5, 6, 7, 8, ..., the series 0, 0, 1, 4, 10, 20, 35, 56, 84, ..., which is resolved with the following formula:
► T(n) = (n - 1)n(n + 1)/6
► So we have O((N - 1)N(N + 1)/6) time complexity, which can be simplified to O(N³).
► The 1/6 coefficient comes from the product 1/1 * 1/2 * 1/3 = 1/6.
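The slide's loop itself appears only as a figure; one hypothetical triple-nested loop whose innermost statement executes (n - 1)n(n + 1)/6 times, matching the series above, is:

```python
def triple_loop_steps(n):
    # The innermost statement runs sum over i < n of i*(i+1)/2
    # = (n - 1) * n * (n + 1) / 6 times: O(n^3) overall.
    steps = 0
    for i in range(n):
        for j in range(i):
            for k in range(j + 1):
                steps += 1
    return steps

print([triple_loop_steps(n) for n in range(9)])  # → [0, 0, 1, 4, 10, 20, 35, 56, 84]
```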
For Loop Nested

[Figure; image source: https://en.wikipedia.org/wiki/Gear]