CSC 305 VTL Lecture 04 20211

The document discusses analyzing the time efficiency of algorithms. It covers analyzing the number of basic operations as a function of input size to determine the running time. Theoretical analysis involves determining best-case, average-case, and worst-case running times. Empirical analysis uses actual time measurements or operation counts. Asymptotic analysis classifies algorithms by order of growth using Big O, Omega, and Theta notation. Common functions like exponential, logarithmic, and linear are examined.

Uploaded by

Bello Taiwo

Algorithm and Complexity

Course: CSC 305 / CSC 225 (Computer Science)


Lecture 04: Analysis of Algorithms

Lecturer: Prof. A. S. Sodiya

Analysis of Algorithms
• Issues:
 Correctness
 Time Efficiency
 Space Efficiency
 Optimality
• Approaches:
 Theoretical Analysis
 Empirical Analysis
Analysis of Algorithms - Issues

• Issues:
 Correctness – Does it work as advertised?
 Time Efficiency – Are time requirements minimized?
 Space Efficiency – Are space requirements minimized?
 Optimality – Do we have the best balance between minimizing time and
space?
Theoretical Analysis Of Time Efficiency

• Time efficiency is analyzed by determining the number of repetitions of
the basic operation as a function of input size.
• Basic operation: the operation that contributes most towards the running
time of the algorithm.

T(n) ≈ Cop * C(n)
Where,
T(n) = running time
Cop = execution time of the basic operation
C(n) = number of times the basic operation is executed
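As a concrete sketch of this kind of analysis (the example and names are illustrative, not from the slides), consider finding the maximum of a list: the basic operation is the comparison, and it executes exactly C(n) = n − 1 times for a list of length n.

```python
def max_with_count(a):
    """Return the maximum of a non-empty list together with the number
    of comparisons performed (the basic operation)."""
    comparisons = 0
    current_max = a[0]
    for x in a[1:]:
        comparisons += 1          # one execution of the basic operation
        if x > current_max:
            current_max = x
    return current_max, comparisons

m, c = max_with_count([3, 1, 4, 1, 5, 9, 2, 6])
# For a list of length n, c == n - 1, i.e. C(n) = n - 1
```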
Empirical Analysis Of Time Efficiency

• Select a specific (typical) sample of inputs
• Use a physical unit of time (e.g., milliseconds), or
• Count the actual number of basic operation executions
• Analyze the empirical data
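The steps above can be sketched in Python (the choice of algorithm and sample input is illustrative): run the algorithm on a typical input and record the elapsed physical time.

```python
import time

def sequential_search(a, key):
    """Linear scan; the comparison x == key is the basic operation."""
    for i, x in enumerate(a):
        if x == key:
            return i
    return -1

a = list(range(100_000))                 # a specific (typical) sample input
start = time.perf_counter()
sequential_search(a, -1)                 # key absent: the scan runs to the end
elapsed = time.perf_counter() - start    # physical time, in seconds
```

Counting basic operation executions instead of wall-clock time avoids the measurement noise of a particular machine.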
Best-Case, Average-Case, Worst-Case

• For some algorithms, efficiency depends on the form of the input:
– Worst case: Cworst(n) – maximum over inputs of size n
– Best case: Cbest(n) – minimum over inputs of size n
– Average case: Cavg(n) – "average" over inputs of size n
Average-Case

• Average case: Cavg(n) – "average" over inputs of size n
 The number of times the basic operation will be executed on typical input, NOT
the average of the worst and best cases
 The expected number of basic operations, considered as a random variable under
some assumption about the probability distribution of all possible inputs
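A standard illustration (not taken from the slide) is the average case of sequential search: assuming a successful search with the key equally likely to be at any of the n positions, the expected number of comparisons is (n + 1)/2.

```python
def avg_comparisons(n):
    """Expected comparisons for a successful sequential search, assuming
    the key is equally likely to be at any of the n positions; finding it
    at position i (1-based) costs exactly i comparisons."""
    return sum(range(1, n + 1)) / n   # equals (n + 1) / 2

# e.g. avg_comparisons(9) == 5.0 == (9 + 1) / 2
```

Note how the answer depends on the assumed input distribution, exactly as the slide says.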
Order of Growth

• Most important: the order of growth within a constant multiple as n → ∞
• Examples:
– How much faster will the algorithm run on a computer that is twice as fast?
– How much longer does it take to solve a problem of double the input size?

[Slide: table of some values of important functions and their orders of growth]
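As a rough sketch of the kind of table this slide shows, the values of a few important functions can be computed directly (the sample sizes are illustrative):

```python
import math

# Tabulate some values of important functions for a few input sizes
for n in (10, 100, 1000):
    print(f"n={n:>4}  lg n={math.log2(n):6.2f}  "
          f"n lg n={n * math.log2(n):10.1f}  "
          f"n^2={n**2:>7}  2^n has {len(str(2 ** n))} digits")
```

This also answers the second question above: for a Θ(n²) algorithm, doubling the input size roughly quadruples the running time, since (2n)²/n² = 4.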

Asymptotic Notations

The order of growth for a piece of code solving a particular problem is
generally expressed as a function of the input size, representing the number
of "primitive" operations required.
The Θ-Notation

Θ(g(n)) = { f(n) : ∃ c1, c2 > 0, n0 > 0 s.t. ∀ n ≥ n0:
c1 · g(n) ≤ f(n) ≤ c2 · g(n) }

[Figure: f(n) sandwiched between c1 · g(n) and c2 · g(n) for all n ≥ n0]
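A quick numeric sanity check of this definition (the choice of f, g, and the constants is illustrative): f(n) = 3n² + 10n is Θ(n²), with witnesses c1 = 3, c2 = 4, n0 = 10.

```python
def f(n):
    return 3 * n ** 2 + 10 * n

def g(n):
    return n ** 2

c1, c2, n0 = 3, 4, 10
# The definition requires c1*g(n) <= f(n) <= c2*g(n) for all n >= n0;
# spot-check it over a range of n
ok = all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000))
```

The upper bound holds because 3n² + 10n ≤ 4n² whenever 10n ≤ n², i.e. n ≥ 10; the lower bound 3n² ≤ 3n² + 10n holds for all n.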
The O-Notation

O(g(n)) = { f(n) : ∃ c > 0, n0 > 0 s.t. ∀ n ≥ n0: f(n) ≤ c ⋅ g(n) }

[Figure: f(n) bounded above by c · g(n) for all n ≥ n0]
The Ω-Notation

Ω(g(n)) = { f(n) : ∃ c > 0, n0 > 0 s.t. ∀ n ≥ n0: f(n) ≥ c ⋅ g(n) }

[Figure: f(n) bounded below by c · g(n) for all n ≥ n0]
The o-Notation

o(g(n)) = { f(n) : ∀ c > 0 ∃ n0 > 0 s.t. ∀ n ≥ n0: f(n) ≤ c ⋅ g(n) }

[Figure: f(n) eventually falls below c · g(n) for every constant c, with
crossover points n1, n2, n3 for constants c1, c2, c3]
The ω-Notation

ω(g(n)) = { f(n) : ∀ c > 0 ∃ n0 > 0 s.t. ∀ n ≥ n0: f(n) ≥ c ⋅ g(n) }

[Figure: f(n) eventually exceeds c · g(n) for every constant c, with
crossover points n1, n2, n3 for constants c1, c2, c3]
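The little-o and little-ω definitions can be read as "f(n)/g(n) tends to 0" and "f(n)/g(n) tends to infinity"; a small numerical sketch (the functions chosen here are illustrative):

```python
# n = o(n^2): the ratio f(n)/g(n) = n/n^2 shrinks toward 0, so for
# EVERY constant c > 0 we eventually have f(n) <= c * g(n).
ratios = [n / n ** 2 for n in (10, 100, 1000, 10_000)]
# ratios == [0.1, 0.01, 0.001, 0.0001]

# Symmetrically, n^2 = omega(n): the reciprocal ratio grows without bound.
```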
Comparison of Functions

Transitivity:
• f(n) = O(g(n)) and g(n) = O(h(n)) ⇒ f(n) = O(h(n))
• f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))
• f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))

Reflexivity:
• f(n) = O(f(n))
• f(n) = Ω(f(n))
• f(n) = Θ(f(n))
Comparison of Functions

Symmetry:
• f(n) = Θ(g(n)) ⇐⇒ g(n) = Θ(f(n))

Transpose symmetry:
• f(n) = O(g(n)) ⇐⇒ g(n) = Ω(f(n))
• f(n) = o(g(n)) ⇐⇒ g(n) = ω(f(n))

Theorem 3.1:
• f(n) = O(g(n)) and f(n) = Ω(g(n)) ⇒ f(n) = Θ(g(n))
Asymptotic Analysis and Limits
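This slide's content did not survive extraction; the title suggests the standard limit test for comparing orders of growth, stated here for completeness (this is the usual textbook formulation, not reproduced from the slide):

```latex
\lim_{n \to \infty} \frac{f(n)}{g(n)} =
\begin{cases}
0 & \Rightarrow\; f(n) = o(g(n)) \\
c,\ 0 < c < \infty & \Rightarrow\; f(n) = \Theta(g(n)) \\
\infty & \Rightarrow\; f(n) = \omega(g(n))
\end{cases}
```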
Comparison of Functions

• f1(n) = O(g1(n)) and f2(n) = O(g2(n)) ⇒ f1(n) + f2(n) = O(g1(n) + g2(n))

• f(n) = O(g(n)) ⇒ f(n) + g(n) = O(g(n))
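The second rule can be spot-checked numerically (the choice of f, g, and the constants is illustrative): with f(n) = n and g(n) = n², we have f(n) + g(n) ≤ 2·g(n) for all n ≥ 1, so f(n) + g(n) = O(g(n)).

```python
def f(n):          # f(n) = n, which is O(n^2)
    return n

def g(n):
    return n ** 2

c, n0 = 2, 1
# The definition of O requires f(n) + g(n) <= c * g(n) for all n >= n0
ok = all(f(n) + g(n) <= c * g(n) for n in range(n0, 1000))
```

Intuitively, the slower-growing term is absorbed by the faster-growing one.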


Standard Notation and Common Functions

• Monotonicity
A function f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n).
A function f(n) is monotonically decreasing if m ≤ n implies f(m) ≥ f(n).
A function f(n) is strictly increasing if m < n implies f(m) < f(n).
A function f(n) is strictly decreasing if m < n implies f(m) > f(n).
Standard Notation and Common Functions

• Exponentials
For all n and all a > 1, the function a^n is the exponential function
with base a and is monotonically increasing.
• Logarithms
The textbook adopts the following conventions:
lg n = log2 n (binary logarithm),
ln n = loge n (natural logarithm),
lg^k n = (lg n)^k (exponentiation),
lg lg n = lg(lg n) (composition),
lg n + k = (lg n) + k (precedence of lg).
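These conventions can be checked numerically with Python's math.log2 (a small sketch; the value of n is illustrative, chosen so every logarithm below is exact):

```python
import math

n = 65536  # 2^16

assert math.log2(n) == 16                 # lg n = log2 n
assert math.log2(math.log2(n)) == 4       # lg lg n = lg(lg n) = lg 16
assert math.log2(n) ** 2 == 256           # lg^2 n = (lg n)^2, NOT lg(n^2)
```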
• Consider this piece of code for calculating the factorial of a number:

def factorial(n):
    fact = 1
    for i in range(1, n + 1):
        fact = fact * i
    return fact

Now if you sum up all the primitive operations' costs you get

C = c1 + n*c2 + 2*c3*n + c4
  = (c1 + c4) + n*(c2 + 2*c3)
  = C1 + n*C2
C = Θ(n)
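The linear cost can be confirmed by instrumenting the loop to count the basic operation (a sketch; the counter name is illustrative):

```python
def factorial_counted(n):
    """Factorial together with a count of multiplications, the basic operation."""
    fact, mults = 1, 0
    for i in range(1, n + 1):
        fact = fact * i
        mults += 1
    return fact, mults

# mults == n for input n, matching C(n) = Theta(n)
```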
