
CS 8451 – DESIGN AND ANALYSIS OF ALGORITHMS (UNIT - 1)

1.4. Fundamentals of the Analysis of Algorithmic Efficiency

Analysis of an algorithm – investigation of the algorithm's efficiency with respect to two resources:


i) running time
ii) memory space
Efficiency – determined by measuring the time and the space the algorithm uses while executing

Time Efficiency :
• how fast the algorithm runs
• The time taken by a program to complete its task depends on the number of steps the algorithm performs
Two types:
Compilation time – time taken to compile the program
Run time – time taken to execute the program; it grows with the size of the input
Space Efficiency :
• The number of units the algorithm requires for memory storage

1.4.1 Analysis framework:

Two kinds of Efficiency:


i) Time Efficiency
ii) Space Efficiency

General Framework:

i) Measuring an input's size


ii) Units for measuring Running Time
iii) Orders of Growth
iv) Worst-case, Best-case and Average-case Efficiency
v) Recapitulation of the Analysis Framework

i) Measuring an input's size:

• Algorithms run longer on larger inputs


• parameter n – indicating the algorithm's input size (Ex: sorting, searching)

• Ex:
• i) problem of evaluating a polynomial p(x) = aₙxⁿ + ... + a₀ :
◦ input's size – polynomial's degree or number of coefficients
• ii) computing the product of two n-by-n matrices
◦ input's size – total number of elements N in the matrices
• For algorithms whose input is a number n, size can be measured by the number of bits b in n's binary representation:
• b = ⌊log₂ n⌋ + 1

• Ex:
n     log₂n     ⌊log₂n⌋     b
1     0.0000    0           1
9     3.1699    3           4
15    3.9069    3           4
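
The formula can be checked directly in Python; int.bit_length() computes the same count without floating-point logarithms (the helper name below is illustrative, not from the notes):

import math

def bits_needed(n):
    # b = floor(log2(n)) + 1: bits in the binary representation of n >= 1
    return math.floor(math.log2(n)) + 1

for n in (1, 9, 15):
    print(n, bits_needed(n), n.bit_length())   # the two counts agree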

ii) Units for measuring Running Time:


• one option is to use standard units of time measurement (seconds, milliseconds); this, however, depends on the speed of a particular computer

• count the number of times each of the algorithm's operation is executed


▪ identify the basic operation (the operation contributing the most to the total running time)
▪ number of times the basic operation is executed

• Ex: i) For sorting algorithm, the basic operation is comparison


ii) For matrix multiplication, the basic operation is multiplication

• Estimating the running time:

T(n) ≈ c_op · C(n)

c_op – execution time of the algorithm's basic operation on a particular computer
C(n) – number of times the basic operation is executed for an input of size n

• How much faster does the algorithm run on a machine that is 10 times faster? – 10 times faster, since only c_op changes

• How much longer does it run if the input size is doubled? – for a quadratic count such as C(n) = ½n(n − 1), about 4 times longer, since C(2n)/C(n) = 2(2n − 1)/(n − 1) → 4 as n grows

• Ex: see the Python sketch below
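
A minimal Python check of both claims, assuming the quadratic operation count named above (C(n) = ½n(n − 1) is an illustrative assumption, not fixed by the notes):

def C(n):
    # assumed basic-operation count, quadratic as in simple sorting algorithms
    return n * (n - 1) / 2

n = 1000
# A 10x faster machine only divides c_op by 10, so the speedup is 10x for any C(n).
# Doubling the input size scales the count itself:
print(C(2 * n) / C(n))   # ~4.002, approaching 4 for large n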

iii) Orders of Growth:


• Measuring the performance of an algorithm in relation to the input size.

• The function growing the slowest is the logarithmic function.


• the exponential function 2ⁿ and the factorial function n! grow so fast that their values become astronomically large even for rather small values of n

iv) Worst-case, Best-case and Average-case Efficiency


• Ex: Sequential Search
◦ searches for a given item (search key K) in a list of n elements by checking successive elements of the list
until either a match with the search key is found or the list is exhausted.

• ALGORITHM SequentialSearch(A[0..n − 1], K)


//Searches for a given value in a given array by sequential search
//Input: An array A[0..n − 1] and a search key K
//Output: The index of the first element in A that matches K
// or −1 if there are no matching elements

i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
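
A direct Python transcription of the pseudocode (a minimal sketch; names follow the pseudocode above):

def sequential_search(A, K):
    # scan A[0..n-1] until K is found or the list is exhausted
    i = 0
    while i < len(A) and A[i] != K:
        i += 1
    return i if i < len(A) else -1

print(sequential_search([31, 5, 12, 8], 12))   # 2, index of the first match
print(sequential_search([31, 5, 12, 8], 7))    # -1, no matching element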

• Worst-case Efficiency – The worst-case efficiency of an algorithm is its efficiency for the worst-case input of
size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible
inputs of that size.
C_worst(n) = n

• Best-case Efficiency - The best-case efficiency of an algorithm is its efficiency for the best-case input of size
n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of
that size.
C_best(n) = 1

• Average-case Efficiency – obtained by making some assumptions about possible inputs of size n


i) successful search – the probability of the first match occurring in the ith position of the list is p/n for every i
ii) unsuccessful search – the number of comparisons will be n, with probability (1 − p)

• Successful search (p = 1): the average number of key comparisons is (n + 1)/2
• Unsuccessful search (p = 0): the average number of key comparisons is n
• the average-case efficiency cannot be obtained by taking the average of the worst-case and the best-case
efficiencies.
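
Under assumptions i) and ii) above, the standard derivation gives C_avg(n) = p(n + 1)/2 + n(1 − p); the successful and unsuccessful cases above are its p = 1 and p = 0 special cases. A small Python sanity check (a sketch, not part of the original notes):

def c_avg(n, p):
    # a match at position i costs i comparisons with probability p/n;
    # an unsuccessful search costs n comparisons with probability 1 - p
    return sum(i * p / n for i in range(1, n + 1)) + n * (1 - p)

n = 100
print(c_avg(n, 1.0), (n + 1) / 2)   # 50.5  50.5
print(c_avg(n, 0.0), n)             # 100.0 100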

• Amortized efficiency:
◦ It applies not to a single run of an algorithm but rather to a sequence of operations performed on the same
data structure.
◦ The total time for an entire sequence of n such operations can be significantly better than the worst-case efficiency of a single operation multiplied by n.

1.5 Asymptotic Notations and their properties

• To choose the best algorithm, we need to compare the efficiencies of the candidate algorithms

• The efficiency of an algorithm can be measured by computing the time complexity of each algorithm
• Using asymptotic notations, a time complexity can be rated as:
1. Fastest possible
2. Slowest possible
3. Average time
• asymptotic notations:
◦ O (Big Oh)
◦ Ω (Big Omega)
◦ Θ (Big Theta)

• t(n) will be an algorithm’s running time and


• g(n) will be some simple function to compare the count with.

i) Big Oh Notation (O)


• Method of representing the upper bound of an algorithm's running time

Definition:
• A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant C and some non-negative integer n₀ such that

t(n) ≤ C·g(n) for all n ≥ n₀


• Diagram

Ex:
t(n) = 4n, g(n) = 5n: here 4n ≤ 5n for all n ≥ 1, so t(n) ∈ O(g(n)) with C = 1 and n₀ = 1
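
A quick numeric check of the definition with these constants (a sketch):

t = lambda n: 4 * n
g = lambda n: 5 * n
C, n0 = 1, 1
# verify t(n) <= C*g(n) over a large range of n >= n0
print(all(t(n) <= C * g(n) for n in range(n0, 10_000)))   # True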

ii) Big Omega Notation (Ω)


• Method of representing the lower bound of an algorithm's running time
• Often used to describe the best-case running time of algorithms

Definition:
• A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant C and some non-negative integer n₀ such that

t(n) ≥ C·g(n) for all n ≥ n₀



• Diagram:

Ex:
t(n) = 5n, g(n) = 4n: here 5n ≥ 4n for all n ≥ 1, so t(n) ∈ Ω(g(n)) with C = 1 and n₀ = 1

iii) Big Theta Notation (Θ):


• A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants C₁ and C₂ and some non-negative integer n₀ such that

C₂·g(n) ≤ t(n) ≤ C₁·g(n) for all n ≥ n₀


• Diagram:

Note:
Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
Properties:

Useful Property involving the Asymptotic Notations


• Theorem: If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
• Proof :
For any four real numbers a₁, b₁, a₂, b₂: if a₁ ≤ b₁ and a₂ ≤ b₂, then a₁ + a₂ ≤ 2·max{b₁, b₂}.
Since t₁(n) ∈ O(g₁(n)), t₁(n) ≤ c₁g₁(n) for all n ≥ n₁, and
since t₂(n) ∈ O(g₂(n)), t₂(n) ≤ c₂g₂(n) for all n ≥ n₂.
Let c₃ = max{c₁, c₂} and consider n ≥ max{n₁, n₂}. Then

t₁(n) + t₂(n) ≤ c₁g₁(n) + c₂g₂(n)
≤ c₃g₁(n) + c₃g₂(n) = c₃[g₁(n) + g₂(n)]
≤ 2c₃·max{g₁(n), g₂(n)}.
Hence, t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}),
with the constants c and n₀ required by the O definition being 2c₃ = 2·max{c₁, c₂} and max{n₁, n₂}, respectively.
CS 8451 – DESIGN AND ANALYSIS OF ALGORITHMS (UNIT - 1) 14

• Ex:
t₁(n) = ½n(n − 1), t₂(n) = n − 1
t₁(n) ∈ O(n²), t₂(n) ∈ O(n); i.e., g₁(n) = n², g₂(n) = n
t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)})
So, t₁(n) + t₂(n) ∈ O(max{n², n}) = O(n²)
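
Numerically, the constant C = 1 already bounds the sum here, since ½n(n − 1) + (n − 1) ≤ n² for all n ≥ 1 (a quick Python check):

t1 = lambda n: n * (n - 1) / 2
t2 = lambda n: n - 1
# verify t1(n) + t2(n) <= 1 * n**2 over a large range
print(all(t1(n) + t2(n) <= n**2 for n in range(1, 10_000)))   # True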
Using Limits for Comparing Orders of Growth:
The three principal cases for lim(n→∞) t(n)/g(n) are:
• 0 : t(n) has a smaller order of growth than g(n)
• a positive constant c : t(n) has the same order of growth as g(n)
• ∞ : t(n) has a larger order of growth than g(n)

L'Hospital's rule: if f(n) → ∞ and g(n) → ∞, then lim(n→∞) f(n)/g(n) = lim(n→∞) f′(n)/g′(n)

Stirling's formula: n! ≈ √(2πn)·(n/e)ⁿ for large values of n

EXAMPLE 1: Compare the orders of growth of ½n(n − 1) and n².

lim(n→∞) [½n(n − 1)] / n² = ½·lim(n→∞) (1 − 1/n) = ½

• The limit is equal to a positive constant, so the functions have the same order of growth or, symbolically,
½n(n − 1) ∈ Θ(n²).
EXAMPLE 2: Compare the orders of growth of log₂ n and √n.

lim(n→∞) log₂ n / √n = lim(n→∞) [(log₂ e)/n] / [1/(2√n)] = 2·log₂ e · lim(n→∞) 1/√n = 0 (by L'Hospital's rule)

• The limit is equal to zero, so log₂ n has a smaller order of growth than √n:

log₂ n ∈ O(√n).
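
The limit can also be observed numerically: the ratio log₂ n / √n shrinks toward zero as n grows (a quick Python check):

import math

for n in (10**2, 10**4, 10**6, 10**8):
    print(n, math.log2(n) / math.sqrt(n))   # 0.66, 0.13, 0.020, 0.0027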

EXAMPLE 3: Compare the orders of growth of n! and 2ⁿ.

• By Stirling's formula, lim(n→∞) n!/2ⁿ = lim(n→∞) √(2πn)·(n/(2e))ⁿ = ∞, so n! has a larger order of growth than 2ⁿ:

n! ∈ Ω(2ⁿ)

Properties of Big Oh:

1. If there are two functions t₁(n) and t₂(n) such that t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then
t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)})
2. t(n) ∈ O(t(n))
3. If there are two functions t₁(n) and t₂(n) such that t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then
t₁(n) · t₂(n) ∈ O(g₁(n) · g₂(n))
4. If t(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then t(n) ∈ O(h(n))
5. In a polynomial, the highest-power term dominates the other terms, i.e., only the maximum degree is considered (see the sketch after this list)
Eg: for 3n³ + 2n² + 10, the time complexity is O(n³)

6. Any constant value leads to O(1) time complexity, i.e., if t(n) = c, then it belongs to O(1)
7. O(1) < O(log n) < O(n) < O(n²) < O(2ⁿ)
8. t(n) = Θ(g(n)) iff t(n) = O(g(n)) and t(n) = Ω(g(n))
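
Property 5 can be seen numerically: the ratio of 3n³ + 2n² + 10 to n³ approaches the constant 3 as n grows, confirming that the highest-power term dominates (a minimal Python sketch):

t = lambda n: 3 * n**3 + 2 * n**2 + 10
for n in (10, 100, 1000, 10_000):
    print(n, t(n) / n**3)   # 3.21, 3.02001, 3.002..., 3.0002...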

• Basic efficiency classes:

Class    Name           Comments
1        constant       Short of best-case efficiencies, few examples exist, since an
                        algorithm's running time typically goes to infinity as its input
                        size grows infinitely large
log n    logarithmic    Typically the result of cutting a problem's size by a constant
                        factor on each iteration of the algorithm
n        linear         Algorithms that scan a list of size n have linear running time
n log n  linearithmic   Many divide-and-conquer algorithms in the average case
n²       quadratic      Characterizes the efficiency of algorithms with two embedded
                        loops (example: operations on n × n matrices)
n³       cubic          Characterizes the efficiency of algorithms with three embedded loops
2ⁿ       exponential    Algorithms that generate all subsets of an n-element set
n!       factorial      Algorithms that generate all permutations of an n-element set
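
The gulf between these classes is easiest to see by tabulating their values for a few small inputs; 2ⁿ and n! become astronomically large almost immediately (a minimal Python sketch):

import math

# one row per input size: log n, n, n log n, n^2, n^3, 2^n, n!
for n in (10, 20, 30):
    print(n, round(math.log2(n), 2), n, round(n * math.log2(n), 1),
          n**2, n**3, 2**n, math.factorial(n))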
