Module-1 Part-2

The document outlines the objectives and contents of a course on algorithm analysis, focusing on performance evaluation through time and space complexity. It discusses various methods for measuring algorithm efficiency, including asymptotic notations, and provides examples of time and space complexity calculations. Additionally, it covers worst-case, best-case, and average-case efficiencies, as well as the concept of time-space tradeoff in algorithms.

Module-1

Course objectives:
● To learn the methods for analyzing algorithms and evaluating
their performance.
● To demonstrate the efficiency of algorithms using asymptotic
notations.
Contents:
Part-1: Introduction
Part-2: Fundamentals of the Analysis of Algorithm Efficiency
Part-3: Brute Force Approaches
Vandana U
Assistant Professor
Department of AI-DS
MODULE-1
PART-2
FUNDAMENTALS OF THE
ANALYSIS OF ALGORITHM
EFFICIENCY
2.1 Analysis Framework
The framework covers: Measuring Input Size; Measuring Running Time; Measuring Time Complexity; Measuring Space Complexity; Computing the Order of Growth of Algorithms; and Computing Best, Worst and Average Case Efficiencies.
The efficiency of an algorithm can be expressed in terms of time and space. Checking whether an algorithm is efficient therefore means analyzing the algorithm. There is a systematic approach that has to be applied for analyzing any given algorithm; this systematic approach is modelled by a framework called the Analysis Framework.
The efficiency of an algorithm is determined by measuring its performance. We can measure the performance of an algorithm by computing two important factors:
1. The amount of time required by an algorithm to execute: Time Complexity (also called Running Time or Time Efficiency).
2. The amount of storage required by an algorithm, i.e., the amount of memory units it needs, including the memory needed for input and output: Space Complexity (also called Memory Space or Space Efficiency).
The reasons for selecting these two criteria are: simplicity, generality, speed, and memory.
Point 1. Space Complexity: the amount of memory required by an algorithm to run.

To compute the space complexity, we use two factors: a constant part and an instance-characteristics part.
The space requirement S(P) can be given as: S(P) = C + Sp
where,
• C is a constant, i.e., the fixed part: the space for inputs and outputs, plus the amount of space taken by instructions, variables, and identifiers.
• Sp is the variable part, which depends on the instance characteristics, i.e., space requirements that depend on a particular problem instance. Examples: control statements such as for, do-while, and switch, and the recursion stack for handling recursive calls.
For example, for a function add(a, b) that simply returns a + b:
S(P) = C + Sp = C + 0
     = 2 + 0   // a and b each occupy one word, so the fixed part totals 2
     = 2
Point 2: Time Complexity: the amount of time required by an algorithm to run to completion.
• It is difficult to compute time complexity in terms of physically clocked time.
• For instance, in a multitasking system, execution time depends on many factors, such as: system load, the number of other programs running, the instruction set used, and the speed of the underlying hardware.
• Therefore, time complexity is given in terms of the frequency count.
• The frequency count denotes how many times a particular statement is executed.
Example 1: To print a number
void fun()
{
    int a;              // 1
    a = 10;             // 1
    printf("%d", a);    // 1
}
Therefore, the frequency count is 3.
Example 2: To sum the first n integers and print the result
void fun()
{
    int a;                      // 1
    a = 0;                      // 1
    for (i = 0; i < n; i++)     // n+1
    {
        a = a + i;              // n
    }
    printf("%d", a);            // 1
}
Frequency count: 1 + 1 + (n+1) + n + 1 = 2n + 4

Example 3: Calculating the sum of n array elements
for (i = 0; i < n; i++)
    sum = sum + a[i];

Statement          Frequency Count
i = 0              1
i < n              n + 1
i++                n
sum = sum + a[i]   n
Total              3n + 2
Note: The 'for' loop in the snippet is executed n times while the condition is true and one more time when the condition becomes false. Hence the frequency count for the 'for' loop is n + 1.
Note: Time complexity is normally denoted in Big-Oh notation (O). Hence, if we neglect the constants, the time complexity is O(n).
Example 4: Matrix Addition
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        c[i][j] = a[i][j] + b[i][j];
    }
}

Statement                      Frequency Count
i = 0                          1
i < n                          n + 1
i++                            n
j = 0 (initialization of j,    n * 1 = n
  once per outer-loop pass)
j < n                          n * (n+1) = n^2 + n
j++                            n * n = n^2
c[i][j] = a[i][j] + b[i][j]    n * n = n^2
Total                          3n^2 + 4n + 2  =>  O(n^2)

Point 3: Measuring an Input Size
• The efficiency measure of an algorithm is directly proportional to the input size or range. So an algorithm's efficiency is measured as a function of n, where n is the parameter indicating the algorithm's input size.
• For example, when multiplying two matrices, the efficiency of the algorithm depends on the number of multiplications performed, not on the order of the matrices. The input given can be a square or a non-square matrix.
• Some algorithms need more than one parameter to indicate the size of their input (for example, the numbers of vertices and edges of a graph).
• For algorithms that operate on a single number n (such as checking whether n is prime), the size is better measured by the number b of bits in n's binary representation: b = floor(log2 n) + 1.
• For example, in sorting: a naive algorithm takes on the order of n^2 operations, while the best comparison-based algorithms take on the order of n log n.
Point 4: Units for Measuring Running Time
The running time of an algorithm usually depends on:
1. the speed of the computer, 2. the quality of the program, and 3. the compiler used to run the program.
• Since we are after a measure of an algorithm's efficiency, we would like to have a metric that does not depend on extraneous factors such as those mentioned above.
• One possible approach is to count the number of times each of the algorithm's operations is executed.
To measure the algorithm's efficiency:
• Identify the most important operation (the core logic) of the algorithm, called the basic operation — the operation contributing the most to the total running time — and compute the number of times the basic operation is executed.
• It is usually not difficult to identify the basic operation of an algorithm: it is normally the most time-consuming operation in the algorithm's innermost loop.
Among the four arithmetic operations, division is the most expensive, followed by multiplication, addition, and subtraction.
Problem Statement                                   Input Size              Basic Operation
1. Searching for a key in a list of n elements      List of n elements      Comparison of the key with every element in the list
2. Matrix multiplication                            Two matrices of         Multiplication of the elements of the matrices
                                                    order n x n
3. Computing the GCD of two numbers                 Two numbers             Division
• Let c_op be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for the algorithm.
• Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula: T(n) ≈ c_op · C(n), where T(n) is the running time, c_op is the time taken by one execution of the basic operation, and C(n) is the number of times the basic operation is executed.
• Assuming that C(n) = (1/2) n(n−1), how much longer will the algorithm run if we double its input size? The answer is about 4 times longer. Indeed, for all but very small values of n,
C(n) = (1/2) n(n−1) = (1/2) n^2 − (1/2) n ≈ (1/2) n^2,
and therefore
T(2n) / T(n) ≈ [c_op · C(2n)] / [c_op · C(n)] ≈ [(1/2)(2n)^2] / [(1/2) n^2] = 4.
The efficiency analysis thus ignores multiplicative constants and concentrates on the count's order of growth to within a constant multiple for large input sizes.
Point 5: Order of Growth
Measuring the performance of an algorithm in relation to the input size n is called the order of growth.
A difference in running times on small inputs is not what really distinguishes efficient algorithms from inefficient ones. For large values of n, it is the function's order of growth that counts; a few functions are particularly important for the analysis of algorithms.
The magnitudes of the numbers in the table below are significant for the analysis of algorithms. Seven efficiency classes are listed.

[Table: values (some approximate) of several functions important for the analysis of algorithms]
• The function growing most slowly among these is the logarithmic function. It grows so slowly, in fact, that we should expect a program implementing an algorithm with a logarithmic basic-operation count to run practically instantaneously on inputs of all realistic sizes.
• Although the specific values of such a count depend on the logarithm's base, the formula
log_a n = log_a b · log_b n
makes it possible to switch from one base to another, leaving the count logarithmic but with a new multiplicative constant.
• This is why we omit the logarithm's base and write simply log n in situations where we are interested only in a function's order of growth to within a multiplicative constant.
• The exponential function 2^n and the factorial function n! both grow so fast that their values become astronomically large even for rather small values of n. There is a tremendous difference between the orders of growth of 2^n and n!, yet both are often referred to as "exponential-growth functions".
Algorithms that require an exponential number of operations are practical for solving only problems of very small sizes.
Another way to appreciate the qualitative difference among the orders of growth of these functions is to consider how they react to, say, a twofold increase in the value of their argument n:
• The function log2 n increases in value by just 1, because log2 2n = log2 2 + log2 n = 1 + log2 n.
• The linear function n increases twofold.
• The function n log2 n increases slightly more than twofold.
• The quadratic function n^2 and the cubic function n^3 increase fourfold and eightfold respectively, since (2n)^2 = 4n^2 and (2n)^3 = 8n^3.
• The value of 2^n gets squared, since 2^(2n) = (2^n)^2.
• The factorial function n! increases much faster still than 2^n.

Diagram: Rate of Growth of


Common Computing Time Function
Point 6: Worst case, Best case, and Average case Efficiencies

There are many algorithms for which running time depends not only on an input size, but
also on the specifics of a particular input.
•The Worst case efficiency of an algorithm is its efficiency for the worst-case input of size n,
which is an input (or inputs) of size n, for which the algorithm runs the longest among all
possible inputs of that size.
•If an algorithm takes the maximum amount of time to run to completion for a specific set of
inputs, then it is called worst-case time complexity.
Example: While searching for an element using the linear search method, if the desired
element is placed at the end of the list, then we get worst-case time complexity. Cworst(n)=n
•The best-case efficiency of an algorithm is its efficiency for the best-case input of size n,
which is an input (or inputs) of size n, for which the algorithm runs the fastest among all
possible inputs of that size. OR
If an algorithm takes the minimum amount of time to run to completion for a specific set of
inputs, then it is called best-case time complexity.
Example: While searching for an element using linear search, if the desired element is
placed at the first position itself, then we get best-case time complexity. Cbest(n)=1
• The Average-Case Efficiency: neither the worst-case nor the best-case analysis provides complete information about an algorithm's behavior on a "typical" or "random" input. To analyze an algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n. For sequential search, if p is the probability of a successful search and the key is equally likely to be in any position, then
C_avg(n) = p(n + 1)/2 + n(1 − p).
• If p = 1 (the search is always successful), the average number of key comparisons made by sequential search is (n + 1)/2; that is, the algorithm will inspect, on average, about half of the list's elements.
• If p = 0 (the search is always unsuccessful), the average number of key comparisons is n, because the algorithm inspects all n elements on every such input.

Point 7: Time-Space Tradeoff

• A time-space tradeoff is a situation where space efficiency can be achieved at the cost of time, or time efficiency can be achieved at the cost of memory.
Asymptotic Notations and Basic Efficiency Classes

• Asymptotic Notations are mathematical tools used to analyze the performance of


algorithms by understanding how their efficiency changes as the input size grows.
• Asymptotic analysis allows for the comparison of algorithms’ space and time complexities
by examining their performance characteristics as the input size varies.
• By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can categorize
algorithms based on their worst-case, best-case, or average-case time or space complexities,
providing valuable insights into their efficiency.
• There are 3 asymptotic notations : O(big oh), Ω(big omega), and Θ(big theta)
Big Oh (O) Asymptotic Notation:

DEFINITION: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0. The definition is illustrated in the figure below where, for the sake of visual clarity, n is extended to be a real number.

• t(n) represents the actual function describing the time complexity of an algorithm (or some resource usage as a function of input size n).
• g(n) is a comparison function that provides an upper bound (asymptotic bound) for t(n), up to a constant factor c.
• c is a positive constant.
Big Oh, informally, is the measure of the longest time taken by an algorithm (the worst case); it is an asymptotic upper bound.
O(1): computational time is constant. O(n): computational time is linear.
O(n^2): computational time is quadratic. O(n^3): computational time is cubic.
O(2^n): computational time is exponential.
Big Omega (Ω) Asymptotic Notation:

DEFINITION: A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.
• t(n) represents the actual function describing the time complexity of an algorithm (or some resource usage as a function of input size n).
• g(n) is a comparison function that provides a lower bound (asymptotic bound) for t(n), up to a constant factor c.

Big Omega, informally, is the measure of the smallest amount of time taken by an algorithm (the best case); it is an asymptotic lower bound.
Big Theta (Θ) Asymptotic Notation:

DEFINITION: A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
Mathematical Analysis of
Non-recursive Algorithms

1. Important Rules of Sum Manipulation:
Σ c·a_i = c Σ a_i          Σ (a_i ± b_i) = Σ a_i ± Σ b_i

2. Summation Formulas:
Σ_{i=l}^{u} 1 = u − l + 1          Σ_{i=1}^{n} i = n(n + 1)/2 ≈ n^2/2
General Plan for Analyzing the Time Efficiency of Non-recursive Algorithms
1. Decide on parameters indicating input size.[Input’s size].
2. Identify the basic operation. [Basic Operation].
3. Check whether the number of times the basic operation is executed depends
only on the size of an input. If it also depends on some additional property, the
worst-case, average-case, and, if necessary, best-case efficiencies have to be
investigated separately. [Check if basic operation depends on specifics of the
input. If so, check for best, worst and average time efficiency.]
4. Set up a sum expressing the number of times the algorithm's basic operation
is executed. [Sum of algorithm's basic operations].
5. Using standard formulas and rules of sum manipulation, either find a closed
form formula for the count or, at the very least, establish its order of growth.
[Use formulas and sums to establish order of growth].
Let us start with a very simple example that demonstrates all the
principal steps typically taken in analyzing such algorithms.

EXAMPLE 1 : Consider the problem of finding the value of the


largest element in a list of n numbers. For simplicity, we
assume that the list is implemented as an array. The
following is pseudocode of a standard algorithm for solving
the problem.
Now we will analyse the efficiency of an
algorithm :

1. Decide on parameters indicating input size:

In this algorithm , the input size is ‘n’

2. Identify the basic operation:

Here, "basic" means the instruction that gets executed the maximum number of times.
In this case, the loop body contains a comparison and an assignment, and the comparison takes place on every one of the loop's iterations. So we take the comparison as the basic operation.
3. Check whether the number of times the basic operation is executed depends only on the size of the input. Here the comparison is executed on every iteration of the loop regardless of the input's contents, so the worst-case, average-case, and best-case efficiencies need not be investigated separately.
4. Set up a sum expressing the number of times the algorithm's basic operation is executed:
C(n) = Σ_{i=1}^{n−1} 1
5. Using standard formulas and rules of sum manipulation, find a closed-form formula for the count or, at the very least, establish its order of growth:
C(n) = (n − 1) − 1 + 1 = n − 1 ∈ Θ(n)
EXAMPLE 2: Consider the element uniqueness problem: check whether all the
elements in a given array of n elements are distinct. This problem can be solved
by the following straightforward algorithm
1. Decide on parameters indicating input size:
In this algorithm , the input size is ‘n’
2. Identify the basic operation :
Here, basic operation is comparison .
3. Check whether the number of times the basic
operation is executed depends only on the size of
an input. If it also depends on some additional
property, the worst-case, average-case, and, if
necessary, best-case efficiencies have to be
investigated separately.
The worst case input is an array for which the number of element comparisons
Cworst(n) is the largest among all arrays of size n. An inspection of the innermost loop
reveals that there are two kinds of worst-case inputs—inputs for which the algorithm
does not exit the loop prematurely: arrays with no equal elements and arrays in which
the last two elements are the only pair of equal elements.
EXAMPLE 3: Given two n×n matrices A and B,find the time efficiency of the definition-
based algorithm for computing their product C = AB. By definition, C is an n×n matrix
whose elements are computed as the scalar (dot) products of the rows of matrix A and
the columns of matrix B:
Now we will analyse the efficiency of the algorithm:
1. Decide on parameters indicating input size:
In this algorithm, the input size is the order n of the matrices.
2. Identify the basic operation:
The computation step in the innermost loop executes the maximum number of times. That statement contains both an addition and a multiplication; following the convention that multiplication is the more expensive operation, we take multiplication as the basic operation.

3. Check whether the number of times the basic operation is executed depends only on the size of the input. Since the product computation is performed for all indices no matter what the matrix entries are, there are no separate best-, average-, and worst-case efficiencies for this algorithm, so this step can be omitted.
Now we will analyse the efficiency of the algorithm:
4. Set up a sum expressing the number of times the algorithm's basic operation is executed:
M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1
5. Using standard formulas and rules of sum manipulation, find a closed-form formula for the count or, at the very least, establish its order of growth:
M(n) = n · n · n = n^3 ∈ Θ(n^3)
EXAMPLE 4 The following algorithm finds the number of binary digits in the binary
representation of a positive decimal integer.
1. Decide on parameters indicating input size:
In this algorithm, the input size is the value of n.
2. Identify the basic operation:
Here, the basic operation is the comparison n > 1.
First, notice that the most frequently executed operation is not inside the while loop but rather the comparison n > 1 that determines whether the loop's body will be executed. Since the number of times the comparison is executed is larger than the number of repetitions of the loop's body by exactly 1, the choice is not that important.
Steps 3, 4 and 5 cannot be applied directly, because the loop variable is halved on each iteration rather than stepping through consecutive values, so we have to calculate the count in an alternate way: it equals the number of times n can be halved before reaching 1, which gives C(n) = floor(log2 n) + 1 ∈ Θ(log n).
Mathematical Analysis of
Recursive Algorithms

General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on parameters indicating input size. [Input’s size].
2. Identify the basic operation. [Basic Operation].
3. Check whether the number of times the basic operation is executed depends only on the size
of an input. If it also depends on some additional property, the worst-case, average-case, and, if
necessary, best-case efficiencies have to be investigated separately. [Check if basic operation
depends on specifics of the input. If so, check for best, worst and average time efficiency.]
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times
the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.
EXAMPLE 1: Compute the factorial function F(n) = n! for an arbitrary non-negative integer n.
Since n! = 1 · … · (n−1) · n = (n−1)! · n for n ≥ 1, and 0! = 1 by definition,
we can compute F(n) = F(n−1) · n with the following recursive algorithm.
Now we will analyse the efficiency of an
algorithm :
1. Decide on parameters indicating input size:
In this algorithm , the input size is ‘n’
2. Identify the basic operation :
Here, basic means the instruction that gets
executed maximum number of times.
The basic operation of the algorithm is multiplication
3. Check whether the number of times the basic operation is executed depends only on the size of the input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately. [Check if the basic operation depends on specifics of the input. If so, check the best-, worst- and average-case time efficiency.]
Since the algorithm performs the same multiplications for every input of size n, there is no need to check the best-, worst-, and average-case efficiencies separately.
Now we will analyse the efficiency of the algorithm:

4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
Since the function F(n) is computed according to the formula F(n) = F(n−1) · n for n > 0, the number of multiplications M(n) needed to compute it must satisfy the equality:
M(n) = M(n−1) + 1 for n > 0, with the initial condition M(0) = 0.

5. Solve the recurrence. Using the method of backward substitution:
M(n) = M(n−1) + 1
     = [M(n−2) + 1] + 1 = M(n−2) + 2
     = [M(n−3) + 1] + 2 = M(n−3) + 3
     = …
     = M(n−i) + i = … = M(0) + n = n
Therefore, M(n) = n ∈ Θ(n).
EXAMPLE 2: As our next example, we consider another educational workhorse of recursive algorithms: the Tower of Hanoi puzzle.

1. Decide on parameters indicating input size:
In this algorithm, the input size is n, the number of discs.

2. Identify the basic operation:
Here, the basic operation is moving a disc.

THANK YOU
