
MODULE 1

ANALYSIS OF ALGORITHMS
What is an algorithm ?
• An algorithm is a finite set of instructions that, if followed, accomplishes a
specific task.
• An algorithm must satisfy the following criteria (see the sketch below).
1. Input - takes zero or more inputs.
2. Output - produces at least one output.
3. Definiteness - each instruction is clear and unambiguous.
4. Finiteness - the algorithm terminates after a finite number of steps.
5. Effectiveness - every instruction must be basic, i.e., simple enough to be
carried out exactly.
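
To make the criteria concrete, here is a small Python sketch (an added
illustration, not from the original slides; find_max is a hypothetical example
that satisfies all five criteria):

# Hypothetical illustration: finding the maximum element.
# Input: a non-empty list a (one input); Output: the largest element (one output).
# Every step is definite and basic, and the loop runs a finite number of times.
def find_max(a):
    largest = a[0]          # start with the first element
    for x in a[1:]:         # examine each remaining element once
        if x > largest:
            largest = x
    return largest

print(find_max([3, 1, 4, 1, 5]))   # prints 5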
What is an algorithm ?

• An algorithm is a sequence of unambiguous instructions for solving a
problem, i.e., for obtaining a required output for any legitimate input in a
finite amount of time.
Problem, Algorithm, Program
• For each problem or class of problems, there may be many different
algorithms.
• For each algorithm, there may be many different implementations
(programs).
Efficiency of Algorithms
"Analysis of algorithms" means an investigation of an algorithm's
efficiency with respect to two resources: running time and memory space.
Analysis can be done at two different stages

1. Before implementation (A Priori Analysis or Performance Analysis)
2. After implementation (A Posteriori Analysis or Performance Measurement)
Efficiency of Algorithms

A Priori Analysis

• This is a theoretical analysis of an algorithm.

• Efficiency of an algorithm is measured by assuming that all factors
such as processor speed, language and compiler are constant and have
no effect on the implementation.
Efficiency of Algorithms

A Posteriori Analysis

• This is an empirical analysis of an algorithm.

• The selected algorithm is implemented using a programming language.

• This is then executed on a target computer.

• In this analysis, actual statistics like running time and space
required are collected.
Efficiency of Algorithms : Space Complexity

• Space efficiency (space complexity) :- the amount of memory units
required by the algorithm in addition to the space needed for its
input and output.

• Space is measured by counting the maximum memory space required by
the algorithm.
Space Complexity

space required = fixed part + variable part

• Fixed part - space required to store certain data and variables that
are independent of the size of the problem.
eg: simple variables and constants used, program size, etc.

• Variable part - space required by variables whose size depends on the
size of the problem instance being solved.
eg: dynamic memory allocation, recursion stack space, etc.
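
As a rough sketch (Python, added here for illustration), summing an array
iteratively needs only a fixed amount of extra space, while the recursive
version's stack space grows with the problem size:

# Iterative sum: extra space is a few simple variables -> fixed part only.
def sum_iter(a):
    s = 0                    # one accumulator, independent of len(a)
    for x in a:
        s += x
    return s

# Recursive sum: each call adds a stack frame -> variable part grows with n.
def sum_rec(a, n):
    if n <= 0:
        return 0
    return sum_rec(a, n - 1) + a[n - 1]   # recursion depth is n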


Space Complexity

• Space complexity S(A) of any algorithm A is defined as

S(A) = S(C) + SA(IC)

where,
S(C) - the space required at compile time, which remains fixed (constant).
SA(IC) - the space required at run time, which depends on the instance
characteristics.

When analyzing space we must estimate SA(IC).
Efficiency of Algorithms : Time Complexity
• Time efficiency (time factor or time complexity) is the measurement
of the time required for the execution of an algorithm.

ie, T(n) = run time + compile time

• In time complexity we consider only the run time.

• The execution time is expressed as a function of the problem size,
which is known as the time complexity of the algorithm.
Efficiency of Algorithms : Time Complexity

• Time complexity is estimated by counting the basic operations
executed by an algorithm.

• The basic operation of an algorithm is usually its most
time-consuming operation.

• To calculate the running time, identify the most important operation
(basic operation) of the algorithm and compute the number of times the
basic operation is executed on inputs of size n.
How to measure the running time of an algorithm?
1. Identify the important basic operations performed by the algorithm.
2. Compute the total time taken by the basic operations:

total cost of a basic operation = cost of the basic operation performed
once x no. of times the basic operation is performed

3. Total time T(n) = sum of the total costs of all the basic operations
(see the example below).
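
For instance (an illustrative Python sketch, not from the slides), the
basic operation of sequential search is the key comparison; counting it
directly gives C(n) = n in the worst case:

# Sequential search instrumented to count its basic operation (comparison).
def linear_search(a, key):
    count = 0
    for x in a:
        count += 1           # one execution of the basic operation
        if x == key:
            return count     # comparisons used so far
    return count             # worst case: key absent -> n comparisons

print(linear_search([5, 8, 2, 9], 7))   # prints 4, ie C(n) = n for n = 4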
Analysis of Algorithms

The efficiencies of some algorithms may differ significantly for
inputs of the same size. For such algorithms we need to distinguish
between

1) Worst Case Efficiency
2) Best Case Efficiency
3) Average Case Efficiency
Worst Case Efficiency
• Worst case efficiency of an algorithm is its efficiency for the worst
case input of size n, which is an input (or inputs) of size n for which
the algorithm runs the longest among all possible inputs of that size.

• In worst case analysis, we calculate an upper bound on the running
time of an algorithm.
Best Case Analysis
• The best case efficiency of an algorithm is its efficiency for the
best case input of size n, which is an input (or inputs) of size n for
which the algorithm runs the fastest among all possible inputs of that
size.

• In best case analysis, we calculate a lower bound on the running time
of an algorithm.
Average Case Analysis
• In average case analysis, we take all possible inputs and calculate
the computing time for each of them, sum all the calculated values, and
divide the sum by the total number of inputs.
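
As a standard worked example (added here for clarity): for a successful
sequential search in which the key is equally likely to occupy any of
the n positions, the average number of comparisons is

C_avg(n) = (1 + 2 + ... + n) / n = (n + 1) / 2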
AMORTIZED EFFICIENCY
• Amortized analysis is a worst case analysis of a sequence of
operations.

• Amortized analysis refers to determining the time-averaged running
time for a sequence of operations.

• It applies not to a single run of an algorithm but to a sequence of
operations performed on the same data structure.
Amortized efficiency

• Using amortized analysis, we can show that the average cost of an
operation is small even if a single operation within the sequence might
be expensive.

• Techniques are (see the sketch below):
1) Aggregate Method
2) Accounting Method
3) Potential Method
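
A minimal Python sketch in the spirit of the aggregate method (added
for illustration; DynArray is a hypothetical class): appending n items
to a doubling array costs fewer than 3n element writes in total, so each
append is O(1) amortized even though an individual append that triggers
a resize costs O(n):

# Doubling dynamic array; 'moves' aggregates the cost of the whole sequence.
class DynArray:
    def __init__(self):
        self.data, self.size, self.cap, self.moves = [None], 0, 1, 0

    def append(self, x):
        if self.size == self.cap:          # expensive single operation:
            self.cap *= 2                  # double the capacity and
            new = [None] * self.cap
            for i in range(self.size):     # copy every existing element
                new[i] = self.data[i]
                self.moves += 1
            self.data = new
        self.data[self.size] = x
        self.size += 1
        self.moves += 1                    # the write itself

a = DynArray()
for i in range(1000):
    a.append(i)
print(a.moves / 1000)   # total cost / n stays below 3 -> O(1) amortized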
TIME COMPLEXITY OF SIMPLE ALGORITHMS

 For analysis of an algorithm we consider only the run time.
 Run time is calculated by counting the number of program steps.
 A program step is a meaningful segment of a program that has an
execution time independent of the instance characteristics.
TIME COMPLEXITY OF SIMPLE ALGORITHMS

 Step count of different types of statements:

 1. Comments : count as zero steps.
 2. Assignment statement : an assignment statement that does not
involve any calls to other algorithms is counted as one step.
 3. if...else : in an if...else statement either the if part or the
else part is executed, so the worse step count of the two is taken.
 4. Iterative statement : consider step counts only for the control
part of the statement.
 eg: for i = 1 to n ---- step count is n+1
TIME COMPLEXITY OF SIMPLE ALGORITHMS

 Methods to find the step count:

 1. Step count Method
 2. Step table or Tabular Method

 1. Step count method : in this method we introduce a global variable,
say 'count', with initial value zero in the program. Each time a
statement in the program is executed, 'count' is incremented by the
step count of that statement.
Eg:

Original algorithm:

Algorithm sum(a, n)
{
    s = 0
    for i = 1 to n
        s = s + a[i]
    return s
}

With the global variable 'count' introduced:

count = 0
Algorithm sum(a, n)
{
    count = count + 1        // for s = 0
    s = 0
    for i = 1 to n
    {
        count = count + 1    // for the for-loop control
        s = s + a[i]
        count = count + 1    // for the assignment
    }
    count = count + 1        // for the last for-loop test
    count = count + 1        // for the return
    return s
}

T(n) = 2n + 3
Eg:

Original algorithm:

Algorithm Rsum(a, n)
{
    if (n <= 0)
        return 0
    else
        return Rsum(a, n-1) + a[n]
}

With the global variable 'count' introduced:

count = 0
Algorithm Rsum(a, n)
{
    count = count + 1        // for the if test
    if (n <= 0)
    {
        count = count + 1    // for the return
        return 0
    }
    else
    {
        count = count + 1    // for the recursive call and return
        return Rsum(a, n-1) + a[n]
    }
}

TRsum(n) = 2                 ; n = 0
         = 2 + TRsum(n-1)    ; n > 0
2. Step table or Tabular Method

The step count is determined by building a table in which we list the
total number of steps contributed by each statement.
Determine the number of steps per execution (s/e).
Find the frequency with which each statement is executed.
By combining these two quantities, the total contribution of each
statement is obtained.
By adding the contributions of all statements, the step count of the
entire algorithm is obtained.
Eg : Tabular method.

Statement               s/e   Frequency   Total steps
Algorithm sum(a,n)       0        0            0
{                        0        0            0
  s = 0                  1        1            1
  for i = 1 to n         1       n+1          n+1
    s = s + a[i]         1        n            n
  return s               1        1            1
}                        0        0            0
Total                                         2n+3
Eg : Tabular method.

Statement                      s/e   Frequency    Total steps
                                     n=0   n>0    n=0   n>0
Algorithm Rsum(a,n)             0     0     0      0     0
{                               0     0     0      0     0
  if (n <= 0)                   1     1     1      1     1
    return 0                    1     1     0      1     0
  else                          0     0     0      0     0
  {                             0     0     0      0     0
    return Rsum(a,n-1)+a[n]    1+x    0     1      0    1+x
  }                             0     0     0      0     0
Total                                              2    2+x

where x = TRsum(n-1), so
Total = 2               ; n = 0
      = 2 + TRsum(n-1)  ; n > 0
TIME COMPLEXITY OF ITERATIVE ALGORITHMS

Statement                        s/e   Frequency   Total steps
Algorithm addmat(A,B,C,m,n)       0       0            0
{                                 0       0            0
  for i = 1 to m                  1      m+1          m+1
    for j = 1 to n                1     m(n+1)       m(n+1)
      C[i,j] = A[i,j] + B[i,j]    1      m*n          m*n
}                                 0       0            0
Total                                               2mn+2m+1
TIME COMPLEXITY OF ITERATIVE ALGORITHMS

Statement                               s/e   Frequency     Total steps
Algorithm mulmat(A,B,C,n,n)              0       0               0
{                                        0       0               0
  for i = 1 to n                         1      n+1             n+1
    for j = 1 to n                       1     n(n+1)          n(n+1)
    {                                    0       0               0
      C[i,j] = 0                         1      n*n             n*n
      for k = 1 to n                     1    n*n*(n+1)       n*n*(n+1)
        C[i,j] = C[i,j] + A[i,k]*B[k,j]  1     n*n*n           n*n*n
    }                                    0       0               0
}                                        0       0               0
Total                                                      2n^3+3n^2+2n+1
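
As a sanity check (a Python sketch added here, not part of the slides;
mulmat_steps is a hypothetical helper), we can total the frequencies and
confirm the table's count of 2n^3 + 3n^2 + 2n + 1 steps:

# Count program steps of matrix multiplication as in the table above.
def mulmat_steps(n):
    steps = 0
    steps += n + 1                 # for-i control
    steps += n * (n + 1)           # for-j control
    steps += n * n                 # C[i,j] = 0 assignments
    steps += n * n * (n + 1)       # for-k control
    steps += n * n * n             # inner multiply-accumulate
    return steps

n = 4
print(mulmat_steps(n))                 # 185
print(2*n**3 + 3*n**2 + 2*n + 1)       # 185 -> the formula matches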
Analysis of Algorithms - Asymptotic Analysis
• In asymptotic analysis, we evaluate the performance of an algorithm
in terms of the input size.

• We calculate how the time (or space) taken by an algorithm increases
with the input size.

• Asymptotic analysis is not perfect, but it is the best way available
for analyzing algorithms.

• In asymptotic analysis, we always talk about input sizes larger than
a constant value.
Asymptotic Notations
• Asymptotic notations are mathematical tools to represent the time (or
space) complexity of algorithms in terms of order of growth.

The asymptotic notations used to represent the time complexity of
algorithms are:
1. Big O (O)
2. Big Omega (Ω)
3. Big Theta (θ)
4. Little o (o)
5. Little omega (ω)
Asymptotic Notations

In the following definitions,
• f(n) and g(n) can be any non-negative functions defined on the set of
natural numbers.
• f(n) will be an algorithm's running time indicated by its basic
operation count, and g(n) will be some function to compare the count
with.
Big O Notation

A function f(n) is said to be in O(g(n)), denoted by f(n) ∈ O(g(n)), if
f(n) is bounded above by some positive constant multiple of g(n) for
all large n.

ie, a function f(n) is said to be in O(g(n)) iff there exist a positive
constant c and a nonnegative integer n0 such that
f(n) ≤ c g(n) for all n ≥ n0

g(n) is an upper bound on the value of f(n).
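
For example (a standard illustration, added here):
f(n) = 3n + 2 ≤ 4n for all n ≥ 2, so 3n + 2 ∈ O(n) with c = 4 and n0 = 2.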


Big Ω Notation

A function f(n) is said to be in Ω(g(n)), denoted by f(n) ∈ Ω(g(n)), if
f(n) is bounded below by some positive constant multiple of g(n) for
all large n.

ie, a function f(n) is said to be in Ω(g(n)) iff there exist a positive
constant c and a nonnegative integer n0 such that
f(n) ≥ c g(n) for all n ≥ n0

g(n) is a lower bound on the value of f(n).


Θ Notation:

A function f(n) is said to be in θ(g(n)), denoted by f(n) ∈ θ(g(n)), if
f(n) is bounded both above and below by some positive constant
multiples of g(n) for all large n.

ie, a function f(n) is said to be in θ(g(n)) iff there exist positive
constants c1 and c2 and a nonnegative integer n0 such that
c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0
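
For example (again a standard illustration, added here):
3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so 3n + 2 ∈ θ(n) with c1 = 3, c2 = 4
and n0 = 2.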
Little o (o) Notation:

f(n) = o(g(n)) iff lim (n→∞) f(n)/g(n) = 0

Eg: f(n) = 3n + 2, g(n) = n^2
lim (n→∞) (3n + 2)/n^2 = 0, so 3n + 2 = o(n^2).
Little omega (ω) Notation:

f(n) = ω(g(n)) iff lim (n→∞) f(n)/g(n) = ∞

Eg: f(n) = 3n^2 + 2n, g(n) = n
lim (n→∞) (3n^2 + 2n)/n = ∞, so 3n^2 + 2n = ω(n).
Using limits to compare orders of growth:

If lim (n→∞) f(n)/g(n) = 0 or = c (a finite constant), then f(n) = O(g(n)).

If lim (n→∞) f(n)/g(n) = ∞ or = c (a constant c > 0), then f(n) = Ω(g(n)).

If lim (n→∞) f(n)/g(n) = c with 0 < c < ∞, then f(n) = θ(g(n)).
PROPERTIES OF ASYMPTOTIC NOTATIONS

 Assume that f(n) and g(n) are asymptotically positive.

 1. General Property
 If f(n) is O(g(n)) then a*f(n) is also O(g(n)), where a is a constant.
 Eg: f(n) = 3n^2 + 2 and g(n) = n^2, a = 5
PROPERTIES OF ASYMPTOTIC NOTATIONS

 2. Reflexivity
 f(n) = O(f(n))
 f(n) = Ω(f(n))
 f(n) = θ(f(n))

 Eg: f(n) = 3n^2 + 2
PROPERTIES OF ASYMPTOTIC NOTATIONS

 3. Transitivity
 f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n))
 f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n))
 f(n) = θ(g(n)) and g(n) = θ(h(n)) imply f(n) = θ(h(n))
 f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n))
 f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n))

 Eg: f(n) = n, g(n) = n^2, h(n) = n^3


PROPERTIES OF ASYMPTOTIC NOTATIONS
 4. Symmetry
 f(n) = θ(g(n)) iff g(n) = θ(f(n))
 Eg: f(n) = n^2, g(n) = n^2

 Transpose symmetry
 f(n) = O(g(n)) iff g(n) = Ω(f(n))
 f(n) = o(g(n)) iff g(n) = ω(f(n))
 Eg: f(n) = n, g(n) = n^2


ORDER OF GROWTH OF AN ALGORITHM
• Order of growth of an algorithm means how the time for computation
increases when the input size increases.

• Commonly used complexity functions:

O(1) (constant)         growth is independent of the problem size n
log log n (double       increases very slowly (interpolation search)
logarithmic)
log2 n (logarithmic)    increases slowly compared to the problem size
                        (binary search)
n (linear)              directly proportional to the size of the problem
                        (linear search)
n log2 n (n log n)      typical of some divide and conquer approaches
                        (merge sort)
n^2 (quadratic)         typical in nested loops (bubble sort)
n^3 (cubic)             more nested loops (matrix multiplication)
2^n (exponential)       growth is extremely rapid and possibly
                        impractical (TSP)
ORDER OF GROWTH OF AN ALGORITHM

• Between two algorithms, the one having the smaller order of growth is
considered more efficient (true only for large input sizes).

1 < log n < n^(1/2) < n < n log n < n^2 < n^3 < 2^n
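
To make the ordering concrete, here is a small Python sketch (added for
illustration) that evaluates these functions at a few sizes; for large n
the ranking above emerges clearly:

import math

# Evaluate the common complexity functions at increasing input sizes.
funcs = [
    ("log n",   lambda n: math.log2(n)),
    ("n^(1/2)", lambda n: n ** 0.5),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("n^3",     lambda n: n ** 3),
    ("2^n",     lambda n: 2 ** n),
]

for n in (10, 20, 30):
    print(f"n = {n}:")
    for name, f in funcs:
        print(f"  {name:8s} {f(n):,.0f}")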
TIME EFFICIENCY OF RECURSIVE ALGORITHMS.

Recursion is the ability of an algorithm to call itself until a certain
condition, the base condition, is met.
A recursive algorithm must satisfy:
1. Base condition - a case for which the algorithm does not call itself
and can be evaluated without recursion.
2. Each recursive call must be to a case that eventually leads towards
the base condition.
TIME EFFICIENCY OF RECURSIVE ALGORITHMS.
• The running time of a recursive algorithm can be expressed by means
of a recurrence relation.
• A recurrence relation is a recursive function of the size of the
problem, generally denoted by T(n), where n is the size of the input.
• The equation must cover both the base case and the recursive case.
• The portion of the recurrence relation that does not contain T is the
base case, and the portion that contains T is the recursive case.
Eg:
T(n) = d          ; n = 1 (base case)
     = c + T(n-1) ; n > 1 (recursive case)
TIME EFFICIENCY OF RECURSIVE ALGORITHMS.

Different methods to solve recurrences:
1. Iteration Method
2. Recursion Tree
3. Master theorem
TIME EFFICIENCY OF RECURSIVE ALGORITHMS.

1. Iteration Method
• In this method, we iteratively "unfold" the recurrence until we "see
the pattern".
• Solve the recurrence using back substitution: convert the recurrence
into a summation and solve it using a known series (expand the
substitution until we reach the base condition). See the worked example
below.
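
As a short worked example (added here; it uses the recurrence from the
earlier slide, T(n) = c + T(n-1) with T(1) = d):

T(n) = c + T(n-1)
     = c + c + T(n-2)
     = 2c + T(n-2)
     = ...
     = (n-1)c + T(1)
     = (n-1)c + d

so T(n) = O(n).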
TIME EFFICIENCY OF RECURSIVE ALGORITHMS.
2. Recursion Tree
• In this method the recurrence is converted into a tree.
• In the tree, each node represents the cost incurred at the various
levels of recursion, ie, the cost of the subproblems in the set of
recursive function invocations.
• The sum of the costs of the nodes at each level is calculated (the
per-level cost).
• The total cost over all the levels of the recursion tree (the sum of
the per-level costs) gives the time.
TIME EFFICIENCY OF RECURSIVE ALGORITHMS.
Recursion Tree
If the recurrence is of the form
T(n) = a T(n/b) + f(n)
then, a - the number of subproblems
n/b - the size of each subproblem
f(n) - the cost incurred for dividing and combining (the root of the
tree)
Each node of the tree is expanded until we reach the base condition.
TIME EFFICIENCY OF RECURSIVE ALGORITHMS.
Recursion Tree
Eg: T(n) = 3T(n/4) + cn^2

Assume 4^k = n, ie, k = log4 n.

Total cost = cn^2 + 3c(n/4)^2 + 9c(n/16)^2 + 27c(n/64)^2 + ... + 3^k c(n/4^k)^2
           = cn^2 (3^0/16^0 + 3^1/16^1 + 3^2/16^2 + 3^3/16^3 + ... + 3^k/16^k)
           = cn^2 { (1 - (3/16)^(k+1)) / (1 - 3/16) }
           = cn^2 { (1 - 0.1875^(k+1)) / 0.8125 }

0.1875^(k+1) tends to zero as n -> infinity, so

T(n) = O(n^2)
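
As a quick numerical check (a Python sketch added here, assuming c = 1
and a base case T(1) = 1), evaluating the recurrence directly shows
T(n)/n^2 approaching the constant 1/0.8125 ≈ 1.23 predicted by the
series:

from functools import lru_cache

# T(n) = 3*T(n/4) + n^2 with c = 1; T(1) = 1 (assumed base case).
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 3 * T(n // 4) + n * n

for n in (4, 64, 1024, 16384):
    print(n, T(n) / n**2)   # ratio settles near 1.23 -> T(n) = O(n^2)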
