Module 1 - DAA

The document provides an overview of algorithms, including definitions, design goals, and the distinction between algorithms and programs. It outlines the characteristics of algorithms, various design techniques, and the importance of analyzing their efficiency in terms of time and space complexity. Additionally, it discusses applications of algorithms and various problem types, along with asymptotic notations for performance evaluation.

CSE3004

Design and Analysis of Algorithm


– by
Dr. Chilukamari Rajesh
Assistant Professor, VIT-AP
Ph.D. (NIT Warangal)
M.Tech. (NIT Sikkim)
Algorithm
● Al-Khwarizmi, Persian scholar
● Father of Algorithms and Algebra
● Systematically studied how to solve linear & quadratic
equations
Algorithm Definitions
● An algorithm is a sequence of computational steps that transform the input into the
output.
● An algorithm is a sequence of unambiguous instructions for obtaining a required
output for any legitimate input in a finite amount of time.
● A finite set of instructions that specify a sequence of operations to be carried out in
order to solve a specific problem or class of problems is called an algorithm.
● An algorithm is an abstraction of a program to be executed on a physical machine.
● An algorithm is defined as a set of instructions to perform a specific task within a
finite number of steps.
● An algorithm is defined as a step-by-step procedure to perform a specific task within a
finite number of steps.
Algorithm Design Goals
● To design fast, efficient and effective solutions to a problem domain.
● Make it easy to understand, code and debug.
● To use the computer's resources efficiently.
● The goal is not limited to reducing cost and time, but extends to enhancing
scalability, reliability and availability.
● If we have an algorithm for a specific problem, then we can
implement it in any programming language; that is, the
algorithm is independent of any particular programming language.
Algorithm vs Program
● The design phase produces an algorithm and the implementation
phase produces a program.
● A program is the concrete expression of an Algorithm in a
programming language.
Algorithmic Thinking
Notion of an Algorithm
In addition, all algorithms must satisfy the following characteristics:
1. Input: Zero or more quantities which are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each step is clear and unambiguous.
4. Finiteness: Execution terminates after a finite number of steps for all
possible inputs.
5. Effectiveness: Every step must be executable, given enough
information to produce results.
Quality of an Algorithm
Algorithm Example: Finding the largest number among n numbers
Input: Given n numbers and their value.
Output: The largest value number.
Step 1: Take the first value as the largest value, denoted by MAX
Step 2: Let R denote the number of remaining values, i.e. R = n-1
Step 3: If R != 0, the list is not yet exhausted;
therefore, look at the next number, called NEW
Step 4: If NEW is greater than MAX, then replace
MAX with the value of NEW
Step 5: Decrement R by 1 (i.e. R = R-1)
Step 6: Repeat Step 3 to Step 5 until R becomes zero
Step 7: Print MAX
Step 8: Stop
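A direct C translation of Steps 1-8 (a minimal sketch; the array name and sample values are illustrative, not from the slides):

#include <stdio.h>

/* Find the largest of n numbers, following Steps 1-8 above. */
int find_max(const int a[], int n) {
    int max = a[0];          /* Step 1: take the first value as MAX    */
    int r = n - 1;           /* Step 2: R = number of remaining values */
    int i = 1;
    while (r != 0) {         /* Steps 3 and 6: repeat until R is zero  */
        int new_val = a[i];  /* Step 3: look at the next number, NEW   */
        if (new_val > max)   /* Step 4: replace MAX if NEW is greater  */
            max = new_val;
        r = r - 1;           /* Step 5: decrement R                    */
        i = i + 1;
    }
    return max;
}

int main(void) {
    int a[] = {12, 45, 7, 23};
    printf("MAX = %d\n", find_max(a, 4));  /* Step 7: prints MAX = 45 */
    return 0;
}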
Algorithm Design
and
Analysis Process
1. Understanding the Problem
● This is the first step in designing an algorithm.
● Read the problem's description carefully to understand the problem
statement completely.
● Ask questions to clarify any doubts about the problem.
● Identify the problem type and use an existing algorithm to find a solution.
● The input (instance) to the problem and the range of the input are fixed.
2. Decision Making
a. Ascertaining the Capabilities of the Computational Device
● Sequential algorithms (random-access machine (RAM), Turing machine)
● Parallel algorithms
b. Choosing between Exact and Approximate Problem Solving
● Exact algorithm - solves the problem exactly and produces the correct result.
● Approximation algorithm - produces an approximate answer for hard problems,
e.g., extracting square roots, solving nonlinear equations, and evaluating
definite integrals.
c. Algorithm Design Techniques
● Choice of proper data structure
● Brute force, divide and conquer (D&C), dynamic programming (DP), greedy technique, etc.
3. Approaches to represent an algorithm
● Step-by-step instructions
● Pseudocode
Can also be represented using:
● UML diagrams
● FSM diagrams
4. Proving an Algorithm’s Correctness
● An algorithm must yield a required result for every legitimate input in
a finite amount of time.
● Mathematical induction - a common technique for proving
correctness.
● The notion of correctness for approximation algorithms is less
straightforward than it is for exact algorithms.
● The error produced by the algorithm should not exceed a predefined
limit.
5. Analyzing an Algorithm
● For an algorithm, the most important property is efficiency, of two kinds:
○ Time efficiency: indicating how fast the algorithm runs, and
○ Space efficiency: indicating how much extra memory it uses.
● Other factors to analyze an algorithm are:
○ CPU registers consumption
○ Network consumption
○ Power consumption
○ Simplicity of an algorithm
○ Generality of an algorithm
6. Coding an Algorithm
● The coding / implementation of an algorithm is done in a suitable
programming language like Python, C++, or Java.
● The transition from an algorithm to a program can be done either
incorrectly or very inefficiently.
DAA APPLICATIONS
● Information Retrieval: Search engines like Google, Bing
● Sorting and Searching: Database management systems
● Network Routing: Routing packets in computer networks
● Data Compression: File compression utilities (e.g., ZIP, RAR)
● Cryptography: Secure communication systems (RSA and AES)
● Machine Learning: pattern recognition, clustering, Training and
testing models
● Computer Graphics, Robotics, Game theory
Important Problem-Types
● Sorting
● Searching
● String processing: pattern matching
● Graph problems: graph traversal, shortest-path and topological sorting
● Combinatorial problems: traveling salesman problem, graph-colouring
problem
● Geometric problems: closest-pair problem, convex-hull problem
● Numerical problems: solving equations, computing definite integrals,
and evaluating functions
Algorithm Design Techniques
Divide and Conquer approach
● In this approach, we divide the input into smaller subproblems and then solve
those subproblems to arrive at the final result.
Greedy Algorithm
● In this approach, we take the best choice at the present step without considering
that a different choice might lead to a better result later.

Recursive Algorithm
● In this approach, a function calls itself again and again. It is important to have a
base condition or exit condition to come out of the recursion, otherwise it will
recurse forever. Care must be taken while using recursion: since it uses extra stack
space, it might result in an MLE [Memory Limit Exceeded] error for some problems
in competitive programming.
Algorithm Design Techniques
Brute-force Algorithm
● In this approach we try all the ways to solve a problem. As a problem can have
multiple solutions, this approach is guaranteed to reach a correct result, but it
sacrifices efficiency.

Dynamic Programming
● DP is used for optimization problems. DP algorithms are recursive in nature. In
DP, we store the previous results and then use those results to find the next
result. The results are usually stored in the form of a matrix (table); see the
sketch after this list.

Backtracking
● This approach explores the state space of partial solutions, extending a partial
solution step by step and backtracking (abandoning it) as soon as it can no longer
be completed to a valid solution.
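The "store previous results" idea of dynamic programming described above can be sketched with Fibonacci numbers (the choice of problem and names are illustrative, not from the slides); here the results are kept in a one-dimensional table rather than a matrix:

#include <stdio.h>

/* Dynamic programming: store previously computed results in a table
   and reuse them instead of recomputing. Valid for 0 <= n <= 90
   (fib(90) still fits in a long long).                             */
long long fib(int n) {
    long long table[91];
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];  /* reuse stored results */
    return table[n];
}

int main(void) {
    printf("fib(40) = %lld\n", fib(40));  /* prints 102334155 */
    return 0;
}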
Algorithm Design Techniques
Branch and Bound
● It is similar to backtracking since it also uses the state space tree. It is used for
solving optimization (maximization and minimization) problems.

Randomized Algorithm
● Randomized algorithms use random numbers or random choices to decide their next step.
We use these algorithms to reduce space and time complexity.

Approximation Algorithm
● These algorithms are designed to produce approximate solutions to problems that are
not known to be solvable exactly in polynomial time, such as NP-complete problems.
Because these problems arise frequently in the real world, it becomes important to
solve them using a different approach.
Analysis of Algorithm
● Only if the algorithm gives correct outputs for all input instances
can we say that the algorithm is correct.
● Analysis of algorithms is the process of investigating an algorithm's
efficiency with respect to two resources:
○ Running time - time needed for successful execution of the algorithm.
○ Memory space - amount of space needed for successful execution
of the algorithm.
Performance analysis of Algorithm
● The efficiency of an algorithm can be decided by measuring the
performance of the algorithm:
● Space Complexity: the amount of memory an algorithm needs to run to
completion
● Time Complexity: the amount of computer time an algorithm needs to
run to completion
Performance evaluation
● A Priori Estimation
○ Performance analysis of algorithm
○ Machine, language independent

● A Posteriori Testing
○ Performance measurement of program
○ Machine, language dependent
Analysis
● Efficiency is measured based on time and space complexity.
● Judging efficiency by measured running time alone is not a good approach:
measuring the exact running time is not practical at all.
● Running time generally depends on the size of the input.
● If the size of the input is n, then the time complexity is a function f(n) of n;
f(n) represents the number of instructions executed for input size n.
Order of Growth / Rate of Growth
● Order of growth (OOG) describes how the running time increases when you
increase the input size.
● Order of growth captures the behavior of a process (algorithm).
● It allows us to compare the relative performance of alternative
algorithms.
● The algorithm with the lower order of growth is the better/more efficient
algorithm among the available algorithms.
● We measure the performance of an algorithm in relation to the input
size n.
● Basic orders of growth in increasing order:
1 < log n < n < n log n < n² < n³ < 2ⁿ < n!
Order of Growth: Basic operation
● The basic operation is the operation/step with the highest operation
count.
● Here, we consider only basic operations rather than all operations.
● The basic operation decides the OOG of that algorithm.
● Example:
○ Sorting/searching problem - comparison operation
○ Matrix multiplication - multiplication of two numbers
○ GCD - division operation
Order of Growth Example
● To find the OOG, we remove lower-order terms and constants in the given T(n).
T(n) = n² + 3n
● The term 3n becomes insignificant compared to n² when n is very
large.
● The function T(n) is said to be asymptotically equivalent to n²,
so OOG = n² and T(n) ≈ n².
Order of Growth Example

(i) T(n) = 200 = 200·n⁰, so T(n) ≈ 1 (constant).
(ii) T(n) = 7 log n + 5, so T(n) ≈ log n.
Finding running time of an Algorithm
● Running time depends upon
○ Input Size
○ Nature of Input
● Time grows with size of input, so running time of an algorithm is
usually measured as function of input size n.
● Running time is measured in terms of number of steps/primitive
operations (expression evaluation, assign a value to a variable,
indexing into an array, calling a method, returning from a method)
performed.
● This measure is independent of the machine and OS.
Types of Methods to Measure the Running Time
★ Step/Frequency Count
★ Operations Count
★ Asymptotic Notations
★ Recursive Relations
★ Amortized Analysis
Step/Frequency Count Example
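The worked tables on these slides did not survive extraction; the method, shown here on an illustrative array-summing routine (the function and counts are ours, not the slide's), is to count how many times each statement executes:

/* Step/frequency count for summing an array of n elements.
   The per-line frequency counts appear as comments.            */
int sum(const int a[], int n) {
    int s = 0;                      /* executes 1 time           */
    for (int i = 0; i < n; i++)     /* header executes n+1 times */
        s = s + a[i];               /* executes n times          */
    return s;                       /* executes 1 time           */
}
/* Total step count T(n) = 1 + (n+1) + n + 1 = 2n + 3 = O(n). */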
Space Complexity Analysis
● Space complexity is a function that describes the amount of memory (space)
an algorithm consumes as a function of the amount of input to the algorithm.
● Space complexity counts the space used for the input values, as well as the
space used by the variables of the algorithm.
● Auxiliary space is the extra or temporary space used by an
algorithm during execution.
Space Complexity = Auxiliary Space + Space used for input values
● The input space is fixed by the problem, so it is usually not the focus; to make
an algorithm more space-efficient, we aim to reduce the auxiliary space it needs.
Space Complexity Analysis - Example

● The input value n is a single scalar, which takes O(1) space.
● The auxiliary space is also O(1), because i and sum are single scalars too.
● Hence the total space complexity is O(1).
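A routine matching this analysis might look like the following sketch (a reconstruction; the slide's code image was not preserved):

/* Sum of the first n natural numbers.
   n, i and sum each occupy one word, so the total space is
   constant: O(1) input space + O(1) auxiliary space.        */
int sum_to_n(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}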
Step/Frequency Count Example 2

T(n) = 2n² + 2n + 1 = O(n²)
S(n) = 3n² + 1 = O(n²)
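The code for this example did not survive extraction; one loop structure whose counts match these totals is matrix addition (a plausible reconstruction, not necessarily the original slide's code):

/* Matrix addition C = A + B on n x n matrices (C99).          */
void matrix_add(int n, int A[n][n], int B[n][n], int C[n][n]) {
    for (int i = 0; i < n; i++)           /* header: n+1 tests    */
        for (int j = 0; j < n; j++)       /* header: n(n+1) tests */
            C[i][j] = A[i][j] + B[i][j];  /* body: n*n executions */
}
/* T(n) = (n+1) + n(n+1) + n² = 2n² + 2n + 1 = O(n²)
   S(n): three n x n matrices plus a few scalars ≈ 3n² + 1 = O(n²) */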
Asymptotic Analysis
● An algorithm may not have the same performance for different types of
inputs. With the increase in the input size, the performance will change.
● The study of change in performance of the algorithm with the change in
the order of the input size is defined as asymptotic analysis.
● Asymptotic analysis evaluates the growth rate of an algorithm's time
complexity or space complexity as the input size n becomes very large.
● The idea is to analyze how the running time or space consumption
changes relative to the input size n, ignoring constants and lower-order
terms because they become insignificant for large inputs.
Asymptotic Notations
● Asymptotic refers to the behavior of a function or
algorithm as its input size approaches a very large
value (towards infinity).
● A notation denotes a class of functions (time, space, etc.)
● Asymptotic notations are the mathematical notations used to describe the
running time of an algorithm when the input tends towards a particular
value or a limiting value.
● Used to represent a bound for a function (time/space etc)
Asymptotic Notations
● Let f(n) and g(n) be monotonically increasing functions and n the size of
the input; then we have the following notations.
Big-Oh Notation
➔ The Big Oh (O) notation defines an upper bound on the running time
of an algorithm.
➔ O(g(n)) = {f(n): there exist positive
constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}

1. 3n+2 = O(n) as 3n+2 ≤ 4n for all n ≥ 2
2. 3n+3 = O(n) as 3n+3 ≤ 4n for all n ≥ 3
Big-Omega Notation
➔ The Big Omega (Ω) notation represents a lower bound on the running
time of an algorithm.
➔ Ω(g(n)) = {f(n): there exist positive
constants c and n₀ such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀}

3n+2 = Ω(n) as 3n+2 ≥ 3n for all n ≥ 2,
with c = 3 and n₀ = 2
Big-Theta Notation
➔ The Theta (Θ) notation bounds a function from above and below, so it
defines exact asymptotic behavior.
➔ Θ(g(n)) = {f(n): there exist positive
constants c₁, c₂ and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀}

3n+2 = Θ(n) as 3n+2 ≥ 3n and 3n+2 ≤ 4n for all n ≥ 2,
with c₁ = 3, c₂ = 4 and n₀ = 2
Big-Theta Examples
To prove such a statement, we have to find c₁, c₂ and n₀ such that
0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀.

(i) 3n + 2 = Θ(n)
● 0 ≤ c₁g(n) ≤ 3n+2 ≤ c₂g(n)
● 0 ≤ 2n ≤ 3n+2 ≤ 5n, for all n ≥ 1
● So f(n) = Θ(g(n)) = Θ(n) for c₁ = 2, c₂ = 5, n₀ = 1

(ii) 6·2ⁿ + n² = Θ(2ⁿ)
● 0 ≤ c₁g(n) ≤ 6·2ⁿ + n² ≤ c₂g(n)
● 0 ≤ 6·2ⁿ ≤ 6·2ⁿ + n² ≤ 7·2ⁿ, for all n ≥ 4 (since n² ≤ 2ⁿ for n ≥ 4)
Small / Little-Oh Notation
➔ Little-oh (o) notation is used to describe an upper bound that is not
asymptotically tight.
➔ Let f(n) and g(n) be functions that map positive integers to positive
real numbers.
➔ We say that f(n) = o(g(n)) if for any real constant c > 0, there exists
an integer constant n₀ ≥ 1 such that 0 ≤ f(n) < c·g(n) for every integer n ≥ n₀.
Small / Little-Omega Notation
➔ Little omega (ω) notation is used to denote a lower bound that is not
asymptotically tight.
➔ Let f(n) and g(n) be functions that map positive integers to positive
real numbers.
➔ f(n) = ω(g(n)) if for any real constant c > 0, there exists an integer
constant n₀ ≥ 1 such that f(n) > c·g(n) ≥ 0 for every integer n ≥ n₀.
Example
The OOG of 8n³+9n²+10n+20 is n³, so
8n³+9n²+10n+20 = O(n³) ✓    = o(n³) ✗
               = O(n⁴) ✓
               = O(n²) ✗    = o(n⁴) ✓
               = Ω(n³) ✓    = ω(n³) ✗
               = Ω(n) ✓
               = Ω(n⁴) ✗    = ω(n²) ✓
               = Θ(n³) ✓
               = Θ(n⁴) ✗
Example
Which is smaller?
f(n) = n(log n)¹⁰ , g(n) = n² log n
➔ Apply log
➔ log(n(log n)¹⁰), log(n² log n)
➔ log n + log((log n)¹⁰), log n² + log log n
➔ log n + 10 log log n, 2 log n + log log n
➔ log n + 10 log log n is smaller, so f(n) is smaller
Example
Which is smaller?
f(n) = 2^(log n) , g(n) = n^√n
➔ Apply log
➔ log n · log 2, √n · log n
➔ log n, √n log n
➔ log n is smaller, so f(n) is smaller
Example
Which is smaller?
f(n) = n^(log n) , g(n) = 2^(√n)
➔ Apply log
➔ log n · log n, √n · log 2
➔ log²n, √n
➔ Apply log again
➔ log(log²n), log(n^(1/2))
➔ 2 log log n, ½ log n
➔ 2 log log n is smaller, so f(n) is smaller
Example
Which is smaller?
f(n) = n^(log n) , g(n) = n^√n
➔ Apply log
➔ log n · log n, √n · log n
➔ log n, √n (dividing both by log n)
➔ Apply log again
➔ log log n, log(n^(1/2))
➔ log log n, ½ log n
➔ log log n is smaller, so f(n) is smaller
Example
Which of each pair is smaller?
➔ 2ⁿ , n²                 n² is smaller
➔ n log n , n^(log n)     n log n is smaller
➔ √(log n) , log log n    log log n is smaller
➔ n^√n , n^(log n)        n^(log n) is smaller
➔ (log n)²⁰⁰ , √n         (log n)²⁰⁰ is smaller
➔ n¹⁰⁰⁰⁰ , 1.001ⁿ         n¹⁰⁰⁰⁰ is smaller
Example
Find the order of growth of: 2ⁿ , n^(3/2) , n log n , n^(log n)
➔ In increasing order: n log n , n^(3/2) , n^(log n) , 2ⁿ
Intuition for Asymptotic Notation
● f(n) = O(g(n)) if f(n) is asymptotically less than or equal to g(n)
● f(n) = Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
● f(n) = Θ(g(n)) if f(n) is asymptotically equal to g(n)
● f(n) = o(g(n)) if f(n) is asymptotically strictly less than g(n)
● f(n) = ω(g(n)) if f(n) is asymptotically strictly greater than g(n)
Properties of asymptotic notations
● Transitivity: if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n));
similarly for Ω, Θ, o and ω.
● Reflexivity: f(n) = O(f(n)); likewise f(n) = Ω(f(n)) and f(n) = Θ(f(n)).
● Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
● Transpose symmetry: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).
Running time
● The number of times the basic operation/step is executed on a particular input.
Best case Running time
● The minimum number of times the basic operation is executed on a given
parameters/input.
Average case Running time
● The average number of times the basic operation is executed on a given
parameters/input.
Worst case Running time
● The maximum number of times the basic operation is executed on a given
parameters/input.
Complexities
Worst-case complexity
● The worst-case complexity is the complexity of an algorithm when the input
is the worst possible with respect to complexity.
Best-case complexity
● The best-case complexity is the complexity of an algorithm when the input
is the best possible with respect to complexity.
Average Complexity
● The average complexity is the
complexity of an algorithm that is
averaged over all possible inputs
(assuming a uniform distribution over
the inputs).
Worst-Case, Best-Case, and Average-Case Efficiencies
● Consider the Sequential (Linear) Search algorithm
Input: An array A[0..n-1] and a search key K
Output: The index of the first element in A that matches K, or -1 if there are no
matching elements

SequentialSearch(A[0..n-1], K)
    i ← 0
    while i < n and A[i] ≠ K do
        i ← i + 1
    if i < n
        return i
    else
        return -1

● Clearly, the running time of this algorithm can be quite different for the
same list size n.
● In the worst case, there is no matching element, or the first matching
element is the last one on the list.
● In the best case, the first element on the list matches the search key.
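The pseudocode translates directly to C (a minimal sketch):

/* Sequential search: return the index of the first element of
   A[0..n-1] equal to K, or -1 if K is not present.            */
int sequential_search(const int A[], int n, int K) {
    int i = 0;
    while (i < n && A[i] != K)  /* basic operation: the comparison */
        i++;
    return (i < n) ? i : -1;
}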
Worst-Case, Best-Case, and Average-Case Efficiencies
Worst-case efficiency
● The worst-case efficiency of an algorithm is its efficiency for the worst case
input of size n.
● The algorithm runs the longest among all possible inputs of that size.
● For sequential search on an input of size n, the running time is Cworst(n) = n.
Best-case efficiency
● The best-case efficiency of an algorithm is its efficiency for the best-case
input of size n.
● The algorithm runs the fastest among all possible inputs of that size n.
● In sequential search, the best case is when the first element of the list
equals the search key; then the running time is Cbest(n) = 1.
Worst-Case, Best-Case, and Average-Case Efficiencies
Average-case efficiency
● The average-case efficiency lies between best case and worst case.
● To analyze the average-case efficiency, we must make some
assumptions about possible inputs of size n.
● The standard assumptions are that
○ the probability of a successful search is equal to p (0 ≤ p ≤ 1), and
○ the probability of the first match occurring in the i-th position of the list
is the same for every i.
● Expected count = (sum of the counts over all possible cases) / (number of cases).
For a successful search (p = 1):
○ Cavg(n) = (1+2+...+n)/n = (n+1)/2
Why Worst-case Analysis?
● Worst case running time: It is the longest running time for any input of size
n. We usually concentrate on finding only the worst-case running time, that
is, the longest running time for any input of size n, because of the following
reasons:
○ The worst-case running time of an algorithm gives an upper bound on
the running time for any input. Knowing it provides a guarantee that the
algorithm will never take any longer.
○ For some algorithms, the worst case occurs fairly often. For example, in
searching a database for a particular piece of information, the searching
algorithm's worst case will often occur when the information is not
present in the database.
● The “average case” is often roughly as bad as the worst case.
Some Examples
● Logarithmic algorithm - O(log n) - Binary Search.
● Linear algorithm - O(n) - Linear Search.
● Superlinear algorithm - O(n log n) - Heap Sort, Merge Sort.
● Polynomial algorithm - Strassen's Matrix Multiplication [O(n^2.81)],
Bubble Sort [O(n²)], Selection Sort [O(n²)], Insertion Sort [O(n²)],
Bucket Sort [O(n²) worst case].
● Exponential algorithm - O(2ⁿ) - Tower of Hanoi.
● Factorial algorithm - O(n!) - Determinant Expansion by Minors,
Brute-force search algorithm for the Traveling Salesman Problem.
Analysis of Algorithms
1. Iterative / Non-recursive algorithm
2. Recursive algorithm
Analysis of Iterative/Non-recursive Algorithms
Analyzing the Time Complexity of non-recursive Algorithms
1. Decide the input size n.
2. Identify basic operation in the algorithm.
3. Determine worst, average, and best cases for input of size n.
4. Set up a sum expressing the number of times the basic operation is
executed.
5. Determine the OOG of the basic operation count
Time complexity of Iterative algorithms

(1) for(i=1; i*i<n; i++){ }
    i runs while i² < n, i.e. about √n times → O(√n)

(2) for(i=1; i<=n; i++)
      for(j=1; j<=i; j++)
        for(k=1; k<=100; k++){ }
    the body runs 100·(1+2+...+n) = 100·n(n+1)/2 times → O(n²)

(3) for(i=n; i>=1; i=i/2){ }
    i is halved each time → O(log₂n)

(4) for(i=n/2; i<=n; i++)
      for(j=1; j<=n/2; j++)
        for(k=1; k<=n; k=k*2){ }
    (n/2) · (n/2) · log₂n iterations → O(n²log₂n)

(5) for(i=1; i<n; i=i*k){ }
    i is multiplied by k each time → O(log_k n)

(6) for(i=1; i<=n; i++)
      for(j=1; j<=n; j=j+i){ }
    the inner loop runs about n/i times, so the total is
    n/1 + n/2 + ... + n/n = n·(1 + 1/2 + ... + 1/n) ≈ n·log n → O(n log n)
Time complexity of Iterative algorithms

(1) while(s<=n) { i++; s=s+i; }
    s grows as 1+2+...+i = i(i+1)/2, which exceeds n after about √n
    iterations → O(√n)

(2) while(i<n) i=i*2;
    i doubles each time → O(log₂n)

(3) while(n>1) n=n/2;
    n is halved each time → O(log₂n)

(4) while(m!=n) {          // subtraction-based GCD
      if(m>n) m=m-n;
      else    n=n-m;
    }
    in the worst case (e.g. gcd(n,1)) one operand decreases by only 1
    per iteration → O(n)
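The last fragment is the subtraction-based GCD; as a complete C routine (a sketch, assuming positive inputs):

/* Subtraction-based GCD (the m != n loop above).
   Worst case, e.g. gcd(n, 1), takes O(n) iterations. */
int gcd(int m, int n) {
    while (m != n) {
        if (m > n) m = m - n;
        else       n = n - m;
    }
    return m;
}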
Maximum Element

Basic operation: the comparison A[i] > max
Time Complexity: O(n)
Space Complexity: O(n) (for the input array)
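The slide's pseudocode was an image; a C sketch consistent with the stated analysis:

/* MaxElement: the basic operation A[i] > max executes n-1 times,
   so T(n) = O(n); the input array gives S(n) = O(n).             */
int max_element(const int A[], int n) {
    int max = A[0];
    for (int i = 1; i < n; i++)
        if (A[i] > max)   /* basic operation */
            max = A[i];
    return max;
}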
Element Uniqueness Problem

Basic operation: the comparison A[i] = A[j]
Time Complexity: O(n²)
Space Complexity: O(n)
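Again the pseudocode was an image; a C sketch matching the analysis:

#include <stdbool.h>

/* UniqueElements: check whether all elements of A[0..n-1] are
   distinct. The comparison A[i] == A[j] executes O(n²) times
   in the worst case.                                           */
bool unique_elements(const int A[], int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (A[i] == A[j])   /* basic operation */
                return false;
    return true;
}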
Matrix Multiplication

Basic operation: the multiplication A[i,k] · B[k,j]
Time Complexity: O(n³)
Space Complexity: O(n²)
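A C sketch of the classical algorithm behind this analysis (reconstructed, as the slide's code was an image):

/* MatrixMultiplication: C = A * B for n x n matrices (C99).
   The multiplication A[i][k] * B[k][j] executes n³ times, so
   T(n) = O(n³); the matrices give S(n) = O(n²).               */
void matrix_multiply(int n, double A[n][n], double B[n][n], double C[n][n]) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];  /* basic operation */
        }
}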
Counting Binary Bits

Basic operation: the comparison n > 1
Time Complexity: O(log n)
Space Complexity: O(1)
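A C sketch of the standard bit-counting loop this analysis refers to (reconstructed; valid for n ≥ 1):

/* BinaryDigits: count the bits in the binary representation of n.
   The loop halves n each time, so the comparison n > 1 executes
   about log₂n times: T(n) = O(log n), S(n) = O(1).               */
int binary_digits(int n) {
    int count = 1;
    while (n > 1) {     /* basic operation */
        count++;
        n = n / 2;
    }
    return count;
}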
Analysis of Recursive Algorithms
● The process in which a function calls itself directly or indirectly is
called recursion, and the corresponding function is called a recursive
function.
Analysis of Recursive Algorithms
Analyzing the Time Complexity of recursive Algorithms
1. Decide on parameter(s) indicating input size.
2. Identify the basic operation.
3. Check whether the number of times the basic operation is executed depends
only on the input size.
4. Set up a recurrence relation expressing the number of times algorithm's basic
operation is executed.
5. Determine the order of growth of the basic operation count.
Recursion Examples
Running time analysis of the fact() Algorithm

function fact(n)
    if (n == 0)
        return 1
    else
        return fact(n-1)*n

● Input size parameter - n
● Basic operation - multiplication
● Let T(n) be the number of times the basic operation is
performed when the input size is n.
● Clearly the basic operation count T(n) depends only on
the input size n.

T(n) = T(n-1)+1 ; n > 0
     = 1        ; otherwise
i.e. T(n) = T(n-1)+1 for n ≥ 1, with T(0) = 1.
Recursion Examples
Running time analysis of the sample() Algorithm

function sample(n)
    if (n > 0)
        for (i = 0; i < n; i++)
            printf(n);
        sample(n-1);

● Input size parameter - n
● Basic operation - print
● Let T(n) be the number of times the basic operation is
performed when the input size is n.
● Clearly the basic operation count T(n) depends only on
the input size n.

T(n) = T(n-1)+n ; n > 0
     = 1        ; otherwise
i.e. T(n) = T(n-1)+n for n ≥ 1, with T(0) = 1.
Recursion Examples

function sample1(n)
    if (n > 0)
        for (i = 1; i < n; i = i*2)
            printf(n);
        sample1(n-1);

T(n) = T(n-1)+log n ; n > 0
     = 1            ; otherwise

function sample2(n)
    if (n > 0)
        printf(n);
        sample2(n-1);
        sample2(n-1);

T(n) = 2T(n-1)+1 ; n > 0
     = 1         ; otherwise
Analysis of Recursive Algorithms
1. The iteration / back substitution method
Expand (iterate) the recurrence and express it as a summation of
terms depending only on n and the initial conditions.
2. Recursion Tree Method
Uses a recursion tree to trace the time complexity of an algorithm.
3. Master’s Theorem
Back substitution method Example
T(n) = T(n-1)+1 ; n ≥ 1
     = 1        ; n = 0

T(n) = T(n-1)+1                        (1)
T(n-1) = T(n-2)+1                      (2)
T(n-2) = T(n-3)+1                      (3)
Substituting (2) in (1):
T(n) = (T(n-2)+1)+1 = T(n-2)+2         (4)
Substituting (3) in (4):
T(n) = (T(n-3)+1)+2 = T(n-3)+3
...
T(n) = T(n-k)+k
If n-k = 0 then k = n, so
T(n) = T(n-n)+n = T(0)+n = 1+n
By solving, we get T(n) = 1+n, and the order of growth of T(n) is O(n).
Back substitution method Example
T(n) = T(n-1)+log n ; n ≥ 2, T(1) = 0

T(n) = T(n-1)+log n                     with T(n-1) = T(n-2)+log(n-1)
T(n) = T(n-2)+log(n-1)+log n            with T(n-2) = T(n-3)+log(n-2)
T(n) = T(n-3)+log(n-2)+log(n-1)+log n
...
T(n) = T(n-i)+log(n-(i-1))+log(n-(i-2))+...+log n

At i = n-1:
T(n) = T(1)+log 2+log 3+...+log n
T(n) = log 1+log 2+log 3+...+log n = log(n!)
T(n) = Θ(n log n)
Back substitution method Example
T(n) = 2T(n-1)+1 ; n ≥ 1, T(0) = 1

T(n) = 2T(n-1)+1                           with T(n-1) = 2T(n-2)+1
T(n) = 2(2T(n-2)+1)+1 = 2²T(n-2)+2+1       with T(n-2) = 2T(n-3)+1
T(n) = 2²(2T(n-3)+1)+2+1 = 2³T(n-3)+2²+2¹+2⁰
...
T(n) = 2ᵏT(n-k)+2ᵏ⁻¹+2ᵏ⁻²+...+2²+2¹+2⁰

At k = n:
T(n) = 2ⁿT(0)+2ⁿ⁻¹+2ⁿ⁻²+...+2²+2¹+2⁰
T(n) = 1+2+2²+2³+...+2ⁿ          GP series: a+ar+ar²+...+arᵏ = a(rᵏ⁺¹-1)/(r-1)
T(n) = 1·(2ⁿ⁺¹-1)/(2-1) = 2ⁿ⁺¹-1
T(n) = O(2ⁿ)
Back substitution method Example
T(n) = 2T(n-1)+n ; n > 1, T(1) = 1

T(n) = 2T(n-1)+n                                with T(n-1) = 2T(n-2)+(n-1)
T(n) = 2(2T(n-2)+(n-1))+n = 2²T(n-2)+2(n-1)+n   with T(n-2) = 2T(n-3)+(n-2)
T(n) = 2²(2T(n-3)+(n-2))+2(n-1)+n
     = 2³T(n-3)+2²(n-2)+2¹(n-1)+2⁰(n-0)
...
T(n) = 2ᵏT(n-k)+2ᵏ⁻¹(n-(k-1))+2ᵏ⁻²(n-(k-2))+...+2⁰(n-0)
T(n) = 2ᵏT(n-k)+2ᵏ⁻¹(n-k+1)+2ᵏ⁻²(n-k+2)+...+2⁰n

At k = n-1:
T(n) = 2ⁿ⁻¹T(1)+2ⁿ⁻²(1+1)+2ⁿ⁻³(1+2)+...+2²(n-2)+2¹(n-1)+n
T(n) = 2ⁿ⁻¹·1+2ⁿ⁻²·2+2ⁿ⁻³·3+...+2²(n-2)+2¹(n-1)+n

Multiply by 2:
2T(n) = 2ⁿ·1+2ⁿ⁻¹·2+2ⁿ⁻²·3+...+2³(n-2)+2²(n-1)+2n

Subtracting, T(n) = 2T(n)-T(n):
T(n) = 2ⁿ+2ⁿ⁻¹(2-1)+2ⁿ⁻²(3-2)+...+2²((n-1)-(n-2))+2(n-(n-1))-n
T(n) = 2ⁿ+2ⁿ⁻¹+2ⁿ⁻²+...+2²+2-n       GP series with a = 2, r = 2:
                                      2+2²+...+2ⁿ = 2(2ⁿ-1)/(2-1)
T(n) = 2(2ⁿ-1)-n = 2ⁿ⁺¹-n-2
T(n) = O(2ⁿ)
Recursion Tree method Example
T(n) = 2T(n/2)+c ; n > 1
     = c         ; n = 1

The recursion tree has levels with costs c, 2c, 4c, ..., and at the
bottom n leaves T(n/n) = T(1), each of cost c.

Total cost = c+2c+4c+...+nc; assume n = 2ᵏ
           = c(1+2+4+...+2ᵏ)
           = c((2ᵏ⁺¹-1)/(2-1))
           = c(2ᵏ⁺¹-1) = c(2n-1) = O(n)
Recursion Tree method Example
T(n) = 2T(n-1)+1 ; n ≥ 1, T(0) = 1

The tree has levels with costs 1, 2, 2², ..., down to depth n:
T(n) = 1+2+2²+2³+...+2ⁿ          GP series: a+ar+ar²+...+arᵏ = a(rᵏ⁺¹-1)/(r-1)
T(n) = 1·(2ⁿ⁺¹-1)/(2-1) = 2ⁿ⁺¹-1
T(n) = O(2ⁿ)
Analysis of Recursive Algorithm Examples
★ T(n) = T(n-1)+1       O(n)
★ T(n) = T(n-1)+n       O(n²)
★ T(n) = T(n-1)+log n   O(n log n)
★ T(n) = 2T(n-1)+1      O(2ⁿ)
★ T(n) = 3T(n-1)+1      O(3ⁿ)
★ T(n) = 2T(n-1)+n      O(2ⁿ)  (exactly 2ⁿ⁺¹-n-2)
Master’s theorem
● Master method provides a cookbook method (direct way) for solving
algorithmic recurrences of the form T(n) = aT(n/b)+f(n) ; a≥1, b>1
● A master recurrence describes the running time of a
divide-and-conquer algorithm that divides a problem of size n into a
subproblems, each of size n/b < n.
● The algorithm solves the a subproblems recursively, each in T(n/b)
time.
● The driving function f(n) encompasses the cost of dividing the
problem before the recursion, as well as the cost of combining the
results of the recursive solutions to subproblems.
Master’s theorem

T(n) = aT(n/b)+f(n)

Where,
● n is the size of the function or input size
● a is the number of subproblems in recursion (a≥1)
● n/b is the size of each subproblem (b>1)
● f(n) is work done outside the recursive call (always positive)
Master’s theorem
T(n) = aT(n/b)+f(n), a ≥ 1, b > 1, f(n) > 0
n^(log_b a) is the watershed function; f(n) is the driving function.

Case 1: If there exists a constant ε > 0 such that
f(n) = O(n^(log_b a - ε)), then T(n) = Θ(n^(log_b a)).
Case 2: If there exists a constant k ≥ 0 such that
f(n) = Θ(n^(log_b a)·logᵏn), then T(n) = Θ(n^(log_b a)·logᵏ⁺¹n).
Case 3: If there exists a constant ε > 0 such that
f(n) = Ω(n^(log_b a + ε)), and if f(n) additionally satisfies the regularity
condition [a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently
large n], then T(n) = Θ(f(n)).
Master’s Theorem (extended form, used with f(n) = Θ(nᵏ logᵖn) in the examples below)
Case 1: if log_b a > k, then T(n) = Θ(n^(log_b a))
Case 2: if log_b a = k, then
    a. if p > -1, T(n) = Θ(nᵏ logᵖ⁺¹n)
    b. if p = -1, T(n) = Θ(nᵏ log log n)
    c. if p < -1, T(n) = Θ(nᵏ)
Case 3: if log_b a < k, then
    a. if p ≥ 0, T(n) = Θ(nᵏ logᵖn)
    b. if p < 0, T(n) = Θ(nᵏ)
Master’s Theorem Examples

T(n) = 3T(n/2)+n²
a=3, b=2, k=2, p=0; log₂3 ≈ 1.58 < k (Case 3a) → Θ(n²log⁰n) = Θ(n²)

T(n) = 9T(n/3)+n²
a=9, b=3, k=2, p=0; log₃9 = 2 = k (Case 2a) → Θ(n²log n)

T(n) = 2T(n/2)+√n
a=2, b=2, k=½, p=0; log₂2 = 1 > k (Case 1) → Θ(n)

T(n) = 2ⁿT(n/2)+n²       a must be a constant ≥ 1 → not applicable
T(n) = 64T(n/8)-n²log n  f(n) must be positive → not applicable
T(n) = 2T(n)+log n       b must be > 1 → not applicable
Master’s Theorem Examples

T(n) = 27T(n/3)+7n³+8n²-9n+4
a=27, b=3, f(n) = 7n³+8n²-9n+4 = Θ(n³), so k=3, p=0;
log₃27 = 3 = k (Case 2a) → Θ(n³log n)

T(n) = 2T(n/2)+√n
a=2, b=2, k=½; log₂2 = 1 > k (Case 1) → Θ(n)

T(n) = 0.6T(n/2)+n     a must be a constant ≥ 1 → not applicable
T(n) = 4T(n/8)-n+3     f(n) must be positive → not applicable
T(n) = 4T(n-1)+log n   subproblem size must be n/b with constant b > 1 → not applicable
Master’s Theorem Examples

T(n) = 16T(n/4)+n²(log n)³
a=16, b=4, k=2, p=3; log₄16 = 2 = k (Case 2a) → Θ(n²log⁴n)

T(n) = 16T(n/4)+n²/log n
a=16, b=4, f(n) = n²/log n, so k=2, p=-1;
log₄16 = 2 = k (Case 2b) → Θ(n²log log n)

T(n) = 2T(n/2)+n log n
a=2, b=2, k=1, p=1; log₂2 = 1 = k (Case 2a) → Θ(n log²n)
Master’s Theorem Examples

T(n) = T(√n)+1                      T(n) = 2T(√n)+log n
Let n = 2^c, so c = log n           Let n = 2^c, so c = log n
T(2^c) = T(2^(c/2))+1               T(2^c) = 2T(2^(c/2))+c
Let S(c) = T(2^c)                   Let S(c) = T(2^c)
S(c) = S(c/2)+1                     S(c) = 2S(c/2)+c
a=1, b=2, f(c)=1, k=0, p=0          a=2, b=2, f(c)=c, k=1, p=0
log₂1 = 0 = k (Case 2a)             log₂2 = 1 = k (Case 2a)
S(c) = Θ(log c)                     S(c) = Θ(c·log c)
T(2^c) = Θ(log c)                   T(2^c) = Θ(c·log c)
T(n) = Θ(log log n)                 T(n) = Θ(log n·log log n)
Master’s Theorem Examples
★ T(n) = 3T(n/2)+n²         Θ(n²)
★ T(n) = 4T(n/2)+n²         Θ(n²log n)
★ T(n) = √2·T(n/2)+log n    Θ(√n)
★ T(n) = 2T(n/4)+n^0.51     Θ(n^0.51)
★ T(n) = 3T(n/3)+√n         Θ(n)
★ T(n) = 2T(n/2)+n/log n    Θ(n·log log n)
★ T(n) = 0.5T(n/2)+1/n      Not applicable (a < 1)
Recursion
● Many useful algorithms are recursive in structure: to solve a given
problem, they recurse (call themselves) one or more times to handle
closely related subproblems.
● Factorial, Fibonacci
● Tree traversal (Inorder, Preorder, Postorder)
● Graph traversal (DFS)
● D&C
● Dynamic Programming
