Module 1 - DAA
Recursive Algorithm
● In this approach, a function calls itself again and again. It is important to have a
base condition (exit condition) to come out of the recursion; otherwise it will go into an
infinite loop. Care must be taken while using recursion because it uses more stack space,
which might result in an MLE error [Memory Limit Exceeded] on some problems in
competitive programming.
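As a minimal sketch in C (function name is illustrative), a recursive factorial shows the base condition that terminates the chain of recursive calls:

```c
#include <assert.h>

/* Recursive factorial: the base condition n <= 1 stops the
 * recursion; without it the calls would never terminate and
 * the stack would eventually overflow. */
long long factorial(int n) {
    if (n <= 1)                                /* base (exit) condition */
        return 1;
    return (long long)n * factorial(n - 1);   /* recursive call */
}
```

Each call adds a stack frame, which is why deep recursion can exceed memory limits.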
Algorithm Design Techniques
Brute-force Algorithm
● In this approach we try all the possible ways to solve a problem. Since a problem can
have multiple candidate solutions, this approach is guaranteed to reach a correct result,
but it loses efficiency.
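A sketch of the brute-force idea in C (the problem and function name are illustrative): to decide whether any two array elements sum to a target, simply examine every pair, which is correct but costs O(n²):

```c
#include <assert.h>

/* Brute force: try every pair (i, j) until one satisfies the
 * condition. Always reaches the correct answer, but examines
 * up to n(n-1)/2 candidates -> O(n^2) time. */
int has_pair_with_sum(const int a[], int n, int target) {
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] + a[j] == target)
                return 1;   /* found a valid pair */
    return 0;               /* exhausted all candidates */
}
```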
Dynamic Programming
● DP is used for optimization problems. DP algorithms are recursive in nature: we store
the previously computed results and reuse them to find the next result instead of
recomputing them. We usually store the results in the form of a matrix (table).
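As a sketch in C, a memoized Fibonacci stores each previous result in a table and reuses it, turning an exponential recursion into a linear one (function and table names are illustrative):

```c
#include <assert.h>

/* Dynamic programming sketch: previous results are stored in a
 * 1-D table (memo) and reused, so each fib(i) is computed once. */
long long fib(int n) {
    static long long memo[91];            /* fib(90) still fits in 64 bits */
    if (n <= 1)
        return n;                         /* base cases fib(0)=0, fib(1)=1 */
    if (memo[n] != 0)
        return memo[n];                   /* reuse the stored result */
    return memo[n] = fib(n - 1) + fib(n - 2);
}
```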
Backtracking
● This technique is similar to DP, but the main difference is that the result is stored
in a Boolean matrix: partial solutions that cannot be completed are abandoned
(backtracked from).
Algorithm Design Techniques
Branch and Bound
● It is similar to backtracking, since it also uses a state-space tree. It is used for
solving optimization problems, typically minimization problems.
Randomized Algorithm
● Randomized algorithms use random numbers or random choices to decide their next step.
We use these algorithms to reduce time and/or space complexity.
Approximation Algorithm
● These algorithms are designed to find approximate solutions to problems that are not
known to be solvable exactly in polynomial time, known as NP-complete problems. Since
such problems model important real-world tasks, it becomes important to solve them using
a different approach.
Analysis of Algorithm
● An algorithm is correct only if it gives the correct output for every input
instance.
● Analysis of algorithms is the process of investigating an algorithm's
efficiency with respect to two resources:
○ Running time – the time needed for successful execution of the algorithm.
○ Memory space – the amount of space needed for successful execution
of the algorithm.
Performance analysis of Algorithm
● The efficiency of an algorithm can be decided by measuring the
performance of the algorithm.
● A Priori Analysis
○ Theoretical analysis of the algorithm, done before implementation
○ Machine- and language-independent
● A Posteriori Testing
○ Performance measurement of the actual program
○ Machine- and language-dependent
Analysis
● Efficiency is measured based on time and space complexity.
● Judging efficiency by the exact running time is not a good approach:
measuring the exact running time is not practical at all.
● Running time generally depends on the size of the input, e.g.
T(n) = n² + 3n
● The term 3n becomes insignificant compared to n² when n is very
large.
● The function T(n) is said to be asymptotically equivalent to n²:
the order of growth (OOG) is n², i.e. T(n) ≈ n².
Order of Growth Example
(i) 3n + 2 = Θ(n)
● 0 ≤ c₁·g(n) ≤ 3n+2 ≤ c₂·g(n)
● 0 ≤ 2n ≤ 3n+2 ≤ 5n, for all n ≥ 1
● So, f(n) = Θ(g(n)) = Θ(n) for c₁ = 2, c₂ = 5, n₀ = 1
Time complexity of Iterative algorithms
for(i=1; i*i<n; i++){ }                  → O(√n)

for(i=1; i≤n; i++){                      // n times
  for(j=1; j≤i; j++){                    // i times
    for(k=1; k≤100; k++){ } } }          // 100 times
                                         → O(n²)

for(i=n; i≥1; i=i/2){ }                  → O(log₂n)

for(i=n/2; i≤n; i++){                    // n/2 times
  for(j=1; j≤n/2; j++){                  // n/2 times
    for(k=1; k≤n; k=k*2){ } } }          // log₂n times → (n/2)·(n/2)·log₂n
                                         → O(n²log₂n)

for(i=1; i<n; i=i*k){ }                  → O(log_k n)

for(i=1; i≤n; i++){                      // n times
  for(j=1; j≤n; j=j+i){ } }              // ≈ n/i times; Σ n/i ≈ nlogn
                                         → O(nlogn)
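The last claim can be checked empirically: counting how often the inner loop body runs gives roughly the harmonic sum n/1 + n/2 + … + n/n ≈ n·ln n. A sketch in C (function name and the choice n = 1024 are just for the measurement):

```c
/* Count inner-loop executions of:
 *   for(i=1;i<=n;i++) for(j=1;j<=n;j=j+i) { }
 * The inner loop runs about n/i times, so the total is
 * roughly n(1 + 1/2 + ... + 1/n) ~ n ln n. */
long long count_iterations(int n) {
    long long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j += i)
            count++;
    return count;
}
```

For n = 1024 the count is a small multiple of n (about 7.5n), far below the n² ≈ one million iterations a doubly nested loop with a fixed step would take.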
Time complexity of Iterative algorithms
while(s≤n) {                 → O(√n)   (s grows roughly as i²/2)
  i++;
  s=s+i; }

while(i<n)                   → O(log₂n)
  i=i*2;

while(n>1)                   → O(log₂n)
  n=n/2;

while(m!=n) {                → O(n)   (subtraction-based GCD)
  if(m>n)
    m=m-n;
  else
    n=n-m; }
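The last loop is the subtraction-based Euclid GCD from the slide, made runnable in C (assumes positive inputs):

```c
/* Subtraction-based GCD: each pass subtracts the smaller value
 * from the larger until they are equal. In the worst case
 * (e.g. gcd(n, 1)) this takes O(n) iterations. Assumes m, n > 0. */
int gcd(int m, int n) {
    while (m != n) {
        if (m > n)
            m = m - n;
        else
            n = n - m;
    }
    return m;
}
```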
Maximum Element
Basic operation is the comparison a[i] > max
Time Complexity is Θ(n) (n−1 comparisons)
Space Complexity is O(1)
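A sketch of the maximum-element scan in C (function name is illustrative): one comparison per remaining element, so n−1 comparisons in all and constant extra space:

```c
/* Maximum element: scan once, keeping the largest value seen.
 * Basic operation: the comparison a[i] > max (n-1 times).
 * Time Theta(n), extra space O(1). Assumes n >= 1. */
int max_element(const int a[], int n) {
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max)     /* basic operation: comparison */
            max = a[i];
    return max;
}
```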
Counting Binary Bits
T(n) = T(⌊n/2⌋) + 1, T(1) = 1
T(n) = ⌊log₂n⌋ + 1 = Θ(logn)
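A sketch of the bit-counting recursion in C (function name is illustrative): halving n on every call gives the recurrence T(n) = T(n/2) + 1, i.e. Θ(log n):

```c
/* Number of binary digits of n >= 1, computed recursively:
 * bits(n) = bits(n/2) + 1 with bits(1) = 1,
 * so T(n) = T(n/2) + 1 = Theta(log n). */
int bits(int n) {
    if (n == 1)
        return 1;           /* base condition */
    return bits(n / 2) + 1; /* one more digit than n/2 */
}
```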
Recursion Examples
T(n) = T(n-1) + logn
After i substitutions: T(n) = T(n-i) + log(n-i+1) + … + logn
At i = n-1:
T(n) = T(1) + log2 + log3 + … + logn
T(n) = log1 + log2 + log3 + … + logn = log(n!)
T(n) = θ(nlogn)
Back substitution method Example
T(n) = 2T(n-1)+1 ; n ≥ 1, T(0) = 1
T(n) = 2T(n-1)+1
T(n) = 2(2T(n-2)+1)+1            [T(n-1) = 2T(n-2)+1]
T(n) = 2²T(n-2)+2+1
T(n) = 2²(2T(n-3)+1)+2+1         [T(n-2) = 2T(n-3)+1]
T(n) = 2³T(n-3)+2²+2¹+2⁰
⋮
T(n) = 2ᵏT(n-k)+2ᵏ⁻¹+2ᵏ⁻²+…+2²+2¹+2⁰
At k = n:
T(n) = 2ⁿT(0)+2ⁿ⁻¹+2ⁿ⁻²+…+2²+2¹+2⁰
T(n) = 1+2+2²+2³+…+2ⁿ            (GP series: a+ar+ar²+…+arᵏ = a(rᵏ⁺¹−1)/(r−1))
T(n) = 1·(2ⁿ⁺¹−1)/(2−1) = 2ⁿ⁺¹−1
T(n) = O(2ⁿ)
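The closed form can be verified numerically: evaluating the recurrence directly and comparing it to 2ⁿ⁺¹−1 for small n (function names are illustrative):

```c
/* Check the back-substitution result: the recurrence
 * T(n) = 2T(n-1) + 1, T(0) = 1 should match the closed
 * form 2^(n+1) - 1 for every small n. */
long long T(int n) {
    if (n == 0) return 1;
    return 2 * T(n - 1) + 1;
}

long long closed_form(int n) {
    return (1LL << (n + 1)) - 1;   /* 2^(n+1) - 1 */
}
```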
Recursion Tree method Example
T(n) = 2T(n/2)+c ; T(1) = c
Level costs of the tree: c + 2c + 4c + … + nc (the leaves are T(n/n) = T(1))
Assume n = 2ᵏ:
T(n) = c(1+2+4+…+2ᵏ)
T(n) = c(2ᵏ⁺¹−1)/(2−1)
T(n) = c(2·2ᵏ−1) = c(2n−1) = O(n)
Recursion Tree method Example
T(n) = 2T(n-1)+1 ; n ≥ 1, T(0) = 1
Summing the level costs of the recursion tree:
T(n) = 1+2+2²+2³+…+2ⁿ
GP series: a+ar+ar²+…+arᵏ = a(rᵏ⁺¹−1)/(r−1)
T(n) = 1·(2ⁿ⁺¹−1)/(2−1) = 2ⁿ⁺¹−1
T(n) = O(2ⁿ)
Analysis of Recursive Algorithm Examples
★ T(n)=T(n-1)+1 → O(n)
★ T(n)=T(n-1)+n → O(n²)
★ T(n)=T(n-1)+logn → O(nlogn)
★ T(n)=2T(n-1)+1 → O(2ⁿ)
★ T(n)=3T(n-1)+1 → O(3ⁿ)
★ T(n)=2T(n-1)+n → O(2ⁿ)  (exactly 2ⁿ⁺¹−n−2)
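The exact solution in the last row can be checked numerically, a sketch comparing the recurrence against 2ⁿ⁺¹−n−2 (function name is illustrative):

```c
/* Numeric check of the last row: T(n) = 2T(n-1) + n with
 * T(1) = 1 has closed form 2^(n+1) - n - 2, which is O(2^n). */
long long T2(int n) {
    if (n == 1) return 1;
    return 2 * T2(n - 1) + n;
}
```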
Master’s theorem
● Master method provides a cookbook method (direct way) for solving
algorithmic recurrences of the form T(n) = aT(n/b)+f(n) ; a≥1, b>1
● A master recurrence describes the running time of a
divide-and-conquer algorithm that divides a problem of size n into a
subproblems, each of size n/b < n.
● The algorithm solves the a subproblems recursively, each in T(n/b)
time.
● The driving function f(n) encompasses the cost of dividing the
problem before the recursion, as well as the cost of combining the
results of the recursive solutions to subproblems.
Master’s theorem
T(n) = aT(n/b)+f(n)
Where,
● n is the size of the function or input size
● a is the number of subproblems in recursion (a≥1)
● n/b is the size of each subproblem (b>1)
● f(n) is work done outside the recursive call (always positive)
Master’s theorem
T(n) = aT(n/b)+f(n) ; a ≥ 1, b > 1, f(n) > 0
● n^(log_b a) — Watershed function
● f(n) — Driving function
Case 1: If there exists a constant ε > 0 such that
f(n) = O(n^(log_b a − ε)), then T(n) = θ(n^(log_b a))
Case 2: If there exists a constant k ≥ 0 such that
f(n) = θ(n^(log_b a)·logᵏn), then T(n) = θ(n^(log_b a)·logᵏ⁺¹n)
Case 3: If there exists a constant ε > 0 such that
f(n) = Ω(n^(log_b a + ε)), and if f(n) additionally satisfies the regularity
condition [a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently
large n], then T(n) = θ(f(n))
Master’s Theorem Examples
Example 1: T(n) = 27T(n/3) + 7n³+8n²−9n+4
a=27, b=3, f(n) = 7n³+8n²−9n+4 = θ(n³), so k=3, p=0
log_b a = log₃27 = 3
log_b a = k → Case 2 → T(n) = θ(n³logn)

Example 2: T(n) = 2T(n/2) + √n
a=2, b=2, f(n) = √n = n^(1/2), so k = 1/2
log_b a = log₂2 = 1
log_b a > k → Case 1 → T(n) = θ(n)

★ T(n) = 0.5T(n/2) + 1/n → NA (the theorem does not apply, since a = 0.5 < 1)
Recursion
● Many useful algorithms are recursive in structure: to solve a given
problem, they recurse (call themselves) one or more times to handle
closely related subproblems.
● Factorial, Fibonacci
● Tree traversal (Inorder, Preorder, Postorder)
● Graph traversal (DFS)
● D&C
● Dynamic Programming
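The tree-traversal entry above can be sketched in C: inorder traversal recurses on the left subtree, visits the node, then recurses on the right subtree (the struct and function names are illustrative; here the visited values are appended to an output array):

```c
#include <stddef.h>

struct node {
    int value;
    struct node *left, *right;
};

/* Inorder traversal: left subtree, node, right subtree.
 * Appends visited values to out[] starting at index k and
 * returns the new count of visited nodes. */
int inorder(const struct node *t, int out[], int k) {
    if (t == NULL)
        return k;                    /* base condition: empty subtree */
    k = inorder(t->left, out, k);    /* recurse on left subtree */
    out[k++] = t->value;             /* visit the node itself */
    return inorder(t->right, out, k);/* recurse on right subtree */
}
```

For a binary search tree, this visits the keys in sorted order.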