1000·n^2 = O(n^(3-ε))
If we choose ε = 1, we get: 1000·n^2 = O(n^(3-1)) = O(n^2)
Since this condition holds, the first case of the master theorem applies to the given recurrence relation, thus resulting in the conclusion:
T(n) = Θ(n^(log_b a))
Therefore: T(n) = Θ(n^3)
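As a quick numerical sanity check in C (a sketch; the recurrence T(n) = 8T(n/2) + 1000n^2 and the base case T(1) = 1 are assumptions, since this excerpt only shows f(n) = 1000n^2):

#include <stdio.h>

/* Assumed recurrence: T(n) = 8*T(n/2) + 1000*n^2 with T(1) = 1.
   If T(n) = Theta(n^3), the ratio T(n)/n^3 should settle near a constant. */
double T(long n) {
    if (n <= 1)
        return 1.0;               /* assumed base case */
    return 8.0 * T(n / 2) + 1000.0 * (double)n * (double)n;
}

int main(void) {
    for (long n = 2; n <= (1L << 20); n *= 2)
        printf("n = %8ld   T(n)/n^3 = %.2f\n",
               n, T(n) / ((double)n * n * n));
    return 0;
}

For powers of two the ratio approaches 1001, a constant, which is consistent with T(n) = Θ(n^3).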
Example-2-
T(n) = 9T(n/3) + n
Apply the master theorem to it.
Solution:
a = 9, b = 3, f(n) = n, and n^(log_b a) = n^(log_3 9) = n^2
n = O(n^(2-ε)) for ε = 1, so the first case of the master theorem applies:
T(n) = Θ(n^2)
Example-3-
T(n) = 2T(n/2) + n^2
Apply the master theorem to it.
Solution:
a = 2, b = 2, f(n) = n^2, and n^(log_b a) = n^(log_2 2) = n
n^2 = Ω(n^(1+ε)) = Ω(n^2) for ε = 1, and the regularity condition 2·(n/2)^2 = n^2/2 ≤ c·n^2 holds for c = 1/2, so the third case of the master theorem applies:
T(n) = Θ(n^2)
Example-5-
T(n) = 2T(n/4) + n^2
Apply the master theorem to it.
Solution:
a = 2, b = 4, f(n) = n^2, and n^(log_b a) = n^(log_4 2) = n^(1/2)
n^2 = Ω(n^(1/2+ε)) for ε = 3/2, and the regularity condition 2·(n/4)^2 = n^2/8 ≤ c·n^2 holds for c = 1/8, so the third case of the master theorem applies:
T(n) = Θ(n^2)
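All of the examples above have a polynomial driving function f(n) = n^d, in which case the three cases reduce to comparing d with log_b(a). A small helper sketch in C (hypothetical, not part of the notes; for case 3 with a polynomial f(n), the regularity condition holds automatically):

#include <stdio.h>
#include <math.h>

/* Master theorem for T(n) = a*T(n/b) + Theta(n^d), with d >= 0 and b > 1. */
void master(double a, double b, double d) {
    double e = log(a) / log(b);       /* e = log_b(a) */
    if (d < e - 1e-9)
        printf("a=%g, b=%g, d=%g: case 1, T(n) = Theta(n^%.3f)\n", a, b, d, e);
    else if (fabs(d - e) <= 1e-9)
        printf("a=%g, b=%g, d=%g: case 2, T(n) = Theta(n^%g log n)\n", a, b, d, d);
    else
        printf("a=%g, b=%g, d=%g: case 3, T(n) = Theta(n^%g)\n", a, b, d, d);
}

int main(void) {
    master(9, 3, 1);    /* Example-2: case 1, Theta(n^2) */
    master(2, 2, 2);    /* Example-3: case 3, Theta(n^2) */
    master(2, 4, 2);    /* Example-5: case 3, Theta(n^2) */
    return 0;
}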
---------------------------------------------------------------------------------------------------------------------------
Recursion-tree Method:-
To solve a recurrence relation using the recursion tree method, a few steps must be followed. They are,
Step-1: Draw the recursion tree for the given recurrence relation.
Step-2: Determine the cost of each level of the tree.
Step-3: Determine the total number of levels in the recursion tree.
Step-4: Determine the number of nodes in the last level.
Step-5: Determine the cost of the last level.
Step-6: Sum up the cost of all the levels and simplify the expression.
Example-1-
T(n) = 2T(n/2) + n^2
Solution-
Step-1: Drawing the recursion tree for the given recurrence relation.
The cost of level-0 is n^2, the cost of level-1 is (n/2)^2 + (n/2)^2 = n^2/2, the cost of level-2 is 4·(n/4)^2 = n^2/4, and in general the cost of level-i is n^2/2^i.
At level-k (the last level), the size of each sub-problem becomes 1, so n/2^k = 1. Then
n = 2^k
log(n) = log(2^k) = k·log(2)
k = log(n) / log(2)
Step-6: Sum up the cost of all the levels in the recursion tree.
Total Cost = Cost of all levels except last level + Cost of last level
The last level has 2^k = n nodes, each of cost Θ(1), so its cost is Θ(n). Summing the geometric series over the other levels:
T(n) = n^2·(1 + 1/2 + 1/4 + ⋯) + Θ(n) ≤ 2·n^2 + Θ(n)
Thus, T(n) = O(n^2)
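A quick numerical check of this bound in C (a sketch; the base case T(1) = 1 is an assumption, since the notes do not state one):

#include <stdio.h>

/* T(n) = 2*T(n/2) + n^2, assumed base case T(1) = 1.
   The recursion-tree sum says T(n) <= 2*n^2 + Theta(n). */
double T(long n) {
    if (n <= 1)
        return 1.0;               /* assumed base case */
    return 2.0 * T(n / 2) + (double)n * (double)n;
}

int main(void) {
    for (long n = 2; n <= (1L << 20); n *= 2)
        printf("n = %8ld   T(n)/n^2 = %.4f\n", n, T(n) / ((double)n * n));
    return 0;
}

For powers of two the ratio climbs toward 2 from below, matching the geometric-series bound.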
Example-2-
T(n) = 3T(n/3) + bn if n > 1, otherwise T(n) = 1
Solution-
Step-1: Drawing the recursion tree for the given recurrence relation.
The cost of level-0 is bn; level-1 has 3 nodes of cost bn/3 each, level-2 has 9 nodes of cost bn/9 each, level-3 has 27 nodes of cost bn/27 each, and so on, so every level costs bn in total.
At level-k (the last level), the size of each sub-problem becomes 1, so n/3^k = 1. Then
n = 3^k
log(n) = log(3^k) = k·log(3)
k = log(n) / log(3)
Step-6: Sum up the cost of all the levels in the recursion tree.
Total Cost = Cost of all levels except last level + Cost of last level
There are log_3(n) levels, each of cost bn, and the last level has 3^k = n nodes of cost T(1) = 1 each, so
T(n) = b·n·log_3(n) + Θ(n) = Θ(n log n)
Example-3-
T(n) = T(n/4) + T(n/2) + n^2
Solution-
Step-1: Drawing the recursion tree for the given recurrence relation.
A problem of size n will get divided into 2 sub-problems of size n/4 and n/2.
Then, each sub-problem will again get divided into 2 sub-problems of one-fourth and one-half of its own size, and so on.
At the bottom-most layer, the size of the sub-problems will reduce to 1.
The cost of level-0 is n^2, the cost of level-1 is (n/4)^2 + (n/2)^2 = (5/16)·n^2, and in general the cost of level-i is at most (5/16)^i·n^2.
The tree here is not balanced, so we consider the longest path, which is the rightmost one:
n → n/2 → n/4 → n/8 → ⋯ → 1
Along this path, the size of the sub-problem becomes 1 at level-k, so n/2^k = 1. Then
n = 2^k
log(n) = log(2^k) = k·log(2)
k = log(n) / log(2)
Step-6: Sum up the cost of all the levels in the recursion tree.
Total Cost = Cost of all levels except last level + Cost of last level
T(n) ≤ n^2·(1 + 5/16 + (5/16)^2 + ⋯) + O(n) = n^2·(1 / (1 - 5/16)) + O(n) = (16/11)·n^2 + O(n)
Thus, T(n) = O(n^2)
Example-4-
T(n) = T(n/3) + T(2n/3) + n
Solution-
Step-1: Drawing the recursion tree for the given recurrence relation.
A problem of size n will get divided into 2 sub-problems of size n/3 and 2n/3.
Then, each sub-problem will again get divided into 2 sub-problems of one-third and two-thirds of its own size, and so on.
At the bottom-most layer, the size of the sub-problems will reduce to 1.
The cost of level-0 is n, and the cost of level-1 is n/3 + 2n/3 = n; every full level of the tree costs n.
The tree here is not balanced, so we consider the longest path, which is the rightmost one:
n → 2n/3 → 4n/9 → 8n/27 → ⋯ → 1
Size of sub-problem at level-0 = n/(3/2)^0
Size of sub-problem at level-1 = n/(3/2)^1
Size of sub-problem at level-2 = n/(3/2)^2
At level-k (last level), size of sub-problem becomes 1.
Then
n = (3/2)^k
log(n) = k·log(3/2)
k = log(n) / log(3/2)
Total Cost = Cost of all levels except last level + Cost of last level
Each level costs at most n, and there are about log(n)/log(3/2) levels, so
T(n) ≤ n·(log(n)/log(3/2)) + O(n)
Thus, T(n) = O(n log n)
Example-5-
T(n) = 2T(n/3) + n^2 for all n > 1
Solution-
Step-1: Drawing the recursion tree for the given recurrence relation.
The cost of level-0 is n^2, the cost of level-1 is (n/3)^2 + (n/3)^2 = (2/9)·n^2, and in general the cost of level-i is (2/9)^i·n^2.
At level-k (the last level), the size of each sub-problem becomes 1, so n/3^k = 1. Then
n = 3^k
log(n) = log(3^k) = k·log(3)
k = log(n) / log(3)
Step-6: Sum up the cost of all the levels in the recursion tree.
Total Cost = Cost of all levels except last level + Cost of last level
T(n) = n^2·(1 / (1 - 2/9)) + Θ(n^(log_3 2)) = (9/7)·n^2 + Θ(n^(log_3 2)) [using the sum of an infinite GP, S = a / (1 - r)]
Thus, T(n) = Θ(n^2), since the last level has 2^k = n^(log_3 2) nodes, each of cost Θ(1).
---------------------------------------------------------------------------------------------------------------------------
Substitution Method:-
We can use the substitution method to establish either upper or lower bounds on a recurrence.
The Substitution Method consists of two main steps:
Step-1: Guess the form of the solution.
Step-2: Use mathematical induction to find the constants and show that the guessed solution works.
Example-1-
T(n) = T(n/2) + n
Solution:-
Guess T(n) = O(n), and assume T(m) ≤ c·m for all m < n.
Then T(n) = T(n/2) + n ≤ c·(n/2) + n = (c/2 + 1)·n ≤ c·n, which holds for any c ≥ 2.
Thus, T(n) = O(n).
Example-2-
T(n) = 2T(n/2) + n, n > 1
Solution:-
Guess T(n) = O(n log n), and assume T(m) ≤ c·m·log(m) for all m < n (log base 2).
Then T(n) = 2T(n/2) + n ≤ 2·c·(n/2)·log(n/2) + n = c·n·log(n) - c·n + n ≤ c·n·log(n), which holds for any c ≥ 1.
Thus, T(n) = O(n log n).
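The guess can also be sanity-checked numerically (a sketch in C; the base case T(1) = 1 and the constant c = 2 are assumptions):

#include <stdio.h>
#include <math.h>

/* T(n) = 2*T(n/2) + n, assumed base case T(1) = 1.
   The induction above suggests T(n) <= c*n*log2(n); we test c = 2. */
double T(long n) {
    if (n <= 1)
        return 1.0;               /* assumed base case */
    return 2.0 * T(n / 2) + (double)n;
}

int main(void) {
    for (long n = 2; n <= (1L << 20); n *= 2)
        printf("n = %8ld   T(n) = %12.0f   2n*log2(n) = %12.0f\n",
               n, T(n), 2.0 * (double)n * log2((double)n));
    return 0;
}

For every power of two, T(n) stays below the bound, as the induction predicts.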
---------------------------------------------------------------------------------------------------------------------------
Answer-
Greedy Method:
In the greedy method, at each step, a decision is made to choose the local optimum, without worrying about the future consequences.
Example: fractional knapsack, activity selection.
Dynamic Programming:
The approach of Dynamic Programming is similar to divide and conquer.
The difference is that whenever we have recursive function calls with the same result, instead of computing them again, we store the result in a data structure in the form of a table and retrieve the result from the table, as the sketch below shows.
Thus, the overall time complexity is reduced. “Dynamic” means we dynamically decide whether to call a function or retrieve values from the table.
Example: 0-1 Knapsack, subset-sum problem.
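A minimal sketch of this table idea in C, using Fibonacci as a stand-in problem (Fibonacci is an assumption; the notes themselves name 0-1 Knapsack and subset-sum):

#include <stdio.h>
#include <string.h>

#define MAXN 64
long long table[MAXN];    /* table[i] = fib(i), or -1 if not computed yet */

/* Memoized Fibonacci: every value is computed once and afterwards
   retrieved from the table, so the plain exponential recursion
   collapses to O(n). */
long long fib(int n) {
    if (n <= 1)
        return n;
    if (table[n] != -1)
        return table[n];          /* retrieve instead of recomputing */
    table[n] = fib(n - 1) + fib(n - 2);
    return table[n];
}

int main(void) {
    memset(table, -1, sizeof table);   /* mark every entry "unknown" */
    printf("fib(50) = %lld\n", fib(50));
    return 0;
}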
Linear Programming:
In Linear Programming, the constraints are inequalities in terms of the inputs, and the goal is to maximize or minimize some linear function of the inputs.
Example: maximum flow in a directed graph.
Reduction(Transform and Conquer):
In this method, we solve a difficult problem by transforming it into a known problem for
which we have an optimal solution.
Basically, the goal is to find a reducing algorithm whose complexity is not dominated by
the resulting reduced algorithms.
Example:
The selection algorithm for finding the median in a list involves first sorting the list and then finding the middle element of the sorted list (a sketch follows below).
These techniques are also called transform and conquer.
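A minimal sketch of this reduction in C (the sample array is illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort(): ascending order of ints. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Median-finding reduced to sorting: the sort dominates, O(n log n). */
int median(int *arr, int n) {
    qsort(arr, n, sizeof(int), cmp_int);
    return arr[n / 2];            /* middle element of the sorted list */
}

int main(void) {
    int a[] = {7, 1, 9, 3, 5};
    printf("median = %d\n", median(a, 5));   /* prints 5 */
    return 0;
}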
Backtracking:
This technique is very useful in solving combinatorial problems that have a single unique solution, where we have to find the correct combination of steps that leads to fulfilment of the task.
Such problems have multiple stages and there are multiple options at each stage.
This approach is based on exploring each available option at every stage one-by-one.
While exploring an option, if a point is reached that doesn’t seem to lead to the solution, the program control backtracks one step and starts exploring the next option.
In this way, the program explores all possible courses of action and finds the route that leads to the solution.
Example: N-queen problem, maze problem (a sketch for N-queens follows below).
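A compact backtracking sketch in C for the N-queen problem (the board size N = 8 and the output format are illustrative):

#include <stdio.h>
#include <stdlib.h>

#define N 8
int col_of[N];   /* col_of[r] = column of the queen placed in row r */

/* Is it safe to put a queen at (row, col), given rows 0..row-1 are fixed? */
int safe(int row, int col) {
    for (int r = 0; r < row; r++) {
        int c = col_of[r];
        if (c == col || abs(c - col) == row - r)  /* same column or diagonal */
            return 0;
    }
    return 1;
}

/* Place queens row by row; when no column works, return 0 so the
   caller moves its own queen one column further: that is the backtrack. */
int solve(int row) {
    if (row == N)
        return 1;                 /* all N queens placed */
    for (int col = 0; col < N; col++) {
        if (safe(row, col)) {
            col_of[row] = col;    /* try this option */
            if (solve(row + 1))
                return 1;         /* the rest of the board worked out */
        }
    }
    return 0;                     /* dead end: backtrack */
}

int main(void) {
    if (solve(0))
        for (int r = 0; r < N; r++)
            printf("row %d -> column %d\n", r, col_of[r]);
    return 0;
}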
Answer-
The analysis of loops for the complexity analysis of algorithms involves finding the number
of operations performed by a loop as a function of the input size.
This is usually done by determining the number of iterations of the loop and the number of
operations performed in each iteration.
Constant Time Complexity O(1):
O(1) refers to constant time complexity, which means that the running time of an algorithm
remains constant and does not depend on the size of the input.
The time complexity of a function is considered O(1) if it doesn’t contain a loop, recursion, or a call to any other non-constant-time function, i.e., it is a set of non-recursive, non-loop statements.
Example:
swap() function has O(1) time complexity.
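For instance, a minimal sketch of such a function:
void swap(int *a, int *b)
{
// three assignments, independent of the input size: O(1)
int t = *a;
*a = *b;
*b = t;
}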
A loop or recursion that runs a constant number of times is also considered O(1).
// Here c is a constant
for (int i = 1; i <= c; i++)
{
// some O(1) expressions
}
Linear Time Complexity O(n):
The Time Complexity of a loop is considered as O(n) if the loop variables are incremented/decremented by a constant amount.
Linear time complexity, denoted as O(n), means that the running time of an algorithm grows proportionally to the size of the input.
In simple words, for an input of size n, the algorithm takes on the order of n steps to complete the operation.
Example-1
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}
Example-2
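A sketch (the original second snippet is not shown in this excerpt); a loop decremented by a constant amount is also O(n):
// Here c is a positive integer constant
for (int i = n; i > 0; i -= c)
{
// some O(1) expressions
}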
Example-3
// Recursive function
void recurse(int n)
{
if (n == 0)
return;
else {
// some O(1) expressions
}
recurse(n - 1);
}
// The function calls itself n times before reaching the base case, so its time complexity is O(n).
Logarithmic Time Complexity O(Log Log n):
The Time Complexity of a loop is considered as O(Log Log n) if the loop variable is increased/decreased exponentially by a constant factor, for example squared (i = i^c) or square-rooted on every iteration.
Example-1
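A sketch (the snippet is an assumption, with c a constant greater than 1; pow() returns a double, so real code would use an integer power helper):
// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c))
{
// some O(1) expressions
}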
Example-2
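Similarly, a sketch where the loop variable shrinks by a constant root each time (fun() is a stand-in for sqrt() or cbrt()):
// Here fun() is sqrt() or cbrt() or another constant-root function
for (int i = n; i > 2; i = fun(i))
{
// some O(1) expressions
}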
Explanation:
The time complexity can be calculated by counting the number of times the expression “count = count + 1;” is executed. The expression is executed 0 + 1 + 2 + 3 + 4 + … + (n-1) times (see the sketch of such a nested loop below).
Time complexity = Θ(0 + 1 + 2 + … + (n-1)) = Θ(n·(n-1)/2) = Θ(n^2)
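A minimal sketch of a nested loop with exactly this count (an assumption; the original snippet is not shown in this excerpt):
int count = 0;
for (int i = 0; i < n; i++)
{
// the inner loop runs i times on this iteration of the outer loop
for (int j = 0; j < i; j++)
count = count + 1;
}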
---------------------------------------------------------------------------------------------------------------------------