 Put all the values in: f(n) = O(n^(logb a − ε))

1000 n^2 = O(n^(3 − ε))
 If we choose ε = 1, we get: 1000 n^2 = O(n^(3−1)) = O(n^2)
 Since this condition holds, the first case of the master theorem applies to the given recurrence
relation, thus resulting in the conclusion:

T(n) = Θ(n^(logb a))
Therefore: T(n) = Θ(n^3)
Example-2-

T(n) = 9T(n/3) + n

Apply the master theorem to it.

Solution:

 Compare T(n) = 9T(n/3) + n with T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1

 We get a = 9, b=3, f (n) = n


 Therefore logb a = log3 9 = 2
 Put all the values in: f(n) = O(n^(logb a − ε))
n = O(n^(2 − ε))
 If we choose ε = 1, we get: n = O(n^(2−1)) = O(n)
 Since this condition holds, the first case of the master theorem applies to the given recurrence
relation, thus resulting in the conclusion:
T(n) = Θ(n^(logb a))
Therefore: T(n) = Θ(n^2)
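A quick numerical sanity check (a sketch added for illustration, not part of the original notes): the small C program below evaluates T(n) = 9T(n/3) + n directly for powers of 3, assuming a base case T(1) = 1, and prints T(n)/n^2. The ratio settles toward a constant, which is what Θ(n^2) predicts.

#include <stdio.h>

/* Evaluate the recurrence directly; T(1) = 1 is an assumed base case. */
static double T(long long n) {
    if (n <= 1) return 1.0;
    return 9.0 * T(n / 3) + (double)n;
}

int main(void) {
    for (long long n = 3; n <= 59049; n *= 3)            /* 3^1 .. 3^10 */
        printf("n = %-6lld  T(n)/n^2 = %.4f\n", n, T(n) / ((double)n * n));
    return 0;
}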
Example-3-

T(n) = T(2n/3) + 1

Apply the master theorem to it.

Solution:

 Compare T(n) = T(2n/3) + 1 with T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1

 We get a = 1, b=3/2, f (n) = 1


 Therefore logb a = log3/2 1 = 0
 Put all the values in: f(n) = Θ(n^(logb a))
1 = Θ(n^0) = Θ(1)
 Since this condition holds, the second case of the master theorem applies to the given
recurrence relation, thus resulting in the conclusion:
T(n) = Θ(n^(logb a) log n)
Therefore: T(n) = Θ(log n)
Example-4-

T(n) = 2T(n/2) + n^2

Apply the master theorem to it.

Solution:

 Compare T(n) = 2T(n/2) + n^2 with T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1

 a = 2, b = 2, f(n) = n^2

 Therefore logb a = log2 2 = 1

 Put all the values in f(n) = Ω(n^(logb a + ε)) .................... (1)

 If we insert all the values in equation (1), we get

n^2 = Ω(n^(1+ε)); put ε = 1, then the condition holds.

n^2 = Ω(n^(1+1)) = Ω(n^2)

 Now we will also check the regularity condition: a·f(n/b) ≤ c·f(n) for some constant c < 1.

2·(n/2)^2 ≤ c·n^2

(1/2)·n^2 ≤ c·n^2

If we choose c = 1/2, it is true:

(1/2)·n^2 ≤ (1/2)·n^2 for all n ≥ 1

So the third case applies and it follows: T(n) = Θ(f(n))

T(n) = Θ(n^2)

Example-5-

T(n) = 2T(n/4) + n^2

Apply the master theorem to it.

Solution:

 Compare T(n) = 2T(n/4) + n^2 with T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1

 a = 2, b = 4, f(n) = n^2

 Therefore logb a = log4 2 = log4 4^(1/2) = (1/2) log4 4 = 1/2

 Put all the values in f(n) = Ω(n^(logb a + ε)) .................... (1)

 If we insert all the values in equation (1), we get

n^2 = Ω(n^(1/2 + ε)); put ε = 3/2, then the condition holds.

n^2 = Ω(n^(1/2 + 3/2)) = Ω(n^2)

 Now we will also check the regularity condition: a·f(n/b) ≤ c·f(n) for some constant c < 1.

2·(n/4)^2 ≤ c·n^2

(1/8)·n^2 ≤ c·n^2

If we choose c = 1/8, it is true:

(1/8)·n^2 ≤ (1/8)·n^2 for all n ≥ 1

So the third case applies and it follows: T(n) = Θ(f(n))

T(n) = Θ(n^2)

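Before moving on, here is a rough sketch (added for illustration, not part of the original notes) that mechanically classifies recurrences of the special form T(n) = a·T(n/b) + n^k into the three master-theorem cases by comparing k with logb a. For a plain polynomial f(n) = n^k the regularity condition of case 3 holds automatically, since a·(n/b)^k = (a/b^k)·n^k and a/b^k < 1 whenever k > logb a.

#include <math.h>
#include <stdio.h>

/* Classify T(n) = a*T(n/b) + n^k by comparing k with log_b(a). */
static void master(double a, double b, double k) {
    double e = log(a) / log(b);                 /* log_b(a) */
    if (k < e - 1e-9)
        printf("a=%g b=%g k=%g -> Case 1: Theta(n^%.3f)\n", a, b, k, e);
    else if (fabs(k - e) <= 1e-9)
        printf("a=%g b=%g k=%g -> Case 2: Theta(n^%.3f log n)\n", a, b, k, e);
    else
        printf("a=%g b=%g k=%g -> Case 3: Theta(n^%.3f)\n", a, b, k, k);
}

int main(void) {
    master(9, 3, 1);        /* Example-2: Theta(n^2)               */
    master(1, 1.5, 0);      /* Example-3: Theta(n^0 log n) = Theta(log n) */
    master(2, 2, 2);        /* Example-4: Theta(n^2)               */
    master(2, 4, 2);        /* Example-5: Theta(n^2)               */
    return 0;
}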
---------------------------------------------------------------------------------------------------------------------------

Recursion-tree Method:-

 A recursion tree models the costs (time) of a recursive execution of an algorithm.


 It is a pictorial representation of the iteration method, in the form of a tree whose nodes are
expanded level by level.
 In general, we consider the second term of the recurrence (the non-recursive cost f(n)) as the root.
 The recursion tree method is good for generating guesses for the substitution method.
 It is useful when a divide-and-conquer algorithm is used.

To solve a recurrence relation using the recursion tree method, a few steps must be followed.
They are,

a. Draw a recursion tree based on the given recurrence relation.


b. Determine cost of each level
c. Determine total number of levels in the recursion tree
d. Determine number of nodes in the last level
e. Determine cost of the last level
f. Add cost of all the levels of the recursion tree and simplify the expression so obtained
in terms of asymptotic notation.

Example-1-

T(n) = 2T(n/2) + n^2

Solution-

Step-1: Drawing the recursion tree for the given recurrence relation.

 A problem of size n will get divided into 2 sub-problems of size n/2.


 Then, each sub-problem of size n/2 will get divided into 2 sub-problems of size n/4 and so on.
 At the bottom most layer, the size of sub-problems will reduce to 1.

n^2

(n/2)^2   (n/2)^2

(n/4)^2   (n/4)^2   (n/4)^2   (n/4)^2

(n/8)^2   (n/8)^2   (n/8)^2   (n/8)^2   (n/8)^2   (n/8)^2   (n/8)^2   (n/8)^2

Step-2: Determine cost of each level

 Cost at Level-0 = n^2 (the two sub-problems are merged once, at the root).


 Cost at Level-1 = n^2/4 + n^2/4 = n^2/2 (merging is done at two nodes).
 Cost at Level-2 = n^2/16 + n^2/16 + n^2/16 + n^2/16 = n^2/4 (merging is done at four
nodes), and so on....
Step-3: Determine total number of levels in the recursion tree

 Size of sub-problem at level-0 = n/2^0


 Size of sub-problem at level-1 = n/2^1
 Size of sub-problem at level-2 = n/2^2
 At level-k (last level), size of sub-problem becomes 1.
 Then

n = 2^k

log(n) = log(2^k)

log(n) = k * log(2)

k = log(n) / log(2)

k = log2(n) [logm a / logm b = logb a]

So total number of levels in the recursion tree = log2 n + 1

Step-4 Determine number of nodes in the last level-

 Level-0 has 2^0 nodes i.e. 1 node


 Level-1 has 2^1 nodes i.e. 2 nodes
 Level-2 has 2^2 nodes i.e. 4 nodes

Continuing in similar manner, we have-

 Level-log2(n) has 2^(log2 n) nodes i.e. n nodes

Step-5 : Determine cost of last level-

 Cost of last level = n x T(1) = θ(n)


 The cost of the last level is calculated separately because it is the base case and no merging is
done at the last level so, the cost to solve a single problem at this level is some constant value.

Step-6 Sum up the cost of all the levels in the recursion tree.

Total Cost = Cost of all levels except last level + Cost of last level

T(n) = n^2 + n^2/2 + n^2/4 + n^2/8 + …… log(n) times + Θ(n)

T(n) = n^2 (1 + 1/2 + 1/4 + 1/8 + …… log(n) times) + Θ(n)

T(n) ≤ n^2 * (1 / (1 – 1/2)) + Θ(n) [bounding the series by the infinite GP sum S = a / (1 – r)]

T(n) ≤ 2 n^2 + Θ(n)
Thus, T(n) = O(n^2)

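A quick check of this result (a sketch added for illustration, not part of the original notes): the program below evaluates T(n) = 2T(n/2) + n^2 for powers of 2 with an assumed base case T(1) = 1. The ratio T(n)/n^2 approaches 2, matching the 2n^2 + Θ(n) bound derived above.

#include <stdio.h>

/* Evaluate the recurrence directly; T(1) = 1 is an assumed base case. */
static double T(long long n) {
    if (n <= 1) return 1.0;
    return 2.0 * T(n / 2) + (double)n * (double)n;
}

int main(void) {
    for (long long n = 2; n <= (1LL << 20); n *= 2)
        printf("n = %-8lld  T(n)/n^2 = %.5f\n", n, T(n) / ((double)n * n));
    return 0;
}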
Example-2-

T(n) = 3T(n/3) + bn if n > 1, otherwise T(n) = 1

Solution-

Step-1: Drawing the recursion tree for the given recurrence relation.

 A problem of size n will get divided into 3 sub-problems of size n/3.


 Then, each sub-problem of size n/3 will get divided into 3 sub-problems of size n/9 and so on.
 At the bottom most layer, the size of sub-problems will reduce to 1.

bn

bn/3   bn/3   bn/3

bn/9   bn/9   bn/9   bn/9   bn/9   bn/9   bn/9   bn/9   bn/9

bn/27  bn/27  bn/27  …  (27 nodes, each of cost bn/27)

Step-2: Determine cost of each level

 Cost at Level-0 = bn (the three sub-problems are merged once, at the root).


 Cost at Level-1 = bn/3 + bn/3 + bn/3 = bn (merging is done at three nodes).
 Cost at Level-2 = bn/9 + bn/9 + bn/9 + bn/9 + bn/9 + bn/9 + bn/9 + bn/9 + bn/9 = bn (merging
is done at nine nodes), and so on....

Step-3: Determine total number of levels in the recursion tree

 Size of sub-problem at level-0 = n/3^0


 Size of sub-problem at level-1 = n/3^1
 Size of sub-problem at level-2 = n/3^2
 At level-k (last level), size of sub-problem becomes 1.
 Then

n = 3^k

log(n) = log(3^k)

log(n) = k * log(3)
k = log(n) / log(3)

k = log3(n) [logm a / logm b = logb a]

So total number of levels in the recursion tree = log3 n + 1

Step-4 Determine number of nodes in the last level-

 Level-0 has 3^0 nodes i.e. 1 node


 Level-1 has 3^1 nodes i.e. 3 nodes
 Level-2 has 3^2 nodes i.e. 9 nodes

Continuing in similar manner, we have-

 Level-log3(n) has 3^(log3 n) nodes i.e. n nodes

Step-5 : Determine cost of last level-

 Cost of last level = n × T(1) = n = Θ(n)


 The cost of the last level is calculated separately because it is the base case and no merging is
done at the last level so, the cost to solve a single problem at this level is some constant value.

Step-6 Sum up the cost of all the levels in the recursion tree.

Total Cost = Cost of all levels except last level + Cost of last level

T(n) = bn + bn + bn + bn + bn + bn + ……………log3(n) times + Θ(n)

T(n) = bn log3(n) + Θ(n)

Thus, T(n) = O(n log n)

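A quick check of this result (a sketch added for illustration, not part of the original notes): the program below evaluates T(n) = 3T(n/3) + bn for powers of 3, taking b = 2 as an illustrative constant and T(1) = 1 as an assumed base case. The ratio T(n)/(n·log3(n)) approaches b, matching bn·log3(n) + Θ(n) = O(n log n).

#include <math.h>
#include <stdio.h>

static const double b = 2.0;                 /* illustrative constant b */

/* Evaluate the recurrence directly; T(1) = 1 is an assumed base case. */
static double T(long long n) {
    if (n <= 1) return 1.0;
    return 3.0 * T(n / 3) + b * (double)n;
}

int main(void) {
    for (long long n = 3; n <= 14348907; n *= 3) {       /* 3^1 .. 3^15 */
        double log3n = log((double)n) / log(3.0);
        printf("n = %-10lld  T(n)/(n*log3(n)) = %.4f\n", n, T(n) / ((double)n * log3n));
    }
    return 0;
}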
Example-3-

T(n) = T(n/4) + T(n/2) + n^2

Solution-

Step-1: Drawing the recursion tree for the given recurrence relation.

 A problem of size n will get divided into 2 sub-problems of size n/4 and n/2.
 Then, each sub-problem will again get divided into 2 sub-problems of one-quarter and one-half
of its own size, and so on.
 At the bottom most layer, the size of sub-problems will reduce to 1.
n^2

(n/4)^2   (n/2)^2

(n/16)^2   (n/8)^2   (n/8)^2   (n/4)^2

Step-2: Determine cost of each level

 Cost at Level-0 = n^2 (the two sub-problems are merged once, at the root).


 Cost at Level-1 = n^2/16 + n^2/4 = 5n^2/16 (merging is done at two nodes).
 Cost at Level-2 = (n/16)^2 + (n/8)^2 + (n/8)^2 + (n/4)^2 = 25n^2/256 (merging is done at four
nodes), and so on.... Each level costs 5/16 of the level above it.

Step-3: Determine total number of levels in the recursion tree

 The tree here is not balanced, so we consider the longest path which is the rightmost one.

n → n/2 → n/4 → n/8 → ⋯ … … … … ..

 Size of sub-problem at level-0 = n/2^0


 Size of sub-problem at level-1 = n/2^1
 Size of sub-problem at level-2 = n/2^2
 At level-k (last level), size of sub-problem becomes 1.
 Then

n = 2^k

log(n) = log(2^k)

log(n) = k * log(2)

k = log(n) / log(2)

k = log2(n) [logm a / logm b = logb a]

So total number of levels in the recursion tree = log2 n + 1

Step-4 Determine number of nodes in the last level-

 Level-0 has 2^0 nodes i.e. 1 node


 Level-1 has 2^1 nodes i.e. 2 nodes
 Level-2 has 2^2 nodes i.e. 4 nodes
Continuing in similar manner, we have-

 Level-log2(n) has at most 2^(log2 n) nodes i.e. at most n nodes (the tree is not full, so this
is an upper bound)

Step-5 : Determine cost of last level-

 Cost of last level = n x T(1) = θ(n)


 The cost of the last level is calculated separately because it is the base case and no merging is
done at the last level so, the cost to solve a single problem at this level is some constant value.

Step-6 Sum up the cost of all the levels in the recursion tree.

Total Cost = Cost of all levels except last level + Cost of last level

T(n) = n^2 + 5n^2/16 + 25n^2/256 + …… log2(n) times + Θ(n)

T(n) = n^2 (1 + 5/16 + (5/16)^2 + (5/16)^3 + …… log2(n) times) + Θ(n)

T(n) ≤ n^2 * (1 / (1 – 5/16)) + Θ(n) [bounding the series by the infinite GP sum S = a / (1 – r)]

T(n) ≤ (16/11) n^2 + Θ(n)

Thus, T(n) = O(n^2)

Example-4-

T(n) = T(n/3) + T(2n/3) + n

Solution-

Step-1: Drawing the recursion tree for the given recurrence relation.

 A problem of size n will get divided into 2 sub-problems of size n/3 and 2n/3.
 Then, each sub-problem will again get divided into 2 sub-problems of one-third and two-thirds
of its own size, and so on.
 At the bottom most layer, the size of sub-problems will reduce to 1.

n

n/3   2n/3

n/9   2n/9   2n/9   4n/9


Step-2: Determine cost of each level

 Cost at Level-0 = n (the two sub-problems are merged once, at the root).


 Cost at Level-1 = n/3 + 2n/3 = n (merging is done at two nodes).
 Cost at Level-2 = n/9 + 2n/9 + 2n/9 + 4n/9 = n (merging is done at four nodes), and
so on....

Step-3: Determine total number of levels in the recursion tree

 The tree here is not balanced, so we consider the longest path which is the rightmost one.
n → 2n/3 → 4n/9 → 8n/27 → ⋯ … … … … ..

 Size of sub-problem at level-0 = n/(3/2)^0
 Size of sub-problem at level-1 = n/(3/2)^1
 Size of sub-problem at level-2 = n/(3/2)^2
 At level-k (last level), size of sub-problem becomes 1.
 Then

n = (3/2)^k

log(n) = log((3/2)^k)

log(n) = k * log(3/2)

k = log(n) / log(3/2)

k = log3/2(n) [logm a / logm b = logb a]

So total number of levels in the recursion tree = log3/2 n + 1

Step-4 Determine number of nodes in the last level-

 Level-0 has 2^0 nodes i.e. 1 node


 Level-1 has 2^1 nodes i.e. 2 nodes
 Level-2 has 2^2 nodes i.e. 4 nodes

Continuing in similar manner would give-

 Level-log3/2(n) has at most 2^(log3/2 n) = n^(log3/2 2) nodes; however, the tree is not full
(the shorter left branches reach size 1 much earlier), and the actual number of leaves is only
Θ(n), since the leaf count L(n) satisfies L(n) = L(n/3) + L(2n/3) with L(1) = 1, giving L(n) = Θ(n)

Step-5 : Determine cost of last level-

 Cost of last level = Θ(n) × T(1) = Θ(n) (using the Θ(n) leaf count from Step-4)


 The cost of the last level is calculated separately because it is the base case and no merging is
done at the last level so, the cost to solve a single problem at this level is some constant value.
Step-6 Sum up the cost of all the levels in the recursion tree.

Total Cost = Cost of all levels except last level + Cost of last level

T(n) = n + n + n + n + …… log3/2(n) times + Θ(n)

T(n) = n * log3/2(n) + Θ(n)

Thus, T(n) = O(n log(n))

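A quick check of this result (a sketch added for illustration, not part of the original notes): the program below evaluates T(n) = T(n/3) + T(2n/3) + n with integer division, assuming T(n) = 1 for n ≤ 1. The ratio T(n)/(n·log2(n)) levels off at a constant, consistent with T(n) = O(n log n).

#include <math.h>
#include <stdio.h>

/* Evaluate the recurrence directly; T(n) = 1 for n <= 1 is an assumed base case. */
static double T(long long n) {
    if (n <= 1) return 1.0;
    return T(n / 3) + T(2 * n / 3) + (double)n;
}

int main(void) {
    for (long long n = 1024; n <= (1LL << 20); n *= 4)
        printf("n = %-8lld  T(n)/(n*log2(n)) = %.4f\n",
               n, T(n) / ((double)n * log2((double)n)));
    return 0;
}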
Example-5-

T(n) = 2T(n/3) + n^2 for all n > 1

Solution-

Step-1: Drawing the recursion tree for the given recurrence relation.

 A problem of size n will get divided into 2 sub-problems of size n/3.


 Then, each sub-problem of size n/3 will get divided into 2 sub-problems of size n/9 and so on.
 At the bottom most layer, the size of sub-problems will reduce to 1.

n^2

(n/3)^2   (n/3)^2

(n/9)^2   (n/9)^2   (n/9)^2   (n/9)^2

(n/27)^2   (n/27)^2   (n/27)^2   (n/27)^2   (n/27)^2   (n/27)^2   (n/27)^2   (n/27)^2

Step-2: Determine cost of each level

 Cost at Level-0 = n^2 (the two sub-problems are merged once, at the root).


 Cost at Level-1 = (n/3)^2 + (n/3)^2 = 2n^2/9 (merging is done at two nodes).
 Cost at Level-2 = (n/9)^2 + (n/9)^2 + (n/9)^2 + (n/9)^2 = 4n^2/81 (merging is done at four
nodes), and so on....

Step-3: Determine total number of levels in the recursion tree

 Size of sub-problem at level-0 = n/3^0


 Size of sub-problem at level-1 = n/3^1
 Size of sub-problem at level-2 = n/3^2
 At level-k (last level), size of sub-problem becomes 1.
 Then

n = 3^k

log(n) = log(3^k)

log(n) = k * log(3)

k = log(n) / log(3)

k = log3(n) [logm a / logm b = logb a]

So total number of levels in the recursion tree = log3 n + 1

Step-4 Determine number of nodes in the last level-

 Level-0 has 2^0 nodes i.e. 1 node


 Level-1 has 2^1 nodes i.e. 2 nodes
 Level-2 has 2^2 nodes i.e. 4 nodes

Continuing in similar manner, we have-

 Level-log3(n) has 2^(log3 n) nodes i.e. n^(log3 2) nodes

Step-5 : Determine cost of last level-

 Cost of last level = n^(log3 2) × T(1) = Θ(n^(log3 2))


 The cost of the last level is calculated separately because it is the base case and no merging is
done at the last level so, the cost to solve a single problem at this level is some constant value.

Step-6 Sum up the cost of all the levels in the recursion tree.

Total Cost = Cost of all levels except last level + Cost of last level

T(n) = n^2 + 2n^2/9 + 4n^2/81 + …… log3(n) times + Θ(n^(log3 2))

T(n) = n^2 (1 + 2/9 + (2/9)^2 + (2/9)^3 + …… log3(n) times) + Θ(n^(log3 2))

T(n) ≤ n^2 * (1 / (1 – 2/9)) + Θ(n^(log3 2)) [bounding the series by the infinite GP sum S = a / (1 – r)]

T(n) ≤ (9/7) n^2 + Θ(n^(log3 2))

Thus, T(n) = O(n^2)

---------------------------------------------------------------------------------------------------------------------------

Substitution Method:-
 We can use the substitution method to establish either upper or lower bounds on a recurrence.
 The substitution method consists of two main steps:

1. Guess the Solution.


2. Use mathematical induction to find the constants and the boundary condition, and show that
the guess is correct.

 Unfortunately, there is no general way to guess the correct solutions to recurrences.


 Guessing a solution takes experience and, occasionally, creativity.
 Fortunately, though, we can use some heuristics to help us become good guessers.
 We can also use recursion trees to generate good guesses.

Example-1-

T(n) = T(n/2) + 1

Solution:-

 We guess that the solution is:


T(n) = O(log n)
 We have to prove that
T(n) ≤ c log n
 Assuming T(n/2) ≤ c log(n/2) and substituting this into the recurrence relation:
T(n) ≤ c log(n/2) + 1
≤ c log n - c log 2 + 1
≤ c log n - c + 1 [taking logs base 2, log 2 = 1]
≤ c log n for c ≥ 1
Thus, T(n) = O(log n)

Example-2-

T(n) = 2T(n/2) + n, n > 1

Solution:-

 We guess that the solution is:


T(n) = O(n log n)
 We have to prove that
T(n) ≤ cn log n
 Assuming T(n/2) ≤ c(n/2) log(n/2) and substituting this into the recurrence relation:
T(n) ≤ 2c(n/2) log(n/2) + n
≤ cn log(n/2) + n
≤ cn log n - cn log 2 + n
≤ cn log n - cn + n [taking logs base 2, log 2 = 1]
≤ cn log n for c ≥ 1
Thus, T(n) = O(n log n)

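As a quick cross-check of the guess (a sketch added for illustration, not part of the original notes): for powers of 2 and an assumed base case T(1) = 1, the recurrence T(n) = 2T(n/2) + n evaluates to exactly n·log2(n) + n, which is Θ(n log n), in agreement with the bound proved above.

#include <math.h>
#include <stdio.h>

/* Evaluate the recurrence exactly; T(1) = 1 is an assumed base case. */
static long long T(long long n) {
    if (n <= 1) return 1;
    return 2 * T(n / 2) + n;
}

int main(void) {
    for (long long n = 2; n <= 1024; n *= 2)
        printf("n = %-5lld  T(n) = %-6lld  n*log2(n) + n = %.0f\n",
               n, T(n), (double)n * log2((double)n) + (double)n);
    return 0;
}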
---------------------------------------------------------------------------------------------------------------------------

9. What are the algorithm design techniques?

Answer-

Greedy Method:

o The greedy algorithm doesn't always guarantee the optimal solution.


o However, it generally produces a solution that is very close in value to the optimal one.
o In the greedy method, at each step, a decision is made to choose the local optimum,
without thinking about the future consequences.
Example: Fractional Knapsack, Activity Selection.
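A minimal sketch of the greedy idea (added for illustration, not part of the original notes), using activity selection with hypothetical activities that are already sorted by finish time: at each step the locally best compatible activity is picked, with no look-ahead.

#include <stdio.h>

int main(void) {
    /* hypothetical activities, already sorted by finish time */
    int start[]  = {1, 3, 0, 5, 8, 11};
    int finish[] = {2, 4, 6, 7, 9, 12};
    int n = 6;

    int last_finish = finish[0];                      /* greedily take the first activity */
    printf("Selected activity 0 [%d, %d]\n", start[0], finish[0]);
    for (int i = 1; i < n; i++) {
        if (start[i] >= last_finish) {                /* locally optimal compatible choice */
            printf("Selected activity %d [%d, %d]\n", i, start[i], finish[i]);
            last_finish = finish[i];
        }
    }
    return 0;
}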

Divide and Conquer:


o The Divide and Conquer strategy involves dividing the problem into sub-problems,
recursively solving them, and then recombining their answers for the final answer.
Example: Merge sort, Quicksort.
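A minimal merge sort sketch (added for illustration, not part of the original notes), showing the divide-and-conquer pattern: split the array, recursively sort both halves, and merge the sorted halves.

#include <stdio.h>
#include <string.h>

/* Sort a[0..n-1]: divide, recursively conquer, then combine (merge). */
static void merge_sort(int *a, int n) {
    if (n <= 1) return;                       /* base case */
    int mid = n / 2;
    merge_sort(a, mid);                       /* divide: sort left half  */
    merge_sort(a + mid, n - mid);             /* divide: sort right half */

    int tmp[n];                               /* combine: merge the two sorted halves */
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof(int));
}

int main(void) {
    int a[] = {5, 2, 9, 1, 7, 3};
    merge_sort(a, 6);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);   /* prints: 1 2 3 5 7 9 */
    printf("\n");
    return 0;
}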

Dynamic Programming:
 The approach of Dynamic programming is similar to divide and conquer.
 The difference is that whenever we have recursive function calls with the same result,
instead of calling them again we try to store the result in a data structure in the form of a
table and retrieve the results from the table.
 Thus, the overall time complexity is reduced. “Dynamic” means we dynamically decide
whether to call a function or retrieve values from the table.
Example: 0-1 Knapsack, subset-sum problem.
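A minimal sketch of the table idea (added for illustration, not part of the original notes), using memoized Fibonacci rather than the knapsack examples above: the result of each recursive call is stored once and retrieved on repeated calls.

#include <stdio.h>

static long long memo[64];                   /* memo[i] == 0 means "not computed yet" */

static long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];        /* retrieve the result from the table */
    memo[n] = fib(n - 1) + fib(n - 2);       /* compute once, then store it        */
    return memo[n];
}

int main(void) {
    printf("fib(50) = %lld\n", fib(50));     /* O(n) calls instead of O(2^n) */
    return 0;
}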

Linear Programming:
 In Linear Programming, the constraints are expressed as linear inequalities over the inputs, and
the goal is to maximize or minimize some linear function of the inputs.
Example: Maximum flow of Directed Graph
Reduction(Transform and Conquer):
 In this method, we solve a difficult problem by transforming it into a known problem for
which we have an optimal solution.
 Basically, the goal is to find a reducing algorithm whose complexity is not dominated by
the resulting reduced algorithms.
Example:
 Selection algorithm for finding the median in a list involves first sorting the list and then
finding out the middle element in the sorted list.
 These techniques are also called transform and conquer.
Backtracking:
 This technique is very useful in solving combinatorial problems that have a single unique
solution.
 Where we have to find the correct combination of steps that lead to fulfilment of the task.
 Such problems have multiple stages and there are multiple options at each stage.
 This approach is based on exploring each available option at every stage one-by-one.
 While exploring an option if a point is reached that doesn’t seem to lead to the solution, the
program control backtracks one step, and starts exploring the next option.
 In this way, the program explores all possible courses of action and finds the route that
leads to the solution.
Example: N-queen problem, maze problem.

Branch and Bound:


 This technique is very useful in solving combinatorial optimization problems that
have multiple solutions, where we are interested in finding the most optimal one.
 In this approach, the entire solution space is represented in the form of a state space tree.
 As the program progresses, each state combination is explored, and the previous best solution
is replaced by the current one if the current one is better.
Example: Job sequencing, Travelling salesman problem.
---------------------------------------------------------------------------------------------------------------------------

10. How to Analyse Loops?

Answer-

 The analysis of loops for the complexity analysis of algorithms involves finding the number
of operations performed by a loop as a function of the input size.
 This is usually done by determining the number of iterations of the loop and the number of
operations performed in each iteration.
Constant Time Complexity O(1):

 O(1) refers to constant time complexity, which means that the running time of an algorithm
remains constant and does not depend on the size of the input.
 The time complexity of a function is considered O(1) if it doesn’t contain a loop, recursion,
or a call to any other non-constant-time function, i.e., it is a set of non-recursive, non-loop
statements.
Example:
 swap() function has O(1) time complexity.
 A loop or recursion that runs a constant number of times is also considered O(1).

// Here c is a constant
for (int i = 1; i <= c; i++)
{
// some O(1) expressions
}

Linear Time Complexity O(n):

 The Time Complexity of a loop is considered as O(n) if the loop variables are
incremented/decremented by a constant amount.
 Linear time complexity, denoted as O(n), is a measure of the growth of the running time of
an algorithm proportional to the size of the input.
 In simple words, for an input of size n, the algorithm takes n steps to complete the
operation.
Example-1
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}
Example-2

for (int i = n; i > 0; i -= c) {


// some O(1) expressions
}

Quadratic Time Complexity O(n^c):

 The time complexity of nested loops is proportional to the number of times the innermost


statement is executed; for two nested loops over an input of size n, this is directly
proportional to the square of the input size.

for (int i = 1; i <= n; i += c) {

    for (int j = 1; j <= n; j += c) {

        // some O(1) expressions
    }
}

Logarithmic Time Complexity O(Log n):


 The time complexity of a loop is considered O(Log n) if the loop variable is
divided/multiplied by a constant amount in each iteration.
 Similarly, a recursive function is O(Log n) if the problem size is divided by a constant
factor in each recursive call.

Example-1

// Here c is a constant greater than 1
for (int i = 1; i <= n; i *= c) {

    // some O(1) expressions
}

Example-2

// Here c is a constant greater than 1
for (int i = n; i > 0; i /= c) {

    // some O(1) expressions
}

Example-3

// Recursive function: the problem size is halved in each call,
// so there are O(Log n) calls in total.
void recurse(int n)
{
    if (n <= 0)
        return;
    else {
        // some O(1) expressions
    }
    recurse(n / 2);
}
Logarithmic Time Complexity O(Log Log n):
 The time complexity of a loop is considered O(Log Log n) if the loop variable is
reduced/increased exponentially (for example, squared or square-rooted) in each iteration.

Example-1

// Here c is a constant greater than 1


for (int i = 2; i <= n; i = pow(i, c)) {
// some O(1) expressions
}

Example-2

// Here fun is sqrt or cuberoot or any other constant root


for (int i = n; i > 1; i = fun(i)) {
// some O(1) expressions
}

How to combine the time complexities of consecutive loops?


 When there are consecutive loops, we calculate time complexity as a sum of the time
complexities of individual loops.
Example-
for (int i = 1; i <= m; i += c) {
// some O(1) expressions
}
for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}

// Time complexity of above code is O(m) + O(n) which is O(m + n)


// If m == n, the time complexity becomes O(2n) which is O(n).

What is the time complexity of fun()?


int fun(int n)
{
int count = 0;
for (int i = 0; i < n; i++)
for (int j = i; j > 0; j--)
count = count + 1;
return count;
}
(A) Theta(n)
(B) Theta(n^2)
(C) Theta(n*log(n))
(D) Theta(n*(log(n*log(n))))

Answer: (B) Theta(n^2)

Explanation:
The time complexity can be calculated by counting the number of times the expression “count =
count + 1;” is executed. The expression is executed 0 + 1 + 2 + 3 + 4 + …. + (n-1) times.
Time complexity = Theta(0 + 1 + 2 + 3 + .. + n-1) = Theta(n*(n-1)/2) = Theta(n^2)
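A small empirical check (a sketch added for illustration, not part of the original notes): running fun() for a few values of n and comparing the returned count with n*(n-1)/2 confirms the Theta(n^2) analysis.

#include <stdio.h>

int fun(int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i; j > 0; j--)
            count = count + 1;
    return count;
}

int main(void) {
    for (int n = 1; n <= 6; n++)
        printf("n = %d  fun(n) = %-2d  n*(n-1)/2 = %d\n", n, fun(n), n * (n - 1) / 2);
    return 0;
}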
---------------------------------------------------------------------------------------------------------------------------
