
byjusexamprep.com

ALGORITHM

1 ASYMPTOTIC ANALYSIS

● An algorithm is a finite sequence of well-defined steps that solves a specific problem.
● Properties of an algorithm:
1. It must terminate in finite time and produce output.
2. It should produce correct output.
3. It is independent of any programming language.
4. Every step should perform some task.
5. Steps should be unambiguous.
6. The number of inputs can be zero or more; there should be at least one output.
● Steps are instructions built from fundamental operators, e.g. +, *, ÷, %, =, etc.
● Analysis of algorithms:
How do we check which of the available algorithms is the best?
1. Time: time complexity of the algorithm.
T(A) = C(A) + R(A)
C(A): compile time of A; depends on the compiler and software.
R(A): run time of A; depends on the processor and hardware.
2. Space: space complexity of the algorithm.
● Criteria for an algorithm-
1. Finiteness:
An algorithm must terminate in a finite amount of time.

Example:
i = 1;
while (1)
{
    i = i + 1;   /* executes infinitely, so this kind of solution is not allowed in an algorithm */
}

2. Definiteness:
Deterministic Algorithm:
Each step of the algorithm has exactly one unique, well-defined action; such an algorithm is called deterministic.

Non-Deterministic Algorithm:
Each step of the algorithm has a finite number of possible choices, and the algorithm is assumed to choose the correct one on the first attempt.
(Not possible to implement directly on a real computer.)

Steps to solve any problem:


1. Identifying the problem statement:
Example: arrange 4 queens Q1, Q2, Q3, Q4 on a 4×4 chess board.

2. Identifying constraints:
e.g. no two queens on the same row, the same column, or the same diagonal.
3. Design logic:
Depending on the characteristics of the problem, we can choose any one of the following design strategies:
(i) Divide & Conquer
(ii) Greedy method

(iii) Dynamic programming
(iv) Branch & Bound.
(v) Backtracking, etc.
4. Validation:
Most algorithms are validated using mathematical induction.
5. Analysis:
The process of comparing two algorithms w.r.t. time, space, number of registers, network bandwidth, etc. is called analysis.

A priori analysis vs. posteriori analysis:

A priori analysis:
→ Analysis done before executing the program.
→ Principle: frequency count of fundamental instructions.
   e.g. x = x + 1 is carried out only once, so its complexity is O(1) [order of 1].
→ It provides estimated values, which are uniform.
→ It is independent of CPU, OS and system architecture.

Posteriori analysis:
→ Analysis done after executing the program, e.g. actually running x = x + 1; and measuring it.
→ It provides exact values, which are non-uniform (they vary with the machine).
6. Implementation.
7. Testing & Debugging.

Apriori Analysis :
It is the determination of the order of magnitude of each statement.
Example 1:
main() {
    int x, y, z;   /* order of magnitude of this statement is 1: it executes once when the program runs */
    x = y + z;     /* executes once */
}
Time complexity O(1).

Example-2:
(loop shown in figure)
Space used:
i --------→ 1 word
n --------→ 1 word
∴ SC = constant = θ(1)

Example-3:
For (i = n; i ≥ 1; i--) -----------→ header executes about 2(n+1) times
Printf ("GRADE UP") -----------→ n times
∴ T.C = θ(n)
S.C = θ(1):
i --------→ 1 word
n --------→ 1 word
constant

Example-4:
For (i = 1; i ≤ n; i = i + 5)
printf ("GRADE UP") ; ---------→ runs ⌊(n – 1)/5⌋ + 1 times
Last value of i: 1 + K·5 ≤ n
K·5 ≤ n – 1, so K = ⌊(n – 1)/5⌋ and T.C = θ(n)

Example-5
For (i = n; i ≥ 1; i = i – 5)
print ("GRADEUP") -------------------→ ⌊(n – 1)/5⌋ + 1 times, exactly the same as the previous loop

T.C = θ(n)
For (i = 1; i ≤ n; i = i * 2)
printf ("GRADEUP"); ----------→ ⌊log2 n⌋ + 1 times

i takes the values 1, 2, 4, ..., 2^K with 2^K ≤ n
log2 2^K ≤ log2 n (taking log)
K log2 2 ≤ log2 n, so K = ⌊log2 n⌋

∴ the loop runs ⌊log2 n⌋ + 1 times

T.C. = θ(log2 n)

Example-6
For (i = n; i ≥ 1; i = i/2)
printf ("ME"); -----------------→ ⌊log2 n⌋ + 1 times
same as above

Example-7
For (i = 1; i ≤ n; i = i * 5)
printf ("ME") ; --------------------→ ⌊log5 n⌋ + 1 times

5^K ≤ n
log5 (5^K) ≤ log5 n
K log5 5 ≤ log5 n
K = ⌊log5 n⌋

Example 8:
For (i = 2; i ≤ n; i = i²)
printf ("ME") ;

i takes the values 2, 4, 16, 256, ..., i.e. 2^(2^K) ≤ n. Taking log twice:
2^K ≤ log2 n ⇒ K ≤ log2 log2 n,
so the loop runs ⌊log2 log2 n⌋ + 1 times and T.C = θ(log2 log2 n).

Example 9:

For (i = n; i ≥ 2; i = √i)
printf ("GRADE UP") --------------→ ⌊log2 log2 n⌋ + 1 times
same as above


Applying log twice gives the same count as Example 8.

If instead we have i = i³, the loop runs about ⌊log3 log3 n⌋ times (case 3).

Example 10:
main() {
    int x, y, z, i, j, k, n;
    for (i = 1; i <= n; i++) {           /* order of magnitude: n */
        for (j = 1; j <= i; j++) {       /* runs 1, 2, 3, ..., n times */
            for (k = 1; k <= 135; k++) { /* order of magnitude: 135 */
                x = y + z;
            }
        }
    }
}

i = 1:  j = 1              k runs 135 times
i = 2:  j = 1, 2           k runs 135 times for each j
i = 3:  j = 1, 2, 3        ...
...
i = n:  j = 1, 2, ..., n   k runs 135 times for each j

T = 1·135 + 2·135 + 3·135 + ... + n·135
  = 135 (1 + 2 + 3 + ... + n)
  = 135 · n(n+1)/2
  = O(n²)


Example 11:
1) for (i = 2; i ≤ 2^n; i = i²)
printf ("GRADE UP");

(Shortcut) Compare with the i = i² pattern above: the loop runs
⌊log2 log2 (2^n)⌋ + 1 = ⌊log2 n⌋ + 1 times.

TC = θ(log n)

2) for (i = n/2 ; i ≤ n ; i = i * 2)
printf ("GRADE UP");

i takes only the values n/2 and n:
2 times ← constant
T.C = θ(1)

3) for (i = 1 ; i ≤ 2^n ; i = i * 2)
printf ("GRADE UP");
i → 1, 2, 4, 8, ..., 2^K with 2^K ≤ 2^n ⇒ K ≤ n,
so the loop runs n + 1 times.

TC = θ(n)

Nested Loops –
(1) Independent Nested Loop :
The inner loop variable is independent of the outer loop variable.
E.g.-
for (i = 1; i ≤ n ; i + +)
    for (j = 1 ; j ≤ n² ; j = j * 2)

The value of j does not depend on i.
E.g.- (loops shown in figure)

The overall time complexity is the product of the number of times each loop runs:
⇒ n · 2 log2 n · log2 log2 n
TC = θ(n log n · log log n)


2) for (i = 1 ; i ≤ n² ; i = i * 2)           θ(log2 n²) → 2 log n times
       for (j = 1 ; j ≤ n² ; j + +)           θ(n²) → n² times
           for (K = n² ; K ≥ 1 ; K = K/2)     θ(log2 n²) → 2 log n times
               printf ("M.E") ;
Now,
TC = 2 log n · n² · 2 log n
TC = θ(n² · log² n)
Each loop variable here is independent of the others; e.g. the loop
for (i = n ; i ≥ 1 ; i = i/2)
on its own has time complexity θ(log n).
(2) Dependent Nested Loop :
The inner loop variable depends on the outer loop variable.
E.g.-
x = 0;
for (i = 1 ; i ≤ n ; i + +)
{
    for (j = 1 ; j ≤ i ; j + +)
        for (K = 1 ; K ≤ j ; K + +)
            x = x + 1;
}
a) What is the frequency count of the loop body?
b) If n = 10, what is the final value of x?
Expansion of the loop:
For a fixed i, the two inner loops execute Σ(j=1 to i) j = i(i+1)/2 times, so the
frequency count = Σ(i=1 to n) i(i+1)/2 = n(n+1)(n+2)/6 = θ(n³).
If n = 10: x = (10 · 11 · 12)/6 = 220.

Asymptotic analysis:

Why performance analysis?


There are many important things that should be taken care of, like user friendliness, modularity, security, maintainability, etc. Why worry about performance?
The answer is simple: we can have all of the above only if we have performance. So performance is like a currency with which we can buy all of the above. Another reason for studying performance is that speed is fun!
To summarize, performance == scale. Imagine a text editor that can load 1000 pages but can spell-check only 1 page per minute, or an image editor that takes 1 hour to rotate your image 90 degrees left, or ... you get it. If a software feature cannot cope with the scale of the tasks users need to perform, it is as good as dead.

Given two algorithms for a task, how do we find out which one is better?
One naive way is to implement both algorithms, run the two programs on your computer for different inputs, and see which one takes less time. There are many problems with this approach to the analysis of algorithms.
1) It might be possible that for some inputs the first algorithm performs better than the second, while for other inputs the second performs better.
2) It might also be possible that for some inputs the first algorithm performs better on one machine, while the second works better on another machine for some other inputs.

* Asymptotic Notation :
To compare two algorithms' rate of growth with respect to time & space we need asymptotic
notation.

Big-O Analysis of Algorithms


We can express algorithmic complexity with big-O notation. For a problem of size N:
● A constant-time function/method is "order 1" : O(1)
● A linear-time function/method is "order N" : O(N)
● A quadratic-time function/method is "order N squared" : O(N²)
Definition: let g and f be functions from the set of natural numbers to itself. The function f is said to be O(g) (read big-oh of g) if there exist a constant c and a natural number n0 such that f(n) ≤ c·g(n) for all n > n0.
Note: O(g) is a set!
Abuse of notation: f = O(g) is commonly written, but it really means f ∈ O(g).
Big-O notation gives us the upper-bound idea:
f(n) = O(g(n)) if there exist a positive integer n0 and a positive constant c such that f(n) ≤ c·g(n) ∀ n ≥ n0.

Diagram for Big oh notation:

Note :
Even though n², n³, n⁴ are all upper bounds for
t(n) = n² + n + 1, we have to take the least upper bound only.
∴ t(n) = O(n²)

Shortcut :
If t(n) = a0 + a1·n + a2·n² + ...... + am·n^m (am ≠ 0)
then t(n) = O(n^m)

The steps for big-O runtime analysis are as follows:

1. Find what the input is and what n represents.
2. Find the maximum number of operations the algorithm performs in terms of n.
3. Keep only the highest-order term and drop the lower-order terms.
4. Eliminate all constant factors.

Example 1:
f(n) = n² log n ; g(n) = n (log n)^10. Which of the following is true?
A. f(n) = O(g(n)), g(n) ≠ O(f(n))
B. f(n) ≠ O(g(n)), g(n) = O(f(n))
C. f(n) = O(g(n)), g(n) = O(f(n))
D. f(n) ≠ O(g(n)), g(n) ≠ O(f(n))
Ans. B

Explanation:
f(n) = n · n log n ; g(n) = n log n · (log n)^9
Since n grows faster than (log n)^9, f(n) > g(n) for large n.
The lower-growth function is big-O of the higher one:
⇒ g(n) = O(f(n)) but f(n) ≠ O(g(n))

Big – Omega (Ω) :

f(n) is Ω(g(n)) iff ∃ some C > 0 and K ≥ 0 such that f(n) ≥ C·g(n) ; ∀ n ≥ K.

Ex:
If f(n) = n² + n + 1, then f(n) = Ω( )
→ n² ≥ n²
n² + n ≥ n²
n² + n + 1 ≥ 1·n² ; ∀ n ≥ 0
∴ n² + n + 1 = Ω(n²)


Fig. Omega notation


→ n² ≥ n
n² + n ≥ n
n² + n + 1 ≥ 1·n ; ∀ n ≥ 0
n² + n + 1 = Ω(n)
Among the candidate lower bounds, always take the greatest one.

Note :
Even though n², n are both lower bounds for f(n), you have to take the greatest lower bound only.
Shortcut :
If f(n) = a0 + a1·n + a2·n² + ...... + am·n^m (am ≠ 0)
then f(n) = Ω(n^m)

Little ω asymptotic notation


Definition : Let f(n) and g(n) be two functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c·g(n) ≥ 0 for every integer n ≥ n0.
Here f has a strictly higher growth rate than g, so the difference between Ω and ω lies in the definitions: in the case of Big Omega, f(n) = Ω(g(n)) means 0 ≤ c·g(n) ≤ f(n) holds for some constant c, but in the case of little omega, 0 ≤ c·g(n) < f(n) must hold for every constant c > 0.
The relationship between Big Omega (Ω) and Little Omega (ω) is similar to that between Big-O and little-o, except that now we are looking at lower bounds: little omega (ω) denotes a lower bound that is not asymptotically tight, whereas Big Omega (Ω) may represent the exact order of growth.
And, f(n) ∈ ω(g(n)) if and only if g(n) ∈ ο(f(n)).

In mathematical relation,
if f(n) ∈ ω(g(n)) then,
lim f(n)/g(n) = ∞
n→∞

Example-1
f(n) = n
g(n) = n²
Is f(n) = Ω(g(n))? We would need n ≥ c·n² for all n ≥ n0, which fails for every c > 0 once n is large enough, so f(n) ≠ Ω(g(n)).

Example-2
f(n) = n – 10
g(n) = n + 10
f(n) = Ω(g(n)): n – 10 ≥ c·(n + 10) holds for c = 1/2 and all n ≥ 30.

Theta (θ) :
f(n) is θ(g(n)) iff f(n) is O(g(n)) and f(n) is Ω(g(n)).

f(n) = θ(g(n)) ⇔ ∃ C1, C2 > 0 and K ≥ 0 such that
C1·g(n) ≤ f(n) ≤ C2·g(n) ; ∀ n ≥ K

Example-1
If f(n) = n² + n + 1 then f(n) = θ( )
→ n² + n + 1 = O(n²) ; ∀ n ≥ 1 and for C2 = 3
&
n² + n + 1 = Ω(n²) ; ∀ n ≥ 0 and for C1 = 1

1·n² ≤ n² + n + 1 ≤ 3·n² ; ∀ n ≥ 1
 ↑              ↑         ↑
 C1             C2        K
∴ n² + n + 1 = θ(n²)


Fig. theta notation

Example-2
f(n) = n – 10
g(n) = n + 10
Upper bound: f(n) ≤ c1·g(n): n – 10 ≤ c1(n + 10) ∀ n ≥ n0 (take c1 = 1)
Lower bound: f(n) ≥ c2·g(n): n – 10 ≥ c2(n + 10) ∀ n ≥ n0 (take c2 = 1/2, n0 = 30)
∴ n – 10 = θ(n + 10) for c1 = 1, c2 = 1/2, n0 = 30

Properties of Asymptotic:
1. Reflexivity:
If f(n) is given then,f(n) = O(f(n))
Example:
If f(n) = n3 ⇒ O(n3)
Similarly,
f(n) = Ω(f(n))
f(n) = Θ(f(n))
2. Symmetry:
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Example:
If f(n) = n2 and g(n) = n2 then f(n) = Θ(n2) and g(n) = Θ(n2)
3. Transitivity:
f(n) = O(g(n)) and g(n) = O(h(n)) ⇒ f(n) = O(h(n))
Example:
If f(n) = n, g(n) = n2 and h(n) = n3
⇒ n is O(n2) and n2 is O(n3) then n is O(n3)
4. Transpose Symmetry:
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
Example:
If f(n) = n and g(n) = n2 then n is O(n2) and n2 is Ω(n)
5. Since these properties hold for asymptotic notations, analogies can be drawn between the comparison of functions f(n) and g(n) and the comparison of two real numbers a and b.
• g(n) = O(f(n)) is similar to a ≤ b
• g(n) = Ω(f(n)) is similar to a ≥ b
• g(n) = Θ(f(n)) is similar to a = b
• g(n) = o(f(n)) is similar to a < b
• g(n) = ω(f(n)) is similar to a > b
6. max(f(n), g(n)) = Θ(f(n) + g(n))
7. O(f(n)) + O(g(n)) = O(max(f(n), g(n)))

Difference Between Big Oh, Big Omega and Big Theta :
1. Big O is like ≤ : the rate of growth of the algorithm is less than or equal to a specific value.
   Big Ω is like ≥ : the rate of growth is greater than or equal to a specified value.
   Big Θ is like = : the rate of growth is equal to a specified value.
2. Big O represents the upper bound of an algorithm; the asymptotic upper bound is given by Big O notation.
   Big Ω represents the lower bound of an algorithm; the asymptotic lower bound is given by Omega notation.
   Big Θ bounds the function from above and below; the exact asymptotic behaviour is given by theta notation.
3. Big Oh (O) – worst case; Big Omega (Ω) – best case; Big Theta (Θ) – average case.
4. Big-O is a measure of the longest amount of time the algorithm could possibly take to complete; Big-Ω measures the shortest; Big-Θ lies between the two.
5. Mathematically:
   Big Oh:    0 ≤ f(n) ≤ c·g(n) for all n ≥ n0
   Big Omega: 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0
   Big Theta: 0 ≤ c2·g(n) ≤ f(n) ≤ c1·g(n) for all n ≥ n0

Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function, defined on the natural numbers, that satisfies the recurrence. There are three methods:
A. Substitution Method
B. Recurrence Tree Method
C. Master Theorem

Recursive Algorithm :
int fact (int n)
{
    if (n == 0 || n == 1)
        return 1 ; // Base condition
    else
        return n * fact (n – 1) ;
}
If we input n = 5, the calls unwind as fact(5) → fact(4) → fact(3) → fact(2) → fact(1).

Notes :
1. Time complexity of a recursive algorithm = number of function calls.
∴ Time complexity of fact (n) = O(n)
2. Space complexity of a recursive algorithm = depth of the recursion tree
= number of activation records.
3. Space complexity of fact (n) = O(n – 1) = O(n)
= 0 (n)
Ex : Find the time complexity of the recursive function for the Fibonacci sequence.
int fib (int n)
{
    if (n <= 1)
        return 1 ; // base condition
    else
        return fib (n – 1) + fib (n – 2) ;
}

A. O(n²)
B. O(2^n)
C. O(n)
D. O(n log n)
Ans. B
Here we take n = 5.
Note :
For small values of n, fib(n) behaves like O(n²); for large values of n, fib(n) is O(2^n).

Since our analysis is only for large values of n, the time complexity of fib(n) is O(2^n).

Note :

The number of function calls for input size n follows the Fibonacci sequence:
calls(n) = 2·fib(n+1) – 1
e.g. n = 5 : function calls = 15
= 16 – 1
= 2 × 8 – 1
= 2·fib(6) – 1
Note :
The number of additions performed for input size n in fib(n) = fib(n+1) – 1
e.g.
n = 5 : additions = 7
= 8 – 1
= fib(6) – 1

Function calls = 15 (total number of nodes in the recursion tree)

Total additions = 7

A. Substitution Method: We make a guess for the solution and then we use mathematical
induction to prove the guess is correct or incorrect.
Solve the equation by Substitution Method.
Example-1
long power (long x, long n)
{
    if (n == 0) return 1;
    if (n == 1) return x;
    if ((n % 2) == 0)
        return power (x*x, n/2);
    else
        return power (x*x, n/2) * x;
}
T(0) = c1
T(1) = c2
T(n) = T(n/2) + c3
(Assume n is a power of 2)
T(n) = T(n/2) + c3          🡺 T(n/2) = T(n/4) + c3
     = T(n/4) + c3 + c3
     = T(n/4) + 2c3          🡺 T(n/4) = T(n/8) + c3
     = T(n/8) + c3 + 2c3
     = T(n/8) + 3c3          🡺 T(n/8) = T(n/16) + c3
     = T(n/16) + 4c3
     = T(n/32) + 5c3
     = .....
     = T(n/2^k) + k·c3
T(n) = T(n/2^k) + k·c3
We want to get rid of T(n/2^k). We get a relation we can solve directly when we reach T(1), i.e. when n/2^k = 1, so k = lg n:
T(n) = T(n/2^(lg n)) + c3·lg n
     = T(1) + c3·lg n
     = c2 + c3·lg n
     = Θ(lg n)

Example 2- For the given program, find the recurrence relation.
int S[] = {4, 6, 8, 10, 14, 18, 20};
int binsearch (int low, int high, int S[], int x)
{
    if (low <= high)
    {
        int mid = (low + high) / 2;
        if (x == S[mid])
            return mid;
        else if (x < S[mid])
            return binsearch (low, mid - 1, S, x);
        else
            return binsearch (mid + 1, high, S, x);
    }
    else
        return 0;
}
For binsearch(n), how many times is binsearch called in the worst case?
T(0) = 1
T(1) = 2
T(2) = T(1) + 1 = 3
T(4) = T(2) + 1 = 4
T(8) = T(4) + 1 = 4 + 1 = 5

So the recurrence relation can be written as
T(n) = T(n/2) + c, where c is a constant.
Solving this recurrence is similar to Example 1.
Example 3- Consider the following code.
fun (n)
{
    if (n > 1)
    {
        printf ("%d", n);    /* this statement takes constant time */
        return fun (n - 1);  /* recursive call; n is decremented by 1 each time */
    }
}
i). Find the recurrence relation.
ii). Compute the time complexity.
Solution:
i) Recurrence Relation
T (n) = T (n-1) +1 and T (1) = θ (1).

ii) For the time complexity-
T(n) = T(n–1) + 1
     = (T(n–2) + 1) + 1 = T(n–2) + 2
     = T(n–3) + 3
     = T(n–4) + 4
     = T(n–k) + k
where k = n – 1:
T(n–k) = T(1) = θ(1)
T(n) = θ(1) + (n – 1) = n
∴ T(n) = θ(n)
Example 4- Consider the Recurrence
T (n) = 1 if n=1
T(n) = 2T (n-1) if n>1
Solution:
T(n) = 2T(n–1)
     = 2[2T(n–2)] = 2²T(n–2)
     = 2²[2T(n–3)] = 2³T(n–3)
     = 2³[2T(n–4)] = 2⁴T(n–4)
Repeat the procedure i times:
T(n) = 2^i T(n–i)
Put n – i = 1, i.e. i = n – 1:
T(n) = 2^(n–1) T(1)
     = 2^(n–1) · 1    {T(1) = 1, given}
     = 2^(n–1) = O(2^n)

B. Recurrence Tree Method: In this method, we draw a recurrence tree and calculate the time
taken by every level of tree. Finally, we sum the work done at all levels. To draw the
recurrence tree, we start from the given recurrence and keep drawing till we find a pattern
among levels.
1. Recursion Tree Method is a pictorial representation of an iteration method which is in the
form of a tree where at each level nodes are expanded.
2. In general, we consider the second term in recurrence as root.
3. It is useful when the divide & Conquer algorithm is used.
4. It is sometimes difficult to come up with a good guess. In Recursion tree, each root and
child represent the cost of a single subproblem.
5. We sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all per-level costs to determine the total cost of all levels of the recursion.

6. A Recursion Tree is best used to generate a good guess, which can be verified by the
Substitution Method.
Example-1
Consider T(n) = 2T(n/2) + n².
We have to obtain the asymptotic bound using the recursion tree method.
Solution: In the recursion tree for this recurrence, level i has 2^i nodes each costing (n/2^i)², so the per-level costs are n², n²/2, n²/4, ..., a decreasing geometric series dominated by the root.

T(n) = θ(n²)

Example 2: Consider the following recurrence


T (n) = 4T (n/2) + n
Obtain the asymptotic bound using recursion tree method.

Solution: In the recursion tree for this recurrence, the per-level costs are
n + 2n + 4n + ...... for log2 n levels

= n (1 + 2 + 4 + ...... , log2 n terms)
= n · (2^(log2 n) – 1) ≈ n · n

∴ T(n) = θ(n²)
C. Master Method
The Master Method is used for solving recurrences of the following type:
T(n) = a·T(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is a function.
Let T(n) be defined on the non-negative integers by the recurrence
T(n) = a·T(n/b) + f(n)

In the function to the analysis of a recursive algorithm, the constants and function take on the
following significance:
o n is the size of the problem.
o a is the number of subproblems in the recursion.
o n/b is the size of each subproblem. (Here it is assumed that all subproblems are essentially
the same size.)
o f(n) is the cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of combining the solutions to the subproblems.
o It is not always possible to bound the function as required, so we form three cases which tell us what kind of bound we can apply to f(n).

T(n) = Θ(n^(log_b a))           if f(n) = O(n^(log_b a – ε)) for some ε > 0
T(n) = Θ(n^(log_b a) · log n)   if f(n) = Θ(n^(log_b a))
T(n) = Θ(f(n))                  if f(n) = Ω(n^(log_b a + ε)) for some ε > 0, AND a·f(n/b) ≤ c·f(n) for some c < 1 and all large n

Case 1: If f(n) = O(n^(log_b a – ε)) for some constant ε > 0, then it follows that:
T(n) = Θ(n^(log_b a))

Example-1
T(n) = 8T(n/2) + 1000n²; apply the master theorem.
Solution:
Compare T(n) = 8T(n/2) + 1000n² with
T(n) = a·T(n/b) + f(n), a ≥ 1, b > 1:
a = 8, b = 2, f(n) = 1000n², log_b a = log2 8 = 3
Put all the values in f(n) = O(n^(log_b a – ε)):
1000n² = O(n^(3–ε))
If we choose ε = 1, we get: 1000n² = O(n^(3–1)) = O(n²)
Since this holds, the first case of the master theorem applies to the given recurrence relation, thus:
T(n) = Θ(n^(log_b a))
Therefore: T(n) = Θ(n³)
Case 2: If it is true, for some constant k ≥ 0, that:
f(n) = Θ(n^(log_b a) · log^k n), then it follows that:
T(n) = Θ(n^(log_b a) · log^(k+1) n)

Example-2
T(n) = 2T(n/2) + 10n; solve the recurrence using the master method.
Compare the given problem with T(n) = a·T(n/b) + f(n), a ≥ 1, b > 1:
a = 2, b = 2, k = 0, f(n) = 10n, log_b a = log2 2 = 1
Put all the values in f(n) = Θ(n^(log_b a) · log^k n); we will get
10n = Θ(n¹) = Θ(n), which is true.
Therefore,
T(n) = Θ(n^(log_b a) · log^(k+1) n)
     = Θ(n log n)
Case 3: If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and it is also true that
a·f(n/b) ≤ c·f(n) for some constant c < 1 and large values of n,
then: T(n) = Θ(f(n))

Example-3
Solve the recurrence relation: T(n) = 2T(n/2) + n²
Compare the given problem with T(n) = a·T(n/b) + f(n), a ≥ 1, b > 1:
a = 2, b = 2, f(n) = n², log_b a = log2 2 = 1
Put all the values in f(n) = Ω(n^(log_b a + ε)) ....... (Eq. 1)
n² = Ω(n^(1+ε)); put ε = 1, then the equality holds:
n² = Ω(n^(1+1)) = Ω(n²)
Now we also check the second (regularity) condition:
a·f(n/b) = 2·(n/2)² = n²/2 ≤ c·n², which is true if we choose c = 1/2.
So it follows that T(n) = Θ(f(n)):

T(n) = Θ(n²)
Ex:
Linked questions
int too (int n, int r)
{
    if (n > 0)
        return ((n % r) + too (n / r, r));
    else
        return 0;
}

1. What is the return value of too (345, 10)


A. 345
B. 10
C. 12
D. 9

2. What is the return value of too (513, 2)
A. 2
B. 3
C. 6
D. 8
Ex :
int DoSomething (int n)
{
    if (n <= 2)
        return 1 ;
    else
        return (DoSomething (floor (sqrt (n))) + n);
}
1. Find time complexity :
A. O (log2 n)
B. O (log2 log2 n)
C. O (n log2 n)
D. O (n)

Case (i) Examples :
(These examples use the extended form T(n) = a·T(n/b) + θ(n^k · (log n)^p).)

Ex :
a = 16, b = 4, k = 1, p = 0
From case (i): is a > b^k? 16 > 4¹ → yes
⇒ T(n) = θ(n^(log_b a)) = θ(n^(log4 16)) = θ(n²)
Ex :
a = 4, b = 2, k = 0, p = 1
Is a > b^k? 4 > 2⁰ → yes
⇒ T(n) = θ(n^(log2 4)) = θ(n²)
Ex :
(recurrence shown in figure) b = 2, k = 0, p = 1
The case (i) condition a > b^k again holds (yes), so T(n) = θ(n^(log_b a)).
Ex :
a = 3, b = 2, k = 1, p = 0
∴ Is a > b^k? 3 > 2¹ → yes
⇒ T(n) = θ(n^(log2 3))
Case (ii) examples :
Ex :
a = 2, b = 2, k = 2, p = 0
Is a < b^k? 2 < 2² → yes
p = 0 (≥ 0), case (ii)(a)
⇒ T(n) = θ(n^k · log^p n) = θ(n²)
Ex :
a = 6, b = 3, k = 2, p = 1
Is a < b^k? 6 < 3² → yes
p = 1 (≥ 0), case (ii)(a)
⇒ T(n) = θ(n² log n)

Ex :
a = 4, b = 2, k = 2, p = 0
Is a = b^k? 4 = 2² → yes
p = 0 (> –1), case (iii)(a)
⇒ T(n) = θ(n^k · log^(p+1) n) = θ(n² log n)
Ex :
a = 3, b = 3, k = 1, p = 0
Is a = b^k? 3 = 3¹ → yes
p = 0 (> –1), case (iii)(a)
⇒ T(n) = θ(n log n)

Ex :
a = 2, b = 2, k = 1, p = –1
Is a = b^k? 2 = 2¹ → yes
p = –1 (= –1), case (iii)(b)
⇒ T(n) = θ(n log log n)
Ex :
a = 8, b = 2, k = 3, p = –2
Is a = b^k? 8 = 2³ → yes
p = –2 (< –1), case (iii)(c)
⇒ T(n) = θ(n³)

Ex:
(recurrence shown in figure) Which of the following is FALSE?

A. T(n) = O(n²)
B. T(n) = O(n log n)
C. T(n) = θ(n log n)
D. T(n) = Ω(n²)

SPECIAL CASES IN MASTER THEOREM :

1) (recurrence shown in figure)
Since a = 0.5 (< 1), the requirement a ≥ 1 fails.
So, we can't apply the master theorem.

2) (recurrence shown in figure)
Here, 'a' can't be a function of n.
So, we can't apply the master theorem.

3) (recurrence shown in figure)
A negative f(n) is not allowed in the master theorem. So, we can't apply it.

4) If f(n) is an exponential function, it appears directly in the answer.
Ans : O(2^n)

5) Similarly:
Ans : O(n!)

****
