
UNIT – I

Introduction: Algorithm, Performance Analysis - Space complexity, Time complexity, Asymptotic Notations - Big oh notation, Omega notation, Theta notation and Little oh notation.

Divide and Conquer: General method, Applications-Binary search, Quick sort, Merge sort,
Strassen’s matrix multiplication.

Divide and Conquer:

 Strategy for solving a problem: an approach or design for solving a computational problem.
 [Strategies include Divide and Conquer, Greedy Method, Backtracking, Dynamic Programming and Branch and Bound.]
 Only through practice can we judge which strategy will work for a given problem.
 Divide-and-conquer breaks a problem into subproblems that are similar to the original problem, recursively solves the subproblems, and finally combines the solutions to the subproblems to solve the original problem.
 Because divide-and-conquer solves subproblems recursively, each subproblem must be smaller than the original problem, and there must be a base case for subproblems.
 Generally, divide-and-conquer algorithms have three parts −
o Divide the problem into a number of sub-problems that are smaller instances of the same
problem.
o Conquer the sub-problems by solving them recursively. If they are small enough, solve
the sub-problems as base cases.
o Combine the solutions to the sub-problems into the solution for the original problem.
 Example:

[Figure: steps of a divide-and-conquer algorithm (divide, conquer, combine), followed by two more recursive steps of the approach.]

 Since divide-and-conquer creates at least two subproblems, a divide-and-conquer algorithm makes multiple recursive calls.

Control Abstraction for Divide and Conquer or General Method:


DAC(P)
{
    if (Small(P))
    {
        return S(P);
    }
    else
    {
        divide P into smaller instances P1, P2, …, Pk, k ≥ 1;
        apply DAC(P1), DAC(P2), …, DAC(Pk) to each of these subproblems;
        return COMBINE(DAC(P1), DAC(P2), …, DAC(Pk));
    }
}

 Small(P) is a Boolean-valued function which determines whether the input size is small enough that the answer can be computed without splitting. If so, the function S is invoked; otherwise, the problem P is divided into smaller subproblems.

Pros and cons of Divide and Conquer Approach:
 The divide and conquer approach supports parallelism, as the subproblems are independent. Hence an algorithm designed using this technique can run on a multiprocessor system or on different machines simultaneously.
 It efficiently uses cache memory without occupying much space because it solves simple
subproblems within the cache memory instead of accessing the slower main memory.
 In this approach most of the algorithms are designed using recursion, hence memory-management overhead is high. A recursive function uses the call stack, where each call's state must be stored; if the recursion goes deeper than the available stack space, the program may even crash with a stack overflow.

Application of Divide and Conquer Approach:


1. Binary Search
2. Sorting (Quick sort, Merge sort)
3. Strassen’s Matrix Multiplication
4. Finding the maximum and minimum of a sequence of numbers
5. Closest Pair of Points: given n points in a metric space, find the pair of points whose distance is minimal.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm
7. Karatsuba algorithm for fast multiplication: multiplies two n-digit numbers by recursively reducing the problem to multiplications of numbers with roughly half as many digits.

Recurrence relation of DAC:

T(n) = { g(n)                                    if n is small
       { T(n1) + T(n2) + … + T(nk) + f(n)        otherwise

Where,
 T(n) is the time for DAC on any input of size n
 g(n) is the time to compute the answer directly for small inputs.
 The function f(n) is the time for dividing P and combining the solutions to subproblems.
 For divide-and-conquer algorithms that produce subproblems of the same type as the original problem, it is very natural to first describe such algorithms using recursion.

The complexity of many divide-and-conquer algorithms is given by recurrences of the form

T(n) = { T(1)              n = 1
       { aT(n/b) + f(n)    n > 1

 where a and b are known constants. We assume that T(1) is known and n is a power of b (i.e., n = b^k).

One of the methods for solving any such recurrence relation is called the substitution method.

Recurrence Relations:
Example – 1:

Algorithm (with frequency counts per statement):

void Test(int n) {
    if (n > 0)
    {
        Write n;       // 1 step per call; executed n times
        Test(n-1);     // 1 function call; invoked n+1 times in total
    }
}

Frequency count: f(n) = 2n + 1 = O(n)
(The recursion tree / tracing tree shows the chain of n+1 calls.)

Preparation of the recurrence relation (the function name used is T(n)):

T(n)   ←  void Test(int n) {
              if (n > 0)
              {
1      ←          Write n;
T(n-1) ←          Test(n-1);
              }
          }

T(n) = T(n-1) + 1

Why is the conditional statement ignored? It takes constant time, which does not change the total order of growth.

T(n) = { 1              n = 0
       { T(n-1) + 1     n > 0

Solving recurrence relations:

There are four methods for solving a recurrence:
 Substitution Method
 Recursion Tree Method
 Iteration Method
 Master Method

Substitution Method:
T(n) = T(n-1) + 1
= [T(n-2) + 1] + 1
= T(n-2) + 2
= [T(n-3) + 1] + 2
= T(n-3) + 3
….. continue for k times
T(n) = T(n-k) + k
Assume, at some point n-k = 0, i.e., n = k
T(n) = T(n-k) + k
= T(n-n) + n for n = k
T(n) = 1 + n
= O(n) - Linear Class
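A quick numeric check of this closed form (illustrative only):

def T(n):                    # the recurrence T(n) = T(n-1) + 1 with T(0) = 1
    return 1 if n == 0 else T(n - 1) + 1

assert all(T(n) == n + 1 for n in range(200))    # matches the closed form 1 + n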

Example – 2: Type of recurrence function: decreasing function

Algorithm (with frequency counts per statement):

void Test(int n) {                 // T(n)
    if (n > 0)                     // 1
    {
        for (i = 0; i < n; i++)    // n+1
        {
            Write i;               // n
        }
        Test(n-1);                 // T(n-1)
    }
}

T(n) = T(n-1) + 2n + 2. Do not carry the exact term 2n + 2; frame it as an asymptotic notation, since it belongs to the linear class.

Rewrite as T(n) = T(n-1) + n

T(n) = { 1              n = 0
       { T(n-1) + n     n > 0

Recursion tree Method:

Total time = 0 + 1 + 2 + … + (n-1) + n
           = n(n+1)/2
T(n) = n(n+1)/2 = Θ(n^2)

Back Substitution / Induction Method:

T(n) = T(n-1) + n                                    ---- 1
     = [T(n-2) + n - 1] + n
     = T(n-2) + (n-1) + n                            ---- 2 [Don't add terms]
     = T(n-3) + (n-2) + (n-1) + n                    ---- 3 [Don't add terms]

….. continue for k times

T(n) = T(n-k) + (n-(k-1)) + … + (n-2) + (n-1) + n    ---- 4 [Don't add terms]

Assume, at some point, n - k = 0; therefore n = k. Substitute in 4:

T(n) = T(n-n) + (n-(n-1)) + … + (n-2) + (n-1) + n
     = T(0) + 1 + 2 + 3 + … + (n-2) + (n-1) + n
T(n) = 1 + n(n+1)/2
     = Θ(n^2)

 The computing time of divide and conquer is described by recurrence relations of this kind.

Example – 3: Type of recurrence function: decreasing function

Algorithm (with costs per statement):

void Test(int n) {                    // T(n)
    if (n > 0)
    {
        for (i = 1; i < n; i = i*2)   // (i must start at 1 for the loop to terminate)
        {
            Write i;                  // log n
        }
        Test(n-1);                    // T(n-1)
    }
}

T(n) = T(n-1) + log n

T(n) = { 1                  n = 0
       { T(n-1) + log n     n > 0

Recursion tree Method:

Total time T(n) = log n + log(n-1) + log(n-2) + … + log(1)
                = log [n · (n-1) · (n-2) · … · 2 · 1]
                = log n!

 We have only established an upper bound here, not a tight bound.
 For n! the upper bound is O(n^n).
 For log n! the upper bound is O(n log n).

Write O(n log n); do not write Θ(n log n). [Check the asymptotic-notation notes on n! and log n!]

Back Substitution / Induction Method:

T(n) = T(n-1) + log n                                             ---- 1
     = [T(n-2) + log(n-1)] + log n
     = T(n-2) + log(n-1) + log n                                  ---- 2 [Don't add terms]
     = T(n-3) + log(n-2) + log(n-1) + log n                       ---- 3 [Don't add terms]

….. continue for k times

T(n) = T(n-k) + log(n-(k-1)) + … + log(n-2) + log(n-1) + log n    ---- 4 [Don't add terms]

Assume, at some point, n - k = 0; therefore n = k. Substitute in 4:

T(n) = T(n-n) + log(n-(n-1)) + … + log(n-2) + log(n-1) + log n
     = T(0) + log 1 + log 2 + log 3 + … + log(n-2) + log(n-1) + log n
T(n) = 1 + log n!

 We have only established an upper bound, not a tight bound.
 For n! the upper bound is O(n^n).
 For log n! the upper bound is O(n log n).

Write O(n log n); do not write Θ(n log n). [Check previous notes]

From all previous examples:

T(n) = T(n-1) + 1  O(n)
T(n) = T(n-1) + n  O(n^2)
T(n) = T(n-1) + logn  O(nlogn)

In general, for T(n) = T(n-1) + f(n), the solution is about n multiplied by the second term, so:

T(n) = T(n-1) + n^2  O(n^3)

T(n) = T(n-2) + 1  about n/2 steps  O(n)

T(n) = T(n-100) + n  about n/100 steps  O(n^2)

T(n) = 2T(n-1) + 1  here T(n-1) has a coefficient; how do we handle that?

Type of recurrence function: decreasing function with multiple recursive calls

Algorithm (with costs per statement):

void Test(int n) {          // T(n)
    if (n > 0)
    {
        Write n;            // 1
        Test(n-1);          // T(n-1)
        Test(n-1);          // T(n-1)
    }
}

T(n) = 2T(n-1) + 1

Recurrence relation: T(n) = { 1               n = 0
                            { 2T(n-1) + 1     n > 0

Recursion tree Method:

Total time T(n) = 1 + 2 + 2^2 + 2^3 + … + 2^k    [geometric progression series]
                = 2^(k+1) - 1

 Assume at some point n - k = 0, so n = k
 T(n) = 2^(n+1) - 1
 Θ(2^n)

Geometric Progression:
 a + ar + ar^2 + ar^3 + ar^4 + … + ar^k = a(r^(k+1) - 1)/(r - 1)   [r is the common ratio, a is the first term]
 1 + 2 + 2^2 + 2^3 + … + 2^k: here a = 1, r = 2, so 1·(2^(k+1) - 1)/(2 - 1)  2^(k+1) - 1

Back Substitution / Induction Method:

T(n) = 2T(n-1) + 1                                  ---- 1
     = 2[2T(n-2) + 1] + 1
     = 2^2 T(n-2) + 2 + 1                           ---- 2 [Don't add terms]
     = 2^2 [2T(n-3) + 1] + 2 + 1
     = 2^3 T(n-3) + 2^2 + 2 + 1                     ---- 3 [Don't add terms]

….. continue for k times

T(n) = 2^k T(n-k) + 2^(k-1) + … + 2^2 + 2 + 1       ---- 4 [Don't add terms]

Assume, at some point, n - k = 0; therefore n = k. Substitute in 4:

T(n) = 2^n T(n-n) + 2^(n-1) + … + 2^2 + 2 + 1
     = 2^n T(0) + 2^(n-1) + … + 2^2 + 2 + 1         Since T(0) = 1
     = 2^n + 2^(n-1) + … + 2^2 + 2 + 1
 T(n) = 2^(n+1) - 1
 Θ(2^n) or O(2^n)
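A small sanity check (illustrative): counting the calls made by the double-recursion version of Test matches 2^(n+1) - 1.

def calls(n):                  # total invocations of Test(n) with two recursive calls
    if n <= 0:
        return 1               # the call that fails the n > 0 test
    return 1 + 2 * calls(n - 1)

assert all(calls(n) == 2**(n + 1) - 1 for n in range(15))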

Next: the Master Theorem. Results so far:


T(n) = T(n-1) + 1  O(n)
T(n) = T(n-1) + n  O(n^2)
T(n) = T(n-1) + logn  O(nlogn)
T(n) = 2T(n-1) + 1  O(2^n)

Master Theorem for decreasing functions:

T(n) = T(n-1) + 1  O(n)
T(n) = T(n-1) + n  O(n^2)
T(n) = T(n-1) + logn  O(nlogn)
T(n) = 2T(n-1) + 1  O(2^n)
T(n) = 3T(n-1) + 1  O(3^n)
T(n) = 2T(n-1) + n  O(n·2^n)
T(n) = 3T(n-1) + n  O(n·3^n)

General Observation:
 T(n) = aT(n-b) + f(n) is the general form of the recurrence relation, with a > 0, b > 0, and f(n) = O(n^k) where k ≥ 0.

Then T(n) has a solution, with 3 cases:

Case   Values of a & b   Solution
I      a < 1, b = 1      O(n^k)  O(f(n))
II     a = 1, b = 1      O(n^(k+1)) or O(n · n^k)  O(n · f(n))
III    a > 1, b = 1      O(n^k · a^n)  O(f(n) · a^n)
       a > 1, b > 1      O(n^k · a^(n/b))  O(f(n) · a^(n/b))

Recurrence relation     a  b  f(n)             Solution                        Format
T(n) = T(n-1) + 1       1  1  1 [n^0, k = 0]   O(n · n^0) = O(n)               O(n · f(n))
T(n) = T(n-1) + n       1  1  n [n^1, k = 1]   O(n · n) = O(n^2)               O(n · f(n))
T(n) = T(n-1) + logn    1  1  logn             O(n · logn) = O(nlogn)          O(n · f(n))
T(n) = 2T(n-1) + 1      2  1  1                O(n^0 · 2^n) = O(2^n)           O(n^k · a^n)
T(n) = 3T(n-1) + 1      3  1  1                O(n^0 · 3^n) = O(3^n)           O(n^k · a^n)
T(n) = 2T(n-1) + n      2  1  n                O(n · 2^n) = O(n·2^n)           O(n^k · a^n)
T(n) = 2T(n-2) + n      2  2  n                O(n · 2^(n/2)) = O(n·2^(n/2))   O(n^k · a^(n/b))
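A small Python sketch of this observation (an illustrative helper, not from the text): given a, b and the exponent k of f(n) = n^k, it returns the solution class as a string.

def decreasing_master(a, b, k):
    # Solution of T(n) = a*T(n-b) + O(n^k), with a > 0, b > 0, k >= 0 (sketch).
    if a < 1:
        return "O(n^%d)" % k                      # Case I: O(f(n))
    if a == 1:
        return "O(n^%d)" % (k + 1)                # Case II: O(n * f(n))
    return "O(n^%d * %d^(n/%d))" % (k, a, b)      # Case III: O(f(n) * a^(n/b))

print(decreasing_master(1, 1, 1))   # T(n) = T(n-1) + n   ->  O(n^2)
print(decreasing_master(2, 1, 0))   # T(n) = 2T(n-1) + 1  ->  O(n^0 * 2^(n/1)) = O(2^n)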

Recurrence Relation for a dividing function:

(In recursion, n can shrink by: n-1 subtraction, n/2 dividing, sqrt(n) square root.)

Algorithm (with costs per statement):

void Test(int n) {       // T(n)
    if (n > 1)
    {
        Write n;         // 1
        Test(n/2);       // T(n/2)
    }
}

T(n) = T(n/2) + 1

Recurrence relation: T(n) = { 1             n = 1
                            { T(n/2) + 1    n > 1

Recursion tree Method:

Total time T(n) = 1 + 1 + 1 + … + 1 = k    [k steps]

 Therefore n/2^k = 1, so n = 2^k and k = log2 n
 Θ(log2 n)

In general, a^b = c  b = log_a c

Back Substitution / Induction Method:

T(n) = T(n/2) + 1             ---- 1
     = [T(n/2^2) + 1] + 1
     = T(n/2^2) + 2           ---- 2 [add terms]
     = T(n/2^3) + 3           ---- 3 [add terms]

….. continue for k times

T(n) = T(n/2^k) + k           ---- 4

Assume, at some point, n/2^k = 1; therefore n = 2^k and k = log2 n. Substitute in 4:

 T(n) = T(1) + log2 n  1 + log2 n  Θ(log2 n)
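An illustrative check that repeated halving takes about log2 n steps:

import math

def halving_steps(n):          # mirrors Test(n): shrink n to n // 2 until n == 1
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(1024), math.log2(1024))    # 10 10.0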

Example:
Recurrence relation: T(n) = { 1             n = 1       (dividing function)
                            { T(n/2) + n    n > 1

Recursion tree Method:

T(n) = n + n/2 + n/2^2 + … + n/2^k          [k steps]
     = n [1 + 1/2 + 1/2^2 + … + 1/2^k]
     = n · Σ_{i=0..k} 1/2^i
     ≤ 2n                                   [since Σ_{i=0..∞} 1/2^i = 2]
      Θ(n) or O(n)

Back Substitution / Induction Method:

T(n) = T(n/2) + n                                    ---- 1
     = [T(n/2^2) + n/2] + n
     = T(n/2^2) + n/2 + n                            ---- 2 [don't add terms]
     = T(n/2^3) + n/2^2 + n/2 + n                    ---- 3 [don't add terms]

….. continue for k times

T(n) = T(n/2^k) + n/2^(k-1) + … + n/2^2 + n/2 + n    ---- 4

Assume, at some point, n/2^k = 1; therefore n = 2^k and k = log2 n. Substitute in 4:

 T(n) = T(1) + n/2^(k-1) + … + n/2^2 + n/2 + n
      = 1 + n [1/2^(k-1) + … + 1/2^2 + 1/2 + 1]
      ≤ 1 + 2n                                       [the bracketed sum is less than 2]
       Θ(n) or O(n)

Example: Recurrence relation for a dividing function (contd…):

Algorithm (with costs per statement):

void Test(int n) {                 // T(n)
    if (n > 1)                     // 1
    {
        for (i = 0; i < n; i++)
        {
            Write i;               // n
        }
        Test(n/2);                 // T(n/2)
        Test(n/2);                 // T(n/2)
    }
}

T(n) = 2T(n/2) + n

T(n) = { 1               n = 1
       { 2T(n/2) + n     n > 1
Very Important Recurrence relation
Recursion tree Method:

(Writing each level as a time cost rather than as a function: the tree has k levels, and each level costs n.)

Total: n · k. We assume that n/2^k = 1; therefore n = 2^k and k = log2 n.

So the time complexity for this algorithm is Θ(n log2 n).

Back Substitution / Induction Method:

T(n) = 2T(n/2) + n                    ---- 1
     = 2[2T(n/2^2) + n/2] + n
     = 2^2 T(n/2^2) + n + n           [add terms]
     = 2^2 T(n/2^2) + 2n              ---- 2
     = 2^2 [2T(n/2^3) + n/2^2] + 2n
     = 2^3 T(n/2^3) + n + 2n          [add terms]
     = 2^3 T(n/2^3) + 3n              ---- 3

….. continue for k times

T(n) = 2^k T(n/2^k) + kn              ---- 4

Assume, at some point, n/2^k = 1; therefore n = 2^k and k = log2 n. Substitute in 4:

 T(n) = 2^k T(1) + kn        [kn can also be written as nk]
      = n · 1 + n log n      since k = log n
      = n + n log n  Θ(n log n)
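This is exactly the recurrence of merge sort, listed earlier among the applications. A minimal merge sort sketch (assuming the standard textbook formulation) whose running time follows T(n) = 2T(n/2) + n:

def merge_sort(a):
    # Divide: split in half. Conquer: sort each half. Combine: merge in linear time.
    if len(a) <= 1:                        # base case: Small(P)
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0                # the combine step costs f(n) = n
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))         # [1, 2, 5, 7, 9]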

Master Theorem for dividing functions:

General form: T(n) = aT(n/b) + f(n)

 Assume that a ≥ 1, b > 1 and f(n) = Θ(n^k log^p n) or O(n^k log^p n), where k ≥ 0.
(log^p n is read as "log n, the whole raised to the power p".)

We need to find two values:
1. log_b a
2. k

Case   Condition          Solution
I      log_b a > k        Θ(n^(log_b a))
II     log_b a = k
       a. p > -1          Θ(n^k log^(p+1) n)
       b. p = -1          Θ(n^k log log n)
       c. p < -1          Θ(n^k)
III    log_b a < k
       a. p ≥ 0           Θ(n^k log^p n)
       b. p < 0           Θ(n^k)

Recurrence relation       a  b  f(n); finding k and p       log_b a vs k             Solution
T(n) = 2T(n/2) + 1        2  2  Θ(1) = Θ(n^0 log^0 n);      log2 2 = 1, k = 0;       Θ(n^(log_b a)) = Θ(n^1) = Θ(n)
                                k = 0, p = 0                Case I: 1 > 0
T(n) = 4T(n/2) + n        4  2  Θ(n) = Θ(n^1 log^0 n);      log2 4 = 2, k = 1;       Θ(n^2)
                                k = 1, p = 0                Case I: 2 > 1
T(n) = 8T(n/2) + n^2      8  2  Θ(n^2) = Θ(n^2 log^0 n);    log2 8 = 3, k = 2;       Θ(n^3)
                                k = 2, p = 0                Case I: 3 > 2
T(n) = 9T(n/3) + 1        9  3  Θ(1) = Θ(n^0 log^0 n);      log3 9 = 2, k = 0;       Θ(n^2)
                                k = 0, p = 0                Case I: 2 > 0
T(n) = 8T(n/2) + nlogn    8  2  Θ(nlogn); k = 1, p = 1      log2 8 = 3, k = 1;       Θ(n^3)
                                                            Case I: 3 > 1
T(n) = 7T(n/2) + 18n^2    7  2  Θ(n^2); k = 2               log2 7 ≈ 2.81, k = 2;    Θ(n^(log2 7))
                                                            Case I: 2.81 > 2
As long as log_b a > k, write the solution directly as Θ(n^(log_b a)).
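A compact Python sketch of the full three-case rule (an illustrative helper, not from the textbook): given a, b and f(n) = n^k log^p n described by the pair (k, p), it returns the Θ-class.

import math

def master_dividing(a, b, k, p):
    # Θ-class of T(n) = a·T(n/b) + Θ(n^k log^p n), with a >= 1, b > 1 (sketch).
    c = math.log(a, b)                          # compare log_b a with k
    if c > k:
        return "Θ(n^%.2f)" % c                  # Case I
    if c == k:                                  # (beware float equality in general)
        if p > -1:
            return "Θ(n^%d log^%d n)" % (k, p + 1)   # Case II(a)
        if p == -1:
            return "Θ(n^%d log log n)" % k           # Case II(b)
        return "Θ(n^%d)" % k                         # Case II(c)
    return "Θ(n^%d log^%d n)" % (k, p) if p >= 0 else "Θ(n^%d)" % k   # Case III

print(master_dividing(2, 2, 1, 0))   # T(n) = 2T(n/2) + n      ->  Θ(n^1 log^1 n)
print(master_dividing(7, 2, 2, 0))   # T(n) = 7T(n/2) + 18n^2  ->  Θ(n^2.81)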

Case   Condition          Solution
I      log_b a > k        Θ(n^(log_b a))
II     log_b a = k
       a. p > -1          Θ(n^k log^(p+1) n)
       b. p = -1          Θ(n^k log log n)
       c. p < -1          Θ(n^k)
III    log_b a < k
       a. p ≥ 0           Θ(n^k log^p n)
       b. p < 0           Θ(n^k)

Case – II:

T(n) = 2T(n/2) + n:  a = 2, b = 2, f(n) = Θ(n), k = 1, p = 0.
log2 2 = 1 = k; Case II(a): p > -1. Solution: Θ(n^k log^(p+1) n) = Θ(n log n).
Without calculating, writing the answer directly: log2 2 = 1, k = 1; no need of p.
The answer is f(n) multiplied by log n. Solution: Θ(n log n)

T(n) = T(n/2) + 2:
log2 1 = 0, k = 0; no need of p.
The answer is f(n) multiplied by log n. Solution: Θ(log n)

T(n) = 4T(n/2) + n^2:  a = 4, b = 2, f(n) = Θ(n^2) = Θ(n^2 log^0 n), k = 2, p = 0.
log2 4 = 2 = k; Case II(a): p > -1. Solution: Θ(n^2 log^(0+1) n) = Θ(n^2 log n).
Without calculating: log2 4 = 2, k = 2; the answer is f(n) multiplied by log n: Θ(n^2 log n)

T(n) = 4T(n/2) + n^2 logn:  a = 4, b = 2, f(n) = Θ(n^2 logn), k = 2, p = 1.
log2 4 = 2 = k; Case II(a): p > -1. Solution: Θ(n^2 log^(1+1) n) = Θ(n^2 log^2 n).
Without calculating: the answer is f(n) multiplied by log n: Θ(n^2 logn · logn) = Θ(n^2 log^2 n)
If f(n) = n^2 log^2 n  Θ(n^2 log^3 n)
If f(n) = n^2 log^5 n  Θ(n^2 log^6 n)

T(n) = 8T(n/2) + n^3:  a = 8, b = 2, f(n) = Θ(n^3) = Θ(n^3 log^0 n), k = 3, p = 0.
log2 8 = 3 = k; Case II(a): p > -1. Solution: Θ(n^3 log^(0+1) n) = Θ(n^3 log n).

T(n) = 2T(n/2) + n/logn:  a = 2, b = 2, f(n) = Θ(n/logn) = Θ(n log^-1 n), k = 1, p = -1.
log2 2 = 1 = k; Case II(b): p = -1. Solution: Θ(n^k log log n) = Θ(n log log n).
Without calculating, writing the answer directly: log2 2 = 1, k = 1, and p = -1.
The answer is n^k multiplied by log log n: Θ(n log log n)
If f(n) = n / log^2 n, then p = -2, Case II(c): p < -1.
The answer is Θ(n^k)  Θ(n)
Case   Condition          Solution
I      log_b a > k        Θ(n^(log_b a))
II     log_b a = k
       a. p > -1          Θ(n^k log^(p+1) n)
       b. p = -1          Θ(n^k log log n)
       c. p < -1          Θ(n^k)
III    log_b a < k
       a. p ≥ 0           Θ(n^k log^p n)
       b. p < 0           Θ(n^k)

Case – III:

T(n) = T(n/2) + n^2:  a = 1, b = 2, f(n) = Θ(n^2), k = 2, p = 0.
log2 1 = 0 < k = 2; Case III(a): p ≥ 0. Solution: Θ(n^k log^p n) = Θ(n^2 log^0 n) = Θ(n^2).
Without calculating, writing the answer directly: log2 1 = 0, k = 2; no need of p.
The answer is f(n). Solution: Θ(n^2)

T(n) = 2T(n/2) + n^2:
log2 2 = 1, k = 2; no need of p. The answer is f(n). Solution: Θ(n^2)

T(n) = 2T(n/2) + n^2 logn:
log2 2 = 1, k = 2; no need of p. The answer is f(n). Solution: Θ(n^2 logn)

T(n) = 2T(n/2) + n^2 log^2 n:
log2 2 = 1, k = 2; no need of p. The answer is f(n). Solution: Θ(n^2 log^2 n)

T(n) = 4T(n/2) + n^3:  a = 4, b = 2, f(n) = Θ(n^3), k = 3, p = 0.
log2 4 = 2 < k = 3; Case III(a): p ≥ 0. Solution: Θ(n^3 log^0 n) = Θ(n^3).

T(n) = 2T(n/2) + n^2/logn:  a = 2, b = 2, f(n) = Θ(n^2/logn), k = 2, p = -1.
log2 2 = 1 < k = 2; Case III(b): p < 0. Solution: Θ(n^k) = Θ(n^2).

Master Theorem for dividing functions:

CASE-I:
T(n) = 2T(n/2) + 1  Θ(n)
T(n) = 4T(n/2) + 1  Θ(n^2)
T(n) = 4T(n/2) + n  Θ(n^2)
T(n) = 8T(n/2) + n^2  Θ(n^3)
T(n) = 16T(n/2) + n^2  Θ(n^4)

CASE-III:
T(n) = T(n/2) + n  Θ(n)
T(n) = 2T(n/2) + n^2  Θ(n^2)
T(n) = 2T(n/2) + n^2 logn  Θ(n^2 logn)
T(n) = 4T(n/2) + n^3 log^2 n  Θ(n^3 log^2 n)
T(n) = 2T(n/2) + n^2 / logn  Θ(n^2)
T(n) = 9T(n/3) + 4n^6  Θ(n^6)

CASE-II:
T(n) = T(n/2) + 1  Θ(logn)
T(n) = 2T(n/2) + n  Θ(nlogn)
T(n) = 2T(n/2) + nlogn  Θ(n log^2 n)
T(n) = 4T(n/2) + n^2  Θ(n^2 logn)
T(n) = 4T(n/2) + (nlogn)^2  Θ(n^2 log^3 n)
T(n) = 2T(n/2) + n / logn  Θ(n loglogn)
T(n) = 2T(n/2) + n / log^2 n  Θ(n)

Problems                      Solution
T(n) = 9T(n/3) + n            Θ(n^2)
T(n) = T(2n/3) + 1            Θ(logn)   [here a = 1, b = 3/2]
T(n) = 3T(n/4) + nlogn        Θ(nlogn)
T(n) = 2T(n/2) + nlogn        Θ(n log^2 n)
T(n) = 3T(n/4) + c·n^2        Θ(n^2)

Recurrence Relation for a root function:

(In recursion, n can shrink by: n-1 subtraction, n/2 dividing, √n square root.)

Algorithm (with costs per statement):

void Test(int n) {       // T(n)
    if (n > 2)
    {
        Write n;         // 1
        Test(√n);        // T(√n)
    }
}

T(n) = T(√n) + 1

Recurrence relation: T(n) = { 1            n = 2
                            { T(√n) + 1    n > 2

Back Substitution / Induction Method:

T(n) = T(n^(1/2)) + 1               ---- 1
     = [T(n^(1/2^2)) + 1] + 1       [add terms]
     = T(n^(1/2^2)) + 2             ---- 2
     = T(n^(1/2^3)) + 3             ---- 3

….. continue for k times

T(n) = T(n^(1/2^k)) + k             ---- 4

Here, the smallest value of n is 2.

Assume that n is a power of 2, i.e. n = 2^m. Then

T(2^m) = T(2^(m/2^k)) + k

Assume T(2^(m/2^k)) is reduced to the base case, i.e. T(2^1) = T(2).

Therefore m/2^k = 1, so m = 2^k and k = log2 m.

We want the answer in terms of n. As n = 2^m, m = log2 n.

So k = log2 log2 n  Θ(log log n)
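An illustrative numeric check of the log log n behaviour:

import math

def sqrt_steps(n):             # mirrors Test(n): recurse on sqrt(n) until n <= 2
    steps = 0
    while n > 2:
        n = math.sqrt(n)
        steps += 1
    return steps

n = 2 ** 64
print(sqrt_steps(n), math.log2(math.log2(n)))    # 6 6.0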

Solve T(n) = 2T(√n) + 1 with T(1) = 1.

Introduce a change of variable by letting n = 2^m:

T(2^m) = 2T(2^(m/2)) + 1

Now let S(m) = T(2^m). Then

S(m) = 2S(m/2) + 1
     = 2(2S(m/4) + 1) + 1
     = 2^2 S(m/2^2) + 2 + 1

By substituting further,

S(m) = 2^k S(m/2^k) + 2^(k-1) + 2^(k-2) + … + 2 + 1

To simplify the expression, assume m = 2^k. Then

S(m/2^k) = S(1) = T(2)

Since T(n) denotes a number of comparisons, it must always be an integer.

T(2) = 2T(√2) + 1, which is approximately 3.

⇒ S(m) = 3·2^k + (2^k − 1) = 4·2^k − 1 = 4m − 1

We now have S(m) = T(2^m) = Θ(m).

Thus, since m = log n, T(n) = Θ(log n).

Solve T(n) = 2T(√n) + n with T(1) = 1  T(n) = O(n log n).

Solve T(n) = 2T(√n) + log n with T(1) = 1.

Let m = log n, i.e. n = 2^m. Then T(2^m) = 2T(2^(m/2)) + m.

Now let S(m) = T(2^m). Then S(m) = 2S(m/2) + m.

This recurrence has the solution S(m) = O(m log m).

So T(n) = T(2^m) = S(m) = O(m log m) = O(log n · log log n).

Consider the recurrence T(n) = T(n/3) + T(2n/3) + O(n). Let c represent the constant factor in the O(n) term.

Note that the leaves lie between levels log3 n and log3/2 n.

The longest simple path from the root to a leaf is n → (2/3)n → (2/3)^2 n → … → 1.

Since (2/3)^k · n = 1 when k = log3/2 n, the height of the tree is log3/2 n.

Each level costs at most cn, but as we go down from the root, more and more internal nodes are absent, so the costs become smaller. Fortunately, we only care about an upper bound.

Based on this we can guess O(n log n) as an upper bound.
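A small illustrative check that the guess is consistent: evaluating the recurrence numerically (memoized, with c = 1 and integer division) and comparing against n log n.

from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n):                      # T(n) = T(n/3) + T(2n/3) + n, with T(1) = 1
    if n <= 1:
        return 1
    return T(n // 3) + T(2 * n // 3) + n

for n in (10**3, 10**4, 10**5):
    print(n, T(n) / (n * math.log2(n)))   # the ratio stays bounded, consistent with O(n log n)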

Amortized Analysis

 Amortized Analysis is used for algorithms where an occasional operation is very slow, but
most of the other operations are faster.
 In Amortized Analysis, analyze a sequence of operations and guarantee a worst-case
average time which is lower than the worst-case time of a particular expensive operation.
 The example data structures whose operations are analyzed using Amortized Analysis are
Hash Tables, Disjoint Sets and Splay Trees.

Asymptotic Analysis Vs Amortized Analysis


[https://medium.com/@aleksandrasays/amortised-analysis-in-the-nutshell-7b056277ab9b]
Asymptotic Analysis
Regular asymptotic analysis looks at the performance of an individual operation asymptotically, as
a function of the size of the problem. It is about how the performance of a given operation scales to
a large data set.

Worst-Case and Average-Case Analysis

 Worst-case analysis always considers a single operation. To find the cost of the algorithm we need to find the worst-case cost of every single operation and then count the number of their executions. If an algorithm runs in time T(n), this is an upper bound for any input of size n.
 In average-case analysis, we try to calculate the running time for a randomly chosen input. This is harder, since it requires probabilistic arguments and some assumptions about the input's distribution, which are not always easy to justify.

Order of growth and Big-O notation


Order of growth describes how an algorithm’s time and space complexity is going to increase or
decrease when we increase or decrease a size of the input.

 Asymptotic analysis is the most common method for analysing algorithms, but it is not perfect. Consider two algorithms taking 1000·n·log(n) and 2·n·log(n) time respectively. Under asymptotic analysis they are both the same, having asymptotic complexity n·log(n), so we cannot judge which one is better, as constants are ignored. Also, asymptotic analysis always considers input sizes larger than some constant value, but such sizes may never occur in practice, so an algorithm that is asymptotically slower may perform better in a particular situation.

Amortized Analysis
 Asymptotic analysis is about how the performance of a given operation scales to a large
data set.
 Amortized analysis, on the other hand, is about how the average performance of all of the operations on a large data set scales. Compared to average-case analysis, amortized analysis gives an upper bound on the actual cost of an algorithm, which average-case analysis does not guarantee. To describe it in one sentence: it gives the average performance (over time) of each operation in the worst case.

 When we have a sequence of operations, the worst case does not occur very often for each operation. Operations vary in their costs: some may be cheap and some may be expensive.
 Let's take a dynamic array as an example. In a dynamic array the number of elements does not need to be known until program execution, and the array can be resized at any time. What matters for us is that in a dynamic array only some inserts take linear time, while the others take constant time.
 So, if the inserts differ in their costs, how are we able to correctly calculate the total time? This is where the amortized approach comes into play (see the sketch after this list).
 It assigns an artificial cost to each operation in the sequence, which is called the amortized cost.
 It requires the total cost of the algorithm to be bounded by the total of the amortized costs of all operations. There are three methods used for assigning the amortized cost:
1. Aggregate Method (brute force)
2. Accounting Method (the banker's method)
3. Potential Method (the physicist's method)
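As an illustration of the aggregate method (a sketch, not from the source text): counting element copies in a doubling dynamic array shows that the total cost of n appends is O(n), i.e. O(1) amortized per append.

def append_cost_total(n):
    # Total element-copy cost of n appends into a doubling dynamic array (sketch).
    size, capacity, total = 0, 1, 0
    for _ in range(n):
        if size == capacity:     # expensive append: copy every element to a bigger array
            total += size
            capacity *= 2
        total += 1               # cheap part: write the new element
        size += 1
    return total

for n in (10, 1000, 10**6):
    print(n, append_cost_total(n) / n)    # amortized cost per append stays below 3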

Conclusion
 The critical difference between asymptotic and amortized analysis is that the first depends on the input itself, while the second depends on the sequence of operations the algorithm will execute.
 The motivation for amortized analysis is that looking at the worst-case run time can be too pessimistic. Instead, amortized analysis averages the running times of operations in a sequence over that sequence.
 Classical asymptotic analysis gives a worst-case analysis of each operation without taking into account the effect of one operation on another, whereas amortized analysis focuses on a sequence of operations and the interplay between them, yielding a more precise, micro-level analysis.

Binary Search:

 Let a_i, 1 ≤ i ≤ n, be a list of elements that are sorted in non-decreasing order.
 Consider the problem of determining whether a given element x is present in the list.
 If x is present, return the position j such that a_j = x; otherwise j is set to zero.
 The Divide and Conquer strategy can be used to solve this problem.
 Small(P) will be true if n = 1.
 If P has more than one element, it can be divided (or reduced) into a new subproblem as follows:
 Pick an index q (in the range [i, l]) and compare x with a_q. There are three possibilities:
o x = a_q  in this case the problem P is immediately solved.
o x < a_q  in this case, x has to be searched for in a_1, a_2, …, a_(q-1); the right part of the list is ignored.
o x > a_q  in this case, x has to be searched for in a_(q+1), a_(q+2), …, a_l; the left part of the list is ignored.
 Here, any given problem P gets divided (reduced) into one new subproblem.
 This division takes only Θ(1) time.
 The best choice for q is the middle element.
 Note that the answer to the new subproblem is also the answer to the original problem P; there is no need for any combining.

Example:

a[1:14] = { -15,-6, 0, 7, 9, 23,54, 82,101, 112, 125, 131, 142, 151}


Index:   1    2   3  4  5  6   7   8   9    10   11   12   13   14
Value:  -15  -6   0  7  9  23  54  82  101  112  125  131  142  151

Search for x = 151, -14 and 9

x = 151
Low   High   Mid              Remarks
1     14     (1+14)/2 = 7     a[7] = 54; 151 > 54, Low = Mid+1
8     14     (8+14)/2 = 11    a[11] = 125; 151 > 125, Low = Mid+1
12    14     (12+14)/2 = 13   a[13] = 142; 151 > 142, Low = Mid+1
14    14     (14+14)/2 = 14   a[14] = 151; found, return Mid = 14

x = -14
Low   High   Mid              Remarks
1     14     (1+14)/2 = 7     a[7] = 54; -14 < 54, High = Mid-1
1     6      (1+6)/2 = 3      a[3] = 0; -14 < 0, High = Mid-1
1     2      (1+2)/2 = 1      a[1] = -15; -14 > -15, Low = Mid+1
2     2      (2+2)/2 = 2      a[2] = -6; -14 < -6, High = Mid-1
2     1      -                Low > High: not found

x = 9
Low   High   Mid              Remarks
1     14     (1+14)/2 = 7     a[7] = 54; 9 < 54, High = Mid-1
1     6      (1+6)/2 = 3      a[3] = 0; 9 > 0, Low = Mid+1
4     6      (4+6)/2 = 5      a[5] = 9; found, return Mid = 5

Recursive Binary Approach: the function is invoked as BinRecSearch(a, 1, n, x).

Algorithm int BinRecSearch(a, l, h, x)
/* Given an array a[l:h] of elements in non-decreasing order,
   determine whether x is present; if so, return j such that
   x = a[j], else return 0. l - low, h - high */
{
    if (l == h) then                              // If Small(P)   // 1
    {
        if (x == a[l]) then
            return l;
        else
            return 0;
    }
    else
    {   // Reduce P into a smaller subproblem.
        mid = floor((l+h)/2);                     // 1
        if (x == a[mid]) then                     // 1
            return mid;
        else if (x < a[mid]) then                 // 1
            return BinRecSearch(a, l, mid-1, x);  // T(n/2)
        else
            return BinRecSearch(a, mid+1, h, x);  // T(n/2)
    }
}

T(n) = { 1             n = 1
       { T(n/2) + 1    n > 1
 Θ(log2 n)
Iterative Binary Approach: the function is invoked as BinIteSearch(a, n, x).

Algorithm int BinIteSearch(a, n, x)
// Given an array a[1:n] of elements in non-decreasing
// order, n ≥ 0, determine whether x is present, and
// if so, return j such that x = a[j]; else return 0.
{
    low = 1; high = n;
    while (low ≤ high) do
    {
        mid = floor((low+high)/2);
        if (x < a[mid]) then high = mid-1;
        else if (x > a[mid]) then low = mid+1;
        else return mid;
    }
    return 0;
}
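For reference, a runnable Python version of the iterative algorithm (0-based indexing, returning -1 instead of 0 for "not found", since 0 is a valid Python index):

def binary_search(a, x):
    low, high = 0, len(a) - 1          # 0-based equivalent of low = 1, high = n
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1             # search the left half
        elif x > a[mid]:
            low = mid + 1              # search the right half
        else:
            return mid                 # found
    return -1                          # not found

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
print(binary_search(a, 151), binary_search(a, -14), binary_search(a, 9))   # 13 -1 4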

Index:         1   2   3  4  5  6   7   8   9    10   11   12   13   14
Value:        -15  -6  0  7  9  23  54  82  101  112  125  131  142  151
Comparisons:   3   4   2  4  3  4   1   4   3    4    2    4    3    4

 No element requires more than 4 comparisons to be found. The average is obtained by summing the comparisons needed to find all 14 items and dividing by 14; this yields 45/14, or approximately 3.21 comparisons per successful search on average.
 There are 15 possible ways that an unsuccessful search may terminate, depending on the value of x.
 If x < a[1], the algorithm requires 3 element comparisons to determine that x is not present.
 For all the remaining possibilities, BinIteSearch requires 4 element comparisons.
 So, (3 + 14·4)/15 = 59/15, approximately 3.93 comparisons for an unsuccessful search.
 The analysis applies to any sorted sequence containing 14 elements.
 Minimum comparisons = Θ(1); maximum comparisons = 4, the height of the tree, Θ(log2 n).

Binary decision tree for binary search, n = 14

 The first comparison is x with a[7].
 If x < a[7], then the next comparison is with a[3]; similarly, if x > a[7], then the next comparison is with a[11].
 Each path through the tree represents a sequence of comparisons in the binary search method.
 If x is present, then the algorithm will end at one of the circular nodes, which lists the index into the array where x was found.
 If x is not present, the algorithm will terminate at one of the square nodes.
 Circular nodes are called internal nodes and square nodes are referred to as external nodes.

Divide and Conquer strategy to find a^b:

a^b = a^(b/2) · a^(b/2)    (for even b; for odd b, multiply by an extra factor of a)

Floor and Ceil:

ceil(3.7) = 4      (smallest integer ≥ 3.7)
floor(3.7) = 3     (largest integer ≤ 3.7)
ceil(-3.7) = -3    (smallest integer ≥ -3.7)
floor(-3.7) = -4   (largest integer ≤ -3.7)

a^10 = a^5 · a^5       1 multiplication, if a^5 is already calculated
a^5 = a^2 · a^2 · a    2 multiplications, if a^2 is already calculated
a^2 = a · a            1 multiplication; the value of a is already known

In total 4 multiplications, instead of the 10 multiplications used by iteration.

# Using iteration
def power(a, b):
    prod = 1
    for i in range(b):
        prod *= a
    return prod

# Using divide and conquer
import math

def powerrecursion(a, b):
    if b == 1:
        return a
    if b % 2 == 0:
        temp = powerrecursion(a, b // 2)
        return temp * temp
    else:
        temp = powerrecursion(a, math.floor(b / 2))
        return temp * temp * a

import timeit

starttime = timeit.default_timer()
print(power(20, 20000))
endtime = timeit.default_timer()
print("The Time Difference iterative: ", endtime - starttime)

starttime = timeit.default_timer()
print(powerrecursion(20, 20000))
endtime = timeit.default_timer()
print("The Time Difference recursion: ", endtime - starttime)

The Time Difference iterative : 0.04445384800010288


The Time Difference recursion : 0.012111030000141909

T(b) = T(b/2) + O(1)    if b > 1
     = O(1)             if b = 1

(Independent of a.)

Solve it:

T(n) = { 1             n = 1
       { T(n/2) + 1    n > 1
Very Important Recurrence relation
Recursion tree Method:

The tree has k levels of constant cost, so the total is k. We assume that n/2^k = 1; therefore n = 2^k and k = log2 n.

Here n is b, so k = log2 b.

Time complexity:
O(log b) [recursive] < O(b) [linear]

Space complexity in recursion: O(log b) (for the call stack).

Space complexity in the linear version: O(1).

Even though the recursive version uses more space, speed is more important here.

Answer (to the tracing exercise from the accompanying figure, not reproduced here; its recursive function foo appears to strip the last decimal digit on each call):
5 + foo(34, 10)
5 + 4 + foo(3, 10)
5 + 4 + 3 + foo(0, 10)
5 + 4 + 3 + 0  12

From Text book:

T(n) = { T(1)              n = 1
       { aT(n/b) + f(n)    n > 1

 where a and b are known constants. We assume that T(1) is known and n is a power of b (i.e., n = b^k).

T(n) = n^(log_b a) [T(1) + u(n)]

where u(n) = Σ_{j=1..k} h(b^j) and

h(n) = f(n) / n^(log_b a)

The asymptotic values of u(n) for various values of h(n) are shown below:

h(n)                     u(n)
O(n^r), r < 0            O(1)
Θ((log n)^i), i ≥ 0      Θ((log n)^(i+1) / (i+1))
Ω(n^r), r > 0            Θ(h(n))

Examples:

T(n) = { T(1)           n = 1
       { T(n/2) + c     n > 1

Comparing with the master theorem: a = 1, b = 2 and f(n) = c.

So log_b a = 0.

h(n) = f(n) / n^(log_b a)  c
                           c (log n)^0
                           Θ((log n)^0)

So, u(n) = Θ(log n)

So, T(n) = n^(log_b a) [T(1) + u(n)]
          n^0 [c + Θ(log n)]
          1 · [c + Θ(log n)]
          Θ(log n)
Example – 2:

T(n) = { T(1)             n = 1
       { 2T(n/2) + cn     n > 1

Here a = 2, b = 2, f(n) = cn.

So log_b a = 1.

h(n) = f(n) / n^(log_b a)  f(n) / n  cn / n  c
                           Θ((log n)^0)

So, u(n) = Θ(log n)

So, T(n) = n^(log_b a) [T(1) + u(n)]
          n [T(1) + Θ(log n)]
          Θ(n log n)

Example – 3:

T(n) = 7T(n/2) + 18n^2,  n ≥ 2, n a power of 2

a = 7, b = 2, f(n) = 18n^2

log_b a = log2 7 ≈ 2.81, so n^(log_b a) = n^(log2 7)

h(n) = f(n) / n^(log_b a)  18n^2 / n^(log2 7)
                           18n^(2 - log2 7)

Since 2 - log2 7 < 0, h(n) = O(n^r) with r < 0, so from the table above u(n) = O(1).

Therefore T(n) = n^(log_b a) [T(1) + u(n)] = Θ(n^(log2 7)).