DAA Part 1
(KCS503)
Lecture - 1
Overview
• Provide an overview of algorithms and their analysis.
• Touch on the distinguishing features and the
significance of the analysis of algorithms.
• Start using frameworks for describing and
analysing algorithms.
• See how to describe algorithms in pseudocode in
the context of real-world software development.
• Begin using asymptotic notation to express
running-time analysis, with examples.
What is an Algorithm ?
• An algorithm is a finite set of rules that gives a
sequence of operations for solving a specific
problem.
• An algorithm is any well-defined computational
procedure that takes some value, or set of
values, as input and produces some value, or set
of values, as output.
• We can also view an algorithm as a tool for
solving a well-specified computational problem.
Characteristics of an Algorithm
• Input: provided by the user.
• Output: produced by the algorithm.
• Definiteness: each step is clearly and unambiguously defined.
• Finiteness: it terminates after a finite number of steps.
• Effectiveness: every step must be basic enough
that, in principle, it could be carried out
with pen and paper.
Analysis of an Algorithm
• The loop-invariant technique proceeds in three
steps:
– Initialization
– Maintenance
– Termination
• Analysis deals with predicting the resources that an
algorithm requires to run to completion, such as
memory and CPU time.
• The two main measures of the efficiency of an algorithm
are time and space.
Complexity of an Algorithm
• The complexity of an algorithm is a function f(n)
that gives the running time and/or storage
space requirement of the algorithm in terms
of the size n of the input data.
• Space complexity: the amount of memory
needed by an algorithm to run to
completion.
• Time complexity: the amount of time an
algorithm needs to run to completion.
Example 1
#include <stdio.h>
int main()
{
printf("Hello PSIT");
return 0;
}
Complexity of Example 1
#include <stdio.h>
int main()
{
printf("Hello PSIT");
return 0;
}
𝑇(𝑛) = 𝑂(1)
Example 2
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i++) {
printf("Hello PSIT !!!\n");
}
}
Complexity of Example 2
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i++) {
printf("Hello PSIT !!!\n");
}
}
𝑇(𝑛) = 𝑂(𝑛)
Example 3
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i=i*2) {
printf("Hello PSIT !!!\n");
}
}
Complexity of Example 3
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i=i*2) {
printf("Hello PSIT !!!\n");
}
}
T(n) = Ο(log₂ n)
Example 4
#include <stdio.h>
#include <math.h>
void main()
{
int i, n = 8;
for (i = 2; i <= n; i=pow(i,2)) {
printf("Hello PSIT !!!\n");
}
}
Complexity of Example 4
#include <stdio.h>
#include <math.h>
void main()
{
int i, n = 8;
for (i = 2; i <= n; i=pow(i,2)) {
printf("Hello PSIT !!!\n");
}
}
T(n) = Ο(log₂ log₂ n)   (i squares each iteration: i = 2, 4, 16, 256, …, so the loop runs about log₂ log₂ n times)
Example 6
A()
{
int i = 1, s = 1, n;
scanf("%d", &n);
while (s <= n)
{
i++;
s = s + i;
printf("abcd");
}
}
Complexity of Example 6
A()
{
int i = 1, s = 1, n;
scanf("%d", &n);
while (s <= n)
{
i++;
s = s + i;
printf("abcd");
}
}
T(n) = Ο(√n)   (s grows as 1 + 2 + ⋯ + i ≈ i²/2, so the loop stops after about √(2n) iterations)
Example 7
A()
{
int i;
for (i = 1; i * i <= n; i++)
{
printf("abcd");
}
}
Complexity of Example 7
A()
{
int i;
for (i = 1; i * i <= n; i++)
{
printf("abcd");
}
}
T(n) = Ο(√n)
Example 8
A()
{
int i, j, k;
for (i = 1; i <= n; i++)
{
for (j = 1; j <= i * i; j++)
{
for (k = 1; k <= n / 2; k++)
{
printf("abcd");
}
}
}
}
Complexity of Example 8
T(n) = Ο(n⁴)
Explanation:
i = 1, 2, 3, 4, 5, …
j runs i² times: 1, 4, 9, 16, 25, …
k runs n/2 times for each j, giving costs n/2, 4n/2, 9n/2, 16n/2, …
Total: (n/2)·Σᵢ i² = Θ(n⁴).
Lecture - 2
Searching
• Linear Searching
• Binary Searching
Linear Searching
To understand the working of the linear search algorithm,
let us take an unsorted array; the procedure is easy to
follow with an example.
Lecture -3
A Sorting Problem
(Exhaustive Search Approach)
[Figure: permutation subtree with 1 fixed first — choices 2, 3, 4 at the next level, then the remaining two elements in either order]
A Sorting Problem
(Exhaustive Search Approach)
How to generate the 24 permutation ?
[Figure: permutation subtrees with first element 1 and first element 2 — each branches over the three remaining elements, then the last two in either order]
A Sorting Problem
(Exhaustive Search Approach)
How to generate the 24 permutation ?
[Figure: permutation subtrees with first element 3 and first element 4, completing all 24 permutations]
A Sorting Problem
(Exhaustive Search Approach)
An in-depth look at the analysis of the 24 permutations of four digits
A Sorting Problem
(Exhaustive Search Approach)
Let there be a set of four digits and note that there are multiple
possible permutations for the four digits. They are:
1234  2134  3124  4123
1243  2143  3142  4132
1324  2314  3214  4213
1342  2341  3241  4231
1423  2431  3412  4312
1432  2413  3421  4321
• There are 24 different permutations possible. (as shown
above)
• Only one of these permutations meets our criteria.
(i.e. A1 ≤ A2 ≤ …≤ An) . (1 2 3 4)
A Sorting Problem
(Exhaustive Search Approach)
How do we do this?
Step 1: Generate all the permutations and store them.
Step 2: Check the permutations one by one and find which
permutation satisfies the required condition (i.e. a1 ≤ a2 ≤
⋯ ≤ an).
Step 3: Once we find it, we are done.
A Sorting Problem
(Selection Sort Approach)
• For the first position in the sorted array, the whole array is traversed
from index 0 to 4 sequentially. After going through the entire array, it is
evident that 3 is the lowest value, while 7 is stored at the first
position.
• Thus, swap 7 with 3. At the end of the first iteration, the lowest
value, 3 (originally at index 2), is at the front of the
sorted list.
A Sorting Problem
(Selection Sort Approach)
Let's consider the following array as an example:
A [] = (7, 4, 3, 6, 5).
• 1st Iteration
A Sorting Problem
(Selection Sort Approach)
• For the second position, where 4 is present, again traverse the rest of
the array sequentially.
• The traversal shows that 4 is already the second-lowest value in the
array and is already in the second position, so no swap is
needed.
A Sorting Problem
(Selection Sort Approach)
Let's consider the following array as an example:
A [] = (7, 4, 3, 6, 5).
• 2nd Iteration
A Sorting Problem
(Selection Sort Approach)
• For the third position, where 7 is present, again traverse the rest of the
array sequentially.
• The traversal shows that 5 is the third-lowest value in the array and
should therefore be placed in the third position.
A Sorting Problem
(Selection Sort Approach)
Let's consider the following array as an example:
A [] = (7, 4, 3, 6, 5).
• 3rd Iteration
A Sorting Problem
(Selection Sort Approach)
• Similarly, we execute the fourth and fifth iterations, after which the sorted
array looks as below:
A Sorting Problem
(Selection Sort Algorithm)
SELECTION SORT(arr, n)
Lecture - 4
A Sorting Problem (Incremental Approach)
Input: A sequence of n numbers ⟨a1, a2, …, an⟩.
Output: A permutation (reordering) ⟨a1′, a2′, …, an′⟩ of the input
sequence such that a1′ ≤ a2′ ≤ ⋯ ≤ an′.
We also refer to the numbers as keys. Along with each key there may be
additional information, known as satellite data. (Satellite data
does not necessarily come from a satellite!)
We will see several ways to solve the sorting problem. Each way will be
expressed as an algorithm: a well-defined computational procedure that
takes some value, or set of values, as input and produces some value, or
set of values, as output.
Insertion sort
• A good algorithm for sorting a small number of elements.
– Start with an empty left hand and the cards face down on the
table.
– Then remove one card at a time from the table, and insert it
into the correct position in the left hand.
– To find the correct position for a card, compare it with each of
the cards already in the hand, from right to left.
– At all times, the cards held in the left hand are sorted, and
these cards were originally the top cards of the pile on the
table.
Insertion sort (Example)
Insertion sort (Example)
1st Iteration
Insertion sort (Example)
4th Iteration
Insertion sort (Example)
• Termination: The outer for loop ends when j > n; this occurs when j
= n + 1. Therefore, j − 1 = n. Substituting n for j − 1 in the loop invariant,
the subarray A[1..n] consists of the elements originally in A[1..n],
but in sorted order. In other words, the entire array is sorted!
How do we analyze an algorithm's running
time?
• Input size: Depends on the problem being studied.
– Usually, the number of items in the input. Like the size n of the
array being sorted.
– But could be something else. If multiplying two integers, could
be the total number of bits in the two integers.
– Could be described by more than one number. For example,
graph algorithm running times are usually expressed in terms of
the number of vertices and the number of edges in the input
graph.
• Running time: On a particular input, it is the number
of primitive operations (steps) executed.
– We want to define steps so as to be machine-independent.
– Assume that each line of pseudocode requires a constant
amount of time.
– One line may take a different amount of time than another,
but each execution of line i takes the same amount of time
cᵢ.
– This assumes that the line consists only of primitive
operations.
• If the line is a subroutine call, then the actual call takes
constant time, but the execution of the subroutine being
called might not.
• If the line specifies operations other than primitive ones,
then it might take more than constant time.
Analysis of insertion sort
Growth of Functions
Example 4
Prove that f(n) = 2n² + 3n + 4 ∈ Ω(n²)
Asymptotic notation (Big Omega )
Example 4
Prove that f(n) = 2n² + 3n + 4 ∈ Ω(n²)
⟹ 2n² + 3n + 4 ≥ 1·n² for all n ≥ 1
Hence f(n) = Ω(n²), where c = 1 and n ≥ 1.
Asymptotic notation (Big Omega )
Example 5
If f(n) = 3n + 2 and g(n) = n², show that f(n) ∉ Ω(g(n))
Asymptotic notation (Big Omega )
Example 5
If f(n) = 3n + 2 and g(n) = n², show that f(n) ∉ Ω(g(n))
⟹ lim_{n→∞} f(n)/g(n) must be > 0 for f(n) ∈ Ω(g(n))
⟹ lim_{n→∞} (3n + 2)/n²
⟹ lim_{n→∞} n(3 + 2/n)/n²
⟹ lim_{n→∞} (3 + 2/n)/n = 0
⟹ 0 > 0 is false; hence f(n) ∉ Ω(g(n)).
Asymptotic notation (Big Omega )
Example 6
Example 6
𝐻𝑒𝑛𝑐𝑒, 𝑓 𝑛 ∈ Ω 𝑔 𝑛 𝑖𝑠 𝑡𝑟𝑢𝑒
Asymptotic notation (Theta)
Asymptotic notation (Theta)
Example 7
Show that f(n) = 10n³ + 5n² + 17 ∈ Θ(n³)
Asymptotic notation (Theta)
Example 7
Show that f(n) = 10n³ + 5n² + 17 ∈ Θ(n³)
As per the definition of Θ notation, C₁·g(n) ≤ f(n) ≤ C₂·g(n)
⟹ 10n³ ≤ 10n³ + 5n² + 17 ≤ 10n³ + 5n³ + 17n³
⟹ 10n³ ≤ 10n³ + 5n² + 17 ≤ 32n³
So C₁ = 10 and C₂ = 32 for all n ≥ 1.
Hence proved.
Asymptotic notation (Theta)
Example 8
Show that f(n) = (n + a)ᵇ ∈ Θ(nᵇ)
Asymptotic notation (Theta)
Example 8
Show that f(n) = (n + a)ᵇ ∈ Θ(nᵇ)
As per the limit definition of Θ notation, lim_{n→∞} f(n)/g(n) = c for some constant c > 0
⟹ lim_{n→∞} (n + a)ᵇ / nᵇ
⟹ lim_{n→∞} nᵇ(1 + a/n)ᵇ / nᵇ
⟹ lim_{n→∞} (1 + a/n)ᵇ   [∵ a/∞ = 0]
⟹ 1, which is a positive constant
Hence f(n) = (n + a)ᵇ ∈ Θ(nᵇ) is true.
Asymptotic notation (Little Oh )
Asymptotic notation (Little Oh )
Example 9
If f(n) = 2n and g(n) = n², prove that f(n) = o(g(n))
Asymptotic notation (Little Oh )
Example 9
If f(n) = 2n and g(n) = n², prove that f(n) = o(g(n))
⟹ lim_{n→∞} f(n)/g(n) must equal 0 for f(n) = o(g(n))
⟹ lim_{n→∞} 2n/n²
⟹ lim_{n→∞} 2/n
⟹ 0
Which is true; hence f(n) = o(g(n)).
Asymptotic notation (Little Oh )
Example 10
If f(n) = 2n² and g(n) = n², prove that f(n) ≠ o(g(n))
Asymptotic notation (Little Oh )
Example 10
If f(n) = 2n² and g(n) = n², prove that f(n) ≠ o(g(n))
⟹ lim_{n→∞} f(n)/g(n) must equal 0 for f(n) = o(g(n))
⟹ lim_{n→∞} 2n²/n²
⟹ lim_{n→∞} 2
⟹ 2 ≠ 0
Which is true; hence f(n) ≠ o(g(n)).
Asymptotic notation (Little omega )
Asymptotic notation (Little omega )
Example 11
If f(n) = 2n² + 16 and g(n) = n², show that f(n) ≠ ω(g(n))
Asymptotic notation (Little omega )
Example 11
If f(n) = 2n² + 16 and g(n) = n², show that f(n) ≠ ω(g(n))
⟹ lim_{n→∞} f(n)/g(n) must equal ∞ for f(n) = ω(g(n))
⟹ lim_{n→∞} n²(2 + 16/n²)/n²
⟹ lim_{n→∞} (2 + 16/n²)
⟹ 2 + 0
⟹ 2
So 2 ≠ ∞; hence f(n) ≠ ω(g(n)).
Asymptotic notation (Little omega )
Example 12
If f(n) = n² and g(n) = log n, show that f(n) ∈ ω(g(n))
Asymptotic notation (Little omega )
Example 12
If f(n) = n² and g(n) = log n, show that f(n) ∈ ω(g(n))
⟹ lim_{n→∞} f(n)/g(n) must equal ∞ for f(n) ∈ ω(g(n))
⟹ lim_{n→∞} n²/log n = ∞   (n² grows much faster than log n)
Hence f(n) ∈ ω(g(n)) is true.
Growth of the Functions
10/n > 1/n² > 1/n³ > n/2ⁿ   (for sufficiently large n)
10/n → 0 (constant) as n → ∞
Growth of the Functions
Order of Functions
Example:
1/n < 1 < lg n < √n < n < n² < n³ < ⋯ < 2ⁿ < 3ⁿ < ⋯ < nⁿ
Comparisons of functions
Standard notations and common functions
Design and Analysis of Algorithm
(KCS503)
Lecture -8
Objective
[Trace: the array 7 4 3 6 5 is sorted step by step into 3 4 5 6 7]
A Sorting Problem
(Selection Sort Algorithm with
Tail Recursive Approach)
SELECTION-SORT(arr, i, n-1)
  if i == n-1
    return
  j = minIndex(arr, i, n-1)
  if i != j
    swap(arr[i], arr[j])
  SELECTION-SORT(arr, i+1, n-1)
A Sorting Problem
(Selection Sort Algorithm with
Tail Recursive Approach)
SELECTION-SORT(arr, i, n-1)
  if i == n-1
    return
  j = minIndex(arr, i, n-1)
  if i != j
    swap(arr[i], arr[j])
  SELECTION-SORT(arr, i+1, n-1)

minIndex(arr, i, n-1)
  min_indx = i
  min_val = arr[i]
  for j = i+1 to n-1
    if min_val > arr[j]
      min_val = arr[j]
      min_indx = j
  return min_indx

T(n) = n + T(n-1)
A Sorting Problem
(Selection Sort Algorithm with
Tail Recursive Approach)
Each call scans the unsorted suffix once (the minIndex loop) and then
recurses on a problem one element smaller, so:
T(n)   = n     + T(n-1)
T(n-1) = (n-1) + T(n-2)
T(n-2) = (n-2) + T(n-3)
.....
T(2)   = 2     + T(1)   (the recursion stops when i == n-1)
A Sorting Problem
(Selection Sort Algorithm with
Tail Recursive Approach)
Expanding the recurrence:
T(n) = n + T(n-1)
     = n + (n-1) + T(n-2)
     = n + (n-1) + (n-2) + T(n-3)
     …
     = n + (n-1) + ⋯ + 2 + T(1)
T(n) = n + (n-1) + (n-2) + (n-3) + ⋯ + 3 + 2 + 1
     = n(n+1)/2
     = Ο(n²)
A Sorting Problem
(Selection Sort Algorithm with
Tail Recursive Approach)
SELECTION-SORT(arr, i, n-1)
  if i == n-1
    return
  j = minIndex(arr, i, n-1)
  if i != j
    swap(arr[i], arr[j])
  SELECTION-SORT(arr, i+1, n-1)

minIndex(arr, i, n-1)
  min_indx = i
  min_val = arr[i]
  for j = i+1 to n-1
    if min_val > arr[j]
      min_val = arr[j]
      min_indx = j
  return min_indx

Hence, the recursive (tail) method gives the recurrence
T(n) = Ο(n) + T(n-1) ∈ Ο(n²)
Thank You
Design and Analysis of Algorithm
(KCS503)
Lecture -9
Objective
T(n) = T(n−1) + Ο(n), T(1) = 1
insertion-sort(A, 1): insert 2 → A = [2, 5]
insertion-sort(A, 2): insert 4 → A = [2, 4, 5]
insertion-sort(A, 3): insert 6 → A = [2, 4, 5, 6]
insertion-sort(A, 4): insert 1 → A = [1, 2, 4, 5, 6]
insertion-sort(A, 5): insert 3 → A = [1, 2, 3, 4, 5, 6]
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
It was observed that the concept is:
Solution Steps:
• Base Case: If array size is 1 or smaller,
return.
• Recursively sort first n-1 elements.
• Insert the last element at its correct
position in the sorted array.
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
void insertion_sort(A, j) {
  // Initially j = length(A)
  // Base case
  if (j <= 1)
    return
  // Sort the first j-1 elements
  insertion_sort(A, j-1)
  insert A[j-1] into A[0…j-2]
}
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
Indices: 0 1 2 3 4 5
[5 2 4 6 1 3] → [5 2 4 6 1] → [5 2 4 6] → [5 2 4] → [5 2] → [5]
(the recursion peels off the last element until one element remains)
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
Top-down:  [5 2 4 6 1 3] → [5 2 4 6 1] → [5 2 4 6] → [5 2 4] → [5 2] → [5]
Bottom-up: [5] → [2 5] → [2 4 5] → [2 4 5 6] → [1 2 4 5 6] → [1 2 3 4 5 6]
(each return inserts the peeled element into the already-sorted prefix)
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
void insertion_sort(A, j) {
  // Initially j = length(A)
  // Base case
  if (j <= 1)
    return
  // Sort the first j-1 elements
  insertion_sort(A, j-1)
  val = A[j-1]
  i = j-2
  while (i >= 0 && A[i] > val) {
    A[i+1] = A[i]
    i = i-1
  }
  A[i+1] = val
}
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
Complexity:
T(n) = T(n−1) + n
     = T(n−2) + (n−1) + n
     = T(n−3) + (n−2) + (n−1) + n
     …
     = 1 + 2 + 3 + ⋯ + (n−3) + (n−2) + (n−1) + n
Hence T(n) = n(n+1)/2, i.e. T(n) = Ο(n²)
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
void insertion_sort(A, j) {
  // Initially j = length(A)
  // Base case
  if (j <= 1)
    return
  // Sort the first j-1 elements
  insertion_sort(A, j-1)
  val = A[j-1]
  i = j-2
  while (i >= 0 && A[i] > val) {
    A[i+1] = A[i]
    i = i-1
  }
  A[i+1] = val
}
Hence, the recursive (head) method gives the recurrence
T(n) = T(n−1) + Ο(n) ∈ Ο(n²)
🙏🏻 Thank You 🙏🏻
Design and Analysis of Algorithm
Recurrence Equation
(Solving Recurrence using
Iteration Methods)
Lecture – 10 and 11
Overview
• A recurrence is a function defined in terms of
– one or more base cases, and
– itself, with smaller arguments.
Examples:
T(n) = T(n−1) + 1  if n > 1
T(n) = 1           if n = 1
Iteration Method ( Example 1)
It means T(n) = T(n−1) + 1 if n > 1,
and T(n) = 1 when n = 1 ——(1)
Put n = n−1 in equation (1), we get
T(n−1) = T(n−2) + 1
Put the value of T(n−1) in equation (1), we get
T(n) = T(n−2) + 2 ——(2)
Iteration Method ( Example 1)
Put n = n−2 in equation (1), we get
T(n−2) = T(n−3) + 1
Put the value of T(n−2) in equation (2), we get
T(n) = T(n−3) + 3 ——(3)
……….
T(n) = T(n−k) + k ——(k)
Iteration Method ( Example 1)
Let T(n−k) = T(1) = 1
(as per the base condition of the recurrence)
So n − k = 1
⇒ k = n − 1
Now put the value of k in equation (k):
T(n) = T(n − (n−1)) + n − 1
T(n) = T(1) + n − 1
T(n) = 1 + n − 1   [∵ T(1) = 1]
T(n) = n
∴ T(n) = Θ(n)
Iteration Method ( Example 2)
Example 2:
Solve the following recurrence relation by using Iteration method.
T(n) = 2T(n/2) + 3n²  if n > 1
T(n) = 11             if n = 1
Iteration Method ( Example 2)
It means T(n) = 2T(n/2) + 3n² if n > 1, and T(n) = 11 when n = 1 ——(1)
Put n = n/2 in equation (1), we get
T(n/2) = 2T(n/4) + 3(n/2)²
T(n/2) = 2T(n/2²) + 3(n/2)²
Put the value of T(n/2) in equation (1), we get
T(n) = 2[2T(n/2²) + 3(n/2)²] + 3n²
T(n) = 2²T(n/2²) + 2·3·(n²/4) + 3n²
T(n) = 2²T(n/2²) + 3(n²/2) + 3n² ——(2)
Iteration Method ( Example 2)
Put n = n/4 in equation (1), we get
T(n/4) = 2T(n/8) + 3(n/4)²
T(n/4) = 2T(n/2³) + 3(n/4)²
Put the value of T(n/4) in equation (2), we get
T(n) = 2²[2T(n/2³) + 3(n/4)²] + 3(n²/2) + 3n²
T(n) = 2³T(n/2³) + 4·3·(n²/16) + 3(n²/2) + 3n²
Iteration Method ( Example 2)
T(n) = 2³T(n/2³) + 3(n²/2²) + 3(n²/2) + 3n² ——(3)
…….
T(n) = 2ⁱT(n/2ⁱ) + ⋯ + 3(n²/2²) + 3(n²/2) + 3n² ——(i-th term)
and the series terminates when n/2ⁱ = 1
⇒ n = 2ⁱ
Taking log on both sides:
⇒ log₂ n = i log₂ 2
⇒ i = log₂ n   (because log₂ 2 = 1)
Iteration Method ( Example 2)
Hence we can write the i-th term as follows:
⇒ T(n) = 3n² + 3(n²/2) + 3(n²/2²) + ⋯ + 2ⁱT(n/2ⁱ)
⇒ T(n) = 3n² + 3(n²/2) + 3(n²/2²) + ⋯ + 2^(log₂ n)·T(1)
⇒ T(n) = 3n² + 3(n²/2) + 3(n²/2²) + ⋯ + 2^(log₂ n)·11
⇒ T(n) = 3n² + 3(n²/2) + 3(n²/2²) + ⋯ + n^(log₂ 2)·11   [as 2^(log₂ n) = n^(log₂ 2) and log₂ 2 = 1]
⇒ T(n) = 3n² + 3(n²/2) + 3(n²/2²) + ⋯ + 11n
⇒ T(n) = 3n²(1 + 1/2 + 1/2² + ⋯) + 11n
Iteration Method ( Example 2)
As we know, the sum of an infinite geometric series is
a + ar + ar² + ⋯ = Σ_{i=0}^{∞} arⁱ = a/(1 − r)
Hence,
⇒ T(n) ≤ 3n²·(1/(1 − 1/2)) + 11n
⇒ T(n) ≤ 3n²·2 + 11n
⇒ T(n) ≤ 6n² + 11n
Hence T(n) = Ο(n²)
Iteration Method ( Example 3)
Example 3:
Solve the following recurrence relation by using Iteration method.
T(n) = 8T(n/2) + n²  if n > 1
T(n) = 1             if n = 1
Iteration Method ( Example 3)
It means T(n) = 8T(n/2) + n² if n > 1, and T(n) = 1 when n = 1 ——(1)
Put n = n/2 in equation (1), we get
T(n/2) = 8T(n/4) + (n/2)²
Put the value of T(n/2) in equation (1), we get
T(n) = 8[8T(n/4) + (n/2)²] + n²
T(n) = 8²T(n/4) + 8(n²/4) + n² ——(2)
Put n = n/4 in equation (1), we get
T(n/4) = 8T(n/8) + (n/4)²
Iteration Method ( Example 3)
Put the value of T(n/4) in equation (2), we get
T(n) = 8²[8T(n/8) + (n/4)²] + 8(n²/4) + n²
T(n) = 8³T(n/8) + 8²(n²/4²) + 8(n²/4) + n² ——(3)
……
T(n) = 8ᵏT(n/2ᵏ) + 8^(k−1)(n²/4^(k−1)) + ⋯ + 8²(n²/4²) + 8(n²/4) + n² ——(k-th term)
T(n) = 8ᵏT(n/2ᵏ) + n²[8^(k−1)/4^(k−1) + 8^(k−2)/4^(k−2) + ⋯ + 8²/4² + 8/4 + 1]
T(n) = 8ᵏT(n/2ᵏ) + n²[2^(k−1) + 2^(k−2) + ⋯ + 2² + 2 + 1] ——(4)
Iteration Method ( Example 3)
T(n) = 8ᵏT(n/2ᵏ) + n²[2^(k−1) + 2^(k−2) + ⋯ + 2² + 2 + 1] ——(4)
T(n) = 8ᵏT(n/2ᵏ) + n²[1 + 2 + 2² + ⋯ + 2^(k−2) + 2^(k−1)] ——(5)
and the series terminates when n/2ᵏ = 1
⇒ n = 2ᵏ
Taking log on both sides:
⇒ log₂ n = k log₂ 2
⇒ k = log₂ n   (because log₂ 2 = 1)
Now apply the value of k = log₂ n and n/2ᵏ = 1 in equation (5)
Iteration Method ( Example 3)
T(n) = 8^(log₂ n)·T(1) + n²[1 + 2 + 2² + ⋯ + 2^(log₂ n − 2) + 2^(log₂ n − 1)] ——(6)
= n^(log₂ 8)·1 + n²·(2^(log₂ n) − 1)   [finite GP: Σ_{i=0}^{k−1} 2ⁱ = 2ᵏ − 1]
= n³ + n²(n − 1)
= 2n³ − n²
Hence the complexity is T(n) = Ο(n³)
Iteration Method ( Example 4)
Example 4:
Solve the following recurrence relation by using Iteration method.
T(n) = 7T(n/2) + n²  if n > 1
T(n) = 1             if n = 1
(i.e. Strassen’s algorithm)
Iteration Method ( Example 4)
It means T(n) = 7T(n/2) + n² if n > 1, and T(n) = 1 when n = 1 ——(1)
Put n = n/2 in equation (1), we get
T(n/2) = 7T(n/4) + (n/2)²
Put the value of T(n/2) in equation (1), we get
T(n) = 7[7T(n/4) + (n/2)²] + n²
T(n) = 7²T(n/4) + 7(n²/4) + n² ——(2)
Put n = n/4 in equation (1), we get
T(n/4) = 7T(n/8) + (n/4)²
Iteration Method ( Example 4)
Put the value of T(n/4) in equation (2), we get
T(n) = 7²[7T(n/8) + (n/4)²] + 7(n²/4) + n²
T(n) = 7³T(n/8) + 7²(n²/4²) + 7(n²/4) + n² ——(3)
……
T(n) = 7ᵏT(n/2ᵏ) + 7^(k−1)(n²/4^(k−1)) + ⋯ + 7²(n²/4²) + 7(n²/4) + n² ——(k-th term)
T(n) = 7ᵏT(n/2ᵏ) + n²[7^(k−1)/4^(k−1) + 7^(k−2)/4^(k−2) + ⋯ + 7²/4² + 7/4 + 1]
T(n) = 7ᵏT(n/2ᵏ) + n² Σ_{i=0}^{k−1} (7/4)ⁱ ——(4)
Iteration Method ( Example 4)
T(n) = 7ᵏT(n/2ᵏ) + n² Σ_{i=0}^{k−1} (7/4)ⁱ ——(4)
and the series terminates when n/2ᵏ = 1
⇒ n = 2ᵏ
Taking log on both sides:
⇒ log₂ n = k log₂ 2
⇒ k = log₂ n   (because log₂ 2 = 1)
Now apply the value of k = log₂ n and n/2ᵏ = 1 in equation (4)
Iteration Method ( Example 4)
T(n) = 7^(log₂ n)·T(1) + n² Σ_{i=0}^{log₂ n − 1} (7/4)ⁱ ——(5)
= n^(log₂ 7)·1 + n²·((7/4)^(log₂ n) − 1)/((7/4) − 1)   [finite GP: Σ_{i=0}^{k−1} rⁱ = (rᵏ − 1)/(r − 1)]
= n^(log₂ 7) + (4/3)·(n^(log₂ 7) − n²)   [since n²·(7/4)^(log₂ n) = n²·n^(log₂ 7 − 2) = n^(log₂ 7)]
= Θ(n^(log₂ 7)) ≈ n^2.8
Hence the complexity is T(n) = Ο(n^2.8)
Iteration Method ( Example 5)
Example 5:
Solve the following recurrence relation by using Iteration method.
T(n) = T(n−1) + log n  if n > 1
T(n) = 1               if n = 1
Iteration Method ( Example 5)
It means T(n) = T(n−1) + log n if n > 1, and T(n) = 1 when n = 1 ——(1)
Put n = n−1 in equation (1), we get
T(n−1) = T(n−2) + log(n−1)
Put the value of T(n−1) in equation (1), we get
T(n) = T(n−2) + log(n−1) + log n
= T(n−3) + log(n−2) + log(n−1) + log n
= T(n−4) + log(n−3) + log(n−2) + log(n−1) + log n
…………….
…………….
Hence the k-th term is:
T(n) = T(n−k) + log(n−k+1) + ⋯ + log(n−2) + log(n−1) + log n
Iteration Method ( Example 5)
Hence the k-th term is:
T(n) = T(n−k) + log(n−k+1) + ⋯ + log(n−2) + log(n−1) + log n
The series terminates when n − k = 1, i.e. k = n − 1, giving
T(n) = T(1) + log 2 + log 3 + ⋯ + log n = 1 + log(n!) = Ο(n log n)
Practice questions:
Q2. T(n) = T(n−1) + (n−1) if n > 1; T(n) = 1 if n = 1
Q3. T(n) = T(n−1) + n² if n > 1; T(n) = 1 if n = 1
Design and Analysis of Algorithm
Recurrence Equation
(Solving Recurrence using
Recursion Tree Methods)
Lecture – 12 and 13
Overview
• A recurrence is a function defined in terms of
– one or more base cases, and
– itself, with smaller arguments.
Example:
Solve the recurrence T(n) = 3T(n/4) + Θ(n²) by
using the recursion tree method.
Recursion Tree Method (Example 1)
Answer:
We start by finding an upper bound for the
solution via a good guess. As we know that floors
and ceilings usually do not matter when solving
recurrences, we drop the floor and write the recurrence
equation as follows:
T(n) = 3T(n/4) + cn², c > 0
The term cn² at the root represents the cost at the top
level of recursion; each of its three children is a
subproblem of size n/4.
Recursion Tree Method (Example 1)
T(n) = 3T(n/4) + cn², where c > 0 is a constant
Fig (a): the single node T(n).
Figure (a) shows T(n), which is progressively expanded in (b)–(d) to form the recursion tree.
Fig (b): root cn² with three children T(n/4), T(n/4), T(n/4).
Recursion Tree Method (Example 1)
T(n) = 3T(n/4) + cn², where c > 0 is a constant
Fig (c): each T(n/4) expands into a node of cost c(n/4)² with three children T(n/16), giving nine nodes T(n/16).
Recursion Tree Method (Example 1)
T(n) = 3T(n/4) + cn², where c > 0 is a constant
Fig (d): the fully expanded recursion tree, of height log₄ n —
level 0 cost: cn²
level 1 cost: c(n/4)² + c(n/4)² + c(n/4)² = (3/16)cn²
level 2 cost: (3/16)²cn²
level i cost: (3/16)ⁱcn²   (3ⁱ nodes, each of cost c(n/4ⁱ)²)
bottom level: T(1), T(1), …, T(1) — 3ⁱ leaves (i = log₄ n)
Recursion Tree Method (Example 1)
Analysis
First, we find the height of the recursion tree
Observe that a node at depth i reflects a subproblem of size n/4ⁱ.
i.e. the subproblem size hits n = 1 when n/4ⁱ = 1
So, if n/4ⁱ = 1
⟹ n = 4ⁱ   (apply log on both sides)
⟹ log n = log 4ⁱ
⟹ i = log₄ n
So the height of the tree is log₄ n.
Recursion Tree Method (Example 1)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 3ⁱ. It was observed that
T(n) = cn² + (3/16)cn² + (3/16)²cn² + (3/16)³cn² + ⋯ + (3/16)ⁱcn² + (cost of the 3ⁱ leaves)
So each node at depth i (i = 0, 1, 2, 3, 4, …, log₄ n − 1) has cost c(n/4ⁱ)².
Hence the total cost at level i is 3ⁱ·c(n/4ⁱ)²
⟹ 3ⁱ·c·(n²/16ⁱ) ⟹ (3/16)ⁱ·c·n²
Recursion Tree Method (Example 1)
However, the bottom level is special: each bottom node contributes cost T(1).
Hence the cost of the bottom level is 3ⁱ
⟹ 3^(log₄ n)   (as i = log₄ n, the height of the tree)
⟹ n^(log₄ 3)
So the total cost of the entire tree is
T(n) = cn² + (3/16)cn² + (3/16)²cn² + (3/16)³cn² + ⋯ + (3/16)ⁱcn² + Θ(n^(log₄ 3))
T(n) = Σ_{i=0}^{log₄ n − 1} (3/16)ⁱ cn² + Θ(n^(log₄ 3))
Recursion Tree Method (Example 1)
The left term is just the sum of a geometric series, so the value of T(n) is as follows:
T(n) = (((3/16)^(log₄ n) − 1)/((3/16) − 1))·cn² + Θ(n^(log₄ 3))
The above equation looks complicated, so we use an infinite geometric
series as an upper bound [Σ_{i=0}^{∞} arⁱ = a/(1 − r)]. Hence:
T(n) = Σ_{i=0}^{log₄ n − 1} (3/16)ⁱ cn² + Θ(n^(log₄ 3))
T(n) ≤ Σ_{i=0}^{∞} (3/16)ⁱ cn² + Θ(n^(log₄ 3))
T(n) ≤ (1/(1 − 3/16))·cn² + Θ(n^(log₄ 3))
T(n) ≤ (16/13)·cn² + Θ(n^(log₄ 3))
T(n) = Ο(n²)
Recursion Tree Method (Example 2)
Example 2
Solve the recurrence T(n) = 4T(n/2) + n by using the recursion tree
method.
T(n) = 4T(n/2) + cn, c > 0
The term cn at the root represents the cost at the top level of
recursion; each of the four children is a subproblem of size n/2.
Fig (a): the single node T(n).
Fig (b): the expanded recursion tree, of height log₂ n —
level 0 cost: cn
level 1 cost: 4·c(n/2) = 2cn
level 2 cost: 4²·c(n/2²) = 2²cn
level i cost: 2ⁱcn   (4ⁱ nodes, each of cost c(n/2ⁱ))
bottom level: T(1), T(1), …, T(1) — 4ⁱ leaves (i = log₂ n)
Recursion Tree Method (Example 2)
Analysis
First, we find the height of the recursion tree
Observe that a node at depth i reflects a subproblem of size n/2ⁱ.
i.e. the subproblem size hits n = 1 when n/2ⁱ = 1
So, if n/2ⁱ = 1
⟹ n = 2ⁱ   (apply log on both sides)
⟹ log n = log 2ⁱ
⟹ i = log₂ n
So the height of the tree is log₂ n.
Recursion Tree Method (Example 2)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 4ⁱ.
So each node at depth i (i = 0, 1, 2, 3, 4, …, log₂ n − 1) has cost c(n/2ⁱ).
Hence the total cost at level i is 4ⁱ·c(n/2ⁱ)
⟹ (4ⁱ/2ⁱ)·cn
⟹ 2ⁱ·cn
Recursion Tree Method (Example 2)
However, the bottom level is special: each bottom node contributes cost T(1).
Hence the cost of the bottom level is 4ⁱ
⟹ 4^(log₂ n)   (as i = log₂ n, the height of the tree)
⟹ n^(log₂ 4) = n²
So the total cost is T(n) = Σ_{i=0}^{log₂ n − 1} 2ⁱcn + Θ(n²) = cn(2^(log₂ n) − 1) + Θ(n²) = Θ(n²)
Fig (b) (for Example 3, T(n) = 2T(n/2) + cn): recursion tree of height log₂ n —
level 0 cost: cn
level 1 cost: 2·c(n/2) = cn
level 2 cost: 2²·c(n/2²) = cn
level i cost: 2ⁱ·c(n/2ⁱ) = cn
bottom level: T(1), T(1), …, T(1) — 2ⁱ leaves
Recursion Tree Method (Example 3)
Analysis
First, we find the height of the recursion tree
Observe that a node at depth i reflects a subproblem of size n/2ⁱ.
i.e. the subproblem size hits n = 1 when n/2ⁱ = 1
So, if n/2ⁱ = 1
⟹ n = 2ⁱ   (apply log on both sides)
⟹ log n = log 2ⁱ
⟹ i = log₂ n
So the height of the tree is log₂ n.
Recursion Tree Method (Example 3)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 2ⁱ.
So each node at depth i (i = 0, 1, 2, 3, 4, …, log₂ n − 1) has cost c(n/2ⁱ).
Hence the total cost at level i is 2ⁱ·c(n/2ⁱ) = cn
However, the bottom level is special: each bottom node contributes cost T(1).
Hence the cost of the bottom level is 2ⁱ
⟹ 2^(log₂ n)   (as i = log₂ n, the height of the tree)
⟹ n^(log₂ 2)
⟹ n
Recursion Tree Method (Example 3)
T(n) = cn + cn + cn + cn + ⋯ + cn   (one cn per level)
T(n) = cn · Σ_{i=0}^{log₂ n} 1
T(n) = cn(log₂ n + 1)
T(n) = cn·log₂ n + cn
Hence T(n) = Ο(n log₂ n)
Fig (b) (for Example 4, T(n) = T(n/2) + T(n/4) + T(n/8) + cn): recursion tree of height log₂ n —
level 0 cost: cn
level 1 cost: c(n/2) + c(n/4) + c(n/8) = (7/8)cn
level 2 cost: (7/8)²cn
level i cost: (7/8)ⁱcn
bottom level: T(1), T(1), …, T(1)
Recursion Tree Method (Example 4)
Analysis
First, we find the height of the recursion tree
Here the problem divides into three subproblems of size n/2, n/4, and n/8.
For calculating the height of the tree, we consider the longest path in the tree. The
leftmost path (repeatedly halving) is the longest path of the tree.
Hence a node at depth i on this path reflects a subproblem of size n/2ⁱ.
i.e. the subproblem size hits n = 1 when n/2ⁱ = 1
So, if n/2ⁱ = 1
⟹ n = 2ⁱ   (apply log on both sides)
⟹ log n = log 2ⁱ
⟹ i = log₂ n
So the height of the tree is log₂ n.
Recursion Tree Method (Example 4)
Second, we determine the cost of each level of the tree: at level 1 the costs are c(n/2) + c(n/4) + c(n/8) = (7/8)cn, and in general level i contributes (7/8)^i · cn.
So, the total cost of the tree is:
T(n) = cn + (7/8)cn + (7/8)²cn + (7/8)³cn + ⋯
For simplicity we extend the sum to an infinite geometric series:
T(n) ≤ cn · Σ_{i=0}^{∞} (7/8)^i
T(n) ≤ cn · 1/(1 − 7/8)
T(n) ≤ cn · 8
T(n) ≤ 8cn
𝐻𝑒𝑛𝑐𝑒, 𝑇 𝑛 = Ο(𝑛)
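The bound T(n) ≤ 8cn from the geometric series above can be sanity-checked numerically. The sketch below (an illustration, not from the slides) evaluates the recurrence with c = 1, integer division, and base case T(n) = 1 for n ≤ 1:

```c
#include <assert.h>

/* Recurrence T(n) = T(n/2) + T(n/4) + T(n/8) + c*n with c = 1,
   integer division, and T(n) = 1 for n <= 1. The geometric-series
   argument above bounds it by 8*c*n. */
long rec_T4(long n) {
    if (n <= 1) return 1;
    return rec_T4(n / 2) + rec_T4(n / 4) + rec_T4(n / 8) + n;
}
```

The number of recursive calls grows much slower than n, so even n around a million evaluates quickly.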
Design and Analysis of Algorithm
(Heap Sort)
Lecture -14 - 17
Overview
BUILD-MAX-HEAP(A, n)
  for i ← ⌊n/2⌋ downto 1
      do MAX-HEAPIFY(A, i, n)

Example array (1-based):
index: 1 2 3 4 5 6 7 8
A:     9 6 5 0 8 2 1 3
[Figure: the corresponding binary tree, with root 9, children 6 and 5, next level 0, 8, 2, 1, and leaf 3.]
Tighter analysis Proof
• For easy understanding, let us take a complete binary tree.
T(n) ≤ Ο( (n/2) · Σ_{h} h·(1/2)^h ) = Ο( (n/2) · (1/2)/(1 − 1/2)² ) ⟹ Ο(2 · n/2) ⟹ Ο(n)
Hence the running time of BUILD-MAX-HEAP(A, n) is Ο(n) in the tight bound.
Tighter analysis Proof
Σ_{k=0}^{∞} x^k = 1/(1 − x)   (value of the infinite G.P. series)
⟹ Σ_{k=0}^{∞} x^k = (1 − x)^(−1)
Differentiate both sides:
Σ_{k=0}^{∞} k·x^(k−1) = (−1)(1 − x)^(−2)·(−1) = 1/(1 − x)²
Multiply both sides by x:
Σ_{k=0}^{∞} k·x^k = x/(1 − x)²
The heapsort algorithm
Given an input array, the heapsort algorithm acts as
follows:
• Builds a max-heap from the array.
• Starting with the root (the maximum element), the
algorithm places the maximum element into the
correct place in the array by swapping it with the
element in the last position in the array.
• “Discard” this last node (knowing that it is in its
correct place) by decreasing the heap size, and calling
MAX-HEAPIFY on the new (possibly incorrectly-placed)
root.
• Repeat this “discarding” process until only one node
(the smallest element) remains, and therefore is in the
correct place in the array.
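The steps above can be sketched in C. This is a minimal illustration (not code from the slides) using 0-based indices, whereas the pseudocode in the course uses 1-based indices:

```c
#include <assert.h>
#include <stddef.h>

/* Sift the element at index i down until the max-heap property holds
   for the first n elements of a[]. 0-based children: 2i+1 and 2i+2. */
static void max_heapify(int a[], size_t n, size_t i) {
    for (;;) {
        size_t l = 2 * i + 1, r = 2 * i + 2, largest = i;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;
        int tmp = a[i]; a[i] = a[largest]; a[largest] = tmp;
        i = largest;
    }
}

void heap_sort(int a[], size_t n) {
    /* BUILD-MAX-HEAP: heapify from the last internal node down */
    for (size_t i = n / 2; i-- > 0; )
        max_heapify(a, n, i);
    /* Repeatedly swap the max to the end and "discard" it by
       shrinking the heap, then restore the heap at the root. */
    for (size_t end = n; end > 1; end--) {
        int tmp = a[0]; a[0] = a[end - 1]; a[end - 1] = tmp;
        max_heapify(a, end - 1, 0);
    }
}
```

Running it on the slide's example array {9, 6, 5, 0, 8, 2, 1, 3} yields the sorted order {0, 1, 2, 3, 5, 6, 8, 9}.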
Example
Analysis
• BUILD-MAX-HEAP: O(n)
• for loop: n − 1 times
• exchange elements: O(1)
• MAX-HEAPIFY: O(lg n)
• Total running time of HEAPSORT: O(n lg n)
Lecture -18
Overview
for i=0 to k
    C[i]= 0;

index: 0 1 2 3 4 5
C:     0 0 0 0 0 0
Counting Sort
• Let us illustrate the counting sort with an example.
Apply the concept of counting sort on the given array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 7 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 7 7 8
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 2 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 2 2 4 6 7 8
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 1 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 6 7 8
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 4 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 4 5 7 8
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 1 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 1 2 3 5 7 8
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 0 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 5 7 8
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 4 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 8
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 7
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 3 4 7 7
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 3 4 7 7
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j
1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 2 4 7 7
Counting Sort
Counting-Sort(A, B, k)
1. Let C[0…..k] be a new array
2. for i=0 to k
3. C[i]= 0;
4. for j=1 to A. length
5. C[ A[j] ] = C[ A[j] ] + 1;
6. for i=1 to k
7. C[i] = C[i] + C[i-1];
8. for j=A. length down to 1
9. B[C[ A[j] ]] = A[j];
10. C[ A[j] ] = C[ A[j] ] - 1;
Counting Sort
Counting-Sort(A, B, k)
1. Let C[0…..k] be a new array
2. for i=0 to k [Loop 1]
3. C[i]= 0;
4. for j=1 to A. length [Loop 2]
5. C[ A[j] ] = C[ A[j] ] + 1;
6. for i=1 to k [Loop 3]
7. C[i] = C[i] + C[i-1];
8. for j=A. length down to 1 [Loop 4]
9. B[C[ A[j] ]] = A[j];
10. C[ A[j] ] = C[ A[j] ] - 1;
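The labeled pseudocode above can be transcribed into C. This is a minimal sketch (not from the slides) with A and B indexed from 0, in contrast to the 1-based pseudocode:

```c
#include <assert.h>
#include <string.h>

/* Counting-Sort(A, B, k): n elements of A, each in 0..k.
   Comments mark the four loops labeled in the pseudocode. */
void counting_sort(const int A[], int B[], int n, int k) {
    int C[k + 1];                           /* new array C[0..k]        */
    memset(C, 0, sizeof C);                 /* Loop 1: C[i] = 0         */
    for (int j = 0; j < n; j++)             /* Loop 2: count each key   */
        C[A[j]]++;
    for (int i = 1; i <= k; i++)            /* Loop 3: prefix sums      */
        C[i] += C[i - 1];
    for (int j = n - 1; j >= 0; j--) {      /* Loop 4: place stably,    */
        B[C[A[j]] - 1] = A[j];              /* -1 converts to 0-based   */
        C[A[j]]--;                          /* walking A backwards      */
    }
}
```

On the worked example A = {2, 5, 3, 0, 2, 3, 0, 3} with k = 5 this produces B = {0, 0, 2, 2, 3, 3, 3, 5}.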
Complexity Analysis
Counting-Sort(A, B, k)
1. Let C[0…..k] be a new array
2. for i=0 to k [Loop 1] 𝚶 𝒌 𝒕𝒊𝒎𝒆𝒔
3. C[i]= 0;
4. for j=1 to A. length [Loop 2] 𝚶 𝒏 𝒕𝒊𝒎𝒆𝒔
5. C[ A[j] ] = C[ A[j] ] + 1;
6. for i=1 to k [Loop 3] 𝚶 𝒌 𝒕𝒊𝒎𝒆𝒔
7. C[i] = C[i] + C[i-1];
8. for j=A. length down to 1 [Loop 4] Ο(n) times
9. B[C[ A[j] ]] = A[j];
10. C[ A[j] ] = C[ A[j] ] - 1;
Complexity Analysis
• So the counting sort takes a total time of: O(n + k)
• Counting sort is a stable sort.
( A sorting algorithm is stable when numbers with the
same values appear in the output array in the same
order as they do in the input array.)
Pros and Cons of Counting Sort
• Pros
• Asymptotically very fast: O(n + k)
• Simple to code
• Cons
• Doesn’t sort in place.
• Requires O(n + k) extra storage space.
Design and Analysis of Algorithm
Lecture -19
Linear Time Sorting
(Radix Sort)
Overview
Radix_Sort(A, d)
  for i ← 1 to d
      use a stable sort to sort array A on digit i
      (i.e. Counting Sort; digit 1 is the lowest-order digit)
Radix Sort
• In input array A, each element is a number of d digit.
𝑹𝒂𝒅𝒊𝒙_𝑺𝒐𝒓𝒕( 𝑨, 𝒅)
𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜𝑑
𝑑𝑜 "use a stable sort to sort array A on digit i;
329
457
657
839
436
720
355
Radix Sort
• In input array A, each element is a number of d digit.
𝑹𝒂𝒅𝒊𝒙_𝑺𝒐𝒓𝒕( 𝑨, 𝒅)
𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜𝑑
𝑑𝑜 "use a stable sort to sort array A on digit i;
329 720
457 355
657 436
839 457
436 657
720 329
355 839
i
Radix Sort
• In input array A, each element is a number of d digit.
𝑹𝒂𝒅𝒊𝒙_𝑺𝒐𝒓𝒕( 𝑨, 𝒅)
𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜𝑑
𝑑𝑜 "use a stable sort to sort array A on digit i;
329 720 720
457 355 329
657 436 436
839 457 839
436 657 355
720 329 457
355 839 657
i i
Radix Sort
• In input array A, each element is a number of d digit.
𝑹𝒂𝒅𝒊𝒙_𝑺𝒐𝒓𝒕( 𝑨, 𝒅)
𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜𝑑
𝑑𝑜 "use a stable sort to sort array A on digit i;
329 720 720 329
457 355 329 355
657 436 436 436
839 457 839 457
436 657 355 657
720 329 457 720
355 839 657 839
i i i
Radix Sort (Analysis)
Radix_Sort(A, d)
  for i ← 1 to d
      use a stable sort to sort array A on digit i
      (i.e. Counting Sort; digit 1 is the lowest-order digit)
Each pass is one counting sort over n numbers whose digits range over 0…k−1, taking Θ(n + k). With d passes, the total running time is Θ(d(n + k)).
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
1 2 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
1 2 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
0 1 2 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
2
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
1 2
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
1 2 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
0 1 2 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
2
0 1 2 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
2
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
0 2
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
2
0 2
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
0 0 0 1 1 2 2 2 2 3 3 4 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0
3 2 4 2 2
0 1 2 3 4
0 0 0 1 1 2 2 2 2 3 3 4 4
Bucket Sort
• One Value per bucket:
Algorithm BucketSort( S )
( values in S are between 0 and m-1 )
  for j ← 0 to m-1 do        // initialize m buckets
      b[j] ← 0
  for i ← 0 to n-1 do        // place elements in their
      b[S[i]] ← b[S[i]] + 1  // appropriate buckets
  i ← 0
  for j ← 0 to m-1 do        // place elements in buckets
      for r ← 1 to b[j] do   // back in S (concatenation)
          S[i] ← j
          i ← i + 1
Bucket Sort
One Value per bucket (Analysis)
• Bucket initialization: 𝑂( 𝑚 )
• From array to buckets: 𝑂( 𝑛 )
• From buckets to array: 𝑂( 𝑛 )
• (Emptying the buckets back into the array is a simple dequeue/scan.)
• Since 𝑚 will likely be small compared to 𝑛, Bucket
sort is 𝑂( 𝑛 )
• Strictly speaking, time complexity is 𝑂 ( 𝑛 + 𝑚 )
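The one-value-per-bucket pseudocode above can be transcribed into C as a minimal sketch (not from the slides):

```c
#include <assert.h>

/* BucketSort(S) from the slide: values lie in 0..m-1.
   Count each value into one of m buckets, then write the
   values back into S in order (the concatenation step). */
void bucket_sort(int S[], int n, int m) {
    int b[m];
    for (int j = 0; j < m; j++) b[j] = 0;   /* initialize m buckets */
    for (int i = 0; i < n; i++) b[S[i]]++;  /* place elements       */
    int i = 0;
    for (int j = 0; j < m; j++)             /* concatenate back     */
        for (int r = 0; r < b[j]; r++)
            S[i++] = j;
}
```

On the slide's example {4, 2, 1, 2, 0, 3, 2, 1, 4, 0, 2, 3, 0} with m = 5 this yields {0, 0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 4, 4}.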
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
.20
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
.12 .20
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
.64
.12 .20 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
.64
.12 .20 .36 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
.37 .64
.12 .20 .36 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
.37 .64
.12 .20 .36 .47 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
Apply an internal (stable) sort on the highlighted data within each bucket.
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99
.09 .12 .18 .20 .36 .37 .47 .52 .58 .63 .64 .88 .99
Bucket Sort
• Multiple items per bucket:
Algorithm BucketSort( S )
1. Let B[0 .. n-1] be a new array
2. n ← A.length
3. for i ← 0 to n - 1
4.     make B[i] an empty list
5. for i ← 1 to n
6.     insert A[i] into list B[⌊n · A[i]⌋]
7. for i ← 0 to n - 1
8.     sort list B[i] with a stable sort (insertion sort)
9. Concatenate the lists B[0], B[1], B[2], …, B[n-1] together in order.
Bucket Sort
• Multiple items per bucket:
Algorithm BucketSort( S )
1. Let B[0 .. n-1] be a new array                              Ο(1)
2. n ← A.length                                                Ο(1)
3. for i ← 0 to n - 1                                          Ο(n)
4.     make B[i] an empty list
5. for i ← 1 to n                                              Ο(n)
6.     insert A[i] into list B[⌊n · A[i]⌋]
7. for i ← 0 to n - 1                                          Ο(n²) if all the elements
8.     sort list B[i] with a stable sort (insertion sort)           belong to one bucket
9. Concatenate the lists B[0], B[1], B[2], …, B[n-1]
   together in order.                                          Ο(n)
Bucket Sort
Multiple items per bucket (Analysis)
• Except line 8, every line takes Ο(n) time in the worst case.
• Line 8 (the insertion sort) takes Ο(n²) if all the elements fall into one bucket.
• The average-case time complexity of bucket sort is O(n + k), assuming a uniform distribution of the data.
Bucket Sort
Characteristics of Bucket Sort
• Bucket sort assumes that the input is drawn from a
uniform distribution.
• The computational complexity estimates involve the
number of buckets.
• Bucket sort can be exceptionally fast because of the
way elements are assigned to buckets, typically using
an array where the index is the value.
Bucket Sort
Characteristics of Bucket Sort
• This means bucket sort trades extra auxiliary memory for the buckets in exchange for running time, compared with comparison sorts.
• The average time complexity is 𝑂(𝑛 + 𝑘).
• The worst time complexity is 𝑂(𝑛²).
• The space complexity for Bucket Sort is 𝑂(𝑛 + 𝑘).
Design and Analysis of Algorithm
Lecture -20
Overview
35 33 42 10 14 19 27 44
35 33 42 10 14 19 27 44
swap
Shell Sort
Swap count =1
14 33 42 10 35 19 27 44
swap
Shell Sort
Swap count =1
14 33 42 10 35 19 27 44
swap
Shell Sort
Swap count =2
14 19 42 10 35 33 27 44
swap
Shell Sort
Swap count =2
14 19 42 10 35 33 27 44
swap
Shell Sort
Swap count =3
14 19 27 10 35 33 42 44
swap
Shell Sort
Swap count =3
14 19 27 10 35 33 42 44
No swap required
Shell Sort
• After the first iteration the array looks like as
follows.
14 19 27 10 35 33 42 44
14 19 27 10 35 33 42 44
No swap
Shell Sort
Swap count =3
14 19 27 10 35 33 42 44
No swap
Shell Sort
Swap count =3
14 19 27 10 35 33 42 44
swap
Shell Sort
Swap count =4
14 19 10 27 35 33 42 44
swap
Shell Sort
Swap count =4
14 19 10 27 35 33 42 44
swap
14 19 10 27 35 33 42 44
swap
Shell Sort
Swap count =5
14 19 10 27 35 33 42 44
swap
14 10 19 27 35 33 42 44
swap
Shell Sort
Swap count =5
14 19 10 27 35 33 42 44
swap
14 10 19 27 35 33 42 44
swap
14 10 19 27 35 33 42 44
swap
Shell Sort
Swap count =6
14 19 10 27 35 33 42 44
swap
14 10 19 27 35 33 42 44
swap
10 14 19 27 35 33 42 44
swap
Shell Sort
Swap count =6
10 14 19 27 35 33 42 44
No swap
Shell Sort
Swap count =6
10 14 19 27 35 33 42 44
swap
Shell Sort
Swap count =7
10 14 19 27 33 35 42 44
swap
Shell Sort
Swap count =7
10 14 19 27 33 35 42 44
No swap
Shell Sort
Swap count =7
10 14 19 27 33 35 42 44
No swap
Shell Sort
Swap count =7
10 14 19 27 33 35 42 44
No swap
• Hence, the total number of swaps required is:
• 1st iteration: 3
• 2nd iteration: 4
• So a total of 7 swaps is required to sort the
array by shell sort.
Shell Sort
Algorithm Shell sort (Knuth Method)
1. gap=1
2. while(gap < A.length/3)
3. gap=gap*3+1
4. while( gap>0)
5. for(outer=gap; outer<A.length; outer++)
6. Ins_value=A[outer]
7. inner=outer
8. while(inner>gap-1 && A[inner-gap]≥ Ins_value)
9. A[inner]=A[inner-gap]
10. inner=inner-gap
11. A[inner]=Ins_value
12. gap=(gap-1)/3
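The Knuth-gap algorithm above can be transcribed into C as a minimal sketch (not from the slides), with A indexed from 0:

```c
#include <assert.h>

/* Shell sort with the Knuth gap sequence 1, 4, 13, 40, ...
   A direct transcription of the 12-line pseudocode above. */
void shell_sort(int A[], int n) {
    int gap = 1;
    while (gap < n / 3)              /* lines 1-3: grow the gap   */
        gap = gap * 3 + 1;
    while (gap > 0) {                /* line 4                    */
        for (int outer = gap; outer < n; outer++) {
            int ins_value = A[outer];    /* gapped insertion sort */
            int inner = outer;
            while (inner > gap - 1 && A[inner - gap] >= ins_value) {
                A[inner] = A[inner - gap];
                inner -= gap;
            }
            A[inner] = ins_value;
        }
        gap = (gap - 1) / 3;         /* line 12: shrink the gap   */
    }
}
```

For the dry-run array {35, 33, 42, 10, 14, 19, 27, 44} the initial gap becomes 4, as in the slide, and the final result is {10, 14, 19, 27, 33, 35, 42, 44}.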
Shell Sort
• Let us dry run the shell sort algorithm with the same example
as already discussed.
35 33 42 10 14 19 27 44
At the beginning
A .length=8 and gap=1
After the first three lines execute, the gap value changes to 4.
Now, gap > 0 (i.e. 4 > 0)
Now in the for loop: outer = 4; outer < 8; outer++
Ins_value = A[outer] = A[4] = 14
inner = outer, i.e. inner = 4
Now line no. 8 is true ⟹ a change occurs, and the updated array looks
as follows
14 33 42 10 35 19 27 44
Swap
Shell Sort
14 19 42 10 35 33 27 44
swap
Shell Sort
14 19 27 10 35 33 42 44
swap
Shell Sort
14 19 27 10 35 33 42 44
No swap required
Shell Sort
10 14 19 27 33 35 42 44
Shell Sort
Analysis:
• Shell sort is efficient for medium-sized lists.
• For bigger lists, this algorithm is not the best choice.
• But it is the fastest of the Ο(n²)-class sorting algorithms.
• The best case in shell sort is when the array is already sorted in
the right order, i.e. Ο(n).
• The worst-case time complexity depends on the gap sequence.
That is why various researchers proposed their own gap intervals:
1. Donald Shell gave the gap interval n/2 ⟹ Ο(n²)
2. Knuth gave the gap interval gap ← gap·3 + 1 ⟹ Ο(n^(3/2))
3. Hibbard gave the gap interval 2^k − 1 ⟹ Ο(n^(3/2))
Shell Sort
Analysis:
In General
• Shell sort is an unstable sorting algorithm because this
algorithm does not examine the elements lying in between the
intervals.
• Worst Case Complexity: less than or equal to 𝑂 𝑛2 or float
between 𝑂 𝑛 log 𝑛 𝑎𝑛𝑑 𝑂 𝑛2 .
• Best Case Complexity: 𝑂(𝑛 log 𝑛)
When the array is already sorted, the total number of
comparisons for each interval (or increment) is equal to 𝑂(𝑛 )
i.e. the size of the array.
• Average Case Complexity: 𝑂(𝑛 log 𝑛)
It is around 𝑂 𝑛1.25 .
• Sorts in place.
Description of quicksort
Performance of quicksort
We will analyze
• the worst-case running time of QUICKSORT
and RANDOMIZED-QUICKSORT (which is the same),
and
• the expected (average-case) running time of
RANDOMIZED-QUICKSORT, which is O(n lg n).
Design and Analysis of Algorithm
[Figure: chart of stock prices over days 0 to 16, corresponding to the Day/Price table below.]
➢ Question
➢ How many buy/sell pairs are possible over ‘n’ days?
(i.e. search every possible pair of buy and sell dates in
which the buy date precedes the sell date)
Find indices i and j such that the sum Σ_{x=i}^{j} Arr[x]
is as large as possible.
(Note: the numbers in the input array may be positive,
negative or zero.)
The Maximum sum subarray problem
• Input: an array A[1..n] of n numbers
– Assume that some of the numbers are negative, because this
problem is trivial when all numbers are nonnegative
• Output: a nonempty subarray A[i..j] having the largest sum
S[i, j] = Ai + Ai+1 +... + Aj
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
Changes 13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
A
maximum subarray
The Maximum subarray problem
➢ Divide and Conquer Approach
• Subproblem: Find a maximum subarray of A[low .. high]
In initial call, low =1 and high= n.
• Divide: the subarray into two subarrays of as equal size as
possible. Find the midpoint mid of the subarrays, and consider
the subarrays A[low ..mid] and A[mid+1 .. high] .
• Conquer: by finding the maximum subarrays of A[low .. mid] and
A[mid+1..high] .
• Combine: by finding a maximum subarray that crosses the
midpoint, and using the best solution out of the three (the
subarray crossing the midpoint and the two solutions found in
the conquer step).
The Maximum subarray problem
Fig (a): Possible locations of subarrays of A[low..high]: entirely in A[low..mid], entirely in A[mid+1..high], or crossing the midpoint, composed of A[i..mid] and A[mid+1..j] with low ≤ i ≤ mid < j ≤ high.
The Maximum subarray problem
➢Divide and Conquer Approach
crosses the midpoint
mid +1
entirely in A[low..mid] entirely in A[mid+1..high]
For example :
-1 3 4 -5 9 -2
Left Sum = -1 + 3 + 4 = 6
Right Sum= -5 + 9 + -2 = 2
Cross Midpoint Sum = 3 + 4 + -5 + 9 = 11
To find the maximum crossing subarray at mid = 8 (indices 1..16), scan left from index 8 and right from index 9:
left sums:  S[8..8] = 18, S[7..8] = −5, …, S[4..8] = −4   (max-left ⇒ 18)
right sums: S[9..9] = 20, S[9..10] = 13, …, S[9..13] = −2
The Maximum subarray problem (Analysis)
T(n) = Θ(1) + 2T(n/2) + Θ(n) + Θ(1)
⟹ T(n) = 2T(n/2) + Θ(n) ⟹ Θ(n lg n)
Analysing Maximum subarray problem
Divide
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
The Maximum subarray problem
➢Divide and Conquer Approach
Start Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
CS=10 CS=-5
CS=-19 CS=-5 CS=13 CS=7 CS=-7 CS=3 Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
[Note: Where LS (Left Sum), RS (Right Sum) and CS (Cross Sum)]
The Maximum subarray problem
➢Divide and Conquer Approach
maximum subarray
Conquer
LS=20 RS=25
CS=43
CS=17 CS=16
Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=13 RS=20 LS=-3 RS=18 LS=20 RS=7
Conquer
RS=12 LS=15
CS=10 CS=-5
CS=-19 CS=-5 CS=13 CS=7 CS=-7 CS=3 Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
[Note: Where LS (Left Sum), RS (Right Sum) and CS (Cross Sum)]
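The divide-and-conquer procedure traced above can be sketched in C. This is a minimal illustration (not from the slides) that returns only the maximum sum, not the subarray indices:

```c
#include <assert.h>
#include <limits.h>

/* Best sum of a subarray that crosses the midpoint:
   best A[i..mid] (scanning left) plus best A[mid+1..j]
   (scanning right), as in the crossing-sum computation above. */
static int max_crossing(const int a[], int low, int mid, int high) {
    int left = INT_MIN, right = INT_MIN, sum = 0;
    for (int i = mid; i >= low; i--) {
        sum += a[i];
        if (sum > left) left = sum;
    }
    sum = 0;
    for (int j = mid + 1; j <= high; j++) {
        sum += a[j];
        if (sum > right) right = sum;
    }
    return left + right;
}

/* Conquer: the answer is the best of the left half (LS),
   the right half (RS), and the crossing subarray (CS). */
int max_subarray(const int a[], int low, int high) {
    if (low == high) return a[low];
    int mid = (low + high) / 2;
    int ls = max_subarray(a, low, mid);
    int rs = max_subarray(a, mid + 1, high);
    int cs = max_crossing(a, low, mid, high);
    int best = ls > rs ? ls : rs;
    return cs > best ? cs : best;
}
```

On the 16-element changes array the result is 43 (the crossing sum CS = 43 in the trace), and on the small example {−1, 3, 4, −5, 9, −2} it is 11.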
Home Assignment
• Solve the Maximum Subarray problem in
Θ 𝑛 time.
Design and Analysis of Algorithm
A = | 4 2 0 1 |        B = | 2 1 3 2 |
    | 3 1 2 5 |            | 5 4 2 3 |
    | 3 2 1 4 |            | 1 4 0 2 |
    | 5 2 6 7 |            | 3 2 4 1 |
Matrix Multiplication
Example 2
First we partition the input matrices into sub matrices as shown
below:
A11 = | 4 2 |   A12 = | 0 1 |   B11 = | 2 1 |   B12 = | 3 2 |
      | 3 1 |         | 2 5 |         | 5 4 |         | 2 3 |
A21 = | 3 2 |   A22 = | 1 4 |   B21 = | 1 4 |   B22 = | 0 2 |
      | 5 2 |         | 6 7 |         | 3 2 |         | 4 1 |
Matrix Multiplication
Example 2
Calculate the values of P, Q, R, S, T, U and V:
P = (A11 + A22) · (B11 + B22)
  = | 5 6 | · | 2 3 |  =  | 64 45 |
    | 9 8 |   | 9 5 |     | 90 67 |
Q = (A21 + A22) · B11
  = | 4  6 | · | 2 1 |  =  | 38 28 |
    | 11 9 |   | 5 4 |     | 67 47 |
Matrix Multiplication
Example 2
R = A11 · (B12 − B22)
  = | 4 2 | · |  3 0 |  =  | 8 4 |
    | 3 1 |   | −2 2 |     | 7 2 |
S = A22 · (B21 − B11)
  = | 1 4 | · | −1  3 |  =  |  −9 −5 |
    | 6 7 |   | −2 −2 |     | −20  4 |
Matrix Multiplication
Example 2
T = (A11 + A12) · B22
  = | 4 3 | · | 0 2 |  =  | 12 11 |
    | 5 6 |   | 4 1 |     | 24 16 |
U = (A21 − A11) · (B11 + B12)
  = | −1 0 | · | 5 3 |  =  | −5 −3 |
    |  2 1 |   | 7 7 |     | 17 13 |
V = (A12 − A22) · (B21 + B22)
  = | −1 −3 | · | 1 6 |  =  | −22 −15 |
    | −4 −2 |   | 7 3 |     | −18 −30 |
Matrix Multiplication
Example 2
Now, Compute 𝑪𝟏𝟏 , 𝑪𝟏𝟐 , 𝑪𝟐𝟏 , 𝒂𝒏𝒅 𝑪𝟐𝟐 :
C11 = P + S − T + V
    = | 64 45 | + |  −9 −5 | − | 12 11 | + | −22 −15 |  =  | 21 14 |
      | 90 67 |   | −20  4 |   | 24 16 |   | −18 −30 |     | 28 25 |
C12 = R + T
    = | 8 4 | + | 12 11 |  =  | 20 15 |
      | 7 2 |   | 24 16 |     | 31 18 |
Matrix Multiplication
Example 2
Now, Compute 𝑪𝟏𝟏 , 𝑪𝟏𝟐 , 𝑪𝟐𝟏 , 𝒂𝒏𝒅 𝑪𝟐𝟐 :
C21 = Q + S
    = | 38 28 | + |  −9 −5 |  =  | 29 23 |
      | 67 47 |   | −20  4 |     | 47 51 |
C22 = P + R − Q + U
    = | 64 45 | + | 8 4 | − | 38 28 | + | −5 −3 |  =  | 29 18 |
      | 90 67 |   | 7 2 |   | 67 47 |   | 17 13 |     | 47 35 |
Matrix Multiplication
Example 2
So the values of C11, C12, C21, and C22 are:
C11 = | 21 14 |   C12 = | 20 15 |   C21 = | 29 23 |   C22 = | 29 18 |
      | 28 25 |         | 31 18 |         | 47 51 |         | 47 35 |
Hence the resultant matrix C is:
C = | C11 C12 |  =  | 21 14 20 15 |
    | C21 C22 |     | 28 25 31 18 |
                    | 29 23 29 18 |
                    | 47 51 47 35 |
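As a cross-check of the Strassen computation above, a naive 4x4 multiplication of the same A and B should give the same matrix C. The sketch below is an illustration, not part of the slides:

```c
#include <assert.h>

/* Naive O(n^3) 4x4 matrix multiply: C = A * B.
   Used here only to verify the Strassen result. */
void mat4_mul(const int A[4][4], const int B[4][4], int C[4][4]) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            C[i][j] = 0;
            for (int k = 0; k < 4; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}
```

Multiplying the example matrices reproduces every entry of the Strassen result, e.g. C[0][0] = 4·2 + 2·5 + 0·1 + 1·3 = 21.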
Design and Analysis of Algorithm
Lecture -25
Overview
• We represent the convex hull as the sequence of points on
the convex hull polygon, in counter- clockwise order.
Convex Hull
• Definition:
➢ Informal: Convex hull of a set of points in
plane is the shape taken by a rubber band
stretched around the nails pounded into
the plane at each point.
➢ Formal: The convex hull of a set of planar
points is the smallest convex polygon
containing all of the points.
Graham Scan
• Concept:
➢Start at point guaranteed to be on the hull.
(the point with the minimum y value)
➢Sort remaining points by polar angles of
vertices relative to the first point.
➢Go through sorted points, keeping vertices of
points that have left turns and dropping
points that have right turns.
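The left-turn/right-turn decision is usually implemented with a cross product. The sketch below is one common way to code this test (an illustration; the slides do not give this code):

```c
#include <assert.h>

typedef struct { int x, y; } Point;

/* Cross product of (b - a) and (c - a):
   > 0 for a counter-clockwise (left) turn,
   < 0 for a clockwise (right) turn,
   = 0 when a, b, c are collinear. */
int cross(Point a, Point b, Point c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}
```

During the scan, a point is popped from the hull whenever the last two kept points and the new point form a right turn (cross < 0).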
Graham Scan - Example
p10
p9 p6
p12 p7 p5
p4 p3
p11
p8 p1
p2
p0
Graham Scan - Algorithm
[Pseudocode figure; each step is annotated with its cost:
Ο(n), Ο(n log n), Ο(1), Ο(1), Ο(1), and Ο(n).]
T(n) = Ο(n) + [2T(n/2) + Ο(n)] + Ο(1) + Ο(1) + Ο(1) + Ο(n)
(the 2T(n/2) + Ο(n) term is the recurrence of the Ο(n log n) sorting step)
T(n) = Ο(n lg n) + Ο(n)
Graham Scan - Algorithm
• If the splits are not balanced, then the running time can
easily increase to Ο(n²).
Divide and Conquer (Quickhull)
Time Complexity of Quickhull
𝑇 𝑛 =𝑇 𝑙 +𝑇 𝑛−𝑙 +Ο 𝑛
Where,
• 𝑇 𝑙 ⟶ Point in left side of AB.
• 𝑇 𝑛 − 𝑙 ⟶ Point in right side of AB.
• Ο 𝑛 ⟶ To find the farthest point.
Assume that T(l) and T(n − l) each cover n/2 points. Hence,
T(n) = 2T(n/2) + Ο(n)
After applying Master Method
𝑇 𝑛 = Θ(𝑛 lg 𝑛) in average case
𝑇 𝑛 = Θ 𝑛2 in worst case.
Algorithm Analysis and Design
Recurrence Equation
(Solving Recurrence using
Master Method)
Lecture – 26 and 27
Overview
• A recurrence is a function defined in terms of
– one or more base cases, and
– itself, with smaller arguments.
Examples:
Overview
• Many technical issues:
• Floors and ceilings
[Floors and ceilings can easily be removed and don’t affect
the solution to the recurrence. They are better left to a
discrete math course.]
• Exact vs. asymptotic functions
• Boundary conditions
Overview
and f(n) = n.
Therefore, n^(log_b a) = f(n).
Hence, as per the definition of the master theorem, Case 2:
T(n) = Θ(n^(log_b a) lg n) = Θ(n lg n)
Master Method
• Recurrence (Changing Variable)
Example 8
Solve the following recurrence by using Master Method
T(n) = 2T(√n) + lg n
At first sight this recurrence looks difficult. Such recurrences can be simplified
by a change of variable. For convenience, we shall not worry about rounding off
values, such as √n, to integers.
First, rename m = log n
⟹ n = 2^m
Put the values of n and m into the above recurrence.
Hence the above recurrence can be written as follows:
T(2^m) = 2T(√(2^m)) + m
⟹ T(2^m) = 2T((2^m)^(1/2)) + m
⟹ T(2^m) = 2T(2^(m/2)) + m
Master Method
We can now rename
S(m) = T(2^m)
⟹ S(m/2) = T(2^(m/2))
Now put these values into the above equation:
S(m) = 2S(m/2) + m
Now apply the master method to solve this equation.
Here, a = 2, b = 2 and f(m) = m.
First we calculate m^(log_b a) and then compare it with f(m):
So, m^(log_b a) = m^(log₂ 2) = m¹ = m
and f(m) = m
Master Method
Hence, as per the definition of the master theorem, Case 2:
S(m) = Θ(m^(log_b a) lg m)
⟹ S(m) = Θ(m lg m)
⟹ T(2^m) = Θ(m lg m)   (as S(m) = T(2^m))
⟹ T(n) = Θ(log n · lg log n)   (as n = 2^m and m = log n)
Hence the complexity of the above recurrence is Θ(log n lg log n).
Master Method
Example 9
Solve the following recurrence by using Master Method
𝑇 𝑛 = 2𝑇( 𝑛 ) + 1
At first sight this recurrence looks difficult. Such recurrences can be simplified
by a change of variable. For convenience, we shall not worry about rounding off
values, such as √n, to integers.
First, rename m = log n
⟹ n = 2^m
⟹ n^(1/2) = 2^(m/2)
Put the value of n into the above recurrence.
Hence the above recurrence can be written as follows:
T(2^m) = 2T(√(2^m)) + 1
⟹ T(2^m) = 2T((2^m)^(1/2)) + 1
⟹ T(2^m) = 2T(2^(m/2)) + 1
Master Method
We can now rename
S(m) = T(2^m)
⟹ S(m/2) = T(2^(m/2))
Now put these values into the above equation:
S(m) = 2S(m/2) + 1
Now apply the master method to solve this equation.
Here, a = 2, b = 2 and f(m) = 1.
First we calculate m^(log_b a) and then compare it with f(m):
So, m^(log_b a) = m^(log₂ 2) = m¹ = m, and f(m) = 1.
Hence, as per the definition of the master theorem, Case 1:
m^(log_b a) > f(m)
⇒ m > 1
⇒ f(m) = 1 = Ο(m^(1−ε)) where ε = 1
Master Method
Hence, S(m) = Θ(m^(log_b a))
⟹ S(m) = Θ(m)
⟹ T(2^m) = Θ(log n)   (as S(m) = T(2^m) and m = log n)
⟹ T(n) = Θ(log n)   (as n = 2^m)
Q7. T(n) = 2T(√n) + (log₂ 2n) / (log log₂ 2n).
(Solving Recurrence using
Advanced version of Master Method)
(For GATE questions only)
Master Method (GATE)
Definition (Advance Version)
Let 𝑎 ≥ 1, 𝑏 > 1, 𝑘 ≥ 0 𝑎𝑛𝑑 𝑝 𝑖𝑠 𝑎 𝑟𝑒𝑎𝑙 𝑛𝑢𝑚𝑏𝑒𝑟 and let 𝑇 (𝑛) be defined on the
nonnegative integers by the recurrence
𝑇(𝑛) = 𝑎𝑇(𝑛/𝑏) + Θ(𝑛𝑘 𝑙𝑜𝑔𝑝 𝑛)
𝑤ℎ𝑒𝑟𝑒 𝑤𝑒 𝑖𝑛𝑡𝑒𝑟𝑝𝑟𝑒𝑡 𝑛/𝑏 𝑡𝑜 𝑚𝑒𝑎𝑛 𝑒𝑖𝑡ℎ𝑒𝑟 ⌊𝑛/𝑏⌋ 𝑜𝑟 ⌈𝑛/𝑏⌉.
𝑇ℎ𝑒𝑛 𝑇 (𝑛) 𝑐𝑎𝑛 𝑏𝑒 𝑏𝑜𝑢𝑛𝑑𝑒𝑑 𝑎𝑠𝑦𝑚𝑝𝑡𝑜𝑡𝑖𝑐𝑎𝑙𝑙𝑦 𝑏𝑦 𝑐𝑜𝑚𝑝𝑎𝑟𝑖𝑛𝑔 𝑎 𝑤𝑖𝑡ℎ 𝑏 𝑘 𝑎𝑠 𝑓𝑜𝑙𝑙𝑜𝑤𝑠.
1.𝑖𝑓 𝑎 > 𝑏 𝑘 , 𝑡ℎ𝑒𝑛 𝑇 𝑛 = Θ( 𝑛log𝑏 𝑎 )
2. 𝐼𝑓 𝑎 = 𝑏 𝑘 𝑎𝑛𝑑
Option 1 : 𝑖𝑓 𝑝 < −1, 𝑡ℎ𝑒𝑛 𝑇 𝑛 = Θ(𝑛log𝑏 𝑎 )
Option 2 :𝑖𝑓 𝑝 = −1, 𝑡ℎ𝑒𝑛 𝑇 𝑛 = Θ(𝑛log𝑏 𝑎 . 𝑙𝑜𝑔2 𝑛)