
Design and Analysis of Algorithm

(KCS503)

Introduction to Algorithms and its


analysis mechanism

Lecture - 1
Overview
• Provide an overview of algorithms and their analysis.
• Touch on the distinguishing features and the significance of the analysis of algorithms.
• Start using frameworks for describing and analysing algorithms.
• See how to describe algorithms in pseudocode in the context of real-world software development.
• Begin using asymptotic notation to express running-time analysis, with examples.
What is an Algorithm ?
• An algorithm is a finite set of rules that gives a sequence of operations for solving a specific problem.
• An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
• We can also view an algorithm as a tool for solving a well-specified computational problem.
Characteristics of an Algorithm
• Input: provided by the user.
• Output: produced by the algorithm.
• Definiteness: each step is clearly and unambiguously defined.
• Finiteness: it terminates after a finite number of steps.
• Effectiveness: every step must be basic enough that, in principle, it could be carried out with pen and paper.
Analysis of an Algorithm
• The loop-invariant technique proceeds in three steps:
– Initialization
– Maintenance
– Termination
• Analysis deals with predicting the resources that an algorithm requires to run to completion, such as memory and CPU time.
• The two main measures of an algorithm's efficiency are time and space.
Complexity of an Algorithm
• The complexity of an algorithm is a function f(n) which gives the running time and storage space requirement of the algorithm in terms of the size n of the input data.
• Space Complexity: the amount of memory the algorithm needs to run to completion.
• Time Complexity: the amount of time the algorithm needs to run to completion.
Cases in Complexity Theory

• Best Case: Minimum time


• Worst Case: Maximum amount of time
• Average Case: Expected / Average value
of the function f(n).
Example 1
#include <stdio.h>

int main()
{
printf("Hello PSIT");
return 0;
}
Complexity of Example 1
#include <stdio.h>

int main()
{
printf("Hello PSIT");
return 0;
}

𝑇(𝑛) = 𝑂(1)
Example 2
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i++) {
printf("Hello PSIT !!!\n");
}
}
Complexity of Example 2
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i++) {
printf("Hello PSIT !!!\n");
}
}

𝑇(𝑛) = 𝑂(𝑛)
Example 3
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i=i*2) {
printf("Hello PSIT !!!\n");
}
}
Complexity of Example 3
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i=i*2) {
printf("Hello PSIT !!!\n");
}
}

T(n) = O(log₂ n)
Example 4
#include <stdio.h>
#include <math.h>
void main()
{
int i, n = 8;
for (i = 2; i <= n; i=pow(i,2)) {
printf("Hello PSIT !!!\n");
}
}
Complexity of Example 4
#include <stdio.h>
#include <math.h>
void main()
{
int i, n = 8;
for (i = 2; i <= n; i=pow(i,2)) {
printf("Hello PSIT !!!\n");
}
}

T(n) = O(log₂(log₂ n))


Example 5
A(int n)
{
    int i;
    for (i = 1; i <= n; i++)
    {
        printf("ABCD");
    }
}
Complexity of Example 5
A(int n)
{
    int i;
    for (i = 1; i <= n; i++)
    {
        printf("ABCD");
    }
}

T(n) = O(n)
Example 6
A()
{
    int i = 1, s = 1, n;
    scanf("%d", &n);
    while (s <= n)
    {
        i++;
        s = s + i;
        printf("abcd");
    }
}
Complexity of Example 6
A()
{
    int i = 1, s = 1, n;
    scanf("%d", &n);
    while (s <= n)
    {
        i++;
        s = s + i;
        printf("abcd");
    }
}

T(n) = O(√n)   (after i iterations, s = 1 + 2 + ... + (i+1) ≈ i²/2, so the loop stops once i ≈ √(2n))
Example 7
A(int n)
{
    int i;
    for (i = 1; i*i <= n; i++)
    {
        printf("abcd");
    }
}
Complexity of Example 7
A(int n)
{
    int i;
    for (i = 1; i*i <= n; i++)
    {
        printf("abcd");
    }
}
T(n) = O(√n)   (the loop condition i² ≤ n holds only while i ≤ √n)
Example 8
A(int n)
{
    int i, j, k;
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= i*i; j++)
        {
            for (k = 1; k <= n/2; k++)
            {
                printf("abcd");
            }
        }
    }
}
Complexity of Example 8
T(n) = O(n⁴)

Explanation:
    I = 1, 2, 3, 4, 5, ...
    J = 1, 4, 9, 16, 25, ...              (the j-loop runs i² times)
    K = n/2, 4n/2, 9n/2, 16n/2, ...       (each pass of the j-loop runs the k-loop n/2 times)

Sum of squares of natural numbers:
    1² + 2² + ... + n² = n(n+1)(2n+1)/6

Hence T(n) = (n/2) · Σ i² = (n/2) · n(n+1)(2n+1)/6 = O(n⁴).
Design and Analysis of Algorithm
(KCS503)

Analysis of Linear Search

Lecture - 2
Searching

• Searching is a process of finding a


particular element among several given
elements.
• The search is successful if the required
element is found.
• Otherwise, the search is unsuccessful.
Searching
Searching Algorithms are a family of algorithms
used for the purpose of searching.

The searching of an element in the given array may be carried out in the following two ways:

• Linear Searching
• Binary Searching
Linear Searching
To understand the working of the linear search algorithm, let's take an unsorted array; it is easy to understand linear search with an example.

Let the elements of the array be:
[Figure: the example array]

Let the element to be searched be K = 41.


Linear Searching
Now, start from the first element and compare K = 41 with each element of the array.

The value K = 41 does not match the first element of the array, so move to the next element, and follow the same process until the required element is found.
Searching
Now, the element to be searched is found, so the algorithm will return the index of the matched element.
Linear Searching
What Is Linear Search?

• Linear search is the simplest searching algorithm.
• It traverses the array sequentially to locate the required element.
• It searches for an element by comparing it with each element of the array one by one.
• So, it is also called Sequential Search.
Linear Searching

When is the Linear Search Algorithm applied?

• No information is given about the array.
• The given array is unsorted, or the elements are unordered.
• The list of data items is small.
Linear Searching
Linear Search Algorithm –

Linear_Search (a , n , item , loc)


Begin
for i = 0 to (n - 1) by 1 do
if (a[i] = item) then
set loc = i
exit
endif
end for
set loc = -1
End
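
A direct C rendering of the pseudocode above (a sketch: returning the index, with -1 for a failed search, mirrors loc; the array contents are illustrative, since the slide's figure shows only that K = 41 appears mid-array):

#include <stdio.h>

/* Returns the index of item in a[0..n-1], or -1 if item is absent. */
int linear_search(int a[], int n, int item)
{
    for (int i = 0; i < n; i++) {
        if (a[i] == item)
            return i;       /* success: first match found */
    }
    return -1;              /* failure: item not present */
}

int main(void)
{
    int a[] = {70, 40, 30, 11, 57, 41, 25, 14, 52};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Index of 41: %d\n", linear_search(a, n, 41));
    return 0;
}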
Linear Searching
Time Complexity Analysis- (Best case)

In the best possible case,

• The element being searched may be found at the


first position.
• In this case, the search terminates in success
with just one comparison.
• Thus in best case, linear search algorithm takes
O(1) operations.
Linear Searching
Time Complexity Analysis- (Worst Case)
In the worst possible case,

• The element being searched may be present at


the last position or not present in the array at all.
• In the former case, the search terminates in success with n comparisons.
• In the latter case, the search terminates in failure with n comparisons.
• Thus in worst case, linear search algorithm takes
O(n) operations.
Linear Searching
Time Complexity Analysis- (Worst Case)
In the worst possible case (mathematically):
T(n) = O(1) + T(n − 1)
     = O(1) + O(1) + T(n − 2)
     = O(1) + O(1) + O(1) + T(n − 3)
     = O(1) + O(1) + O(1) + O(1) + T(n − 4)
After n iterations:
     = O(1) + O(1) + O(1) + ... + O(1) + T(0)
     = n·O(1) + O(1)
     = O(n)
Linear Searching
Time Complexity Analysis-

Thus, we can say that in general:

The time complexity of the linear search algorithm is O(n), where n is the number of elements in the linear array.
Design and Analysis of Algorithm
(KCS503)

Definition of Sorting problem through


exhaustive search and analysis of
Selection Sort through iteration Method

Lecture -3
A Sorting Problem
(Exhaustive Search Approach)

Input: A sequence of n numbers A₁, A₂, ..., Aₙ.

Output: A permutation (reordering) A′₁, A′₂, ..., A′ₙ of the input sequence such that A′₁ ≤ A′₂ ≤ ... ≤ A′ₙ.

The sequences are typically stored in arrays.
A Sorting Problem
(Exhaustive Search Approach)
Let there be a set of four digits and note that there are multiple possible permutations of the four digits. They are:
1234  2134  3124  4123
1243  2143  3142  4132
1324  2314  3214  4213
1342  2341  3241  4231
1423  2431  3412  4312
1432  2413  3421  4321
• There are 24 different permutations possible (as shown above).
• Only one of these permutations meets our criteria
  (i.e. A₁ ≤ A₂ ≤ ... ≤ Aₙ).

A Sorting Problem
(Exhaustive Search Approach)
How to generate the 24 permutations?

[Tree diagrams: fix each of 1, 2, 3, and 4 in turn as the first digit, then recursively permute the remaining three digits. Each choice of first digit yields 3! = 6 permutations, giving 4 × 6 = 24 permutations in total.]
A Sorting Problem
(Exhaustive Search Approach)
An in-depth look at the analysis of the 24 permutations of four digits
A Sorting Problem
(Exhaustive Search Approach)
Let there be a set of four digits and note that there are multiple
possible permutations for the four digits. They are:
1234  2134  3124  4123
1243  2143  3142  4132
1324  2314  3214  4213
1342  2341  3241  4231
1423  2431  3412  4312
1432  2413  3421  4321
• There are 24 different permutations possible. (as shown
above)
• Only one of these permutations meets our criteria.
(i.e. A1 ≤ A2 ≤ …≤ An) . (1 2 3 4)
A Sorting Problem
(Exhaustive Search Approach)
How do we do this?
Step 1: Generate all the permutations and store them.
Step 2: Check the permutations one by one and find which permutation satisfies the required condition (i.e. a₁ ≤ a₂ ≤ ... ≤ aₙ).
Step 3: Once we find it, we are done. This is Exhaustive Search.

How do we express this as an algorithm?
For each permutation P ∈ set of n! permutations:
    if (a₁ ≤ a₂ ≤ ... ≤ aₙ) holds for permutation_set[P]:
        print(permutation_set[P])

Complexity = O(n! · n) time
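
The O(n) factor is the per-permutation check. A minimal C sketch of that check (the function name is our own):

int is_sorted(const int p[], int n)
{
    /* One O(n) scan: verify p[0] <= p[1] <= ... <= p[n-1]. */
    for (int i = 1; i < n; i++) {
        if (p[i-1] > p[i])
            return 0;
    }
    return 1;
}

Running this check once for each of the n! candidate permutations gives the O(n! · n) total.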
A Sorting Problem
(Selection Sort Approach)
Selection sort is a simple and efficient sorting algorithm that works by
repeatedly selecting the smallest (or largest) element from the
unsorted portion of the list and moving it to the sorted portion of the
list.

Lets consider the following array as an example:


A [] = (7, 4, 3, 6, 5).
A Sorting Problem
(Selection Sort Approach)

Lets consider the following array as an example:


A [] = (7, 4, 3, 6, 5).

• For the first position in the sorted array, the whole array is traversed from index 0 to 4 sequentially. After going through the entire array, it is evident that 3 is the lowest value, while 7 is stored at the first position.
• Thus, swap 7 with 3. At the end of the first iteration, the item with the lowest value, the 3 that was originally at index 2, sits at the front of the sorted portion.
A Sorting Problem
(Selection Sort Approach)
Lets consider the following array as an example:
A [] = (7, 4, 3, 6, 5).

• 1st Iteration
A Sorting Problem
(Selection Sort Approach)

Lets consider the updated array as an example:


A [] = (3, 4, 7, 6, 5).

• For the second position, where 4 is present, again traverse the rest of the array in a sequential manner.
• Using the traversal method, we determine that the value 4 is already the second-lowest in the array and thus should stay in the second position. So there is no need of swapping.
A Sorting Problem
(Selection Sort Approach)
Lets consider the following array as an example:
A [] = (7, 4, 3, 6, 5).

• 2nd Iteration
A Sorting Problem
(Selection Sort Approach)

Lets consider the updated array as an example:


A [] = (3, 4, 7, 6, 5).

• For the third position, where 7 is present, again traverse the rest of the
array in a sequential manner.
• Using the traversal method, we determined that the value 5 is the third-
lowest in the array and thus should be placed in the third position.
A Sorting Problem
(Selection Sort Approach)
Lets consider the following array as an example:
A [] = (7, 4, 3, 6, 5).

• 3rd Iteration
A Sorting Problem
(Selection Sort Approach)

Lets consider the updated array as an example:


A [] = (3, 4, 5, 6, 7).

• Similarly, we execute the fourth and fifth iterations, and finally the sorted array looks as below:
A Sorting Problem
(Selection Sort Algorithm)
SELECTION SORT(arr, n)
Step 1: Repeat Steps 2 and 3 for i = 0 to n-1
Step 2:     CALL SMALLEST(arr, i, n, pos)
Step 3:     SWAP arr[i] with arr[pos]
        [END OF LOOP]
Step 4: EXIT

SMALLEST(arr, i, n, pos)
Step 1: [INITIALIZE] SET SMALL = arr[i]
Step 2: [INITIALIZE] SET pos = i
Step 3: Repeat for j = i+1 to n
            if (SMALL > arr[j])
                SET SMALL = arr[j]
                SET pos = j
            [END OF if]
        [END OF LOOP]
Step 4: RETURN pos

Use the C programming language to convert the above into a program.
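
One possible C conversion (a sketch: the helper returns pos directly instead of taking it as a parameter):

#include <stdio.h>

/* Returns the index of the smallest element in arr[i..n-1]. */
int smallest(int arr[], int i, int n)
{
    int pos = i;
    for (int j = i + 1; j < n; j++) {
        if (arr[pos] > arr[j])
            pos = j;
    }
    return pos;
}

void selection_sort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int pos = smallest(arr, i, n);
        int tmp = arr[i];          /* SWAP arr[i] with arr[pos] */
        arr[i] = arr[pos];
        arr[pos] = tmp;
    }
}

int main(void)
{
    int arr[] = {7, 4, 3, 6, 5};   /* the array from the example */
    int n = sizeof(arr) / sizeof(arr[0]);
    selection_sort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);     /* prints: 3 4 5 6 7 */
    printf("\n");
    return 0;
}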
A Sorting Problem
(Selection Sort Complexity)
Input: Given n input elements.
Output: Number of steps incurred to sort a list.
Logic: If we are given n elements, then in the first pass it will do n−1 comparisons; in the second pass, n−2; in the third pass, n−3; and so on. Thus, the total number of comparisons is
    (n−1) + (n−2) + (n−3) + ... + 2 + 1 = n(n−1)/2 = O(n²)
A Sorting Problem
(Selection Sort Complexity)

• Best Case Complexity: The selection sort algorithm has a best-case


time complexity of O(n2) for the already sorted array.
• Average Case Complexity: The average-case time complexity for the selection sort algorithm is O(n²), in which the existing elements are in jumbled order, i.e., neither in ascending order nor in descending order.
• Worst Case Complexity: The worst-case time complexity is
also O(n2), which occurs when we sort the descending order of an array
into the ascending order.
Thank You
Design and Analysis of Algorithm
(KCS503)

Insertion Sort and its Analysis

Lecture - 4
A Sorting Problem (Incremental Approach)
Input: A sequence of n numbers a₁, a₂, ..., aₙ.
Output: A permutation (reordering) a′₁, a′₂, ..., a′ₙ of the input sequence such that a′₁ ≤ a′₂ ≤ ... ≤ a′ₙ.

The sequences are typically stored in arrays.

We also refer to the numbers as keys. Along with each key may be additional information, known as satellite data. (You might want to clarify that "satellite data" does not necessarily come from a satellite!)

We will see several ways to solve the sorting problem. Each way will be expressed as an algorithm: a well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
Insertion sort
• A good algorithm for sorting a small number of elements.

• It works the way you might sort a hand of playing cards:

– Start with an empty left hand and the cards face down on the
table.
– Then remove one card at a time from the table, and insert it
into the correct position in the left hand.
– To find the correct position for a card, compare it with each of
the cards already in the hand, from right to left.
– At all times, the cards held in the left hand are sorted, and
these cards were originally the top cards of the pile on the
table.
Insertion sort (Example)

[Figures: six iterations of insertion sort on the example array; in each iteration the next key is taken and inserted into its correct position within the already-sorted prefix.]
Insertion sort (Algorithm)
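For reference, the standard CLRS pseudocode of insertion sort, which the correctness proof and the line-by-line analysis below refer to (line numbers included because the analysis refers to them):

INSERTION-SORT(A)
1  for j = 2 to A.length
2      key = A[j]
3      // Insert A[j] into the sorted sequence A[1..j-1].
4      i = j - 1
5      while i > 0 and A[i] > key
6          A[i+1] = A[i]
7          i = i - 1
8      A[i+1] = key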
Correctness Proof of Insertion Sort
• Initialization: Just before the first iteration, j = 2. The sub array
A[1 . . j − 1] is the single element A[1], which is the element originally
in A[1], and it is trivially sorted.

• Maintenance: To be precise, we would need to state and prove a loop


invariant for the “inner” while loop. Rather than getting bogged down
in another loop invariant, we instead note that the body of the inner
while loop works by moving A[ j − 1], A[ j − 2], A[ j − 3], and so on, by
one position to the right until the proper position for key (which has the
value that started out in A[ j ]) is found. At that point, the value of key is
placed into this position.

• Termination: The outer for loop ends when j > n; this occurs when j
= n + 1. Therefore, j −1 = n. Plugging n in for j −1 in the loop invariant,
the sub array A[1 . . n] consists of the elements originally in A[1 . . n]
but in sorted order. In other words, the entire array is sorted!
How do we analyze an algorithm's running
time?
• Input size: Depends on the problem being studied.

– Usually, the number of items in the input. Like the size n of the
array being sorted.
– But could be something else. If multiplying two integers, could
be the total number of bits in the two integers.
– Could be described by more than one number. For example,
graph algorithm running times are usually expressed in terms of
the number of vertices and the number of edges in the input
graph.
• Running time: On a particular input, it is the number
of primitive operations (steps) executed.
– Want to define steps to be machine-independent.
– Figure that each line of pseudo code requires a constant
amount of time.
– One line may take a different amount of time than another,
but each execution of line i takes the same amount of time
ci .
– This is assuming that the line consists only of primitive
operations.
• If the line is a subroutine call, then the actual call takes
constant time, but the execution of the subroutine being
called might not.
• If the line specifies operations other than primitive ones, then it might take more than constant time.
Analysis of insertion sort

• Assume that the i th line takes time ci , which is a


constant. (Since the third line is a comment, it
takes no time.)
• For j = 2, 3, . . . , n, let tj be the number of times
that the while loop test is executed for that value
of j .
• Note that when a for or while loop exits in the
usual way-due to the test in the loop header-the
test is executed one time more than the loop body.
Running Time of Insertion Sort
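In outline, the standard CLRS-style cost table assigns a constant cost cᵢ to line i of the pseudocode above (the comment on line 3 costs nothing) and counts how often each line executes; with tⱼ the number of while-test executions for a given j, this gives

T(n) = c₁n + c₂(n−1) + c₄(n−1) + c₅·Σⱼ₌₂ⁿ tⱼ + c₆·Σⱼ₌₂ⁿ (tⱼ − 1) + c₇·Σⱼ₌₂ⁿ (tⱼ − 1) + c₈(n−1)

In the best case (already sorted) every tⱼ = 1 and T(n) = Θ(n); in the worst case (reverse sorted) tⱼ = j, the sums become Θ(n²), and T(n) = an² + bn + c = Θ(n²) for some constants a, b, c.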
Home Assignment

• Write the algorithm of Bubble sort and


calculate the time complexity.
Design and Analysis of Algorithm

Growth of Functions

Lecture –5,6, and 7


Overview
• A way to describe behaviour of functions in the limit. We’re
studying Asymptotic efficiency.
• Describe growth of functions.(i.e. The order of growth of the
running time of an algorithm)
• Focus on what’s important by abstracting away low-order
terms and constant factors.
• How we indicate running times of algorithms.
• A way to compare “sizes” of functions through different
notations (i.e. Asymptotic Notations):
𝜪 ≈ ≤
𝜴 ≈ ≥
𝜣 ≈ =
𝝄 ≈ <
𝝎 ≈ >
Asymptotic notation (Big Oh)
Definition: f(n) = O(g(n)) if there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀.
Asymptotic notation (Big Oh)
Example 1
Prove that f(n) = 2n + 3 ∈ O(n).
    2n + 3 ≤ c·g(n)
    2n + 3 ≤ c·n
    2n + 3 ≤ 5n,  n ≥ 1   (take c = 5)
Hence f(n) = O(n).
For f(n), O(n²) is also true.
For f(n), O(2ⁿ) is also true.
But f(n) = O(lg n) is not true, because
    1 < lg n < √n < n < n² < n³ < ... < 2ⁿ < 3ⁿ < ... < nⁿ
Asymptotic notation (Big Oh)
Example 1 (Second Method)
Prove that f(n) = 2n + 3 ∈ O(n).
Here f(n) = 2n + 3 and g(n) = n.
So, as per the limit definition of Big Oh, if f(n) = O(g(n)), then
    lim_{n→∞} f(n)/g(n) ≤ c   for some c > 0, n₀ ≥ 1
    lim_{n→∞} (2n + 3)/n = lim_{n→∞} n(2 + 3/n)/n = lim_{n→∞} (2 + 3/n) = 2 + 0 = 2
which is a constant. Hence proved.
Asymptotic notation (Big Oh)
Example 2
Prove that f(n) = 2n² + 3n + 4 ∈ O(n²).
    2n² + 3n + 4 ≤ 2n² + 3n² + 4n²
    2n² + 3n + 4 ≤ 9n²,   where c = 9 and n ≥ 1
Hence f(n) = O(n²).
Asymptotic notation (Big Oh)
Example 3
If f(n) = 2^(n+1) and g(n) = 2ⁿ, then prove that f(n) ∈ O(g(n)).
    2^(n+1) = 2ⁿ · 2
So, as per the definition of Big Oh, f(n) ≤ c·g(n). Hence
    2^(n+1) ≤ 2·2ⁿ   for all n ≥ 1 and c = 2
Hence f(n) ∈ O(g(n)).
Asymptotic notation (Big Omega)
Definition: f(n) = Ω(g(n)) if there exist positive constants c and n₀ such that f(n) ≥ c·g(n) ≥ 0 for all n ≥ n₀.

Example 4
Prove that f(n) = 2n² + 3n + 4 ∈ Ω(n²).
    2n² + 3n + 4 ≥ 1·n²
Hence f(n) = Ω(n²), where c = 1 and n ≥ 1.
Asymptotic notation (Big Omega)
Example 5
If f(n) = 3n + 2 and g(n) = n², show that f(n) ∉ Ω(g(n)).
For f(n) = Ω(g(n)) we would need
    lim_{n→∞} f(n)/g(n) > 0
    lim_{n→∞} (3n + 2)/n² = lim_{n→∞} n(3 + 2/n)/n² = lim_{n→∞} (3 + 2/n)/n = 0
0 > 0 is false; hence f(n) ∉ Ω(g(n)).
Asymptotic notation (Big Omega)
Example 6
If f(n) = 2ⁿ + n² and g(n) = 2ⁿ, show that f(n) ∈ Ω(g(n)).
    2ⁿ + n² ≥ 2ⁿ   for all n ≥ 1 and c = 1
Hence f(n) ∈ Ω(g(n)) is true.
Asymptotic notation (Theta)
Definition: f(n) = Θ(g(n)) if there exist positive constants c₁, c₂, and n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.

Example 7
Show that f(n) = 10n³ + 5n² + 17 ∈ Θ(n³).
As per the definition of the Θ notation, c₁·g(n) ≤ f(n) ≤ c₂·g(n):
    10n³ ≤ 10n³ + 5n² + 17 ≤ 10n³ + 5n³ + 17n³ = 32n³
So c₁ = 10 and c₂ = 32 for all n ≥ 1. Hence, proved.
Asymptotic notation (Theta)
Example 8
Show that f(n) = (n + a)ᵇ ∈ Θ(nᵇ).
As per the limit form of the Θ notation, lim_{n→∞} f(n)/g(n) = c for all n ≥ 1 and some constant c > 0:
    lim_{n→∞} (n + a)ᵇ/nᵇ = lim_{n→∞} nᵇ(1 + a/n)ᵇ/nᵇ = lim_{n→∞} (1 + a/n)ᵇ = 1   [∵ a/∞ = 0]
which is a constant. Hence f(n) = (n + a)ᵇ ∈ Θ(nᵇ) is true.
Asymptotic notation (Little Oh)
Definition: f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0.

Example 9
If f(n) = 2n and g(n) = n², prove that f(n) = o(g(n)).
    lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2n/n² = lim_{n→∞} 2/n = 0
which is true. Hence f(n) = o(g(n)).
Asymptotic notation (Little Oh)
Example 10
If f(n) = 2n² and g(n) = n², prove that f(n) ≠ o(g(n)).
For f(n) = o(g(n)) we would need lim_{n→∞} f(n)/g(n) = 0:
    lim_{n→∞} 2n²/n² = lim_{n→∞} 2 = 2 ≠ 0
which is true. Hence f(n) ≠ o(g(n)).
Asymptotic notation (Little omega)
Definition: f(n) = ω(g(n)) if lim_{n→∞} f(n)/g(n) = ∞.

Example 11
If f(n) = 2n² + 16 and g(n) = n², show that f(n) ≠ ω(g(n)).
For f(n) = ω(g(n)) we would need lim_{n→∞} f(n)/g(n) = ∞:
    lim_{n→∞} (2n² + 16)/n² = lim_{n→∞} n²(2 + 16/n²)/n² = lim_{n→∞} (2 + 16/n²) = 2
So 2 ≠ ∞; hence f(n) ≠ ω(g(n)).
Asymptotic notation (Little omega)
Example 12
If f(n) = n² and g(n) = log n, show that f(n) ∈ ω(g(n)).
    lim_{n→∞} f(n)/g(n) = lim_{n→∞} n²/log n
Apply L'Hospital's rule:
    = lim_{n→∞} 2n/(1/n) = lim_{n→∞} 2n² = ∞
which is true as per the ω notation. Hence f(n) ∈ ω(g(n)).
Growth of the Functions

• Basically, divided into following Categories:


1. Decrement Functions
2. Constant Functions
3. Logarithmic Functions
4. Polynomial Functions
5. Exponential Functions
Growth of the Functions

1. Decrement Functions (smallest among all categories)
    f(n) = 10/n, 1/n², 1/n³, n/2ⁿ
    10/n > 1/n² > 1/n³ > n/2ⁿ
Growth of the Functions

2. Constant Functions (second smallest among all categories)
    f(n) = constant
    10/n = 0 (constant) [if n → ∞]
Growth of the Functions

3. Logarithmic Functions (Third Smallest among all


categories)
Example:
log 𝑛 > log log 𝑛 > log log log 𝑛
(log 𝑛)𝑘 > (log log 𝑛)𝑘 > (log log log 𝑛)𝑘
(log log log 𝑛) < (log log log 𝑛)𝑘
(log log 𝑛) < (log log 𝑛)𝑘
(log 𝑛) < (log 𝑛)𝑘
Combining above three inequalities, we have
(log log log 𝑛) < (log log log 𝑛)𝑘 < (log log 𝑛) < (log log 𝑛)𝑘 < (log 𝑛) < (log 𝑛)𝑘
for large value of n
Growth of the Functions

4. Polynomial Functions (Fourth Smallest among all


categories) 𝑛𝑘 : 𝑘 𝑖𝑠 𝑐𝑜𝑛𝑠𝑡𝑎𝑛𝑡
Example:
𝑛1 , 𝑛2 , 𝑛3 ,…, 𝑛𝑘

𝑛1 < 𝑛2 < 𝑛3 <… < 𝑛𝑘


Growth of the Functions

5. Exponential Functions (Fifth Smallest among all


categories) 𝑎𝑛 : 𝑖𝑓 𝑎 > 1 𝑜𝑟 0 < 𝑎 < 1
Example:
2𝑛 , 3𝑛 ,5𝑛 ,…, 100𝑛

2𝑛 < 3𝑛 < 5𝑛 <…< 100𝑛


Growth of the Functions

Decrement Function < Constant Function < Logarithmic Function < Polynomial Function < Exponential Function
Order of Functions

Example:
    1/n < 1 < lg n < √n < n < n² < n³ < ... < 2ⁿ < 3ⁿ < ... < nⁿ
Comparisons of functions
Standard notations and common functions
Design and Analysis of Algorithm
(KCS503)

Implementation and Analysis of


Selection Sort through Tail recursion
Approach

Lecture -8
Objective

• Able to learn and apply the tail-recursive approach to develop the recurrence equation
    T(n) = n + T(n − 1),  T(1) = 1.
• Analyse and solve the recurrence by the substitution method.
• Finally, show that selection sort can be solved recursively using the recursive version of linear search.
A Sorting Problem
(Selection Sort Algorithm with Tail Recursive Approach)

Start, 5 elements:                      7 4 3 6 5
After 1st iteration (4 elements left):  3 4 7 6 5
After 2nd iteration (3 elements left):  3 4 7 6 5
After 3rd iteration (2 elements left):  3 4 5 6 7
After 4th iteration (1 element left):   3 4 5 6 7
After 5th iteration (0 elements left):  3 4 5 6 7
A Sorting Problem
(Selection Sort Algorithm with Tail Recursive Approach)

SELECTION-SORT(arr, i, n-1)
    if (i == n-1)
        return
    j = minIndex(arr, i, n-1)
    if (i != j)
        swap(arr[i], arr[j])
    SELECTION-SORT(arr, i+1, n-1)

minIndex(arr, i, n-1)
    min_indx = i
    min_val = arr[i]
    for j = i+1 to n-1
        if (min_val > arr[j])
            min_val = arr[j]
            min_indx = j
    return min_indx

T(n) = n + T(n-1)
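
A C sketch of the same tail-recursive procedure (0-based, passing the array length n rather than n-1):

/* Index of the smallest element in arr[i..n-1]. */
int min_index(int arr[], int i, int n)
{
    int min_indx = i;
    for (int j = i + 1; j < n; j++) {
        if (arr[min_indx] > arr[j])
            min_indx = j;
    }
    return min_indx;
}

/* Tail-recursive selection sort of arr[i..n-1]. */
void selection_sort_rec(int arr[], int i, int n)
{
    if (i >= n - 1)
        return;                        /* zero or one element left: done */
    int j = min_index(arr, i, n);      /* O(n - i) work at this level */
    if (i != j) {
        int tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
    selection_sort_rec(arr, i + 1, n); /* tail call on the remaining suffix */
}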
A Sorting Problem
(Selection Sort Algorithm with Tail Recursive Approach)

T(n) = n + T(n-1)
     = n + (n-1) + T(n-2)
     = n + (n-1) + (n-2) + T(n-3)
     ...
     = n + (n-1) + (n-2) + ... + 2 + T(1)      (the recursion stops when i == n-1)

T(n) = n + (n-1) + (n-2) + (n-3) + ... + 3 + 2 + 1
     = n(n+1)/2
     = O(n²)

Hence, the recursive (tail) method yields the recurrence T(n) = O(n) + T(n−1) ∈ O(n²).
Thank You
Design and Analysis of Algorithm
(KCS503)

Implementation and Analysis of


Insertion Sort through Head recursion
Approach

Lecture -9
Objective

• Able to learn and apply the head-recursive approach
    T(n) = T(n − 1) + O(n),  T(1) = 1
  of insertion sort and analyse its complexity.
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)

[Figures: the six iterations of insertion sort on the example array.]
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
A = [5, 2, 4, 6, 1, 3], size = 6

insertion-sort(A, 0): insert 5, return A = [5]
insertion-sort(A, 1): insert 2, return A = [2, 5]
insertion-sort(A, 2): insert 4, return A = [2, 4, 5]
insertion-sort(A, 3): insert 6, return A = [2, 4, 5, 6]
insertion-sort(A, 4): insert 1, return A = [1, 2, 4, 5, 6]
insertion-sort(A, 5): insert 3, return A = [1, 2, 3, 4, 5, 6]
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
It was observed that the concept is:

Insert an element into a previously sorted array named A.

Solution Steps:
• Base Case: If the array size is 1 or smaller, return.
• Recursively sort the first n-1 elements.
• Insert the last element at its correct position in the sorted array.
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)

void insertion_sort(A, j) {
    // Initially j = length(A)
    // Base case
    if (j <= 1)
        return
    // Recursively sort the first j-1 elements
    insertion_sort(A, j-1)
    // Insert A[j-1] into the sorted prefix A[0..j-2]
}
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)

Unwinding (top-down), peeling off the last element on each call:
    5 2 4 6 1 3
    5 2 4 6 1
    5 2 4 6
    5 2 4
    5 2
    5

Returning (bottom-up), inserting each peeled element into the sorted prefix:
    5
    2 5
    2 4 5
    2 4 5 6
    1 2 4 5 6
    1 2 3 4 5 6
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
void insertion_sort(int A[], int j) {
    // Initially j = length(A)
    // Base case
    if (j <= 1)
        return;
    // Recursively sort the first j-1 elements
    insertion_sort(A, j-1);
    // Insert A[j-1] into the sorted prefix A[0..j-2]
    int val = A[j-1];
    int i = j-2;
    while (i >= 0 && A[i] > val) {
        A[i+1] = A[i];
        i = i-1;
    }
    A[i+1] = val;
}
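
A small driver for the routine above (a sketch, using the array from the running example):

#include <stdio.h>

int main(void)
{
    int A[] = {5, 2, 4, 6, 1, 3};
    int n = sizeof(A) / sizeof(A[0]);
    insertion_sort(A, n);          /* initially j = length(A) */
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);       /* prints: 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}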
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
Complexity:
    T(n) = T(n−1) + n
    T(n) = T(n−2) + (n−1) + n
    T(n) = T(n−3) + (n−2) + (n−1) + n
    ...
    T(n) = 1 + 2 + 3 + ... + (n−3) + (n−2) + (n−1) + n

Hence T(n) = n(n+1)/2, so T(n) = O(n²).
A Sorting Problem
(Insertion Sort Algorithm with Head Recursive Approach)
Hence, the recursive (head) method yields the recurrence T(n) = T(n−1) + O(n) ∈ O(n²).
🙏🏻 Thank You 🙏🏻
Design and Analysis of Algorithm

Recurrence Equation
(Solving Recurrence using
Iteration Methods)

Lecture – 10 and 11
Overview
• A recurrence is a function that is defined in terms of
  – one or more base cases, and
  – itself, with smaller arguments.
Examples: linear decay, division, changing variable, decision tree.
Overview
• Many technical issues:
• Floors and ceilings
[Floors and ceilings can easily be removed and don’t affect
the solution to the recurrence. They are better left to a
discrete math course.]
• Exact vs. asymptotic functions
• Boundary conditions
Overview

In algorithm analysis, the recurrence and its solution are expressed with the help of asymptotic notation.
• Example: T(n) = 2T(n/2) + Θ(n), with solution T(n) = Θ(n lg n).
• The boundary conditions are usually expressed as T(n) = O(1) for sufficiently small n.
• But when an exact, rather than an asymptotic, solution is desired, we need to deal with boundary conditions.
• In practice, we just use asymptotics most of the time and ignore boundary conditions.
Recursive Function
• Example
𝐴(𝑛)
{
𝐼𝑓(𝑛 > 1)
𝑅𝑒𝑡𝑢𝑟𝑛(𝐴 𝑛 − 1 )
}
The relation is called recurrence relation
The Recurrence relation of given function is written as follows.
𝑇(𝑛) = 𝑇(𝑛 − 1) + 1
Recursive Function
• To solve the Recurrence relation the following methods
are used:
1. Iteration method
2. Recursion-Tree method
3. Master Method
4. Substitution Method
Iteration Method( Example 1)
• In Iteration method the basic idea is to expand the recurrence
and express it as a summation of terms dependent only on ‘n’
(i.e. the number of input) and the initial conditions.
Example 1:
Solve the following recurrence relation by using the iteration method.
    T(n) = { T(n−1) + 1   if n > 1
           { 1            if n = 1
Iteration Method ( Example 1)
It means 𝑇 𝑛 = 𝑇 𝑛 − 1 + 1 𝑖𝑓 𝑛 > 1
𝑎𝑛𝑑 𝑇 𝑛 = 1 𝑤ℎ𝑒𝑛 𝑛 = 1 −−−−−−−−−−−− −(1)
𝑃𝑢𝑡 𝑛 = 𝑛 − 1 𝑖𝑛 𝑒𝑞𝑢𝑎𝑡𝑖𝑜𝑛 1, 𝑤𝑒 𝑔𝑒𝑡
𝑇 𝑛−1 =𝑇 𝑛−2 +1
𝑃𝑢𝑡 𝑡ℎ𝑒 𝑣𝑎𝑙𝑢𝑒 𝑜𝑓 𝑇(𝑛 − 1) 𝑖𝑛 𝑒𝑞𝑢𝑎𝑡𝑖𝑜𝑛 1, 𝑤𝑒 𝑔𝑒𝑡
𝑇 𝑛 = 𝑇 𝑛 − 2 + 2 −−−−−−−−−−−−−−−− −(2)
Iteration Method ( Example 1)
𝑃𝑢𝑡 𝑛 = 𝑛 − 2 𝑖𝑛 𝑒𝑞𝑢𝑎𝑡𝑖𝑜𝑛 1, 𝑤𝑒 𝑔𝑒𝑡
𝑇 𝑛−2 =𝑇 𝑛−3 +1
𝑃𝑢𝑡 𝑡ℎ𝑒 𝑣𝑎𝑙𝑢𝑒 𝑜𝑓 𝑇(𝑛 − 2) 𝑖𝑛 𝑒𝑞𝑢𝑎𝑡𝑖𝑜𝑛 2, 𝑤𝑒 𝑔𝑒𝑡
𝑇 𝑛 = 𝑇 𝑛 − 3 + 3 −−−−−−−−−−−−−−−− −(3)
……….
𝑇 𝑛 = 𝑇 𝑛 − 𝑘 + 𝑘 −−−−−−−−−−−−−−−− −(𝑘)
Iteration Method ( Example 1)
𝐿𝑒𝑡 𝑇 𝑛 − 𝑘 = 𝑇 1 = 1
(𝐴𝑠 𝑝𝑒𝑟 𝑡ℎ𝑒 𝑏𝑎𝑠𝑒 𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛 𝑜𝑓 𝑟𝑒𝑐𝑢𝑟𝑟𝑒𝑛𝑐𝑒)
𝑆𝑜 𝑛 − 𝑘 = 1
⇒𝑘 =𝑛−1
𝑁𝑜𝑤 𝑝𝑢𝑡 𝑡ℎ𝑒 𝑣𝑎𝑙𝑢𝑒 𝑜𝑓 𝑘 𝑖𝑛 𝑒𝑞𝑢𝑎𝑡𝑖𝑜𝑛 𝑘
𝑇 𝑛 =𝑇 𝑛− 𝑛−1 +𝑛−1
𝑇 𝑛 =𝑇 1 +𝑛−1
𝑇 𝑛 =1+𝑛−1 [∴ 𝑇 1 = 1]
𝑇 𝑛 =𝑛
∴ 𝑇 𝑛 =Θ 𝑛
Iteration Method ( Example 2)
Example 2:
Solve the following recurrence relation by using the iteration method.
    T(n) = { 2T(n/2) + 3n²   if n > 1
           { 11              if n = 1
Iteration Method (Example 2)
It means T(n) = 2T(n/2) + 3n² if n > 1, and T(n) = 11 when n = 1 ----(1)

Put n = n/2 in equation (1):
    T(n/2) = 2T(n/2²) + 3(n/2)²
Put the value of T(n/2) in equation (1):
    T(n) = 2[2T(n/2²) + 3(n/2)²] + 3n²
    T(n) = 2²T(n/2²) + 3n²/2 + 3n² ----(2)

Put n = n/4 in equation (1):
    T(n/4) = 2T(n/2³) + 3(n/4)²
Put the value of T(n/4) in equation (2):
    T(n) = 2²[2T(n/2³) + 3(n/4)²] + 3n²/2 + 3n²
    T(n) = 2³T(n/2³) + 3n²/2² + 3n²/2 + 3n² ----(3)
    ...
    T(n) = 2ⁱT(n/2ⁱ) + ... + 3n²/2² + 3n²/2 + 3n² ----(i-th term)

The series terminates when n/2ⁱ = 1, i.e. n = 2ⁱ. Taking log on both sides:
    log₂ n = i·log₂ 2, so i = log₂ n (because log₂ 2 = 1).

Hence the i-th term can be written as follows:
    T(n) = 3n² + 3n²/2 + 3n²/2² + ... + 2ⁱ·T(n/2ⁱ)
    T(n) = 3n² + 3n²/2 + 3n²/2² + ... + 2^(log₂ n)·T(1)
    T(n) = 3n² + 3n²/2 + 3n²/2² + ... + n^(log₂ 2)·11   [as 2^(log₂ n) = n^(log₂ 2) = n]
    T(n) = 3n²(1 + 1/2 + 1/2² + ...) + 11n

As we know, the sum of an infinite geometric series is
    a + ar + ar² + ... = Σ a·rⁱ = a/(1 − r),   |r| < 1.
Hence,
    T(n) ≤ 3n² · 1/(1 − 1/2) + 11n
    T(n) ≤ 3n² · 2 + 11n
    T(n) ≤ 6n² + 11n
Hence T(n) = O(n²).
Iteration Method ( Example 3)
Example 3:
Solve the following recurrence relation by using the iteration method.
    T(n) = { 8T(n/2) + n²   if n > 1
           { 1              if n = 1
Iteration Method (Example 3)
It means T(n) = 8T(n/2) + n² if n > 1, and T(n) = 1 when n = 1 ----(1)

Put n = n/2 in equation (1):
    T(n/2) = 8T(n/4) + (n/2)²
Put the value of T(n/2) in equation (1):
    T(n) = 8[8T(n/4) + (n/2)²] + n²
    T(n) = 8²T(n/4) + 8n²/4 + n² ----(2)

Put n = n/4 in equation (1):
    T(n/4) = 8T(n/8) + (n/4)²
Put the value of T(n/4) in equation (2):
    T(n) = 8²[8T(n/8) + (n/4)²] + 8n²/4 + n²
    T(n) = 8³T(n/8) + 8²n²/4² + 8n²/4 + n² ----(3)
    ...
    T(n) = 8ᵏT(n/2ᵏ) + 8^(k−1)n²/4^(k−1) + ... + 8²n²/4² + 8n²/4 + n²   (k-th term)
    T(n) = 8ᵏT(n/2ᵏ) + n²[8^(k−1)/4^(k−1) + 8^(k−2)/4^(k−2) + ... + 8²/4² + 8/4 + 1]
    T(n) = 8ᵏT(n/2ᵏ) + n²[1 + 2 + 2² + ... + 2^(k−2) + 2^(k−1)] ----(4)

The series terminates when n/2ᵏ = 1, i.e. n = 2ᵏ. Taking log on both sides:
    log₂ n = k·log₂ 2, so k = log₂ n (because log₂ 2 = 1).
Now apply k = log₂ n and n/2ᵏ = 1 in equation (4):
    T(n) = 8^(log₂ n)·T(1) + n²[1 + 2 + 2² + ... + 2^(log₂ n − 2) + 2^(log₂ n − 1)]
         = n^(log₂ 8)·1 + n²[1 + 2 + 2² + ... + 2^(log₂ n − 1)]
         = n³ + n²[1 + 2 + 2² + ... + 2^(log₂ n − 1)]

The bracketed term is a G.P. series. By the finite G.P. sum
    a + ar + ar² + ... + arⁿ = Σ a·rⁱ = a(r^(n+1) − 1)/(r − 1)
its value is 2^(log₂ n) − 1 = n − 1, so the second term is n²(n − 1), which is of the same order n³ as the first term; no higher-order term appears.
Hence the complexity is T(n) = O(n³).
Iteration Method ( Example 4)
Example 4:
Solve the following recurrence relation by using the iteration method.
    T(n) = { 7T(n/2) + n²   if n > 1
           { 1              if n = 1
(i.e. the recurrence of Strassen's matrix-multiplication algorithm)
Iteration Method (Example 4)
It means T(n) = 7T(n/2) + n² if n > 1, and T(n) = 1 when n = 1 ----(1)

Put n = n/2 in equation (1):
    T(n/2) = 7T(n/4) + (n/2)²
Put the value of T(n/2) in equation (1):
    T(n) = 7[7T(n/4) + (n/2)²] + n²
    T(n) = 7²T(n/4) + 7n²/4 + n² ----(2)

Put n = n/4 in equation (1):
    T(n/4) = 7T(n/8) + (n/4)²
Put the value of T(n/4) in equation (2):
    T(n) = 7²[7T(n/8) + (n/4)²] + 7n²/4 + n²
    T(n) = 7³T(n/8) + 7²n²/4² + 7n²/4 + n² ----(3)
    ...
    T(n) = 7ᵏT(n/2ᵏ) + 7^(k−1)n²/4^(k−1) + ... + 7²n²/4² + 7n²/4 + n²   (k-th term)
    T(n) = 7ᵏT(n/2ᵏ) + n²[7^(k−1)/4^(k−1) + 7^(k−2)/4^(k−2) + ... + 7²/4² + 7/4 + 1]
    T(n) = 7ᵏT(n/2ᵏ) + n²·Σ_{i=0}^{k−1} (7/4)ⁱ ----(4)

The series terminates when n/2ᵏ = 1, i.e. n = 2ᵏ, so (taking log on both sides) k = log₂ n.
Now apply k = log₂ n and n/2ᵏ = 1 in equation (4):
    T(n) = 7^(log₂ n)·T(1) + n²·Σ_{i=0}^{log₂ n − 1} (7/4)ⁱ ----(5)
         = n^(log₂ 7) + n²·Σ_{i=0}^{log₂ n − 1} (7/4)ⁱ
         ≈ n^2.8 + n²·Σ_{i=0}^{log₂ n − 1} (7/4)ⁱ

The remaining sum is a G.P. series. By the finite G.P. sum
    a + ar + ar² + ... + arⁿ = Σ a·rⁱ = a(r^(n+1) − 1)/(r − 1)
it is Θ((7/4)^(log₂ n)) = Θ(n^(log₂ 7)/n²), so the second term is also Θ(n^(log₂ 7)); no higher-order term appears.
Hence the complexity is T(n) = O(n^(log₂ 7)) ≈ O(n^2.8).
Iteration Method ( Example 5)
Example 5:
Solve the following recurrence relation by using the iteration method.
    T(n) = { T(n−1) + log n   if n > 1
           { 1                if n = 1
Iteration Method (Example 5)
It means T(n) = T(n−1) + log n if n > 1, and T(n) = 1 when n = 1 ----(1)
Put n = n−1 in equation (1):
    T(n−1) = T(n−2) + log(n−1)
Put the value of T(n−1) in equation (1):
    T(n) = T(n−2) + log(n−1) + log n
         = T(n−3) + log(n−2) + log(n−1) + log n
         = T(n−4) + log(n−3) + log(n−2) + log(n−1) + log n
    ...
Hence the k-th order is:
    T(n) = T(n−k) + log(n−k+1) + ... + log(n−2) + log(n−1) + log n
As per the assumption, n − k = 1, so k = n − 1.
The k-th order can then be written as:
    T(n) = T(1) + log 2 + log 3 + log 4 + ... + log(n−2) + log(n−1) + log n
         = 1 + log(2 · 3 · 4 · 5 ··· n)
         = 1 + log n!
Hence the complexity is O(log n!). (By Stirling's approximation, log n! = Θ(n log n).)
Iteration Method (Practice)
Q1. T(n) = { T(7n/10) + n    if n > 1
           { 1               if n = 1

Q2. T(n) = { T(n−1) + (n−1)  if n > 1
           { 1               if n = 1

Q3. T(n) = { T(n−1) + n²     if n > 1
           { 1               if n = 1
Design and Analysis of Algorithm

Recurrence Equation
(Solving Recurrence using
Recursion Tree Methods)

Lecture – 12 and 13
Overview
• A recurrence is a function that is defined in terms of
  – one or more base cases, and
  – itself, with smaller arguments.
Examples: linear decay, division, changing variable, decision tree.
Overview
• Many technical issues:
• Floors and ceilings
[Floors and ceilings can easily be removed and don’t affect
the solution to the recurrence. They are better left to a
discrete math course.]
• Exact vs. asymptotic functions
• Boundary conditions
Overview

In algorithm analysis, the recurrence and its solution are expressed with the help of asymptotic notation.
• Example: T(n) = 2T(n/2) + Θ(n), with solution T(n) = Θ(n lg n).
• The boundary conditions are usually expressed as T(n) = O(1) for sufficiently small n.
• But when an exact, rather than an asymptotic, solution is desired, we need to deal with boundary conditions.
• In practice, we just use asymptotics most of the time and ignore boundary conditions.
Recursive Function
• Example
    A(n)
    {
        if (n > 1)
            return A(n/2)
    }
The relation is called a recurrence relation.
The recurrence relation of the given function is written as follows:
    T(n) = T(n/2) + 1
Recursive Function
• To solve the Recurrence relation the following methods
are used:
1. Iteration method
2. Recursion-Tree method
3. Master Method
4. Substitution Method
Recursion Tree Method
• The recursion tree is another method for solving recurrence relations. This method works in two steps:
• First, a set of per-level costs is obtained by summing the cost within each level of the tree, using the height of the tree.
• Second, to determine the total cost over all levels of the recursion, we sum all the per-level costs.
• This method is best used for generating a good guess.
• For generating a good guess, we can ignore floors (⌊x⌋) and ceilings (⌈x⌉) when solving the recurrences, because they usually do not affect the final guess.
Recursion Tree Method (Example 1)
Example 1
Solve the recurrence T(n) = 3T(n/4) + Θ(n²) by using the recursion tree method.
Recursion Tree Method (Example 1)
Answer:
We start by focusing on finding an upper bound for the
solution by using good guess. As we know that floors
and ceilings usually do not matter when solving the
recurrences, we drop the floor and write the recurrence
equation as follows:
𝑛
𝑇 𝑛 = 3𝑇 + 𝑐𝑛2 , 𝑐 > 0
4
The term 𝑐𝑛2 , at the root represent the costs incurred by
𝑛
the subproblems of size .
4
Recursion Tree Method (Example 1)
    T(n) = 3T(n/4) + cn²,   c > 0 is a constant

[Figures (a)-(d): T(n) is progressively expanded into a recursion tree. The root costs cn² and has three children T(n/4); each of those expands into a node of cost c(n/4)² with three children T(n/16), and so on. At depth i there are 3ⁱ nodes, each of cost c(n/4ⁱ)², so level i contributes (3/16)ⁱ·cn²; the tree has height log₄ n, and the bottom level consists of 3^(log₄ n) = n^(log₄ 3) leaves T(1).]
Recursion Tree Method (Example 1)

Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/4ⁱ, i.e. the subproblem size hits 1 when n/4ⁱ = 1.
So, if n/4ⁱ = 1, then n = 4ⁱ. Applying log on both sides: log n = log 4ⁱ, so i = log₄ n.
So the height of the tree is log₄ n.
Recursion Tree Method (Example 1)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 3ⁱ, and each node at depth i (for i = 0, 1, 2, 3, ..., log₄ n − 1) has cost c(n/4ⁱ)².
Hence the total cost at level i is
    3ⁱ · c(n/4ⁱ)² = (3ⁱ/16ⁱ)·cn² = (3/16)ⁱ·cn²
and it was observed that
    T(n) = cn² + (3/16)cn² + (3/16)²cn² + (3/16)³cn² + ... + (3/16)ⁱcn² + (cost of the leaves)
Recursion Tree Method (Example 1)

However, the bottom level is special: each bottom node contributes cost T(1).
Hence the cost of the bottom level is 3ⁱ = 3^(log₄ n) = n^(log₄ 3)   (as i = log₄ n is the height of the tree).
So, the total cost of the entire tree is
    T(n) = cn² + (3/16)cn² + (3/16)²cn² + (3/16)³cn² + ... + Θ(n^(log₄ 3))
    T(n) = Σ_{i=0}^{log₄ n − 1} (3/16)ⁱ·cn² + Θ(n^(log₄ 3))
Recursion Tree Method (Example 1)
The left term is just the sum of a geometric series
(sum of a finite G.P.: Sₙ = a + ar + ar² + ... + arⁿ = Σ a·rⁱ = a(r^(n+1) − 1)/(r − 1)), so
    T(n) = [((3/16)^(log₄ n) − 1)/((3/16) − 1)]·cn² + Θ(n^(log₄ 3))
The above equation looks very complicated, so we use an infinite geometric series as an upper bound:
    T(n) = Σ_{i=0}^{log₄ n − 1} (3/16)ⁱ·cn² + Θ(n^(log₄ 3))
         ≤ Σ_{i=0}^{∞} (3/16)ⁱ·cn² + Θ(n^(log₄ 3))
         = cn²/(1 − 3/16) + Θ(n^(log₄ 3))
         = (16/13)·cn² + Θ(n^(log₄ 3))
    T(n) = O(n²)
Recursion Tree Method (Example 2)
Example 2
Solve the recurrence T(n) = 4T(n/2) + n by using the recursion tree method.
    T(n) = 4T(n/2) + cn,   c > 0
The term cn at the root represents the cost incurred at the top level; the four children represent the subproblems of size n/2.

Construction of the recursion tree:
[Figure (a) shows T(n), which is progressively expanded in (b) to form the recursion tree.]
Recursion Tree Method (Example 2)
T(n) = 4T(n/2) + cn,   c > 0

[Figure (b): the root costs cn and has four children of cost c(n/2) each, so level 1 sums to (4/2)cn; level 2 has 4² nodes of cost c(n/2²), summing to (4/2)²cn; in general, level i has 4ⁱ nodes of cost c(n/2ⁱ), summing to (4/2)ⁱcn. The height is log₂ n, and the bottom level has 4ⁱ leaves T(1).]
Recursion Tree Method (Example 2)

Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/2ⁱ, i.e. the subproblem size hits 1 when n/2ⁱ = 1.
So, if n/2ⁱ = 1, then n = 2ⁱ. Applying log on both sides: log n = log 2ⁱ, so i = log₂ n.
So the height of the tree is log₂ n.
Recursion Tree Method (Example 2)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 4ⁱ, and each node at depth i (for i = 0, 1, 2, 3, ..., log₂ n − 1) has cost c(n/2ⁱ).
Hence the total cost at level i is
    4ⁱ · c(n/2ⁱ) = (4ⁱ/2ⁱ)·cn = (4/2)ⁱ·cn
Recursion Tree Method (Example 2)
However, the bottom level is special: each bottom node contributes cost T(1).
Hence the cost of the bottom level is 4ⁱ = 4^(log₂ n) = n^(log₂ 4)   (as i = log₂ n is the height of the tree).

So, the total cost of the entire tree is
    T(n) = cn + (4/2)cn + (4/2)²cn + (4/2)³cn + ... + (4/2)ⁱcn + Θ(n^(log₂ 4))
    T(n) = Σ_{i=0}^{log₂ n − 1} (4/2)ⁱ·cn + Θ(n^(log₂ 4))
Recursion Tree Method (Example 2)
The left term is just the sum of a geometric series, so
    T(n) = cn·[(4/2)^(log₂ n) − 1]/[(4/2) − 1] + Θ(n^(log₂ 4))
    T(n) = cn·(2^(log₂ n) − 1)/(2 − 1) + Θ(n²)
    T(n) = cn·(n^(log₂ 2) − 1)/(2 − 1) + cn²
    T(n) = cn·(n − 1)/1 + cn²
    T(n) = cn² − cn + cn²
    T(n) = 2cn² − cn
Hence, T(n) = Θ(n²).
Recursion Tree Method (Example 3)
Example 3
Solve the recurrence T(n) = 2T(n/2) + Θ(n) by using the recursion tree method.
    T(n) = 2T(n/2) + cn,   c > 0
The term cn at the root represents the cost incurred at the top level; the two children represent the subproblems of size n/2.
Construction of the recursion tree:
[Figure (a) shows T(n), which is progressively expanded in (b) to form the recursion tree.]
Recursion Tree Method (Example 3)
T(n) = 2T(n/2) + cn,   c > 0

[Figure (b): the root costs cn; depth 1 has two nodes of cost c(n/2), summing to cn; depth 2 has 2² nodes of cost c(n/2²), summing to cn; in general, depth i has 2ⁱ nodes of cost c(n/2ⁱ), so every level sums to cn. The height is log₂ n, and the bottom level has 2ⁱ = n leaves T(1).]
Recursion Tree Method (Example 3)
Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/2ⁱ, i.e. the subproblem size hits 1 when n/2ⁱ = 1.
So, if n/2ⁱ = 1, then n = 2ⁱ. Applying log on both sides: i = log₂ n.
So the height of the tree is log₂ n.
Recursion Tree Method (Example 3)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 2ⁱ, and each node at depth i (for i = 0, 1, 2, ..., log₂ n − 1) has cost c(n/2ⁱ).
Hence the total cost at level i is 2ⁱ·c(n/2ⁱ) = cn.
However, the bottom level is special: each bottom node contributes cost T(1).
Hence the cost of the bottom level is 2ⁱ = 2^(log₂ n) = n^(log₂ 2) = n   (as i = log₂ n is the height of the tree).
Recursion Tree Method (Example 3)

So, the total cost of the entire tree is
    T(n) = cn + cn + cn + ... + cn       (one cn per level)
    T(n) = cn·Σ_{i=0}^{log₂ n} 1
    T(n) = cn·(log₂ n + 1)
    T(n) = cn·log₂ n + cn
Hence, T(n) = Θ(n log₂ n).
Recursion Tree Method (Example 4)
Example 4
Solve the recurrence T(n) = T(n/2) + T(n/4) + T(n/8) + Θ(n) by using the recursion tree method.
    T(n) = T(n/2) + T(n/4) + T(n/8) + cn,   c > 0
The term cn at the root represents the cost incurred at the top level; the children represent the subproblems of size n/2, n/4, and n/8.
Construction of the recursion tree:
[Figure (a) shows T(n), which is progressively expanded in (b) to form the recursion tree.]
Recursion Tree Method (Example 4)
T(n) = T(n/2) + T(n/4) + T(n/8) + cn,   c > 0

[Figure (b): the root costs cn; its children cost c(n/2), c(n/4), and c(n/8), summing to (7/8)cn; the next level sums to (7/8)²cn, and in general level i sums to (7/8)ⁱcn. The longest root-to-leaf path follows the n/2 branch and has length log₂ n.]
Recursion Tree Method (Example 4)
Analysis
First, we find the height of the recursion tree.

Here the problem divides into three subproblems of size n/2, n/4, and n/8. To calculate the height of the tree, we consider the longest path of the tree, which is the left-hand (n/2) branch. Hence a node at depth i on this path reflects a subproblem of size n/2ⁱ, i.e. the subproblem size hits 1 when n/2ⁱ = 1.
So, if n/2ⁱ = 1, then n = 2ⁱ. Applying log on both sides: i = log₂ n.
So the height of the tree is log₂ n.
Recursion Tree Method (Example 4)
Second, we determine the cost of the tree at level i: at level 1 the cost is cn/2 + cn/4 + cn/8 = (7/8)cn, and in general level i costs (7/8)ⁱcn.
So, the total cost of the tree is:
    T(n) = cn + (7/8)cn + (7/8)²cn + (7/8)³cn + ...
For simplicity we take the infinite geometric series as an upper bound:
    T(n) ≤ Σ_{i=0}^{∞} (7/8)ⁱ·cn
    T(n) ≤ cn·1/(1 − 7/8)
    T(n) ≤ cn·8
    T(n) ≤ 8cn
Hence, T(n) = O(n).
Design and Analysis of Algorithm

(Heap Sort)

Lecture -14 - 17
Overview

• O(n lg n) worst case time complexity.

• Sorts in place, like insertion sort.

• O(n) is the tight bound analysis of Build Heap.


Heap data structure
Example

• Given an array of size N. The task is to sort the


array elements by using Heap Sort.
– Input:
– N=10
– Arr[]:{16, 4, 10, 14, 7, 9, 3, 2, 8, 1}
– Output: 1 2 3 4 7 8 9 10 14 16
Example

• Given an array of size N. The task is to sort the


array elements by using Heap Sort.
– Input:
– N = 10
– arr[] = {10,9,8,7,6,5,4,3,2,1}
– Output:1 2 3 4 5 6 7 8 9 10
Heap property

• For max-heaps (largest element at root),


max-heap property: for all nodes i ,
excluding the root, A[PARENT(i )] ≥ A[i ].
• For min-heaps (smallest element at root),
min-heap property: for all nodes i ,
excluding the root, A[PARENT(i )] ≤ A[i ].
Maintaining the heap property
The time complexity of MAX-HEAPIFY(A, i, n) is O(log n). [Because the violating key sifts down along a single root-to-leaf path, whose length is at most the height of the heap, log n.]
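
A C sketch of MAX-HEAPIFY on a 0-based array (CLRS presents it 1-based; the child indices below are adjusted accordingly):

/* Restore the max-heap property at index i, assuming the subtrees
   rooted at its children are already max-heaps. n is the heap size. */
void max_heapify(int A[], int i, int n)
{
    int l = 2 * i + 1;               /* left child  */
    int r = 2 * i + 2;               /* right child */
    int largest = i;

    if (l < n && A[l] > A[largest])
        largest = l;
    if (r < n && A[r] > A[largest])
        largest = r;
    if (largest != i) {
        int tmp = A[i];              /* swap A[i] with the larger child */
        A[i] = A[largest];
        A[largest] = tmp;
        max_heapify(A, largest, n);  /* continue sifting down */
    }
}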
Building a heap
Example
Building a max-heap from the following unsorted array results in the
first heap example.
Correctness
Initialization: we know that each node ⌊n/2⌋ + 1, ⌊n/2⌋ + 2, ..., n is a leaf, which is the root of a trivial max-heap. Since i = ⌊n/2⌋ before the first iteration of the for loop, the invariant is initially true.

Maintenance: Children of node i are indexed higher than i , so by the


loop invariant, they are both roots of max-heaps. Correctly assuming
that i+1, i+2, . . . , n are all roots of max-heaps, MAX-HEAPIFY makes
node i a max-heap root. Decrementing i re-establishes the loop
invariant at each iteration.

Termination: When i = 0, the loop terminates. By the loop invariant,


each node, notably node 1, is the root of a max-heap.
Analysis
• Simple bound: O(n) calls to MAX-HEAPIFY,
each of which takes O(lg n) time ⇒ O(n lg n).

• Tighter analysis observation:

  An n-element heap has height ⌊log₂ n⌋ and at most ⌈n / 2^(h+1)⌉ nodes
  of any height h.
Tighter analysis Proof

BUILD-MAX-HEAP(A, n)
  for i ← ⌊n/2⌋ downto 1
      do MAX-HEAPIFY(A, i, n)

[Example heap used on the slide, shown as an array:]
 index: 1  2  3  4  5  6  7  8
 value: 9  6  5  0  8  2  1  3
Tighter analysis Proof
• For easy understanding, Let us take a complete binary
Tree,

• The height of a node is the number of edges from the


node to the deepest leaf.
• The depth of a node is the number of edges from the root to the node.
Tighter analysis Proof
• All the leaves are of height 0, therefore
there are 8 nodes at height 0.
• 4 numbers of nodes at height 1.
• 2 numbers of nodes at height 2.
• and one node at height 3.
Tighter analysis Proof
• Hence the question is how many nodes are
there at height ‘h’ in a complete binary tree?
• The answer is: if there are n nodes in the tree, then at most
  ⌈n / 2^(h+1)⌉ nodes are available at height h.
Tighter analysis Proof
• Now if we apply MAX-HEAPIFY() on any node of any level, the time taken
  by MAX-HEAPIFY() is proportional to the height of the node, so the work
  at height h is (n / 2^(h+1)) · Ο(h).
• Hence in case of the root the time taken is log n.
• Hence

      T(n) = Σ_{h=0}^{log n} (n / 2^(h+1)) · Ο(h)
Tighter analysis Proof

    T(n) = Σ_{h=0}^{log n} (n / 2^(h+1)) · Ο(h)

         ≤ Ο( n · Σ_{h=0}^{∞} h / 2^(h+1) )

         = Ο( (n/2) · Σ_{h=0}^{∞} h · (1/2)^h )

         = Ο( (n/2) · (1/2)/(1 − 1/2)² )      [by the series sum below]

         = Ο( (n/2) · 2 )  ⟹  T(n) = Ο(n)

Hence the running time of BUILD-MAX-HEAP(A, n) is Ο(n) in tight bound.
Tighter analysis Proof

Value of the infinite G.P. series:

    Σ_{k=0}^{∞} x^k = 1/(1 − x) = (1 − x)^(−1)

Differentiate both sides:

    Σ_{k=0}^{∞} k · x^(k−1) = 1/(1 − x)²

Multiply both sides by x:

    Σ_{k=0}^{∞} k · x^k = x/(1 − x)²
The heapsort algorithm
Given an input array, the heapsort algorithm acts as
follows:
• Builds a max-heap from the array.
• Starting with the root (the maximum element), the
algorithm places the maximum element into the
correct place in the array by swapping it with the
element in the last position in the array.
• “Discard” this last node (knowing that it is in its
correct place) by decreasing the heap size, and calling
MAX-HEAPIFY on the new (possibly incorrectly-placed)
root.
• Repeat this “discarding” process until only one node
(the smallest element) remains, and therefore is in the
correct place in the array.
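
As a concrete illustration of these steps, here is a minimal C sketch of
heapsort (1-indexed); it reuses the max_heapify function sketched earlier
in these notes:

    void max_heapify(int A[], int i, int n);   /* defined earlier */

    void build_max_heap(int A[], int n)
    {
        for (int i = n / 2; i >= 1; i--)
            max_heapify(A, i, n);
    }

    void heapsort(int A[], int n)
    {
        build_max_heap(A, n);
        for (int i = n; i >= 2; i--) {
            /* Move the maximum (root) into its final position... */
            int tmp = A[1]; A[1] = A[i]; A[i] = tmp;
            /* ...then "discard" it by shrinking the heap and
               re-heapifying the new root. */
            max_heapify(A, 1, i - 1);
        }
    }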
Example
Analysis
• BUILD-MAX-HEAP: O(n)
• for loop: n − 1 times
• exchange elements: O(1)
• MAX-HEAPIFY: O(lg n)

Total time: O(n lg n).


Heap implementation of
priority queue
• Heaps efficiently implement priority
queues. These notes will deal with max
priority queues implemented with max-
heaps. Min-priority queues are
implemented with min-heaps similarly.
• A heap gives a good compromise between the two extremes of fast
  insertion but slow extraction and vice versa: both operations take
  O(lg n) time.
Priority queue
• Maintains a dynamic set S of elements.
• Each set element has a key (an associated value).
• Max-priority queue supports dynamic-set operations:
• INSERT(S, x): inserts element x into set S.
• MAXIMUM(S): returns element of S with largest key.
• EXTRACT-MAX(S): removes and returns element of S
with largest key.
• INCREASE-KEY(S, x, k): increases value of element x’s
key to k. Assume k ≥ x’s current key value.
• Example max-priority queue application: schedule jobs on
shared computer.
• Min-priority queue supports similar operations:
• INSERT(S, x): inserts element x into set S.
• MINIMUM(S): returns element of S with
smallest key.
• EXTRACT-MIN(S): removes and returns
element of S with smallest key.
• DECREASE-KEY(S, x, k): decreases value of
element x’s key to k. Assume k ≤ x’s current
key value.
• Example min-priority queue application: event-driven simulation.
Finding the maximum element

Getting the maximum element is easy: it’s


the root.
HEAP-MAXIMUM(A)
return A[1]
Time: 𝚶(1).
Extracting max element
Given the array A:
• Make sure heap is not empty.
• Make a copy of the maximum element (the root).
• Make the last node in the tree the new root.
• Re-heapify the heap, with one fewer node.
• Return the copy of the maximum element.
HEAP-EXTRACT-MAX(A, n)            // Time complexity is Ο(log n)
  if n < 1
      then error "heap underflow"
  max ← A[1]
  A[1] ← A[n]
  MAX-HEAPIFY(A, 1, n − 1)        // remakes heap
  return max
• HEAP-INCREASE-KEY(A, i, key)
  1. if key < A[i]
  2.     error "new key is smaller than the current key"
  3. A[i] = key
  4. while i > 1 and A[parent(i)] < A[i]
  5.     swap(A[parent(i)], A[i])
  6.     i = parent(i)
The running time of HEAP-INCREASE-KEY(A, i, key) is Ο(log n).
• MAX-HEAP-INSERT(A,key)
1. A.heap-size= A.heap-size+1
2. A[heap-size]=-∞
3. HEAP-INCREASE-KEY(A,heap-size,key)
The running time of MAX-HEAP-INSERT(A,key) is
Ο(log 𝑛).
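
A minimal C sketch of these two operations (1-indexed; INT_MIN stands in
for −∞, and error handling is reduced to a comment):

    #include <limits.h>

    /* Increase A[i] to key and float it up to restore the heap. */
    void heap_increase_key(int A[], int i, int key)
    {
        if (key < A[i])
            return;            /* error: new key smaller than current */
        A[i] = key;
        while (i > 1 && A[i / 2] < A[i]) {
            int tmp = A[i]; A[i] = A[i / 2]; A[i / 2] = tmp;
            i = i / 2;
        }
    }

    /* Insert key by growing the heap with a -infinity sentinel,
       then raising the sentinel to the new key. */
    void max_heap_insert(int A[], int *heap_size, int key)
    {
        (*heap_size)++;
        A[*heap_size] = INT_MIN;       /* stands in for -infinity */
        heap_increase_key(A, *heap_size, key);
    }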
Design and Analysis of Algorithm

Linear Time Sorting


(Counting Sort)

Lecture -18
Overview

• Running time of counting sort is O(n+k).


• Requires extra space for sorting.
• Is a stable sort.
Counting Sort
• Counting sort is a sorting technique based on keys within a specific
  range.
• It works by counting the number of objects
having distinct key values (i.e. one kind of
hashing).
Counting Sort
• Consider the input set: 4, 1, 3, 4, 3. Then n = 5 and k = 4.
• Counting sort determines, for each input element x, the number of
  elements less than x.
• This information is used to place element x directly into its position
  in the output array.
• For example, if there exist 17 elements less than x, then x is placed
  into the 18th position of the output array.
Counting Sort
• Assumptions:
• n records
• Each record contains keys or data
• All keys are in the range of 0 to k, where k is
the highest key value of the array.
• Space:
For coding, this algorithm uses three arrays:
• Input array: A[1..n] stores the input data, where n is the length of
  the array.
• Output array: B[1..n] finally stores the sorted data.
• Temporary array: C[0..k] stores data temporarily.
Counting Sort
• Let us illustrate the counting sort with an example.
Apply the concept of counting sort on the given array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
Counting Sort
• Let us illustrate the counting sort with an example.
Apply the concept of counting sort on the given
array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3

• First create a new array C[0…..k] , where k is the


highest key value. And initialize with 0(i.e. zero)

for i=0 to k 0 1 2 3 4 5
C 0 0 0 0 0 0
C[i]= 0;
Counting Sort
• Let us illustrate the counting sort with an example.
Apply the concept of counting sort on the given array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3

• Find the frequencies of each object and store it in C


array.
for j=1 to A. length
C[ A[j] ] = C[ A[j] ] + 1; 0 1 2 3 4 5
C 2 0 2 3 0 1
Counting Sort
• Let us illustrate the counting sort with an example.
Apply the concept of counting sort on the given array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3

• Find the frequencies of each object and store it in C


array.
for j=1 to A. length
0 1 2 3 4 5
C[ A[j] ] = C[ A[j] ] + 1; C 2 0 2 3 0 1

• And then cumulatively add C array.


for i=1 to k 0 1 2 3 4 5
C[i] = C[i] + C[i-1]; C 2 2 4 7 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 7 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 7 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 2 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 2 2 4 6 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 1 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 6 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 4 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 4 5 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 1 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 1 2 3 5 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 0 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 5 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 4 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 7
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 3 4 7 7
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 3 4 7 7

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 2 4 7 7
Counting Sort
Counting-Sort(A, B, k)
1. Let C[0…..k] be a new array
2. for i=0 to k [Loop 1]
3. C[i]= 0;
4. for j=1 to A. length [Loop 2]
5. C[ A[j] ] = C[ A[j] ] + 1;
6. for i=1 to k [Loop 3]
7. C[i] = C[i] + C[i-1];
8. for j=A. length down to 1 [Loop 4]
9. B[C[ A[j] ]] = A[j];
10. C[ A[j] ] = C[ A[j] ] - 1;
Complexity Analysis
Counting-Sort(A, B, k)
1. Let C[0…..k] be a new array
2. for i=0 to k [Loop 1] 𝚶 𝒌 𝒕𝒊𝒎𝒆𝒔
3. C[i]= 0;
4. for j=1 to A. length [Loop 2] 𝚶 𝒏 𝒕𝒊𝒎𝒆𝒔
5. C[ A[j] ] = C[ A[j] ] + 1;
6. for i=1 to k [Loop 3] 𝚶 𝒌 𝒕𝒊𝒎𝒆𝒔
7. C[i] = C[i] + C[i-1];
8. for j=A. length down to 1 [Loop 2] 𝚶 𝒏 𝒕𝒊𝒎𝒆𝒔
9. B[C[ A[j] ]] = A[j];
10. C[ A[j] ] = C[ A[j] ] - 1;
Complexity Analysis
• So the counting sort takes a total time of: O(n + k)
• Counting sort is called stable sort.
( A sorting algorithm is stable when numbers with the
same values appear in the output array in the same
order as they do in the input array.)
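
Putting the four loops together, a minimal C sketch of counting sort
(0-indexed, so the placement uses C[A[j]] − 1 where the 1-indexed
pseudocode uses C[A[j]]; keys are assumed to lie in 0..k):

    /* Counting sort: A[0..n-1] -> B[0..n-1], keys in 0..k. */
    void counting_sort(const int A[], int B[], int n, int k)
    {
        int C[k + 1];                      /* C99 variable-length array */

        for (int i = 0; i <= k; i++)       /* Loop 1: clear counters    */
            C[i] = 0;
        for (int j = 0; j < n; j++)        /* Loop 2: count each key    */
            C[A[j]]++;
        for (int i = 1; i <= k; i++)       /* Loop 3: prefix sums       */
            C[i] += C[i - 1];
        for (int j = n - 1; j >= 0; j--) { /* Loop 4: place, back to
                                              front (keeps it stable)   */
            B[C[A[j]] - 1] = A[j];
            C[A[j]]--;
        }
    }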
Pro’s and Con’s of Counting Sort

• Pro’s
• Asymptotically very Fast - O(n + k)
• Simple to code
• Con’s
• Doesn’t sort in place.
• Requires O(n + k) extra storage space.
Design and Analysis of Algorithm

Linear Time Sorting


(Radix Sort and Bucket Sort)

Lecture -19
Linear Time Sorting
(Radix Sort)
Overview

• Running time of radix sort is Θ(d(n + k)).


• Requires extra space for sorting.
• Is a stable sort.
Radix Sort
• Radix sort is a non-comparative sorting method.
• Two classifications of radix sorts are least significant digit (LSD)
  radix sorts and most significant digit (MSD) radix sorts.
• LSD radix sorts process the integer representations starting from the
  least significant digit and move towards the most significant digit.
  MSD radix sorts work the other way around.
Radix Sort (Algorithm)

Radix_Sort(A, d)
  for i ← 1 to d
      use a stable sort to sort the array A on digit i
      (i.e. Counting Sort)
Radix Sort
• In input array A, each element is a number of d digit.
Radix_Sort(A, d)
  for i ← 1 to d
      do "use a stable sort to sort array A on digit i"
329
457
657
839
436
720
355
Radix Sort
• In input array A, each element is a number of d digit.
Radix_Sort(A, d)
  for i ← 1 to d
      do "use a stable sort to sort array A on digit i"
329 720
457 355
657 436
839 457
436 657
720 329
355 839

i
Radix Sort
• In input array A, each element is a number of d digit.
Radix_Sort(A, d)
  for i ← 1 to d
      do "use a stable sort to sort array A on digit i"
329 720 720
457 355 329
657 436 436
839 457 839
436 657 355
720 329 457
355 839 657

i i
Radix Sort
• In input array A, each element is a number of d digit.
Radix_Sort(A, d)
  for i ← 1 to d
      do "use a stable sort to sort array A on digit i"
329 720 720 329
457 355 329 355
657 436 436 436
839 457 839 457
436 657 355 657
720 329 457 720
355 839 657 839

i i i
Radix Sort (Analysis)
Radix_Sort(A, d)
  for i ← 1 to d
      use a stable sort to sort the array A on digit i
      (i.e. Counting Sort)

• Here Counting Sort executes d times.

• The running time of Counting Sort is Θ(n + k).
• Hence the running time complexity of Radix Sort is Θ(d(n + k)).
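
A minimal C sketch of LSD radix sort for non-negative decimal integers,
running one stable counting pass per digit (the helper name counting_pass
is our own):

    /* One stable counting pass on the digit selected by exp
       (1, 10, 100, ...). */
    static void counting_pass(int A[], int n, int exp)
    {
        int B[n];                 /* C99 VLA output buffer */
        int C[10] = {0};

        for (int j = 0; j < n; j++) C[(A[j] / exp) % 10]++;
        for (int i = 1; i < 10; i++) C[i] += C[i - 1];
        for (int j = n - 1; j >= 0; j--) {   /* back to front: stable */
            int d = (A[j] / exp) % 10;
            B[--C[d]] = A[j];
        }
        for (int j = 0; j < n; j++) A[j] = B[j];
    }

    /* Sort digit by digit, least significant first. */
    void radix_sort(int A[], int n, int d)
    {
        int exp = 1;
        for (int i = 1; i <= d; i++, exp *= 10)
            counting_pass(A, n, exp);
    }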
Linear Time Sorting
(Bucket Sort)
Overview

• The average time complexity is 𝑂(𝑛 + 𝑘).


• The worst time complexity is 𝑂(𝑛²).
• Requires extra space for sorting.
• Is a stable sort (when the per-bucket sort is stable).
Bucket Sort
• Bucket sort is a sorting algorithm that operates on elements by
  dividing them into different buckets and then returning the result.
• Buckets are assigned based on each element’s search key.
• At the time of returning the result, first concatenate the buckets one
  by one and then return the result in a single array.
Bucket Sort
• Some variations
– Make enough buckets so that each will
only hold one element, use a count for
duplicates.
– Use fewer buckets and then sort the
contents of each bucket.
• The more buckets you use, the faster the
algorithm will run but it uses more
memory.
Bucket Sort
• Time complexity is lowest when the items are evenly distributed over
  the buckets, close to one item per bucket.
• As buckets require extra space, this algorithm trades increased space
  consumption for a lower time complexity.
• In general, bucket sort beats comparison-based sorting techniques in
  time complexity, but it can require a huge amount of space.
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

1 2 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
1 2 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
0 1 2 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
2
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
1 2
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
1 2 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
0 1 2 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
2
0 1 2 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
2
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
0 2
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

2
0 2
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4

0 0 0 1 1 2 2 2 2 3 3 4 4
Bucket Sort
• One Value per bucket:
4 2 1 2 0 3 2 1 4 0 2 3 0

3 2 4 2 2
0 1 2 3 4

0 0 0 1 1 2 2 2 2 3 3 4 4
Bucket Sort
• One Value per bucket:
Algorithm BucketSort( S )
( values in S are between 0 and m-1 )
for j ← 0 to m-1 do           // initialize m buckets
    b[j] ← 0
for i ← 0 to n-1 do           // place elements in their
    b[S[i]] ← b[S[i]] + 1     // appropriate buckets
i ← 0
for j ← 0 to m-1 do           // place elements in buckets
    for r ← 1 to b[j] do      // back in S (concatenation)
        S[i] ← j
        i ← i + 1
Bucket Sort
One Value per bucket (Analysis)
• Bucket initialization: O(m)
• From array to buckets: O(n)
• From buckets to array: O(n), thanks to the constant-time dequeue of
  each bucket.
• Since m will likely be small compared to n, bucket sort is O(n).
• Strictly speaking, the time complexity is O(n + m).
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.20
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.12 .20
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.12 .20 .58


0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.12 .20 .58 .63


0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.64
.12 .20 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.64
.12 .20 .36 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.37 .64
.12 .20 .36 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.37 .64
.12 .20 .36 .47 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.37 .52 .64


.12 .20 .36 .47 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.18 .37 .52 .64


.12 .20 .36 .47 .58 .63
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.18 .37 .52 .64


.12 .20 .36 .47 .58 .63 .88
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.18 .37 .52 .64


.09 .12 .20 .36 .47 .58 .63 .88
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.18 .37 .52 .64


.09 .12 .20 .36 .47 .58 .63 .88 .99
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.18 .37 .52 .64


.09 .12 .20 .36 .47 .58 .63 .88 .99
0 1 2 3 4 5 6 7 8 9
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.18 .37 .52 .64


.09 .12 .20 .36 .47 .58 .63 .88 .99
0 1 2 3 4 5 6 7 8 9

Apply Internal
sorting(stable)
on highlighted
data
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.18 .37 .58 .64


.09 .12 .20 .36 .47 .52 .63 .88 .99
0 1 2 3 4 5 6 7 8 9
After Internal
sorting(stable)
on highlighted
data
Bucket Sort
• Multiple items per bucket:
.20 .12 .58 .63 .64 .36 .37 .47 .52 .18 .88 .09 .99

.18 .37 .58 .64


.09 .12 .20 .36 .47 .52 .63 .88 .99
0 1 2 3 4 5 6 7 8 9

.09 .12 .18 .20 .36 .37 .47 .52 .58 .63 .64 .88 .99
Bucket Sort
• Multiple items per bucket:
Algorithm BucketSort( A )
1. Let B[0..(n−1)] be a new array.
2. n ← A.length
3. for i ← 0 to n − 1
4.     make B[i] an empty list
5. for i ← 1 to n
6.     insert A[i] into list B[⌊n · A[i]⌋]
7. for i ← 0 to n − 1
8.     sort list B[i] with a stable sort (insertion sort)
9. Concatenate the lists B[0], B[1], B[2], ..., B[n − 1] together in
   order.
Bucket Sort
• Multiple items per bucket:
Algorithm BucketSort( A )
1. Let B[0..(n−1)] be a new array.            Ο(1)
2. n ← A.length                               Ο(1)
3. for i ← 0 to n − 1
4.     make B[i] an empty list                } Ο(n)
5. for i ← 1 to n
6.     insert A[i] into list B[⌊n · A[i]⌋]    } Ο(n)
7. for i ← 0 to n − 1
8.     sort list B[i] with a stable sort      } Ο(n²) if all the elements
       (insertion sort)                         belong to one bucket
9. Concatenate the lists B[0], ..., B[n − 1]
   together in order.                         } Ο(n)
Bucket Sort
Multiple items per bucket (Analysis)
• It was observed that, except for line 8, all other lines take Ο(n)
  time in the worst case.
• Line 8 (i.e. insertion sort) takes Ο(n²) if all the elements belong to
  one bucket.
• The average time complexity of bucket sort is O(n + k) under a uniform
  distribution of the data.
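
A minimal C sketch for keys in [0, 1), using fixed-capacity buckets and
an insertion sort per bucket (the per-bucket capacity MAXB is an
assumption made to keep the sketch short; overflow checks are omitted):

    #define MAXB 64                /* assumed per-bucket capacity */

    /* Sort A[0..n-1], all values in [0, 1). */
    void bucket_sort(double A[], int n)
    {
        double B[n][MAXB];         /* C99 VLA: n buckets */
        int cnt[n];

        for (int i = 0; i < n; i++) cnt[i] = 0;

        /* Scatter: value x lands in bucket floor(n * x). */
        for (int i = 0; i < n; i++) {
            int b = (int)(n * A[i]);
            B[b][cnt[b]++] = A[i];
        }

        /* Insertion-sort each bucket (stable). */
        for (int b = 0; b < n; b++)
            for (int i = 1; i < cnt[b]; i++) {
                double key = B[b][i];
                int j = i - 1;
                while (j >= 0 && B[b][j] > key) {
                    B[b][j + 1] = B[b][j];
                    j--;
                }
                B[b][j + 1] = key;
            }

        /* Gather: concatenate buckets back into A, in order. */
        for (int b = 0, k = 0; b < n; b++)
            for (int i = 0; i < cnt[b]; i++)
                A[k++] = B[b][i];
    }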
Bucket Sort
Characteristics of Bucket Sort
• Bucket sort assumes that the input is drawn from a
uniform distribution.
• The computational complexity estimates involve the
number of buckets.
• Bucket sort can be exceptionally fast because of the
way elements are assigned to buckets, typically using
an array where the index is the value.
Bucket Sort
Characteristics of Bucket Sort
• This means that more auxiliary memory is required for the buckets than
  for most comparison sorts; the extra space buys a lower running time.
• The average time complexity is 𝑂(𝑛 + 𝑘).
• The worst time complexity is 𝑂(𝑛²).
• The space complexity for Bucket Sort is 𝑂(𝑛 + 𝑘).
Design and Analysis of Algorithm

Linear Time Sorting


(Shell Sort)

Lecture -20
Overview

• Running time of shell sort in the worst case is O(n²), or floats
  between O(n log n) and O(n²) depending on the gap sequence.
• Running time of shell sort in the best case is Ο(n lg n); it is O(n)
  when the array is already sorted, since the total number of comparisons
  for each interval (or increment) is then equal to the size of the
  array.
• Is not a stable sort.
Shell Sort

• Designed by Donald Shell and named the


sorting algorithm after himself in 1959.
• Shell sort works by comparing elements
that are distant rather than adjacent
elements in an array or list where adjacent
elements are compared.
• Shell sort is also known as diminishing
increment sort.
Shell Sort
• Shell sort improves on the efficiency of
insertion sort by quickly shifting values to
their destination.
• This algorithm tries to decrease the distance between comparisons
  (i.e. the gap) as the sorting algorithm runs, reaching a last phase in
  which only adjacent elements are compared.
Shell Sort
• The distance of comparisons (i.e. the gap) is maintained by the
  following methods:
  • Divide by 2 (two) [designed by Donald Shell]
  • Knuth's method (i.e. gap ← gap ∗ 3 + 1, where the gap initially
    starts with 1)
Shell Sort
• Let’s execute an example with the help of
Knuth’s gap method on the following array.

35 33 42 10 14 19 27 44

• At the beginning the gap is initialized as 1


• Hence the new gap value for iteration 1 is
calculated as follows:
𝑔𝑎𝑝 = 𝑔𝑎𝑝 ∗ 3 + 1
=1∗3+1=4
Shell Sort
Swap count =0

35 33 42 10 14 19 27 44

swap
Shell Sort
Swap count =1

14 33 42 10 35 19 27 44

swap
Shell Sort
Swap count =1

14 33 42 10 35 19 27 44

swap
Shell Sort
Swap count =2

14 19 42 10 35 33 27 44

swap
Shell Sort
Swap count =2

14 19 42 10 35 33 27 44

swap
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

swap
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

No swap required
Shell Sort
• After the first iteration the array looks like as
follows.

14 19 27 10 35 33 42 44

• Again we find the gap value for the next iteration.

  The forward formula was gap = gap ∗ 3 + 1.
• Inverting it, we can write the above equation as follows:

      gap = (gap − 1) / 3

• So, the new gap is gap = (gap − 1)/3 = (4 − 1)/3 = 1
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

No swap
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

No swap
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

swap
Shell Sort
Swap count =4

14 19 10 27 35 33 42 44

swap
Shell Sort
Swap count =4

14 19 10 27 35 33 42 44

swap

14 19 10 27 35 33 42 44

swap
Shell Sort
Swap count =5

14 19 10 27 35 33 42 44

swap

14 10 19 27 35 33 42 44

swap
Shell Sort
Swap count =5

14 19 10 27 35 33 42 44

swap

14 10 19 27 35 33 42 44

swap
14 10 19 27 35 33 42 44

swap
Shell Sort
Swap count =6

14 19 10 27 35 33 42 44

swap

14 10 19 27 35 33 42 44

swap
10 14 19 27 35 33 42 44

swap
Shell Sort
Swap count =6

10 14 19 27 35 33 42 44

No swap
Shell Sort
Swap count =6

10 14 19 27 35 33 42 44

swap
Shell Sort
Swap count =7

10 14 19 27 33 35 42 44

swap
Shell Sort
Swap count =7

10 14 19 27 33 35 42 44

No swap
Shell Sort
Swap count =7

10 14 19 27 33 35 42 44

No swap
Shell Sort
Swap count =7

10 14 19 27 33 35 42 44

No swap
• Hence, the total number of swaps required in the
  • 1st iteration = 3
  • 2nd iteration = 4
• So a total of 7 swaps is required to sort the array by shell sort.
Shell Sort
Algorithm Shell sort (Knuth Method)
1. gap=1
2. while(gap < A.length/3)
3. gap=gap*3+1
4. while( gap>0)
5. for(outer=gap; outer<A.length; outer++)
6. Ins_value=A[outer]
7. inner=outer
8. while(inner>gap-1 && A[inner-gap]≥ Ins_value)
9. A[inner]=A[inner-gap]
10. inner=inner-gap
11. A[inner]=Ins_value
12. gap=(gap-1)/3
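
A minimal C sketch of this algorithm, transcribed directly from the
pseudocode above (Knuth's gap sequence):

    void shell_sort(int A[], int n)
    {
        int gap = 1;
        while (gap < n / 3)            /* largest Knuth gap below n/3 */
            gap = gap * 3 + 1;

        while (gap > 0) {
            /* Gapped insertion sort: each pass sorts the interleaved
               subsequences whose elements are gap apart. */
            for (int outer = gap; outer < n; outer++) {
                int ins_value = A[outer];
                int inner = outer;
                while (inner > gap - 1 && A[inner - gap] >= ins_value) {
                    A[inner] = A[inner - gap];
                    inner -= gap;
                }
                A[inner] = ins_value;
            }
            gap = (gap - 1) / 3;       /* step back down the sequence */
        }
    }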
Shell Sort
• Let us dry run the shell sort algorithm with the same example
as already discussed.
35 33 42 10 14 19 27 44
At the beginning
A .length=8 and gap=1
After first three line execution the gap value changed to 4
Now, gap>0 (i.e. 4>0)
Now in for loop outer=4;outer<8;outer++
Ins_value=A[outer]=A[4]=14
inner=outer i.e. inner=4
Now line 8 is true ⟹ a change occurred, and the updated array looks as
follows:
14 33 42 10 35 19 27 44

Swap
Shell Sort

Now in for loop outer=5 ;outer<8; outer++


Ins_value=A[outer]=A[5]=19
inner=outer i.e. inner=5
Now line 8 is true ⟹ a change occurred, and the updated array looks as
follows:

14 19 42 10 35 33 27 44

swap
Shell Sort

Now in for loop outer=6 ;outer<8; outer++


Ins_value=A[outer]=A[6]=27
inner=outer i.e. inner=6
Now line 8 is true ⟹ a change occurred, and the updated array looks as
follows:

14 19 27 10 35 33 42 44

swap
Shell Sort

Now in for loop outer=7 ;outer<8; outer++


Ins_value=A[outer]=A[7]=44
inner=outer i.e. inner=7
Now line 8 is false ⟹ no change in the array, and the array looks as
follows:

14 19 27 10 35 33 42 44

No swap required
Shell Sort

• Now the gap value will be calculated again.


• The new gap value is 1, and the same procedure will be continued.
  Finally the sorted array looks as given below, with 7 (seven) swaps
  in total:

10 14 19 27 33 35 42 44
Shell Sort
Analysis:
• Shell sort is efficient for medium-size lists.
• For bigger lists, this algorithm is not the best choice.
• But it is the fastest of the Ο(n²)-class sorting algorithms.
• The best case in shell sort is when the array is already sorted in the
  right order, i.e. Ο(n).
• The worst-case time complexity depends on the gap sequence. That is
  why various researchers proposed their own gap intervals:
  1. Donald Shell gave the gap interval n/2 ⟹ Ο(n²)

  2. Knuth gave the gap interval gap ← gap ∗ 3 + 1 ⟹ Ο(n^(3/2))
  3. Hibbard gave the gap interval 2^k − 1 ⟹ Ο(n^(3/2))
Shell Sort
Analysis:
In General
• Shell sort is an unstable sorting algorithm, because the algorithm
  does not examine the elements lying in between the intervals.
• Worst Case Complexity: less than or equal to O(n²), floating between
  O(n log n) and O(n²) depending on the gap sequence.
• Best Case Complexity: O(n log n)
  When the array is already sorted, the total number of comparisons for
  each interval (or increment) equals the size of the array, giving
  O(n).
• Average Case Complexity: O(n log n); empirically it is around
  O(n^1.25).

(Remark: an accurate model has not yet been discovered.)


Design and Analysis of Algorithm

Divide and Conquer strategy


(Merge Sort)
Lecture -21
Overview
• Learn the technique of “divide and conquer”
in the context of merge sort with analysis.
A Sorting Problem
(Divide and Conquer Approach)

• Divide the problem into a number of sub


problems.
• Conquer the sub problems by solving them
recursively.
– Base case: If the sub problems are small
enough, just solve them by brute force.
• Combine the sub problem solutions to give a
solution to the original problem.
Merge sort
• A sorting algorithm based on divide and conquer. Its worst-case
running time has a lower order of growth than insertion sort.
• Because we are dealing with sub problems, we state each sub
problem as sorting a sub array A[p . . r ].
• Initially, p = 1 and r = n, but these values change as we recurse
through sub problems.
To sort A[p . . r ]:
• Divide by splitting into two sub arrays A[p . . q] and A[q + 1 . . r ],
where q is the halfway point of A[p . . r ].
• Conquer by recursively sorting the two sub arrays A[p . . q] and
A[q + 1 . . r ].
• Combine by merging the two sorted sub arrays A[p . . q] and
A[q + 1 . . r ] to produce a single sorted sub array A[p . . r ]. To
accomplish this step, we’ll define a procedure MERGE(A, p, q, r ).
Example
Bottom-up view for n = 8: [Heavy lines demarcate subarrays used in
subproblems.]
Example
Bottom-up view for n = 11: [Heavy lines demarcate subarrays used in
subproblems.]
Merge Sort (Algorithm)
The recursion bottoms out when the subarray has just 1
element, so that it’s trivially sorted.
Example [A call of MERGE(9, 12, 16)]
Example [A call of MERGE(9, 12, 16)]
Example [A call of MERGE(9, 12, 16)]
Example [A call of MERGE(9, 12, 16)]
Merging
Input: Array A and indices p, q, r such that
• p≤q<r.
• Subarray A[p . . q] is sorted and subarray A[q + 1 . . r ] is
sorted. By the restrictions on p, q, r , neither subarray is
empty.

Output: The two subarrays are merged into a single sorted


subarray in A[p . . r ].

We implement it so that it takes Θ(n) time, where


n = r − p + 1 = the number of elements being merged.
Pseudocode (Merging)
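The merging pseudocode appears only as a figure on this slide, so here is
a minimal C sketch of MERGE and MERGE-SORT (0-indexed, copying the two
runs out instead of using the book's ∞ sentinels):

    #include <stdlib.h>

    /* Merge the sorted runs A[p..q] and A[q+1..r] into A[p..r]. */
    void merge(int A[], int p, int q, int r)
    {
        int n1 = q - p + 1, n2 = r - q;
        int *L = malloc(n1 * sizeof *L);
        int *R = malloc(n2 * sizeof *R);

        for (int i = 0; i < n1; i++) L[i] = A[p + i];
        for (int j = 0; j < n2; j++) R[j] = A[q + 1 + j];

        int i = 0, j = 0;
        for (int k = p; k <= r; k++) {
            /* Take from L while R is exhausted or L's head is <=. */
            if (j >= n2 || (i < n1 && L[i] <= R[j]))
                A[k] = L[i++];
            else
                A[k] = R[j++];
        }
        free(L); free(R);
    }

    void merge_sort(int A[], int p, int r)
    {
        if (p < r) {
            int q = (p + r) / 2;        /* divide at the midpoint    */
            merge_sort(A, p, q);        /* conquer the left half     */
            merge_sort(A, q + 1, r);    /* conquer the right half    */
            merge(A, p, q, r);          /* combine the sorted halves */
        }
    }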
Analyzing divide-and-conquer algorithms
Analyzing merge sort
Recursion tree (Step 1)
Recursion tree (Step 2)
Recursion tree (Step n)
Home Assignment

• Solve the Recurrence of Merge Sort with the


help of Iteration method.
Design and Analysis of Algorithm

Divide and Conquer strategy


(Quick Sort)
Lecture -22
Overview

• Worst-case running time: Θ(𝑛2 )

• Expected running time: Θ(𝑛 lg 𝑛)

• Constants hidden in Θ(𝑛 lg 𝑛) are small.

• Sorts in place.
Description of quicksort
Performance of quicksort

The running time of quicksort depends on


the partitioning of the subarrays:
• If the subarrays are balanced, then
quicksort can run as fast as mergesort.
• If they are unbalanced, then quicksort
can run as slowly as insertion sort.
Randomized version of quicksort
• We have assumed that all input permutations are equally likely.
• This is not always true.
• To correct this, we add randomization to quicksort.
• We could randomly permute the input array.
• Instead, we use random sampling, or picking one element at
random.
• Don’t always use A[r ] as the pivot. Instead, randomly pick an
element from the subarray that is being sorted.

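
The partitioning pseudocode is shown only as a figure earlier, so here is
a minimal C sketch of quicksort with the random-sampling pivot described
above (Lomuto-style partition):

    #include <stdlib.h>   /* rand() */

    static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    /* Partition around the pivot A[r]; returns the pivot's index. */
    static int partition(int A[], int p, int r)
    {
        int x = A[r], i = p - 1;
        for (int j = p; j < r; j++)
            if (A[j] <= x)
                swap(&A[++i], &A[j]);
        swap(&A[i + 1], &A[r]);
        return i + 1;
    }

    void randomized_quicksort(int A[], int p, int r)
    {
        if (p < r) {
            /* Random sampling: pick the pivot uniformly from A[p..r]. */
            int k = p + rand() % (r - p + 1);
            swap(&A[k], &A[r]);
            int q = partition(A, p, r);
            randomized_quicksort(A, p, q - 1);
            randomized_quicksort(A, q + 1, r);
        }
    }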
Analysis of quicksort

We will analyze
• the worst-case running time of QUICKSORT
and RANDOMIZED-QUICKSORT (the same),
and
• the expected (average-case) running time of
RANDOMIZED-QUICKSORT is O(n lg n) .
Design and Analysis of Algorithm

Divide and Conquer strategy


(Maximum Sub-array Problem)
Lecture -23
Overview
• Learn the technique of “divide and conquer”
in the context of the maximum sub-array with
analysis.
The Maximum subarray Problem
(A Divide and Conquer Approach)

• Divide the problem into a number of sub


problems.
• Conquer the sub problems by solving them
recursively.
– Base case: If the sub problems are small
enough, just solve them by brute force.
• Combine the sub problem solutions to give a
solution to the original problem.
The Maximum subarray problem
➢ Problem: In a share market you can buy a unit of
stock, only one time, then sell it at a later date
➢ Buy/sell at end of day
➢ Strategy: buy low, sell high
➢ The lowest price may appear after the highest price
➢ Assume you know future prices
➢ Objective: Can you maximize profit by buying at
lowest price and selling at highest price?
The Maximum subarray problem
➢ Example 1:
Day 0 1 2 3 4
Price 10 11 7 10 6

[Chart: day-wise stock price information]

Concept: Buy lowest sell highest


Objective : Maximize the profit
The Maximum subarray problem
➢ Transformation of Example 1
➢ Find sequence of days so that:
➢ the net change from last to first is maximized
➢ Look at the daily change in price
➢ 𝐶ℎ𝑎𝑛𝑔𝑒 𝑜𝑛 𝑑𝑎𝑦 𝑖 = 𝑝𝑟𝑖𝑐𝑒 𝑜𝑛 𝑑𝑎𝑦(𝑖) − 𝑝𝑟𝑖𝑐𝑒 𝑑𝑎𝑦 (𝑖 − 1)
➢ We now have an array of changes (numbers),
Day 0 1 2 3 4
Price 10 11 7 10 6
Changes 1 -4 3 -4
➢ Hence the changes are : 1, -4, 3, -4
➢ Find contiguous subarray with largest sum
➢ maximum subarray–E.g.: buy after day 2, sell after day 3
The Maximum subarray problem
➢ Example 2:
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97

[Chart: day-wise stock price information]

Concept: Buy lowest sell highest Objective : Maximize the profit


The Maximum subarray problem
➢ Transformation of Example 2:
➢ Find sequence of days so that:
➢ the net change from last to first is maximized
➢ Look at the daily change in price
➢ 𝐶ℎ𝑎𝑛𝑔𝑒 𝑜𝑛 𝑑𝑎𝑦 𝑖 = 𝑝𝑟𝑖𝑐𝑒 𝑜𝑛 𝑑𝑎𝑦(𝑖) − 𝑝𝑟𝑖𝑐𝑒 𝑑𝑎𝑦 (𝑖 − 1)
➢ We now have an array of changes (numbers),
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
Changes 13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
➢ Hence the changes are : 13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5,
-22, 15, -4, and 7
➢ Find contiguous subarray with largest sum
➢ maximum subarray–E.g.: buy after day 7, sell after day 11
The Maximum subarray problem

➢ Question
➢ How many buy/sell pairs are possible over ‘n’ days?
(i.e. search every possible pair of buy and sell dates in
which the buy date precedes the sell date)

➢ Brute force Approach


➢ Evaluate each pair and keep track of maximum.
The Maximum subarray problem
➢ Brute force Approach
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16]
S[0,1] [1] 13
S[1,2] [2] -3
Gap between day=1

S[2,3] [3] -25


S[3,4] [4] 20
S[4,5] [5] -3
S[5,6] [6] -16
S[6,7] [7] -23
S[7,8] [8] 18
S[8,9] [9] 20
S[9,10] [10] -7
S[10,11] [11] 12
S[11,12] [12] -5
S[12,13] [13] -22
S[13,14] [14] 15
S[14,15] [15] -4
S[15,16] [16] 7
The Maximum subarray problem
➢ Brute force Approach
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16]
S[0,2] [1] 13 10
S[1,3] [2] -3 -28
Gap between day=2

S[2,4] [3] -25 -5


S[3,5] [4] 20 17
S[4,6] [5] -3 -19
S[5,7] [6] -16 -39
S[6,8] [7] -23 -5
S[7,9] [8] 18 38
S[8,10] [9] 20 13
S[9,11] [10] -7 5
S[10,12] [11] 12 7
S[11,13] [12] -5 -27
S[12,14] [13] -22 -7
S[13,15] [14] 15 11
S[14,16] [15] -4 3
[16] 7
The Maximum subarray problem
➢ Brute force Approach
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16]
S[0,3] [1] 13 10 -15
S[1,4] [2] -3 -28 -8
Gap between day=3

S[2,5] [3] -25 -5 -8


S[3,6] [4] 20 17 1
S[4,7] [5] -3 -19 -42
S[5,8] [6] -16 -39 -21
S[6,9] [7] -23 -5 15
S[7,10] [8] 18 38 31
S[8,11] [9] 20 13 25
S[9,12] [10] -7 5 0
S[10,13] [11] 12 7 -15
S[11,14] [12] -5 -27 -12
S[12,15] [13] -22 -7 -11
S[13,16] [14] 15 11 18
[15] -4 3
[16] 7
The Maximum subarray problem
➢ Brute force Approach
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16]
S[0,4] [1] 13 10 -15 2
S[1,5] [2] -3 -28 -8 -27
Gap between day=4

S[2,6] [3] -25 -5 -8 -47


S[3,7] [4] 20 17 1 -4
S[4,8] [5] -3 -19 -42 -24
S[5,9] [6] -16 -39 -21 -1
S[6,10] [7] -23 -5 15 8
S[7,11] [8] 18 38 31 43
S[8,12] [9] 20 13 25 20
S[9,13] [10] -7 5 0 -22
S[10,14] [11] 12 7 -15 0
S[11,15] [12] -5 -27 -12 -16
S[12,16] [13] -22 -7 -11 -4
[14] 15 11 18
[15] -4 3
[16] 7
The Maximum subarray problem
➢ Brute force Approach
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
SUB STRING ARRAY (i.e. S Array)
[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16]
[1] 13 10 -15 5 2 -14 -37 -19 1 -6 6 1 -21 -6 -10 -3
[2] -3 -28 -8 -11 -27 -50 -32 -12 -19 -7 -12 -34 -19 -23 -16
[3] -25 -5 -8 -24 -47 -29 -9 -16 -4 -9 -31 -16 -20 -13
[4] 20 17 1 -22 -4 16 9 21 16 -6 9 5 12
[5] -3 -19 -42 -24 -4 -11 1 -4 -26 -11 -15 -8
[6] -16 -39 -21 -1 -8 4 -1 -23 -8 -12 -5
[7] -23 -5 15 8 20 15 -7 8 4 11
[8] 18 38 31 43 38 16 31 27 34
[9] 20 13 25 20 -2 13 9 16
[10] -7 5 0 -22 -7 -11 -4
[11] 12 7 -15 0 -4 3
[12] -5 -27 -12 -16 -9
[13] -22 -7 -11 -4
[14] 15 11 18
[15] -4 3
[16] 7
The Maximum subarray problem
➢ Brute force Approach
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
SUB STRING ARRAY (i.e. S Array)
[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16]
[1] 13 10 -15 5 2 -14 -37 -19 1 -6 6 1 -21 -6 -10 -3
[2] -3 -28 -8 -11 -27 -50 -32 -12 -19 -7 -12 -34 -19 -23 -16
[3] -25 -5 -8 -24 -47 -29 -9 -16 -4 -9 -31 -16 -20 -13
[4] 20 17 1 -22 -4 16 9 21 16 -6 9 5 12
[5] -3 -19 -42 -24 -4 -11 1 -4 -26 -11 -15 -8
[6] -16 -39 -21 -1 -8 4 -1 -23 -8 -12 -5
[7] -23 -5 15 8 20 15 -7 8 4 11
[8] 18 38 31 43 38 16 31 27 34
[9] 20 13 25 20 -2 13 9 16
[10] -7 5 0 -22 -7 -11 -4
[11] 12 7 -15 0 -4 3
[12] -5 -27 -12 -16 -9
[13] -22 -7 -11 -4
[14] 15 11 18
[15] -4 3
[16] 7

Hence, maximum subarray–E.g.: buy after day 7, sell on day 11


The Maximum subarray problem
➢ Brute force Approach
  ➢ The total number of pairs (combinations) is C(n, 2) = n(n − 1)/2.
     Hence the complexity is Θ(n²).
➢ Can we do better?

Let’s rewrite the problem again:


The Maximum subarray problem
The maximum sum subarray problem is the task of finding a contiguous
subarray with the largest sum in a given one-dimensional array Arr[1..n]
of numbers. The task is to find indices i and j with the condition
1 ≤ i ≤ j ≤ n, such that

    Σ_{x=i}^{j} Arr[x]

is as large as possible.
(Note: the numbers in the input array may be positive, negative, or
zero.)
The Maximum sum subarray problem
• Input: an array A[1..n] of n numbers
– Assume that some of the numbers are negative, because this
problem is trivial when all numbers are nonnegative
• Output: a nonempty subarray A[i..j] having the largest sum
S[i, j] = Ai + Ai+1 +... + Aj
Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97
Changes 13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
A

maximum subarray
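
As a baseline for comparison, here is a minimal C sketch of the Θ(n²)
brute-force scan over all (i, j) pairs, keeping a running sum so each
pair costs Ο(1):

    /* Returns the largest sum of any contiguous subarray of A[0..n-1];
       *lo and *hi receive its bounds. Assumes n >= 1. */
    int max_subarray_brute(const int A[], int n, int *lo, int *hi)
    {
        int best = A[0];
        *lo = *hi = 0;
        for (int i = 0; i < n; i++) {
            int sum = 0;
            for (int j = i; j < n; j++) {
                sum += A[j];            /* extend A[i..j] by one item */
                if (sum > best) {
                    best = sum;
                    *lo = i; *hi = j;
                }
            }
        }
        return best;
    }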
The Maximum subarray problem
➢ Divide and Conquer Approach
• Subproblem: Find a maximum subarray of A[low .. high]
In initial call, low =1 and high= n.
• Divide: the subarray into two subarrays of as equal size as
possible. Find the midpoint mid of the subarrays, and consider
the subarrays A[low ..mid] and A[mid+1 .. high] .
• Conquer: by finding the maximum subarrays of A[low .. mid] and
A[mid+1..high] .
• Combine: by finding a maximum subarray that crosses the
midpoint, and using the best solution out of the three (the
subarray crossing the midpoint and the two solutions found in
the conquer step).
The Maximum subarray problem

➢ Divide and Conquer Approach


Possible locations of a maximum subarray A[i..j] of A[low..high], where
mid = ⌊(low + high)/2⌋:
• entirely in A[low..mid]      (low ≤ i ≤ j ≤ mid)

• entirely in A[mid+1..high]   (mid < i ≤ j ≤ high)

• crossing the midpoint        (low ≤ i ≤ mid < j ≤ high)


The Maximum subarray problem
➢Divide and Conquer Approach
crosses the midpoint

low mid high

mid +1
entirely in A[low..mid] entirely in A[mid+1..high]
Fig (a): Possible locations of subarrays of A[low..high]
A[mid+1..j]

low
i mid high

mid +1 j
A[i..mid]

Fig (b): A[i..j] comprises two subarrays A[i..mid] and A[mid+1..j]


The Maximum subarray problem
➢Divide and Conquer Approach
crosses the midpoint

low mid high

mid +1
entirely in A[low..mid] entirely in A[mid+1..high]

For example :
-1 3 4 -5 9 -2
The Maximum subarray problem
➢Divide and Conquer Approach
crosses the midpoint

low mid high

mid +1
entirely in A[low..mid] entirely in A[mid+1..high]

For example :
-1 3 4 -5 9 -2

Left Sum = -1 + 3 + 4 = 6
Right Sum= -5 + 9 + -2 = 2
Cross Midpoint Sum = 3 + 4 + -5 + 9 = 11

Hence, Max sum =11 and sequence is ( 3 , 4, -5, 9)


The Maximum subarray problem
Changes(A) 13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7

indices 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
S[8..8] 𝑚𝑎𝑥 − 𝑙𝑒𝑓𝑡 ⇒ 18 20 S[9..9]
S[7..8] -5 13 S[9..10]

S[6..8] -21 25 ⟸ 𝑚𝑎𝑥 − 𝑟𝑖𝑔ℎ𝑡 S[9..11]

S[5..8] -24 20 S[9..12]


S[4..8] -4 -2 S[9..13]

S[3..8] -29 13 S[9..14]

S[2..8] -32 9 S[9..15]


S[1..8] -19 16 S[9..16]
⟹ 𝑚𝑎𝑥𝑖𝑚𝑢𝑚 𝑠𝑢𝑏𝑎𝑟𝑟𝑎𝑦 𝑐𝑟𝑜𝑠𝑠𝑖𝑛𝑔 𝑚𝑖𝑑 𝑖𝑠 𝑆[8. . 11] = 18 + 25 = 43
Find-Max-Crossing-Subarray(A, low, mid, high)
  // Find a maximum subarray of the form A[i..mid]
  left-sum = -∞
  sum = 0
  for i = mid downto low
      sum = sum + A[i]
      if sum > left-sum
          left-sum = sum
          max-left = i
  // Find a maximum subarray of the form A[mid+1..j]
  right-sum = -∞
  sum = 0
  for j = mid+1 to high
      sum = sum + A[j]
      if sum > right-sum
          right-sum = sum
          max-right = j
  // Return the indices and the sum of the two subarrays
  return (max-left, max-right, left-sum + right-sum)
The Maximum subarray problem

Initial call: FIND-MAXIMUM-SUBARRAY(A,1,n)


• Divide by computing mid.

• Conquer by the two recursive calls to FIND-MAXIMUM-SUBARRAY.

• Combine by calling FIND-MAX-CROSSING-SUBARRAY and then determining
  which of the three results gives the maximum sum.

• Base case is when the subarray has only 1 element.
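
A minimal C sketch of the whole divide-and-conquer procedure (0-indexed;
only the sums are returned, with the index bookkeeping omitted to keep
the sketch short):

    #include <limits.h>

    /* Largest sum of a subarray crossing the midpoint of A[low..high]. */
    static int max_crossing_sum(const int A[], int low, int mid, int high)
    {
        int left_sum = INT_MIN, right_sum = INT_MIN, sum;

        sum = 0;
        for (int i = mid; i >= low; i--) {          /* best A[i..mid]   */
            sum += A[i];
            if (sum > left_sum) left_sum = sum;
        }
        sum = 0;
        for (int j = mid + 1; j <= high; j++) {     /* best A[mid+1..j] */
            sum += A[j];
            if (sum > right_sum) right_sum = sum;
        }
        return left_sum + right_sum;
    }

    int find_maximum_subarray(const int A[], int low, int high)
    {
        if (low == high)
            return A[low];                          /* base case */

        int mid = (low + high) / 2;
        int left  = find_maximum_subarray(A, low, mid);
        int right = find_maximum_subarray(A, mid + 1, high);
        int cross = max_crossing_sum(A, low, mid, high);

        /* Best of: entirely left, entirely right, or crossing. */
        int best = left;
        if (right > best) best = right;
        if (cross > best) best = cross;
        return best;
    }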


The Maximum subarray problem (Analysis)

The running time T(n) of FIND-MAXIMUM-SUBARRAY breaks down as:

  • Base case (n = 1):                        Θ(1)
  • Divide (computing mid):                   Θ(1)
  • Conquer (two recursive calls):            2T(n/2)
  • Combine (FIND-MAX-CROSSING-SUBARRAY):     Θ(n)
  • Choosing the best of the three results:   Θ(1)

Hence the recurrence is

    T(n) = 2T(n/2) + Θ(n)  ⟹  Θ(n lg n)
Analysing Maximum subarray problem

A complete example is given in the next slides for easy understanding.
The Maximum subarray problem
➢Divide and Conquer Approach Start Division
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
The Maximum subarray problem
➢Divide and Conquer Approach
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
The Maximum subarray problem
➢Divide and Conquer Approach
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
The Maximum subarray problem
➢Divide and Conquer Approach
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
The Maximum subarray problem
➢Divide and Conquer Approach
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Divide
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
The Maximum subarray problem
➢Divide and Conquer Approach
Start Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
The Maximum subarray problem
➢Divide and Conquer Approach

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=-23 LS=-4 RS=7
LS=13 RS=-3 LS=-25 RS=20 LS=-3 RS=-16 RS=18 LS=20 RS=-7 LS=12 RS=-5 LS=-22 RS=15

CS=10 CS=-5
CS=-19 CS=-5 CS=13 CS=7 CS=-7 CS=3 Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
[Note: Where LS (Left Sum), RS (Right Sum) and CS (Cross Sum)]
The Maximum subarray problem
➢Divide and Conquer Approach

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=13 RS=-3

CS=10 Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
[Note: Where LS (Left Sum), RS (Right Sum) and CS (Cross Sum)]
The Maximum subarray problem
➢Divide and Conquer Approach

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=-23 LS=-4 RS=7
LS=13 RS=-3 LS=-25 RS=20 LS=-3 RS=-16 RS=18 LS=20 RS=-7 LS=12 RS=-5 LS=-22 RS=15

CS=10 CS=-5
CS=-19 CS=-5 CS=13 CS=7 CS=-7 CS=3 Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
[Note: Where LS (Left Sum), RS (Right Sum) and CS (Cross Sum)]
The Maximum subarray problem
➢Divide and Conquer Approach

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=13 RS=20 LS=-3 RS=18 LS=20 RS=7
Conquer
RS=12 LS=15

CS=5 CS=-21 CS=25 CS=18

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=-23 LS=-4 RS=7
LS=13 RS=-3 LS=-25 RS=20 LS=-3 RS=-16 RS=18 LS=20 RS=-7 LS=12 RS=-5 LS=-22 RS=15

CS=10 CS=-5
CS=-19 CS=-5 CS=13 CS=7 CS=-7 CS=3 Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
[Note: Where LS (Left Sum), RS (Right Sum) and CS (Cross Sum)]
The Maximum subarray problem
➢Divide and Conquer Approach

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=20 RS=18 LS=25 RS=18

CS=17 CS=16
Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=13 RS=20 LS=-3 RS=18 LS=20 RS=7
Conquer
RS=12 LS=15

CS=5 CS=-21 CS=25 CS=18

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=-23 LS=-4 RS=7
LS=13 RS=-3 LS=-25 RS=20 LS=-3 RS=-16 RS=18 LS=20 RS=-7 LS=12 RS=-5 LS=-22 RS=15

CS=10 CS=-5
CS=-19 CS=-5 CS=13 CS=7 CS=-7 CS=3 Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
[Note: Where LS (Left Sum), RS (Right Sum) and CS (Cross Sum)]
The Maximum subarray problem
➢Divide and Conquer Approach
maximum subarray

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Conquer
LS=20 RS=25

CS=43

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=20 RS=18 LS=25 RS=18

CS=17 CS=16
Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=13 RS=20 LS=-3 RS=18 LS=20 RS=7
Conquer
RS=12 LS=15

CS=5 CS=-21 CS=25 CS=18

13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
LS=-23 LS=-4 RS=7
LS=13 RS=-3 LS=-25 RS=20 LS=-3 RS=-16 RS=18 LS=20 RS=-7 LS=12 RS=-5 LS=-22 RS=15

CS=10 CS=-5
CS=-19 CS=-5 CS=13 CS=7 CS=-7 CS=3 Conquer
13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
[Note: Where LS (Left Sum), RS (Right Sum) and CS (Cross Sum)]
Home Assignment
• Solve the Maximum Subarray problem in
Θ 𝑛 time.
Design and Analysis of Algorithm

Divide and Conquer strategy


(Matrix Multiplication
by Strassen’s Algorithm)
Lecture -24
Overview

• Learn the implementation techniques


of “divide and conquer” in the context
of the Strassen’s Matrix multiplication
with analysis.
• Conventional strategy ⇒ Ο(n³).
• Divide and Conquer strategy ⇒ Ο(n³).
• Strassen's strategy ⇒ Ο(n^2.81).
Matrix Multiplication
• Problem definition:
Matrix Multiplication
• Conventional strategy:
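
The conventional algorithm appears only as a figure here; a minimal C
sketch of the standard triple loop over row-major n × n matrices is:

    /* C = A * B for n x n matrices, row-major: Θ(n³) scalar products. */
    void square_matrix_multiply(int n, const double A[], const double B[],
                                double C[])
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += A[i * n + k] * B[k * n + j];
                C[i * n + j] = sum;
            }
    }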
Matrix Multiplication
• The question is:
  Is Θ(n³) the best, or can we multiply the matrices in ο(n³) time?
  (i.e. can we solve it in < Θ(n³)?)
• Let's see with the divide and conquer strategy...
Matrix Multiplication
• Divide-and-conquer strategy:
  ➢ As with the other divide-and-conquer algorithms, assume that n is a
    power of 2 (i.e. n = 2^k).
  ➢ Partition each of A, B, C into four n/2 × n/2 matrices:

        A = | A11  A12 |    B = | B11  B12 |    C = | C11  C12 |
            | A21  A22 | ,      | B21  B22 | ,      | C21  C22 |

  For multiplication we can write C = A · B as

        | C11  C12 |   | A11  A12 |   | B11  B12 |
        | C21  C22 | = | A21  A22 | · | B21  B22 |
Matrix Multiplication
• Divide-and-conquer strategy:
  For multiplication we can write C = A · B as

        | C11  C12 |   | A11  A12 |   | B11  B12 |
        | C21  C22 | = | A21  A22 | · | B21  B22 |

  which creates four equations:

      C11 = A11 · B11 + A12 · B21
      C12 = A11 · B12 + A12 · B22
      C21 = A21 · B11 + A22 · B21
      C22 = A21 · B12 + A22 · B22

  Each of these equations multiplies two n/2 × n/2 matrices and then adds
  their n/2 × n/2 products.
Matrix Multiplication
• Divide-and-conquer strategy:
  Using the equations of the previous slide, we can write the divide and
  conquer algorithm:

  REC-MAT-MULT(A, B, n)
    Let C be an n x n matrix
    if n == 1
        C11 = A11 x B11
    else partition A, B, and C into n/2 × n/2 submatrices
        C11 = REC-MAT-MULT(A11, B11, n/2) + REC-MAT-MULT(A12, B21, n/2)
        C12 = REC-MAT-MULT(A11, B12, n/2) + REC-MAT-MULT(A12, B22, n/2)
        C21 = REC-MAT-MULT(A21, B11, n/2) + REC-MAT-MULT(A22, B21, n/2)
        C22 = REC-MAT-MULT(A21, B12, n/2) + REC-MAT-MULT(A22, B22, n/2)
    return C
Matrix Multiplication
• Analysis of Divide-and-conquer strategy:
  Let T(n) be the time to multiply two n × n matrices.
  Base Case: n = 1. Perform one scalar multiplication: Θ(1).
  Recursive Case: n > 1
  • Dividing takes Θ(1) time, using index calculations.
  • Conquering makes 8 recursive calls, each multiplying n/2 × n/2
    matrices (i.e. 8T(n/2)).
  • Combining takes Θ(n²) time for the four additions of n/2 × n/2
    matrices.
  Hence the recurrence is

      T(n) = Θ(1)              if n = 1
      T(n) = 8T(n/2) + Θ(n²)   if n > 1

  The complexity is Θ(n³) (apply the Master Method).
Can we do better?
Matrix Multiplication
• Strassen’s strategy :
The Idea:

➢ Make the recursion tree less bushy.

➢ Perform only 7(seven) recursive multiplications


of n/2 x n/2 matrices, rather than 8(Eight).
Matrix Multiplication
• Strassen's strategy:
  The Algorithm:
  1. As in the recursive method, partition each of the matrices into
     four n/2 × n/2 submatrices. Time: Θ(1).
  2. Compute 7 matrix products P, Q, R, S, T, U, V, each of n/2 × n/2
     matrices.
  3. Compute the n/2 × n/2 submatrices of C by adding and subtracting
     various combinations of the products. Time: Θ(n²).
Matrix Multiplication
• Strassen’s strategy :
Details of Step 2:
Compute 7 matrix products:

    P = (A11 + A22) · (B11 + B22)
    Q = (A21 + A22) · B11
    R = A11 · (B12 − B22)
    S = A22 · (B21 − B11)
    T = (A11 + A12) · B22
    U = (A21 − A11) · (B11 + B12)
    V = (A12 − A22) · (B21 + B22)
Matrix Multiplication
• Strassen’s strategy :
Details of Step 3:
Compute C with 4 adding and subtracting :
𝑪𝟏𝟏 = 𝑷 + 𝑺 − 𝑻 + 𝑽
𝑪𝟏𝟐 = 𝑹 + 𝑻
𝑪𝟐𝟏 = 𝑸 + 𝑺
𝑪𝟐𝟐 = 𝑷 + 𝑹 − 𝑸 + 𝑼
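
To make the seven products concrete, here is a minimal C sketch of one
Strassen step for the 2 × 2 scalar case (the recursive version applies
the same formulas to n/2 × n/2 blocks):

    /* One Strassen step on 2x2 matrices of scalars:
       C = A * B using 7 multiplications instead of 8. */
    void strassen_2x2(const double A[2][2], const double B[2][2],
                      double C[2][2])
    {
        double P = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
        double Q = (A[1][0] + A[1][1]) *  B[0][0];
        double R =  A[0][0]            * (B[0][1] - B[1][1]);
        double S =  A[1][1]            * (B[1][0] - B[0][0]);
        double T = (A[0][0] + A[0][1]) *  B[1][1];
        double U = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
        double V = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);

        C[0][0] = P + S - T + V;   /* C11 */
        C[0][1] = R + T;           /* C12 */
        C[1][0] = Q + S;           /* C21 */
        C[1][1] = P + R - Q + U;   /* C22 */
    }

Running this on Example 1 below (A = [[1,2],[3,4]], B = [[5,6],[7,8]])
reproduces C = [[19,22],[43,50]].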
Matrix Multiplication
• Strassen's strategy:
  Analysis:
  The recurrence is

      T(n) = Θ(1)              if n = 1
      T(n) = 7T(n/2) + Θ(n²)   if n > 1

  The complexity is Θ(n^(log₂ 7)) = Θ(n^2.81) (by using the Master
  Method).
Matrix Multiplication
Example 1
• Compute the matrix product of the following two matrices with the help of Strassen's strategy:

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}$$

Ans:
A11 = 1, A12 = 2, A21 = 3, A22 = 4
B11 = 5, B12 = 6, B21 = 7, B22 = 8
Matrix Multiplication
Example 1
Calculate the values of P, Q, R, S, T, U and V:
P = (A11 + A22)(B11 + B22) = (1 + 4)(5 + 8) = 5 × 13 = 65
Q = (A21 + A22) · B11 = (3 + 4) · 5 = 7 × 5 = 35
R = A11 · (B12 − B22) = 1 · (6 − 8) = 1 × (−2) = −2
S = A22 · (B21 − B11) = 4 · (7 − 5) = 4 × 2 = 8
T = (A11 + A12) · B22 = (1 + 2) · 8 = 3 × 8 = 24
U = (A21 − A11)(B11 + B12) = (3 − 1)(5 + 6) = 2 × 11 = 22
V = (A12 − A22)(B21 + B22) = (2 − 4)(7 + 8) = (−2) × 15 = −30
Matrix Multiplication
Example 1
Compute C11, C12, C21, and C22:
C11 = P + S − T + V = 65 + 8 − 24 − 30 = 19
C12 = R + T = −2 + 24 = 22
C21 = Q + S = 35 + 8 = 43
C22 = P + R − Q + U = 65 − 2 − 35 + 22 = 50
Hence,

$$C = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}$$
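The seven products and four combinations can be hand-checked in a few lines of C. The sketch below redoes Example 1 at the scalar level (one level of Strassen's recursion, where the n/2 × n/2 blocks are single numbers); it is a check of the formulas, not a general implementation.

#include <stdio.h>

int main(void)
{
    /* entries of A and B from Example 1 */
    int a11 = 1, a12 = 2, a21 = 3, a22 = 4;
    int b11 = 5, b12 = 6, b21 = 7, b22 = 8;

    /* the seven Strassen products */
    int P = (a11 + a22) * (b11 + b22);   /*  65 */
    int Q = (a21 + a22) * b11;           /*  35 */
    int R = a11 * (b12 - b22);           /*  -2 */
    int S = a22 * (b21 - b11);           /*   8 */
    int T = (a11 + a12) * b22;           /*  24 */
    int U = (a21 - a11) * (b11 + b12);   /*  22 */
    int V = (a12 - a22) * (b21 + b22);   /* -30 */

    /* four additions/subtractions give the entries of C */
    printf("%d %d\n", P + S - T + V, R + T);   /* 19 22 */
    printf("%d %d\n", Q + S, P + R - Q + U);   /* 43 50 */
    return 0;
}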
Matrix Multiplication
Example 2
Compute the matrix product of the following two matrices with the help of Strassen's strategy:

$$A = \begin{pmatrix} 4 & 2 & 0 & 1 \\ 3 & 1 & 2 & 5 \\ 3 & 2 & 1 & 4 \\ 5 & 2 & 6 & 7 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 1 & 3 & 2 \\ 5 & 4 & 2 & 3 \\ 1 & 4 & 0 & 2 \\ 3 & 2 & 4 & 1 \end{pmatrix}$$
Matrix Multiplication
Example 2
First we partition the input matrices into submatrices as shown below:

$$A_{11} = \begin{pmatrix} 4 & 2 \\ 3 & 1 \end{pmatrix},\; A_{12} = \begin{pmatrix} 0 & 1 \\ 2 & 5 \end{pmatrix},\; A_{21} = \begin{pmatrix} 3 & 2 \\ 5 & 2 \end{pmatrix},\; A_{22} = \begin{pmatrix} 1 & 4 \\ 6 & 7 \end{pmatrix}$$

$$B_{11} = \begin{pmatrix} 2 & 1 \\ 5 & 4 \end{pmatrix},\; B_{12} = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix},\; B_{21} = \begin{pmatrix} 1 & 4 \\ 3 & 2 \end{pmatrix},\; B_{22} = \begin{pmatrix} 0 & 2 \\ 4 & 1 \end{pmatrix}$$
Matrix Multiplication
Example 2
Calculate the values of P, Q, R, S, T, U and V:

$$P = (A_{11} + A_{22})(B_{11} + B_{22}) = \begin{pmatrix} 5 & 6 \\ 9 & 8 \end{pmatrix} \begin{pmatrix} 2 & 3 \\ 9 & 5 \end{pmatrix} = \begin{pmatrix} 64 & 45 \\ 90 & 67 \end{pmatrix}$$

$$Q = (A_{21} + A_{22}) \cdot B_{11} = \begin{pmatrix} 4 & 6 \\ 11 & 9 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 5 & 4 \end{pmatrix} = \begin{pmatrix} 38 & 28 \\ 67 & 47 \end{pmatrix}$$
Matrix Multiplication
Example 2
$$R = A_{11} \cdot (B_{12} - B_{22}) = \begin{pmatrix} 4 & 2 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} 3 & 0 \\ -2 & 2 \end{pmatrix} = \begin{pmatrix} 8 & 4 \\ 7 & 2 \end{pmatrix}$$

$$S = A_{22} \cdot (B_{21} - B_{11}) = \begin{pmatrix} 1 & 4 \\ 6 & 7 \end{pmatrix} \begin{pmatrix} -1 & 3 \\ -2 & -2 \end{pmatrix} = \begin{pmatrix} -9 & -5 \\ -20 & 4 \end{pmatrix}$$
Matrix Multiplication
Example 2
$$T = (A_{11} + A_{12}) \cdot B_{22} = \begin{pmatrix} 4 & 3 \\ 5 & 6 \end{pmatrix} \begin{pmatrix} 0 & 2 \\ 4 & 1 \end{pmatrix} = \begin{pmatrix} 12 & 11 \\ 24 & 16 \end{pmatrix}$$

$$U = (A_{21} - A_{11})(B_{11} + B_{12}) = \begin{pmatrix} -1 & 0 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 5 & 3 \\ 7 & 7 \end{pmatrix} = \begin{pmatrix} -5 & -3 \\ 17 & 13 \end{pmatrix}$$

$$V = (A_{12} - A_{22})(B_{21} + B_{22}) = \begin{pmatrix} -1 & -3 \\ -4 & -2 \end{pmatrix} \begin{pmatrix} 1 & 6 \\ 7 & 3 \end{pmatrix} = \begin{pmatrix} -22 & -15 \\ -18 & -30 \end{pmatrix}$$
Matrix Multiplication
Example 2
Now, compute C11, C12, C21, and C22:

$$C_{11} = P + S - T + V = \begin{pmatrix} 64 & 45 \\ 90 & 67 \end{pmatrix} + \begin{pmatrix} -9 & -5 \\ -20 & 4 \end{pmatrix} - \begin{pmatrix} 12 & 11 \\ 24 & 16 \end{pmatrix} + \begin{pmatrix} -22 & -15 \\ -18 & -30 \end{pmatrix} = \begin{pmatrix} 21 & 14 \\ 28 & 25 \end{pmatrix}$$

$$C_{12} = R + T = \begin{pmatrix} 8 & 4 \\ 7 & 2 \end{pmatrix} + \begin{pmatrix} 12 & 11 \\ 24 & 16 \end{pmatrix} = \begin{pmatrix} 20 & 15 \\ 31 & 18 \end{pmatrix}$$
Matrix Multiplication
Example 2
$$C_{21} = Q + S = \begin{pmatrix} 38 & 28 \\ 67 & 47 \end{pmatrix} + \begin{pmatrix} -9 & -5 \\ -20 & 4 \end{pmatrix} = \begin{pmatrix} 29 & 23 \\ 47 & 51 \end{pmatrix}$$

$$C_{22} = P + R - Q + U = \begin{pmatrix} 64 & 45 \\ 90 & 67 \end{pmatrix} + \begin{pmatrix} 8 & 4 \\ 7 & 2 \end{pmatrix} - \begin{pmatrix} 38 & 28 \\ 67 & 47 \end{pmatrix} + \begin{pmatrix} -5 & -3 \\ 17 & 13 \end{pmatrix} = \begin{pmatrix} 29 & 18 \\ 47 & 35 \end{pmatrix}$$
Matrix Multiplication
Example 2
So the values of C11, C12, C21, and C22 are:

$$C_{11} = \begin{pmatrix} 21 & 14 \\ 28 & 25 \end{pmatrix},\; C_{12} = \begin{pmatrix} 20 & 15 \\ 31 & 18 \end{pmatrix},\; C_{21} = \begin{pmatrix} 29 & 23 \\ 47 & 51 \end{pmatrix},\; C_{22} = \begin{pmatrix} 29 & 18 \\ 47 & 35 \end{pmatrix}$$

Hence the resultant matrix C is

$$C = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix} = \begin{pmatrix} 21 & 14 & 20 & 15 \\ 28 & 25 & 31 & 18 \\ 29 & 23 & 29 & 18 \\ 47 & 51 & 47 & 35 \end{pmatrix}$$
Design and Analysis of Algorithm

Divide and Conquer strategy


(Convex Hull Problem)

Lecture - 25
Overview

• Learn the implementation techniques of "divide and conquer" in the context of the Convex Hull Problem, with analysis.
Convex Hull
• Given a set of pins on a pinboard, and a rubber band around them.
• How does the rubber band look when it snaps tight? Just imagine.
[Figure: seven pins labelled 1–7 with the taut rubber band forming the hull polygon.]
• We represent the convex hull as the sequence of points on the convex hull polygon, in counter-clockwise order.
Convex Hull
• Definition:
➢ Informal: The convex hull of a set of points in the plane is the shape taken by a rubber band stretched around nails pounded into the plane at each point.
➢ Formal: The convex hull of a set of planar points is the smallest convex polygon containing all of the points.
Graham Scan
• Concept:
➢Start at point guaranteed to be on the hull.
(the point with the minimum y value)
➢Sort remaining points by polar angles of
vertices relative to the first point.
➢ Go through the sorted points, keeping vertices of points that make left turns and dropping points that make right turns (the turn test is sketched in the code below).
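The left-turn/right-turn decision is normally made with a cross product rather than by computing actual angles. A minimal C sketch of that test (the Point type and function name are illustrative):

#include <stdio.h>

typedef struct { double x, y; } Point;

/* Cross product of the vectors (b - a) and (c - a):
   > 0  => a, b, c make a left (counter-clockwise) turn
   < 0  => they make a right (clockwise) turn
   = 0  => the three points are collinear */
double cross(Point a, Point b, Point c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main(void)
{
    Point a = {0, 0}, b = {1, 0}, c = {1, 1};
    printf("%s\n", cross(a, b, c) > 0 ? "left turn"
                                      : "right turn or collinear");
    return 0;
}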
Graham Scan - Example
[Sequence of figures: the scan runs on points p0 through p12, starting at the lowest point p0. The points are visited in sorted polar-angle order; each left turn pushes a vertex onto the hull, each right turn pops vertices off, until the hull polygon is complete.]
Graham Scan - Algorithm
[Figure: GRAHAM-SCAN pseudocode with per-line costs — finding the start point: Ο(n); sorting the remaining points by polar angle: Ο(n log n); the stack initialisation steps: Ο(1) each; the scan loop over all points: Ο(n).]
Summing the step costs:
T(n) = Ο(n) + Ο(n log n) + Ο(1) + Ο(1) + Ο(1) + Ο(n)
T(n) = Ο(n lg n) + Ο(n)
Graham Scan - Algorithm

• Time complexity of Graham's scan:
➢ Ο(n log n) time is required to sort the points by polar angle in step 2.
➢ Ο(n) time is required for visiting the n points (steps 6 to 9).
Hence we can write the complexity of Graham's scan as:
T(n) = Ο(n lg n) + Ο(n) = Ο(n lg n)
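A compact C sketch of the whole scan under simplifying assumptions (distinct points; ties in polar angle are not broken by distance, which production code would need). The pivot is kept in a file-scope variable so the qsort comparator can see it:

#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y; } Point;

static Point pivot;                     /* lowest point; angles measured from here */

static double cross(Point a, Point b, Point c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

/* p sorts before q if pivot -> p -> q is a left turn (p has the smaller angle) */
static int by_angle(const void *pa, const void *pb)
{
    double c = cross(pivot, *(const Point *)pa, *(const Point *)pb);
    return (c > 0) ? -1 : (c < 0) ? 1 : 0;
}

/* Writes the hull vertices into hull[] (counter-clockwise), returns their count */
int graham_scan(Point *pts, int n, Point *hull)
{
    int lo = 0, top = 0;
    for (int i = 1; i < n; i++)         /* step 1: lowest point, ties by x */
        if (pts[i].y < pts[lo].y ||
            (pts[i].y == pts[lo].y && pts[i].x < pts[lo].x))
            lo = i;
    Point tmp = pts[0]; pts[0] = pts[lo]; pts[lo] = tmp;
    pivot = pts[0];

    qsort(pts + 1, n - 1, sizeof(Point), by_angle);   /* step 2: sort by angle */

    for (int i = 0; i < n; i++) {       /* step 3: keep left turns only */
        while (top >= 2 && cross(hull[top - 2], hull[top - 1], pts[i]) <= 0)
            top--;                      /* pop vertices that cause non-left turns */
        hull[top++] = pts[i];
    }
    return top;
}

int main(void)
{
    Point pts[] = {{0,0},{2,0},{2,2},{0,2},{1,1}};    /* square + interior point */
    Point hull[5];
    int h = graham_scan(pts, 5, hull);
    for (int i = 0; i < h; i++)         /* prints the 4 corners, CCW from (0,0) */
        printf("(%g, %g)\n", hull[i].x, hull[i].y);
    return 0;
}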
Divide and Conquer (Quickhull)
 QuickHull uses a Divide and Conquer approach similar to the Quick Sort algorithm.
 Benchmarks showed it is quite fast in most average cases.
 Its recursive nature allows a fast and yet clean implementation.
Divide and Conquer (Quickhull)
Initial Input
• The initial input to the algorithm is an arbitrary set of
points as shown in the figure.
Divide and Conquer (Quickhull)
First Two Points on the Convex Hull
• Starting with the given set of points, the first operation done is the calculation of the two extreme points on the horizontal axis (the leftmost and rightmost points).
Divide and Conquer (Quickhull)
Recursively Divide
 Next, the line formed by these two points is used to divide the set into two different parts.
 Everything left of this line is considered one part, everything right of it is considered another one.
 Both of these parts are processed recursively.
Divide and Conquer (Quickhull)
Max Distance Search
 To determine the next point on the convex hull, a search for the point with the greatest distance from the dividing line is done.
 This point, together with the line start and end point, forms a triangle.
Divide and Conquer (Quickhull)
Point Exclusion
 All points inside this triangle cannot be part of the convex hull polygon, as they obviously lie in the convex hull of the three selected points.
 Therefore, these points can be ignored for every further processing step.
Divide and Conquer (Quickhull)
Recursively Divide
 With this in mind, the recursive processing can take place again, repeated on each new triangle edge.
 Everything right of the triangle is used as one subset, everything left of it as another one.
Divide and Conquer (Quickhull)
Abort Condition
 At some point the recursively processed point subset contains only the start and end point of the dividing line.
 If this is the case, this line has to be a segment of the searched hull polygon, and the recursion can come to an end.
Divide and Conquer (Quickhull)
Algorithm
QuickHull(S)
// Find the convex hull from the set S of n points. ConvexHull = {} //
1. Find the leftmost and rightmost points, say A and B, and add segment AB to the convex hull.
2. Segment AB divides the remaining (n − 2) points into two groups S1 and S2, where S1 are the points in S that are on the right side of the oriented line from A to B, and S2 are the points in S that are on the right side of the oriented line from B to A.
3. FindHull(S1, A, B)
4. FindHull(S2, B, A)
Divide and Conquer (Quickhull)
Algorithm
FindHull(Sk, P, Q)
// Find the points on the convex hull from the set Sk of points that are on the right side of the oriented line from P to Q //
If Sk has no point, then return.
➢ From the given set of points in Sk, find the farthest point from segment PQ, say C.
➢ Add point C to the convex hull at the location between P and Q. The three points P, Q, and C partition the remaining points of Sk into three subsets S0, S1, and S2.
Divide and Conquer (Quickhull)
Algorithm
➢ Here S0 are the points inside triangle PCQ, S1 are the points on the right side of the oriented line from P to C, and S2 are the points on the right side of the oriented line from C to Q.
➢ FindHull(S1, P, C)
➢ FindHull(S2, C, Q)
[Figure: triangle PCQ with S1 right of P→C, S2 right of C→Q, and S0 inside the triangle.]
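A recursive C sketch following this pseudocode, assuming C99 (variable-length arrays) and points in general position; the sample point set and the printing of hull vertices in counter-clockwise order are illustrative choices:

#include <stdio.h>

typedef struct { double x, y; } Point;

/* > 0 when c lies to the right of the oriented line p -> q */
static double right_of(Point p, Point q, Point c)
{
    return (q.y - p.y) * (c.x - p.x) - (q.x - p.x) * (c.y - p.y);
}

/* FIND-HULL: prints the hull vertices among the points of s that lie to the
   right of the oriented line p -> q, in order between p and q */
static void find_hull(const Point *s, int n, Point p, Point q)
{
    int far = -1;
    double best = 0;
    for (int i = 0; i < n; i++) {       /* farthest point C right of p -> q */
        double d = right_of(p, q, s[i]);
        if (d > best) { best = d; far = i; }
    }
    if (far < 0) return;                /* no point outside: pq is a hull edge */
    Point c = s[far];

    Point s1[n], s2[n];                 /* C99 VLAs for the two subsets */
    int n1 = 0, n2 = 0;
    for (int i = 0; i < n; i++) {
        if (right_of(p, c, s[i]) > 0)      s1[n1++] = s[i];  /* right of P -> C */
        else if (right_of(c, q, s[i]) > 0) s2[n2++] = s[i];  /* right of C -> Q */
        /* everything else lies inside triangle PCQ and is discarded */
    }
    find_hull(s1, n1, p, c);
    printf("(%g, %g)\n", c.x, c.y);     /* C joins the hull between P and Q */
    find_hull(s2, n2, c, q);
}

int main(void)
{
    Point pts[] = {{0,0},{3,1},{1,1},{2,3},{3,3},{0,2},{1,2}};
    int n = 7, a = 0, b = 0;
    for (int i = 1; i < n; i++) {       /* leftmost (A) and rightmost (B) points */
        if (pts[i].x < pts[a].x) a = i;
        if (pts[i].x > pts[b].x) b = i;
    }
    printf("(%g, %g)\n", pts[a].x, pts[a].y);
    find_hull(pts, n, pts[a], pts[b]);  /* chain of points right of A -> B */
    printf("(%g, %g)\n", pts[b].x, pts[b].y);
    find_hull(pts, n, pts[b], pts[a]);  /* chain of points right of B -> A */
    return 0;
}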
Divide and Conquer (Quickhull)
Time Complexity of Quickhull
• The running time of Quickhull, as with Quicksort, depends on how evenly the points are split at each stage.
• If we assume that the points are "evenly" distributed, the running time solves to O(n log n).
• If the splits are not balanced, then the running time can easily increase to O(n²).
Divide and Conquer (Quickhull)
Time Complexity of Quickhull
T(n) = T(l) + T(n − l) + Ο(n)
where
• T(l) → points on the left side of AB,
• T(n − l) → points on the right side of AB,
• Ο(n) → time to find the farthest point.
Assume an even split, so that T(l) and T(n − l) each handle n/2 points. Hence,
T(n) = 2T(n/2) + Ο(n)
After applying the Master Method:
T(n) = Θ(n lg n) in the average case,
T(n) = Θ(n²) in the worst case.
Algorithm Analysis and Design

Recurrence Equation
(Solving Recurrence using
Master Method)

Lecture – 26 and 27
Overview
• A recurrence is a function defined in terms of
– one or more base cases, and
– itself, with smaller arguments.
Example: T(n) = 2T(n/2) + Θ(n) (the merge sort recurrence).
Overview
• Many technical issues:
• Floors and ceilings
[Floors and ceilings can easily be removed and don’t affect
the solution to the recurrence. They are better left to a
discrete math course.]
• Exact vs. asymptotic functions
• Boundary conditions
Overview

In algorithm analysis, a recurrence and its solution are expressed with the help of asymptotic notation.
• Example: T(n) = 2T(n/2) + Θ(n), with solution T(n) = Θ(n lg n).
• The boundary conditions are usually expressed as T(n) = Ο(1) for sufficiently small n.
• But when an exact, rather than asymptotic, solution is desired, boundary conditions need to be dealt with.
• In practice, just use asymptotics most of the time, and ignore boundary conditions.
Recursive Function
• Example
A(n)
{
    if (n > 1)
        return A(n/2)
}
Such a relation is called a recurrence relation. The recurrence relation of the given function is written as follows:
T(n) = T(n/2) + 1
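Instrumenting the function with a call counter makes the solution visible: A(n) is entered about lg n + 1 times, which matches T(n) = T(n/2) + 1 = Θ(lg n). A quick, illustrative C check:

#include <stdio.h>

static int calls = 0;                   /* counts invocations of A */

void A(int n)
{
    calls++;
    if (n > 1)
        A(n / 2);   /* one recursive call on half the input: T(n) = T(n/2) + 1 */
}

int main(void)
{
    A(1024);
    printf("calls = %d\n", calls);      /* prints 11 = log2(1024) + 1 */
    return 0;
}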
Recursive Function
• To solve the Recurrence relation the following methods
are used:
1. Iteration method
2. Recursion-Tree method
3. Master Method
4. Substitution Method
Master Method
The master method provides a "cookbook" method for solving recurrences of the form
T(n) = aT(n/b) + f(n)
where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. The master method requires memorization of three cases.
"The beauty of the Master Method is that the solution of many recurrences can be determined quite easily, often without pencil and paper."
Master Method
• Definition
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence
T(n) = aT(n/b) + f(n),
where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows.
1. If f(n) = Ο(n^{log_b a − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} lg n).
3. If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
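When f(n) is a plain polynomial n^k, the three cases boil down to comparing k with log_b a. A small C sketch of that comparison (it ignores logarithmic factors in f(n) and relies on the fact that the regularity condition of Case 3 always holds for polynomials):

#include <stdio.h>
#include <math.h>

/* Classify T(n) = a*T(n/b) + n^k by the master theorem (compile with -lm) */
void master(double a, double b, double k)
{
    double e = log(a) / log(b);                  /* log_b(a) */
    if (fabs(e - k) < 1e-9)
        printf("Case 2: Theta(n^%g * lg n)\n", e);
    else if (e > k)
        printf("Case 1: Theta(n^%g)\n", e);
    else
        printf("Case 3: Theta(n^%g)\n", k);
}

int main(void)
{
    master(5, 2, 2);    /* Example 1: Theta(n^2.32)       */
    master(9, 3, 1);    /* Example 2: Theta(n^2)          */
    master(4, 2, 1);    /* Example 6: Theta(n^2)          */
    master(2, 4, 0.5);  /* Example 7: Theta(n^0.5 * lg n) */
    return 0;
}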
Master Method
Example 1
Solve the following recurrence by using the Master Method:
T(n) = 5T(n/2) + Θ(n²)
Here, a = 5, b = 2 and f(n) = n².
First we calculate n^{log_b a} and then compare it with f(n).
So, n^{log_b a} = n^{log₂ 5} = n^{2.32}, but f(n) = n².
Therefore n^{log_b a} > f(n) (with ε = 0.32).
Hence, as per the definition of the master theorem, Case 1 applies:
T(n) = Θ(n^{log_b a}) = Θ(n^{2.32})
Master Method
Example 2
Solve the following recurrence by using the Master Method:
T(n) = 9T(n/3) + Θ(n)
Here, a = 9, b = 3 and f(n) = n.
So, n^{log_b a} = n^{log₃ 9} = n², but f(n) = n.
Therefore n^{log_b a} > f(n) (with ε = 1).
Hence, as per the definition of the master theorem, Case 1 applies:
T(n) = Θ(n^{log_b a}) = Θ(n²)
Master Method
Example 3
Solve the following recurrence by using the Master Method:
T(n) = T(2n/3) + 1
Here, a = 1, b = 3/2 and f(n) = 1.
So, n^{log_b a} = n^{log_{3/2} 1} = n⁰ = 1, and f(n) = 1.
Therefore n^{log_b a} = f(n).
Hence, as per the definition of the master theorem, Case 2 applies:
T(n) = Θ(n^{log_b a} lg n) = Θ(lg n)
Master Method
Example 4
Solve the following recurrence by using the Master Method:
T(n) = 3T(n/4) + n lg n
Here, a = 3, b = 4 and f(n) = n lg n.
So, n^{log_b a} = n^{log₄ 3} = n^{0.793}, but f(n) = n lg n.
Since f(n) = Ω(n^{log₄ 3 + ε}) where ε ≈ 0.2, Case 3 applies if we can show that the regularity condition holds for f(n). For sufficiently large n:
af(n/b) ≤ cf(n)
⟹ 3(n/4) lg(n/4) ≤ (3/4) n lg n (for c = 3/4),
which is true, so Case 3 of the master method applies.
Hence the solution to the recurrence is T(n) = Θ(f(n)) = Θ(n lg n).
Master Method
Example 5
Solve the following recurrence by using the Master Method:
T(n) = 2T(n/2) + n lg n
Here, a = 2, b = 2 and f(n) = n lg n.
So, n^{log_b a} = n^{log₂ 2} = n¹ = n, but f(n) = n lg n.
It looks as though n^{log_b a} < f(n), and we might mistakenly think that Case 3 of the master method should apply. The problem is that f(n) is not polynomially larger:
the ratio f(n) / n^{log_b a} = (n lg n) / n = lg n is asymptotically less than n^ε for any positive constant ε.
Hence the master method is not applicable to this recurrence.
Master Method
Example 6
Solve the following recurrence by using the Master Method:
T(n) = 4T(n/2) + n
Here, a = 4, b = 2 and f(n) = n.
So, n^{log_b a} = n^{log₂ 4} = n², but f(n) = n.
Therefore n^{log_b a} > f(n) (with ε = 1).
Hence, as per the definition of the master theorem, Case 1 applies:
T(n) = Θ(n^{log_b a}) = Θ(n²)
Master Method
Example 7
Solve the following recurrence by using the Master Method:
T(n) = 2T(n/4) + √n
Here, a = 2, b = 4 and f(n) = √n.
So, n^{log_b a} = n^{log₄ 2} = n^{1/2} = √n, and f(n) = √n.
Therefore n^{log_b a} = f(n).
Hence, as per the definition of the master theorem, Case 2 applies:
T(n) = Θ(n^{log_b a} lg n) = Θ(√n lg n)
Master Method
• Recurrence (Changing Variables)
Example 8
Solve the following recurrence by using the Master Method:
T(n) = 2T(√n) + lg n
With a little algebraic manipulation this difficult-looking recurrence can be simplified by a change of variable. For convenience, we shall not worry about rounding off values such as √n to integers.
First, rename m = lg n, so that n = 2^m and √n = 2^{m/2}.
Substituting these values into the recurrence:
T(2^m) = 2T(2^{m/2}) + m
We can now rename S(m) = T(2^m), so that S(m/2) = T(2^{m/2}).
Putting these values into the equation above:
S(m) = 2S(m/2) + m
Now apply the master method to this equation.
Here, a = 2, b = 2 and f(m) = m.
So, m^{log_b a} = m^{log₂ 2} = m¹ = m, and f(m) = m.
Hence, as per the definition of the master theorem, Case 2 applies:
S(m) = Θ(m^{log_b a} lg m) = Θ(m lg m)
⟹ T(2^m) = Θ(m lg m), since S(m) = T(2^m)
⟹ T(n) = Θ(lg n · lg lg n), since n = 2^m and m = lg n.
Hence the complexity of the above recurrence is Θ(lg n lg lg n).
Master Method
Example 9
Solve the following recurrence by using the Master Method:
T(n) = 2T(√n) + 1
Again the recurrence is simplified by a change of variable; for convenience, we shall not worry about rounding off values such as √n to integers.
First, rename m = lg n, so that n = 2^m and n^{1/2} = 2^{m/2}.
Substituting these values into the recurrence:
T(2^m) = 2T(2^{m/2}) + 1
We can now rename S(m) = T(2^m), so that S(m/2) = T(2^{m/2}).
Putting these values into the equation above:
S(m) = 2S(m/2) + 1
Now apply the master method to this equation.
Here, a = 2, b = 2 and f(m) = 1.
So, m^{log_b a} = m^{log₂ 2} = m¹ = m, and f(m) = 1.
Since f(m) = 1 = Ο(m^{1−ε}) with ε = 1, we have m^{log_b a} > f(m).
Hence, as per the definition of the master theorem, Case 1 applies:
S(m) = Θ(m^{log_b a}) = Θ(m)
⟹ T(2^m) = Θ(m), since S(m) = T(2^m)
⟹ T(n) = Θ(lg n), since n = 2^m and m = lg n.
Hence the complexity of the above recurrence is Θ(lg n).
Master Method
Problems to be Solved by Students:
Q1. T(n) = T(n/2) + 2^n
Q2. T(n) = 2^n T(n/2) + n^n
Q3. T(n) = 3T(n/2) + n²
Q4. T(n) = 16T(n/4) + n
Q5. T(n) = 3T(n/2) + n² log n
Q6. T(n) = 2T(n/2) + n / log n
Q7. T(n) = 2T(√n) + log₂(2n) / log log₂(2n)
(Solving Recurrence using
Advanced version of Master Method)
(For GATE questions only)
Master Method (GATE)
Definition (Advanced Version)
Let a ≥ 1, b > 1, k ≥ 0, let p be a real number, and let T(n) be defined on the nonnegative integers by the recurrence
T(n) = aT(n/b) + Θ(n^k log^p n)
where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉.
Then T(n) can be bounded asymptotically by comparing a with b^k as follows.
1. If a > b^k, then T(n) = Θ(n^{log_b a}).
2. If a = b^k, then
   Option 1: if p < −1, then T(n) = Θ(n^{log_b a})
   Option 2: if p = −1, then T(n) = Θ(n^{log_b a} · log log n)
   Option 3: if p > −1, then T(n) = Θ(n^{log_b a} · log^{p+1} n)
3. If a < b^k, then
   Option 1: if p < 0, then T(n) = Θ(n^k)
   Option 2: if p ≥ 0, then T(n) = Θ(n^k · log^p n)
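A C sketch of this decision table for f(n) = Θ(n^k · log^p n). The exact floating-point comparison a == b^k is illustrative (real inputs would need a tolerance), and the p = −1 branch uses the Θ(n^{log_b a} · log log n) bound from Option 2:

#include <stdio.h>
#include <math.h>

/* Classify T(n) = a*T(n/b) + Theta(n^k * log^p n), advanced version */
void master_adv(double a, double b, double k, double p)
{
    double bk = pow(b, k);
    double e  = log(a) / log(b);                 /* log_b(a) */
    if (a > bk) {
        printf("Case 1: Theta(n^%g)\n", e);
    } else if (a == bk) {                        /* exact for these demo inputs */
        if (p < -1)       printf("Case 2.1: Theta(n^%g)\n", e);
        else if (p == -1) printf("Case 2.2: Theta(n^%g * lglg n)\n", e);
        else              printf("Case 2.3: Theta(n^%g * lg^%g n)\n", e, p + 1);
    } else {
        if (p < 0) printf("Case 3.1: Theta(n^%g)\n", k);
        else       printf("Case 3.2: Theta(n^%g * lg^%g n)\n", k, p);
    }
}

int main(void)
{
    master_adv(3, 2, 2, 0);    /* Example 10: Theta(n^2)      */
    master_adv(4, 2, 2, 0);    /* Example 11: Theta(n^2 lg n) */
    master_adv(2, 2, 1, 1);    /* Example 14: Theta(n lg^2 n) */
    master_adv(2, 2, 1, -1);   /* Example 15: Theta(n lglg n) */
    return 0;
}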


Master Method (GATE)
Example 10
Solve the following recurrence by using the Master Method:
T(n) = 3T(n/2) + Θ(n²)
Here, a = 3, b = 2, k = 2, p = 0.
Now compare a with b^k: a = 3 and b^k = 2² = 4, so a < b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 3 (Option 2):
T(n) = Θ(n^k · log^p n) = Θ(n² · log⁰ n) = Θ(n²)
Master Method (GATE)
Example 11
Solve the following recurrence by using the Master Method:
T(n) = 4T(n/2) + Θ(n²)
Here, a = 4, b = 2, k = 2, p = 0.
Now compare a with b^k: a = 4 and b^k = 2² = 4, so a = b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 2 (Option 3):
T(n) = Θ(n^{log_b a} · log^{p+1} n) = Θ(n^{log₂ 4} · log¹ n) = Θ(n² log n)
Master Method (GATE)
Example 12
Solve the following recurrence by using the Master Method:
T(n) = T(n/2) + Θ(n²)
Here, a = 1, b = 2, k = 2, p = 0.
Now compare a with b^k: a = 1 and b^k = 2² = 4, so a < b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 3 (Option 2):
T(n) = Θ(n^k · log^p n) = Θ(n² · log⁰ n) = Θ(n²)
Master Method (GATE)
Example 13
Solve the following recurrence by using the Master Method:
T(n) = 2^n T(n/2) + Θ(n^n)
Here, a = 2^n, b = 2, k = n, p = 0.
The value of a must be a constant, which is not true in this case.
Hence the Master Method cannot be applied here.
Master Method (GATE)
Example 14
Solve the following recurrence by using the Master Method:
T(n) = 2T(n/2) + Θ(n log n)
Here, a = 2, b = 2, k = 1, p = 1.
Now compare a with b^k: a = 2 and b^k = 2¹ = 2, so a = b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 2 (Option 3):
T(n) = Θ(n^{log_b a} · log^{p+1} n) = Θ(n^{log₂ 2} · log² n) = Θ(n log² n)
Master Method (GATE)
Example 15
Solve the following recurrence by using the Master Method:
T(n) = 2T(n/2) + Θ(n / log n)
Here, a = 2, b = 2, k = 1, p = −1.
Now compare a with b^k: a = 2 and b^k = 2¹ = 2, so a = b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 2 (Option 2, since p = −1):
T(n) = Θ(n^{log_b a} · log log n) = Θ(n^{log₂ 2} · log log n) = Θ(n log log n)
Master Method (GATE)
Example 16
Solve the following recurrence by using the Master Method:
T(n) = 2T(n/4) + Θ(n^{0.51})
Here, a = 2, b = 4, k = 0.51, p = 0.
Now compare a with b^k: a = 2 and b^k = 4^{0.51} ≈ 2.03, so a < b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 3 (Option 2):
T(n) = Θ(n^k · log^p n) = Θ(n^{0.51} · log⁰ n) = Θ(n^{0.51})
Master Method (GATE)
Example 17
Solve the following recurrence by using the Master Method:
T(n) = 0.5T(n/2) + 1/n
Here, a = 0.5, b = 2, k = −1, p = 0.
As per the definition of the master method, the value of a must be at least 1, but here a = 0.5 < 1.
Hence the Master Method cannot be applied here.
Master Method (GATE)
Example 18
Solve the following recurrence by using the Master Method:
T(n) = 6T(n/3) + Θ(n² log n)
Here, a = 6, b = 3, k = 2, p = 1.
Now compare a with b^k: a = 6 and b^k = 3² = 9, so a < b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 3 (Option 2):
T(n) = Θ(n^k · log^p n) = Θ(n² · log¹ n) = Θ(n² log n)
Master Method (GATE)
Example 19
Solve the following recurrence by using the Master Method:
T(n) = 64T(n/8) − Θ(n² lg n)
Here the driving function is negative: a cost of −Θ(n² lg n) would mean the problem is divided into subproblems without any work being executed, which is not valid. The master method requires f(n) to be asymptotically positive; hence this is an invalid representation and the method cannot be applied.
Master Method (GATE)
Example 20
Solve the following recurrence by using the Master Method:
T(n) = 4T(n/3) + Θ(log n)
Here, a = 4, b = 3, k = 0, p = 1.
Now compare a with b^k: a = 4 and b^k = 3⁰ = 1, so a > b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 1:
T(n) = Θ(n^{log_b a}) = Θ(n^{log₃ 4})
Master Method (GATE)
Example 21
Solve the following recurrence by using the Master Method:
T(n) = 27T(n/3) + Θ(n³ lg n)
Here, a = 27, b = 3, k = 3, p = 1.
Now compare a with b^k: a = 27 and b^k = 3³ = 27, so a = b^k.
Hence, as per the definition of the Advanced version of the Master Method, Case 2 (Option 3):
T(n) = Θ(n^{log_b a} · log^{p+1} n) = Θ(n^{log₃ 27} · log² n) = Θ(n³ log² n)
