
Design and Analysis of Algorithm

Linear Time Sorting


(Counting Sort)

Lecture -21
Overview

• Running time of counting sort is O(n+k).
• Requires extra space for sorting.
• It is a stable sort.
Counting Sort
• Counting sort is a sorting technique based on
keys that lie within a specific range.
• It works by counting the number of objects
having distinct key values (i.e., a kind of
hashing).
Counting Sort
• Consider the input set : 4, 1, 3, 4, 3. Then n=5 and
k=4.
• Counting sort determines, for each input element x,
the number of elements less than x.
• This information is used to place element x directly
into its position in the output array.
• For example, if there exist 17 elements less than x,
then x is placed into the 18th position of the
output array.
Counting Sort
• Assumptions:
• n records
• Each record contains keys or data
• All keys are in the range 0 to k, where k is
the highest key value of the array.
• Space:
The algorithm uses three arrays:
• Input array: A[1..n] stores the input data, where n is the
length of the array.
• Output array: B[1..n] finally stores the sorted data.
• Temporary array: C[0..k] stores counts temporarily.
Counting Sort
• Let us illustrate the counting sort with an example.
Apply the concept of counting sort on the given array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
Counting Sort
• Let us illustrate the counting sort with an example. Apply the
concept of counting sort on the given array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3

• First create a new array C[0…..k] , where k is the highest


key value. And initialize with 0(i.e. zero)
for i=0 to k
0 1 2 3 4 5
C[i]= 0; C 0 0 0 0 0 0
Counting Sort
• Let us illustrate the counting sort with an example.
Apply the concept of counting sort on the given array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3

• Find the frequencies of each object and store it in C


array.
for j=1 to A. length
C[ A[j] ] = C[ A[j] ] + 1; 0 1 2 3 4 5
C 2 0 2 3 0 1
Counting Sort
• Let us illustrate the counting sort with an example.
Apply the concept of counting sort on the given array.
1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3

• Find the frequencies of each object and store it in C


array.
for j=1 to A. length
0 1 2 3 4 5
C[ A[j] ] = C[ A[j] ] + 1; C 2 0 2 3 0 1

• And then cumulatively add C array.


for i=1 to k 0 1 2 3 4 5
C[i] = C[i] + C[i-1]; C 2 2 4 7 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 7 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 7 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 3 C 2 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 2 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 2 2 4 6 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 C 1 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j]; 1 2 3 4 5 6 7 8
C[ A[j] ] = C[ A[j] ] - 1; A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 6 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 6 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 3 3 C 1 2 4 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 4 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 4 5 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 2 3 3 C 1 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 1 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 1 2 3 5 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 C 0 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 5 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 5 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 C 0 2 3 4 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 8
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 8

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 3 3 3 5 C 0 2 3 4 7 7
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 3 4 7 7
Counting Sort
for j=A. length down to 1
B[C[ A[j] ]] = A[j];
C[ A[j] ] = C[ A[j] ] - 1; 1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 3 4 7 7

1 2 3 4 5 6 7 8
A 2 5 3 0 2 3 0 3
j

1 2 3 4 5 6 7 8 0 1 2 3 4 5
B 0 0 2 2 3 3 3 5 C 0 2 2 4 7 7
Counting Sort
Counting-Sort(A, B, k)
1. Let C[0…..k] be a new array
2. for i=0 to k
3. C[i]= 0;
4. for j=1 to A. length
5. C[ A[j] ] = C[ A[j] ] + 1;
6. for i=1 to k
7. C[i] = C[i] + C[i-1];
8. for j=A. length down to 1
9. B[C[ A[j] ]] = A[j];
10. C[ A[j] ] = C[ A[j] ] - 1;
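As a concrete illustration, here is a minimal C translation of the pseudocode above; the 0-based indexing and the function name counting_sort are our own choices, not from the slides.

#include <stdio.h>
#include <stdlib.h>

/* Sorts A[0..n-1] into B[0..n-1]; keys must lie in the range 0..k. */
void counting_sort(const int A[], int B[], int n, int k)
{
    int *C = calloc(k + 1, sizeof(int));   /* C[0..k], initialized to 0 */
    for (int j = 0; j < n; j++)            /* count the frequency of each key */
        C[A[j]]++;
    for (int i = 1; i <= k; i++)           /* cumulative counts */
        C[i] += C[i - 1];
    for (int j = n - 1; j >= 0; j--) {     /* right-to-left keeps the sort stable */
        B[C[A[j]] - 1] = A[j];             /* -1 turns the 1-based rank into a 0-based index */
        C[A[j]]--;
    }
    free(C);
}

int main(void)
{
    int A[] = {2, 5, 3, 0, 2, 3, 0, 3}, B[8];
    counting_sort(A, B, 8, 5);
    for (int i = 0; i < 8; i++)
        printf("%d ", B[i]);               /* prints: 0 0 2 2 3 3 3 5 */
    return 0;
}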
Counting Sort
Counting-Sort(A, B, k)
1. Let C[0…..k] be a new array
2. for i=0 to k [Loop 1]
3. C[i]= 0;
4. for j=1 to A. length [Loop 2]
5. C[ A[j] ] = C[ A[j] ] + 1;
6. for i=1 to k [Loop 3]
7. C[i] = C[i] + C[i-1];
8. for j=A. length down to 1 [Loop 4]
9. B[C[ A[j] ]] = A[j];
10. C[ A[j] ] = C[ A[j] ] - 1;
Complexity Analysis
Counting-Sort(A, B, k)
1. Let C[0..k] be a new array
2. for i=0 to k [Loop 1] O(k) times
3. C[i] = 0;
4. for j=1 to A.length [Loop 2] O(n) times
5. C[A[j]] = C[A[j]] + 1;
6. for i=1 to k [Loop 3] O(k) times
7. C[i] = C[i] + C[i-1];
8. for j=A.length down to 1 [Loop 4] O(n) times
9. B[C[A[j]]] = A[j];
10. C[A[j]] = C[A[j]] - 1;
Complexity Analysis
• So counting sort takes a total time of O(n + k).
• Counting sort is a stable sort.
(A sorting algorithm is stable when numbers with the
same value appear in the output array in the same
order as they do in the input array.)
Pro’s and Con’s of Counting Sort

• Pro's
• Asymptotically fast: O(n + k)
• Simple to code
• Con's
• Doesn't sort in place.
• Requires O(n + k) extra storage space.
Design and Analysis of Algorithm

Recurrence Equation
(Solving Recurrence using
Iteration Methods)

Lecture – 6 and 7
Overview
• A recurrence is a function defined in terms of
– one or more base cases, and
– itself, with smaller arguments.
Overview
• Many technical issues:
• Floors and ceilings
[Floors and ceilings can easily be removed and don’t affect
the solution to the recurrence. They are better left to a
discrete math course.]
• Exact vs. asymptotic functions
• Boundary conditions
Head recursion: the recursive call is made before
any other operations in the function. It uses more
stack space, since each call must wait for the
subsequent calls to complete, and it is generally
harder to convert into iterative loops.
Tail recursion: the recursive call is the last
operation in the function. It is more efficient in
terms of stack space, since the compiler can optimize
tail-recursive calls to reuse the same stack frame,
and it is easier to convert into iterative loops.
Selection Sort
• Iterative
SelectionSort(A[0..n-1])
# sort given array by selection sort
# input: array A[0..n-1] of orderable elements
# output: array A[0..n-1] sorted in non-decreasing order
for i=0 to n-2 do
    min = i
    for j=i+1 to n-1 do
        if A[j] < A[min]:
            min = j
    swap A[i] and A[min]

• Recursive (tail recursion)
void selectionSort(int array[]) { sort(array, 0); }
void sort(int[] array, int i) {
    if (i < array.length - 1) {
        int j = smallest(array, i);
        int temp = array[i]; array[i] = array[j]; array[j] = temp;
        sort(array, i + 1);      # T(n)
    }
}
int smallest(int[] array, int j) {   # T(n - k)
    if (j == array.length - 1)
        return array.length - 1;
    int k = smallest(array, j + 1);
    return array[j] < array[k] ? j : k;
}

• T(n) = T(n-1) + O(n) if n > 1
• T(n) = 1 if n = 1
• Try to recall the recursive version of Linear Search.
T(n) = T(n-1) + O(n) if n > 1
T(n) = 1 if n = 1

• T(n) = T(n-1) + cn, c > 0 ....................(1)
• T(n-1) = T(n-2) + c(n-1) ....................(2)
• Putting eq. 2 in 1:
• T(n) = T(n-2) + c(n-1) + cn ....................(3)
• T(n-2) = T(n-3) + c(n-2) ....................(4)
• Putting eq. 4 in 3:
• T(n) = T(n-3) + c(n-2) + c(n-1) + c(n)
• The kth term can be written T(n) = T(n-k) + c((n-k+1) + ... + (n-1) + n)
• Putting n-k = 1 => k = n-1:
• T(n) = T(1) + c(2 + 3 + ... + n) = 1 + c(n(n+1)/2 - 1) = O(n²)
Overview

In algorithm analysis, the recurrence and its solution are
expressed with the help of asymptotic notation.
• Example: T(n) = 2T(n/2) + Θ(n), with solution
T(n) = Θ(n lg n).
• The boundary conditions are usually expressed as
T(n) = O(1) for sufficiently small n.
• But when an exact, rather than an asymptotic,
solution is desired, boundary conditions must be
dealt with.
• In practice, just use asymptotics most of the time,
and ignore boundary conditions.
Recursive Function
• Example
A(n)
{
    if (n > 1)
        return A(n - 1)
}
The relation describing such a function is called a recurrence relation.
The recurrence relation of the given function is written as follows:
T(n) = T(n - 1) + 1
Recursive Function
• To solve the Recurrence relation the following methods
are used:
1. Substitution method (Forward and Backward)
2. Recursion-Tree method
3. Master Method
Backward Substitution Method(
Example 1)
• In the iteration method the basic idea is to expand the recurrence
and express it as a summation of terms dependent only on n
(i.e., the input size) and the initial conditions.
Example 1:
Solve the following recurrence relation by using the iteration method.
T(n) = T(n-1) + 1 if n > 1
T(n) = 1 if n = 1
Backward Substitution Method(
Example 1)
It means T(n) = T(n-1) + 1 if n > 1 and T(n) = 1 when n = 1 ....(1)
Put n = n-1 in equation 1; we get
T(n-1) = T(n-2) + 1
Put the value of T(n-1) in equation 1; we get
T(n) = T(n-2) + 2 ....................(2)
Put n = n-2 in equation 1; we get
T(n-2) = T(n-3) + 1
Put the value of T(n-2) in equation 2; we get
T(n) = T(n-3) + 3 ....................(3)
..........
T(n) = T(n-k) + k ....................(k)
Iteration Method ( Example 1)
Let T(n-k) = T(1) = 1 (as per the base condition of the recurrence)
So n - k = 1
⇒ k = n - 1
Now put the value of k in equation (k):
T(n) = T(n - (n-1)) + n - 1
T(n) = T(1) + n - 1
T(n) = 1 + n - 1
T(n) = n
∴ T(n) = Θ(n)
Iteration Method ( Example 2)
Example 2:
Solve the following recurrence relation by using the iteration method.
T(n) = 2T(n/2) + 3n if n > 1
T(n) = 11 if n = 1
Iteration Method ( Example 2)
It means T(n) = 2T(n/2) + 3n if n > 1 and T(n) = 11 when n = 1 ....(1)
Put n = n/2 in equation 1; we get
T(n/2) = 2T(n/4) + 3(n/2)
Put the value of T(n/2) in equation 1; we get
T(n) = 2[2T(n/4) + 3(n/2)] + 3n
T(n) = 2²T(n/4) + 2·3(n/2) + 3n
T(n) = 2²T(n/4) + 3n + 3n ....................(2)
Iteration Method ( Example 2)
Put n = n/4 in equation 1; we get
T(n/4) = 2T(n/8) + 3(n/4)
Put the value of T(n/4) in equation 2; we get
T(n) = 2²[2T(n/8) + 3(n/4)] + 3n + 3n
T(n) = 2³T(n/8) + 2²·3(n/4) + 3n + 3n
T(n) = 2³T(n/8) + 3n + 3n + 3n ....................(3)
.......
T(n) = 2^i T(n/2^i) + 3n·i ....(ith term; every level of the expansion contributes 2^j · 3(n/2^j) = 3n)
Iteration Method ( Example 2)
and the series terminates when n/2^i = 1
⇒ n = 2^i
Taking log on both sides:
⇒ log₂ n = i·log₂ 2
⇒ i = log₂ n (because log₂ 2 = 1)
Hence we can write the ith term as follows:
⇒ T(n) = 3n·log₂ n + 2^(log₂ n)·T(1)
⇒ T(n) = 3n·log₂ n + n·11 [as 2^(log₂ n) = n and T(1) = 11]
⇒ T(n) = 3n·log₂ n + 11n
Hence T(n) = O(n log n)
Iteration Method ( Example 3)
Example 3:
Solve the following recurrence relation by using the iteration method.
T(n) = 8T(n/2) + n² if n > 1
T(n) = 1 if n = 1
Iteration Method ( Example 3)
It means T(n) = 8T(n/2) + n² if n > 1 and T(n) = 1 when n = 1 ....(1)
Put n = n/2 in equation 1; we get
T(n/2) = 8T(n/4) + (n/2)²
Put the value of T(n/2) in equation 1; we get
T(n) = 8[8T(n/4) + (n/2)²] + n²
T(n) = 8²T(n/4) + 8(n/2)² + n² ....................(2)
Put n = n/4 in equation 1; we get
T(n/4) = 8T(n/8) + (n/4)²
Iteration Method ( Example 3)
Put the value of T(n/4) in equation 2; we get
T(n) = 8²[8T(n/8) + (n/4)²] + 8(n/2)² + n²
T(n) = 8³T(n/8) + 8²(n/4)² + 8(n/2)² + n² ....................(3)
......
T(n) = 8^k T(n/2^k) + 8^(k-1)(n/2^(k-1))² + ... + 8²(n/4)² + 8(n/2)² + n² ....(kth term)
T(n) = 8^k T(n/2^k) + n²[2^(k-1) + ... + 2² + 2 + 1] ....................(4)
Iteration Method ( Example 3)
and the series terminates when n/2^k = 1
⇒ n = 2^k
Taking log on both sides:
⇒ log₂ n = k·log₂ 2
⇒ k = log₂ n (because log₂ 2 = 1)
Now apply the value of k = log₂ n and n/2^k = 1 in equation 4:
T(n) = 8^(log₂ n) T(1) + n²[2^(log₂ n - 1) + ... + 2² + 2 + 1] ....(5)
Iteration Method ( Example 3)
As we know, the sum of a finite geometric series is
a + ar + ar² + ... + ar^(m-1) = a(r^m - 1)/(r - 1)
Here the total number of terms is log₂ n, a = 1 and r = 2.
Hence equation 5 can be written as follows:
T(n) = 8^(log₂ n) + n²·(2^(log₂ n) - 1)/(2 - 1)
Iteration Method ( Example 3)
T(n) = n^(log₂ 8) + n²(n - 1) [as 8^(log₂ n) = n^(log₂ 8) and 2^(log₂ n) = n]
T(n) = n³ + n²(n - 1) [as log₂ 8 = 3 and log₂ 2 = 1]
T(n) = n³ + n³ - n²
T(n) = 2n³ - n²
Hence T(n) = O(n³)
Iteration Method ( Example 4)
Example 4:
Solve the following recurrence relation by using the iteration method.
T(n) = 7T(n/2) + n² if n > 1
T(n) = 1 if n = 1
(i.e., Strassen's Algorithm)
Iteration Method ( Example 4)
It means T(n) = 7T(n/2) + n² if n > 1 and T(n) = 1 when n = 1 ....(1)
Put n = n/2 in equation 1; we get
T(n/2) = 7T(n/4) + (n/2)²
Put the value of T(n/2) in equation 1; we get
T(n) = 7[7T(n/4) + (n/2)²] + n²
T(n) = 7²T(n/4) + 7(n/2)² + n² ....................(2)
Put n = n/4 in equation 1; we get
T(n/4) = 7T(n/8) + (n/4)²
Iteration Method ( Example 4)
Put the value of T(n/4) in equation 2; we get
T(n) = 7²[7T(n/8) + (n/4)²] + 7(n/2)² + n²
T(n) = 7³T(n/8) + 7²(n/4)² + 7(n/2)² + n² ....................(3)
......
T(n) = 7^k T(n/2^k) + 7^(k-1)(n/2^(k-1))² + ... + 7(n/2)² + n² ....(kth term)
T(n) = 7^k T(n/2^k) + n²[(7/4)^(k-1) + ... + (7/4)² + 7/4 + 1] ....(4)
Iteration Method ( Example 4)
As we know, the sum of a finite geometric series is
a + ar + ar² + ... + ar^(m-1) = a(r^m - 1)/(r - 1)
Here the total number of terms is k = log₂ n, a = 1 and r = 7/4.
Hence equation 4 can be written as follows:
T(n) = 7^(log₂ n) + n²·((7/4)^(log₂ n) - 1)/0.75
Iteration Method ( Example 4)
T(n) = n^(log₂ 7) + n²·(n^(log₂(7/4)) - 1)/0.75
T(n) = n^2.80 + n²·(n^0.80 - 1)/0.75 [as log₂ 7 ≈ 2.80 and log₂(7/4) = log₂ 7 - 2 ≈ 0.80]
T(n) = n^2.80 + (n^2.80 - n²)/0.75
Hence T(n) = O(n^2.81)
Design and Analysis of Algorithm

Divide and Conquer strategy


(Heap Sort)

Lecture -10 - 11
Overview

• O(n lg n) worst case, like merge sort.
• Sorts in place, like insertion sort.
• Combines the best of both algorithms.


Heap data structure
Example
Example

• Given an array of size N. The task is to sort the


array elements by using Heap Sort.
– Input:
– N=10
– Arr[]:{16, 4, 10, 14, 7, 9, 3, 2, 8, 1}
– Output: 1 2 3 4 7 8 9 10 14 16
Example

• Given an array of size N. The task is to sort the


array elements by using Heap Sort.
– Input:
– N = 10
– arr[] = {10,9,8,7,6,5,4,3,2,1}
– Output:1 2 3 4 5 6 7 8 9 10
Heap property

• For max-heaps (largest element at root),


max-heap property: for all nodes i ,
excluding the root, A[PARENT(i )] ≥ A[i ].
• For min-heaps (smallest element at root),
min-heap property: for all nodes i ,
excluding the root, A[PARENT(i )] ≤ A[i ].
Maintaining the heap property
Building a heap
Example
Building a max-heap from the following unsorted array results in the
first heap example.
Correctness
Initialization: we know that each node ⌊n/2⌋+1, ⌊n/2⌋+2, . . . , n is a
leaf, which is the root of a trivial max-heap. Since i = ⌊n/2⌋ before the
first iteration of the for loop, the invariant is initially true.

Maintenance: Children of node i are indexed higher than i, so by the
loop invariant, they are both roots of max-heaps. Since i+1, i+2, . . . , n
are all roots of max-heaps, MAX-HEAPIFY makes node i a max-heap root.
Decrementing i re-establishes the loop invariant at each iteration.

Termination: When i = 0, the loop terminates. By the loop invariant,


each node, notably node 1, is the root of a max-heap.
Analysis
• Simple bound: O(n) calls to MAX-HEAPIFY,
each of which takes O(lg n) time ⇒ O(n lg n).

• Tighter analysis observation:
An n-element heap has height ⌊lg n⌋ and at most
⌈n/2^(h+1)⌉ nodes of any height h.
Tighter analysis Proof
BUILD-MAX-HEAP(A, n)
    for i ← ⌊n/2⌋ downto 1
        do MAX-HEAPIFY(A, i, n)
(Example from the slide: the array
 1 2 3 4 5 6 7 8
 9 6 5 0 8 2 1 3
viewed as a binary tree rooted at 9.)
Tighter analysis Proof
• For easy understanding, let us take a complete binary tree.
• The height of a node is the number of edges from the
node to the deepest leaf.
• The depth of a node is the number of edges from the
root to the node.
Tighter analysis Proof
• All the leaves are of height 0; therefore
there are 8 nodes at height 0.
• There are 4 nodes at height 1,
• 2 nodes at height 2,
• and one node at height 3.
Tighter analysis Proof
• Hence the question is: how many nodes are
there at height h in a complete binary tree?
• The answer is: if there are n nodes in the tree,
then at most ⌈n/2^(h+1)⌉ nodes are available at height h.
Tighter analysis Proof
• Now if we apply MAX-HEAPIFY() on a node at any
level, the time taken by MAX-HEAPIFY() is proportional
to the height of the node (i.e., O(h)).
• Hence in the case of the root the time taken is ⌊lg n⌋.
• Hence
T(n) = Σ_{h=0}^{⌊lg n⌋} ⌈n/2^(h+1)⌉ · O(h)
Tighter analysis Proof
T(n) = Σ_{h=0}^{⌊lg n⌋} ⌈n/2^(h+1)⌉ · O(h)
⟹ O( n · Σ_{h=0}^{⌊lg n⌋} h/2^(h+1) )
⟹ O( (n/2) · Σ_{h=0}^{⌊lg n⌋} h/2^h )
⟹ O( n · Σ_{h=0}^{∞} h/2^h )
⟹ O( n · (1/2)/(1 − 1/2)² )
⟹ O(2n)
T(n) ⟹ O(n)
Hence the running time of BUILD-MAX-HEAP(A,n) is O(n) in the tight bound.
Tighter analysis Proof
Σ_{k=0}^{∞} x^k = 1/(1 − x)   (value of an infinite G.P. series)
Σ_{k=0}^{∞} x^k = (1 − x)^(−1)
Differentiate both sides with respect to x:
Σ_{k=0}^{∞} k·x^(k−1) = (−1)(1 − x)^(−2)·(−1) = 1/(1 − x)²
Multiply both sides by x:
Σ_{k=0}^{∞} k·x^k = x/(1 − x)²
The heapsort algorithm
Given an input array, the heapsort algorithm acts as
follows:
• Builds a max-heap from the array.
• Starting with the root (the maximum element), the
algorithm places the maximum element into the
correct place in the array by swapping it with the
element in the last position in the array.
• “Discard” this last node (knowing that it is in its
correct place) by decreasing the heap size, and calling
MAX-HEAPIFY on the new (possibly incorrectly-placed)
root.
• Repeat this "discarding" process until only one node
(the smallest element) remains, and therefore is in the
correct place in the array. (A C sketch of the whole procedure follows below.)
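A compact C sketch of this procedure, assuming 0-based arrays (so the children of node i are 2i+1 and 2i+2, whereas the slides index from 1):

/* Sift the element at index i down until the heap property holds in A[0..n-1]. */
static void max_heapify(int A[], int n, int i)
{
    int largest = i, l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && A[l] > A[largest]) largest = l;
    if (r < n && A[r] > A[largest]) largest = r;
    if (largest != i) {
        int t = A[i]; A[i] = A[largest]; A[largest] = t;
        max_heapify(A, n, largest);
    }
}

void heap_sort(int A[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)   /* BUILD-MAX-HEAP: O(n) */
        max_heapify(A, n, i);
    for (int i = n - 1; i > 0; i--) {      /* repeatedly move the max to the end */
        int t = A[0]; A[0] = A[i]; A[i] = t;
        max_heapify(A, i, 0);              /* the heap shrinks by one each pass */
    }
}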
Example
Analysis
• BUILD-MAX-HEAP: O(n)
• for loop: n − 1 times
• exchange elements: O(1)
• MAX-HEAPIFY: O(lg n)

Total time: O(n lg n).


Heap implementation of
priority queue
• Heaps efficiently implement priority
queues. These notes will deal with max
priority queues implemented with max-
heaps. Min-priority queues are
implemented with min-heaps similarly.
• A heap gives a good compromise between
fast insertion but slow extraction and vice
versa. Both operations take O(lg n) time.
Priority queue
• Maintains a dynamic set S of elements.
• Each set element has a key (an associated value).
• Max-priority queue supports dynamic-set operations:
• INSERT(S, x): inserts element x into set S.
• MAXIMUM(S): returns element of S with largest key.
• EXTRACT-MAX(S): removes and returns element of S
with largest key.
• INCREASE-KEY(S, x, k): increases value of element x’s
key to k. Assume k ≥ x’s current key value.
• Example max-priority queue application: schedule jobs on
shared computer.
• Min-priority queue supports similar operations:
• INSERT(S, x): inserts element x into set S.
• MINIMUM(S): returns element of S with
smallest key.
• EXTRACT-MIN(S): removes and returns
element of S with smallest key.
• DECREASE-KEY(S, x, k): decreases value of
element x’s key to k. Assume k ≤ x’s current
key value.
• Example min-priority queue application:
event - driven simulator.
Finding the maximum element

Getting the maximum element is easy: it’s


the root.
HEAP-MAXIMUM(A)
return A[1]
Time: Θ(1).
Extracting max element
Given the array A:
• Make sure heap is not empty.
• Make a copy of the maximum element (the root).
• Make the last node in the tree the new root.
• Re-heapify the heap, with one fewer node.
• Return the copy of the maximum element.
HEAP-EXTRACT-MAX(A, n)
    if n < 1
        then error "heap underflow"
    max ← A[1]
    A[1] ← A[n]
    MAX-HEAPIFY(A, 1, n − 1)   // remakes heap
    return max
• HEAP-INCREASE-KEY(A, i, key)
1. if key < A[i]
2.     error "new key is smaller than the current key"
3. A[i] = key
4. while i > 1 and A[parent(i)] < A[i]
5.     swap(A[parent(i)], A[i])
6.     i = parent(i)
The running time of HEAP-INCREASE-KEY(A, i, key) is O(log n).
• MAX-HEAP-INSERT(A, key)
1. A.heap-size = A.heap-size + 1
2. A[A.heap-size] = −∞
3. HEAP-INCREASE-KEY(A, A.heap-size, key)
The running time of MAX-HEAP-INSERT(A, key) is O(log n).
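A C sketch of the two procedures above, again assuming 0-based indexing (so the parent of node i is (i-1)/2); heap_size and the function names are our own:

#include <limits.h>
#include <stdio.h>

/* Raise the key at index i to `key` and restore the max-heap property upward. */
void heap_increase_key(int A[], int i, int key)
{
    if (key < A[i]) {
        fprintf(stderr, "new key is smaller than the current key\n");
        return;
    }
    A[i] = key;
    while (i > 0 && A[(i - 1) / 2] < A[i]) {   /* bubble up past smaller parents */
        int t = A[i]; A[i] = A[(i - 1) / 2]; A[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}

/* Insert `key` into a heap currently holding *heap_size elements. */
void max_heap_insert(int A[], int *heap_size, int key)
{
    A[*heap_size] = INT_MIN;                   /* the new leaf starts at -infinity */
    heap_increase_key(A, *heap_size, key);     /* then grows to its real key: O(lg n) */
    (*heap_size)++;
}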
Design and Analysis of Algorithm

Divide and Conquer strategy


(Matrix Multiplication
by Strassen’s Algorithm)
Lecture -16
Overview

• 𝐶𝑜𝑛𝑣𝑒𝑛𝑡𝑖𝑜𝑛𝑎𝑙 𝑠𝑡𝑟𝑎𝑡𝑒𝑔𝑦 ⇒ Ο(𝑛 ).


• 𝐷𝑖𝑣𝑖𝑑𝑒 𝑎𝑛𝑑 𝐶𝑜𝑞𝑢𝑒𝑠 𝑠𝑡𝑟𝑎𝑡𝑒𝑔𝑦 ⇒ Ο(𝑛 ).
• 𝑆𝑡𝑟𝑎𝑠𝑠𝑒𝑛 𝑠 𝑠𝑡𝑟𝑎𝑡𝑒𝑔𝑦 ⇒ Ο(𝑛 . ).
Matrix Multiplication
• Problem definition:
Matrix Multiplication
• Conventional strategy:
Matrix Multiplication
• The question is:
Is Θ(n³) the best, or can we multiply the
matrices in o(n³) time?
(i.e., can we solve it in < Θ(n³)?)
• Let's see with the divide-and-conquer
strategy...
Matrix Multiplication
• Divide-and-conquer strategy:
 As with the other divide-and-conquer algorithms, assume that n is
a power of 2 (i.e., n = 2^m).
 Partition each of A, B, C into four n/2 × n/2 matrices:

A = |A11 A12|   B = |B11 B12|   C = |C11 C12|
    |A21 A22|       |B21 B22|       |C21 C22|

For multiplication we can write C = A · B as

|C11 C12|   |A11 A12|   |B11 B12|
|C21 C22| = |A21 A22| · |B21 B22|
Matrix Multiplication
• Divide-and-conquer strategy:
For multiplication we can write C = A · B as

|C11 C12|   |A11 A12|   |B11 B12|
|C21 C22| = |A21 A22| · |B21 B22|

This creates four equations:
C11 = A11·B11 + A12·B21
C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21
C22 = A21·B12 + A22·B22
Each of these equations multiplies two n/2 × n/2 matrices and then adds their
n/2 × n/2 products.
Matrix Multiplication
• Divide-and-conquer strategy :
By using the equations of the previous slide we can write the divide-and-conquer
algorithm:
REC-MAT-MULT(A, B, n)
    let C be a new n × n matrix
    if n == 1
        c11 = a11 · b11
    else partition A, B, and C into n/2 × n/2 submatrices
        C11 = REC-MAT-MULT(A11, B11, n/2) + REC-MAT-MULT(A12, B21, n/2)
        C12 = REC-MAT-MULT(A11, B12, n/2) + REC-MAT-MULT(A12, B22, n/2)
        C21 = REC-MAT-MULT(A21, B11, n/2) + REC-MAT-MULT(A22, B21, n/2)
        C22 = REC-MAT-MULT(A21, B12, n/2) + REC-MAT-MULT(A22, B22, n/2)
    return C
Matrix Multiplication
• Analysis of divide-and-conquer strategy:
Let T(n) be the time to multiply two n × n matrices.
Base case: n = 1. Perform one scalar multiplication: Θ(1).
Recursive case: n > 1
• Dividing takes Θ(1) time, using index calculations.
• Conquering makes 8 recursive calls, each multiplying n/2 × n/2 matrices
(i.e., 8T(n/2)).
• Combining takes Θ(n²) time to add n/2 × n/2 matrices four times.
Hence the recurrence is
T(n) = Θ(1) if n = 1
T(n) = 8T(n/2) + Θ(n²) if n > 1
The complexity is Θ(n³) (apply the Master Method).
Can we do better?
Matrix Multiplication
• Strassen’s strategy :
The Idea:
• Make the recursion tree less bushy.
• Perform only 7 (seven) recursive multiplications
of n/2 × n/2 matrices, rather than 8 (eight).
Matrix Multiplication
• Strassen’s strategy :
The Algorithm:
1. As in the recursive method, partition each of the
matrices into four n/2 × n/2 submatrices. Time: Θ(1).
2. Compute 7 matrix products P, Q, R, S, T, U, V, each an
n/2 × n/2 matrix. Time: 7T(n/2).
3. Compute the n/2 × n/2 submatrices of C by adding and
subtracting various combinations of the 7 products. Time: Θ(n²).
Matrix Multiplication
• Strassen’s strategy :
Details of Step 2:
Compute 7 matrix products:
P = (A11 + A22)·(B11 + B22)
Q = (A21 + A22)·B11
R = A11·(B12 − B22)
S = A22·(B21 − B11)
T = (A11 + A12)·B22
U = (A21 − A11)·(B11 + B12)
V = (A12 − A22)·(B21 + B22)
Matrix Multiplication
• Strassen’s strategy :
Details of Step 3:
Compute C with 4 additions and subtractions:
C11 = P + S − T + V
C12 = R + T
C21 = Q + S
C22 = P + R − Q + U
Matrix Multiplication
• Strassen’s strategy :
Analysis:
The recurrence is
T(n) = Θ(1) if n = 1
T(n) = 7T(n/2) + Θ(n²) if n > 1
The complexity is Θ(n^(lg 7)) = Θ(n^2.81) (by using the
Master Method).
Matrix Multiplication
Example 1
• Compute Matrix multiplication of the following two
matrices with the help of Strassen’s strategy
1 2 5 6
𝐴= and 𝐵 =
3 4 7 8
Matrix Multiplication
Example 1
• Compute Matrix multiplication of the following two
matrices with the help of Strassen’s strategy
1 2 5 6
𝐴= and 𝐵 =
3 4 7 8
Ans:
A11 = 1, A12 = 2, A21 = 3, A22 = 4
B11 = 5, B12 = 6, B21 = 7, B22 = 8
Matrix Multiplication
Example 1
Calculate the values of P, Q, R, S, T, U and V:
P = (A11 + A22)·(B11 + B22) = (1+4)(5+8) = 5 × 13 = 65
Q = (A21 + A22)·B11 = (3+4)·5 = 7 × 5 = 35
R = A11·(B12 − B22) = 1·(6−8) = 1 × (−2) = −2
S = A22·(B21 − B11) = 4·(7−5) = 4 × 2 = 8
T = (A11 + A12)·B22 = (1+2)·8 = 3 × 8 = 24
U = (A21 − A11)·(B11 + B12) = (3−1)(5+6) = 2 × 11 = 22
V = (A12 − A22)·(B21 + B22) = (2−4)(7+8) = −2 × 15 = −30
Matrix Multiplication
Example 1
Compute C11, C12, C21, and C22:
C11 = P + S − T + V = 65+8−24−30 = 19
C12 = R + T = −2+24 = 22
C21 = Q + S = 35+8 = 43
C22 = P + R − Q + U = 65−2−35+22 = 50
Hence,
C = |19 22|
    |43 50|
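Because the submatrices here are scalars, the seven products are plain multiplications, so the 2×2 case above can be verified with a few lines of C (the names P..V follow the slides; everything else is our own):

#include <stdio.h>

int main(void)
{
    int a11 = 1, a12 = 2, a21 = 3, a22 = 4;
    int b11 = 5, b12 = 6, b21 = 7, b22 = 8;

    /* Strassen's seven products */
    int P = (a11 + a22) * (b11 + b22);   /* 5 * 13 = 65   */
    int Q = (a21 + a22) * b11;           /* 7 * 5  = 35   */
    int R = a11 * (b12 - b22);           /* 1 * -2 = -2   */
    int S = a22 * (b21 - b11);           /* 4 * 2  = 8    */
    int T = (a11 + a12) * b22;           /* 3 * 8  = 24   */
    int U = (a21 - a11) * (b11 + b12);   /* 2 * 11 = 22   */
    int V = (a12 - a22) * (b21 + b22);   /* -2 * 15 = -30 */

    /* Four combinations instead of eight products */
    printf("%d %d\n%d %d\n", P + S - T + V, R + T,    /* 19 22 */
                             Q + S, P + R - Q + U);   /* 43 50 */
    return 0;
}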
Matrix Multiplication
Example 2
Compute Matrix multiplication of the following two matrices with
the help of Strassen’s strategy.

A = |4 2 0 1|   B = |2 1 3 2|
    |3 1 2 5|       |5 4 2 3|
    |3 2 1 4|       |1 4 0 2|
    |5 2 6 7|       |3 2 4 1|
Matrix Multiplication
Example 2
First we partition the input matrices into submatrices as shown below:

A11 = |4 2|  A12 = |0 1|  B11 = |2 1|  B12 = |3 2|
      |3 1|        |2 5|        |5 4|        |2 3|

A21 = |3 2|  A22 = |1 4|  B21 = |1 4|  B22 = |0 2|
      |5 2|        |6 7|        |3 2|        |4 1|
Matrix Multiplication
Example 2
Calculate the values of P, Q, R, S, T, U and V:
P = (A11 + A22)·(B11 + B22)
  = |5 6| · |2 3| = |64 45|
    |9 8|   |9 5|   |90 67|

Q = (A21 + A22)·B11
  = |4  6| · |2 1| = |38 28|
    |11 9|   |5 4|   |67 47|
Matrix Multiplication
Example 2
R = A11·(B12 − B22)
  = |4 2| · | 3 0| = |8 4|
    |3 1|   |-2 2|   |7 2|

S = A22·(B21 − B11)
  = |1 4| · |-1  3| = | -9 -5|
    |6 7|   |-2 -2|   |-20  4|
Matrix Multiplication
Example 2
T = (A11 + A12)·B22
  = |4 3| · |0 2| = |12 11|
    |5 6|   |4 1|   |24 16|

U = (A21 − A11)·(B11 + B12)
  = |-1 0| · |5 3| = |-5 -3|
    | 2 1|   |7 7|   |17 13|

V = (A12 − A22)·(B21 + B22)
  = |-1 -3| · |1 6| = |-22 -15|
    |-4 -2|   |7 3|   |-18 -30|
Matrix Multiplication
Example 2
Now, compute C11, C12, C21, and C22:
C11 = P + S − T + V
    = |64 45| + | -9 -5| − |12 11| + |-22 -15| = |21 14|
      |90 67|   |-20  4|   |24 16|   |-18 -30|   |28 25|

C12 = R + T
    = |8 4| + |12 11| = |20 15|
      |7 2|   |24 16|   |31 18|
Matrix Multiplication
Example 2
Now, compute C21 and C22:
C21 = Q + S
    = |38 28| + | -9 -5| = |29 23|
      |67 47|   |-20  4|   |47 51|

C22 = P + R − Q + U
    = |64 45| + |8 4| − |38 28| + |-5 -3| = |29 18|
      |90 67|   |7 2|   |67 47|   |17 13|   |47 35|
Matrix Multiplication
Example 2
So the values of C11, C12, C21, and C22 are:
C11 = |21 14|  C12 = |20 15|  C21 = |29 23|  C22 = |29 18|
      |28 25|        |31 18|        |47 51|        |47 35|
Hence the resultant matrix C is
C = |C11 C12| = |21 14 20 15|
    |C21 C22|   |28 25 31 18|
                |29 23 29 18|
                |47 51 47 35|
Design and Analysis of Algorithm

Divide and Conquer strategy


(Merge Sort)
Lecture -13
Overview
• Learn the technique of “divide and conquer”
in the context of merge sort with analysis.
A Sorting Problem
(Divide and Conquer Approach)

• Divide the problem into a number of sub


problems.
• Conquer the sub problems by solving them
recursively.
– Base case: If the sub problems are small
enough, just solve them by brute force.
• Combine the sub problem solutions to give a
solution to the original problem.
Merge sort
• A sorting algorithm based on divide and conquer. Its worst-case
running time has a lower order of growth than insertion sort.
• Because we are dealing with sub problems, we state each sub
problem as sorting a sub array A[p . . r ].
• Initially, p = 1 and r = n, but these values change as we recurse
through sub problems.
To sort A[p . . r ]:
• Divide by splitting into two sub arrays A[p . . q] and A[q + 1 . . r ],
where q is the halfway point of A[p . . r ].
• Conquer by recursively sorting the two sub arrays A[p . . q] and
A[q + 1 . . r ].
• Combine by merging the two sorted sub arrays A[p . . q] and
A[q + 1 . . r ] to produce a single sorted sub array A[p . . r ]. To
accomplish this step, we’ll define a procedure MERGE(A, p, q, r ).
Merge Sort (Algorithm)
The recursion bottoms out when the subarray has just 1
element, so that it’s trivially sorted.
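The algorithm slide itself was an image and did not survive extraction; as a stand-in, here is a minimal C sketch consistent with the description above (the MERGE procedure is sketched after the merging discussion below):

void merge(int A[], int p, int q, int r);   /* sketched below */

/* Sorts A[p..r] (inclusive), mirroring MERGE-SORT(A, p, r). */
void merge_sort(int A[], int p, int r)
{
    if (p < r) {                    /* a subarray of length 1 is already sorted */
        int q = (p + r) / 2;        /* divide at the halfway point */
        merge_sort(A, p, q);        /* conquer the left half */
        merge_sort(A, q + 1, r);    /* conquer the right half */
        merge(A, p, q, r);          /* combine */
    }
}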
Example
Bottom-up view for n = 8: [Heavy lines demarcate subarrays used in
subproblems.]
Example
Bottom-up view for n = 11: [Heavy lines demarcate subarrays used in
subproblems.]
Merging
Input: Array A and indices p, q, r such that
• p≤q<r.
• Subarray A[p . . q] is sorted and subarray A[q + 1 . . r ] is
sorted. By the restrictions on p, q, r , neither subarray is
empty.

Output: The two subarrays are merged into a single sorted
subarray in A[p . . r ].
We implement it so that it takes Θ(n) time, where
n = r − p + 1 = the number of elements being merged.
Pseudocode (Merging)
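The pseudocode on this slide was also an image; below is a C sketch of MERGE that copies the two runs into temporary buffers (this variant merges without the textbook's ∞ sentinels):

#include <stdlib.h>
#include <string.h>

/* Merge the sorted runs A[p..q] and A[q+1..r] into a single sorted run A[p..r]. */
void merge(int A[], int p, int q, int r)
{
    int n1 = q - p + 1, n2 = r - q;
    int *L = malloc(n1 * sizeof(int)), *R = malloc(n2 * sizeof(int));
    memcpy(L, A + p, n1 * sizeof(int));        /* copy the left run  */
    memcpy(R, A + q + 1, n2 * sizeof(int));    /* copy the right run */

    int i = 0, j = 0, k = p;
    while (i < n1 && j < n2)                   /* take the smaller head each step */
        A[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) A[k++] = L[i++];            /* drain whichever run remains */
    while (j < n2) A[k++] = R[j++];
    free(L);
    free(R);
}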
Example [A call of MERGE(9, 12, 16)]
Analyzing divide-and-conquer algorithms
Analyzing merge sort
Recursion tree (Step 1)
Recursion tree (Step 2)
Recursion tree (Step n)
Home Assignment

• Solve the Recurrence of Merge Sort with the


help of Master method.
Design and Analysis of Algorithm

Growth of Functions &


Asymptotic Notations

Lecture – 4 & 5
Overview
• A way to describe behaviour of functions in the limit. We're
studying asymptotic efficiency.
• Describe growth of functions (i.e., the order of growth of the
running time of an algorithm).
• Focus on what's important by abstracting away low-order
terms and constant factors.
• How we indicate running times of algorithms.
• A way to compare "sizes" of functions through different
notations (i.e., asymptotic notations):
• O (Big-Oh)
• Ω (Big-Omega)
• Θ (Theta)
• o (little-oh)
• ω (little-omega)
Overview
• Growth of function
• Discuss complexity of algorithms
• Asymptotic notations
Growth of function
• The growth of function refers to how the
size of a function’s values change as the
input increases.
• Growth is determined by the highest
order term among the multiple term of
the algorithm.
• Growth function estimate how many
steps an algorithm will take as the input
increases.
Some common types of growth
functions include:
• Constant complexity: O(1)
• Logarithmic complexity: O(log n)
• Radical complexity: O(√n)
• Linear complexity: O(n)
• Linearithmic complexity: O(n log n)
• Quadratic complexity: O(n²)
• Cubic complexity: O(n³)
• Exponential complexity: O(bⁿ), b > 1
• Factorial complexity: O(n!)
constant ≤ log(log n) ≤ log n ≤ log n² ≤ (log n)^k ≤ √n ≤ n ≤ n log n ≤ n√n ≤ n² ≤ n² log n ≤ n³ ≤ n^(log n) ≤ 2ⁿ ≤ 3ⁿ ≤ n! ≤ nⁿ
Complexity of an Algorithm
• Complexity of an Algorithm is a function, f(n) which
gives the running time and storage space requirement of
the algorithm in terms of size n of the input data.

Main measures for the efficiency of an algorithm are time and space.

• Space Complexity: the amount of memory needed by an
algorithm to run to completion.
– The space needed by a program has the following
components:
• Instruction space: is the space needed to store the
compiled version of the program instructions. The
amount of instructions space that is needed depends
on factors such as:
• The compiler used to compile the program into
machine code.
• The compiler options in effect at the time of
compilation.
• The target computer.
• Data space: Data space is the space needed to store all
constants, and variable values. Data space has two
components:
• Space needed by constants , temporary and
simple variables in the program.
• Space needed by dynamically allocated objects
such as arrays and class instances.

• Environment stack space: The environment stack is


used to save information needed to resume execution
of partially completed functions.

• Time complexity: the amount of time the algorithm
needs to run to completion.
Cases in Complexity Theory

• Best Case: Minimum time


• Worst Case: Maximum amount of time
• Average Case: Expected / Average value of the
function f(n).
Asymptotic notation (Big Oh )
Asymptotic notation (Big Oh )

n     1     2       3      10     1000
c    ≥2    ≥1    ≥0.667   ≥0.2   ≥0.002
Asymptotic notation (Big Oh )
Example 1
𝑃𝑟𝑜𝑣𝑒 𝑡ℎ𝑎𝑡 𝑓 𝑛 = 2𝑛 + 3 ∈ 𝑂(𝑛)
Asymptotic notation (Big Oh )
Example 1
𝑃𝑟𝑜𝑣𝑒 𝑡ℎ𝑎𝑡 𝑓 𝑛 = 2𝑛 + 3 ∈ 𝑂(𝑛)

2n + 3 ≤ c·g(n)
2n + 3 ≤ 5n

n     1      2     3     10     100
c    ≥5    ≥3.5   ≥3   ≥2.3   ≥2.03
Asymptotic notation (Big Oh )
Example 1
𝑃𝑟𝑜𝑣𝑒 𝑡ℎ𝑎𝑡 𝑓 𝑛 = 2𝑛 + 3 ∈ 𝑂(𝑛)
⟹ 2n + 3 ≤ c·g(n)
⟹ 2n + 3 ≤ c·n
⟹ 2n + 3 ≤ 5n, n ≥ 1, c ≥ 5
Hence f(n) = O(n)
For f(n) = O(n²) is also true
For f(n) = O(2ⁿ) is also true
But f(n) = O(lg n) is not true
Because
1 < lg n < √n < n < n lg n < n² < n³ < ⋯ < 2ⁿ < 3ⁿ < ⋯ < nⁿ

n     1      2     3     10     100
c    ≥5    ≥3.5   ≥3   ≥2.3   ≥2.03
Asymptotic notation (Big Oh )
Example 2

Prove that f(n) = 2n² + 3n + 4 ∈ O(n²)
Asymptotic notation (Big Oh )
Example 2

Prove that f(n) = 2n² + 3n + 4 ∈ O(n²)
⟹ 2n² + 3n + 4 ≤ 2n² + 3n² + 4n²
⟹ 2n² + 3n + 4 ≤ 9n², where c ≥ 9 and n ≥ 1
Hence f(n) = O(n²)
Asymptotic notation (Big Oh )
Example 3
If f(n) = 2^(n+1) and g(n) = 2ⁿ, then prove that f(n) ∈ O(g(n))
Asymptotic notation (Big Oh )
Example 3
If f(n) = 2^(n+1) and g(n) = 2ⁿ, then prove that f(n) ∈ O(g(n))
⟹ 2^(n+1) = 2·2ⁿ
So, as per the definition of Big-Oh:
f(n) ≤ c·g(n)
Hence
⟹ 2^(n+1) ≤ c·2ⁿ
⟹ 2·2ⁿ ≤ c·2ⁿ for all n > 0 and c ≥ 2
Hence f(n) ∈ O(g(n))
Asymptotic notation (Big Omega )
Asymptotic notation (Big Omega )
Asymptotic notation (Big Omega )

Example 4
Prove that f(n) = 2n² + 3n + 4 ∈ Ω(n²)
Asymptotic notation (Big Omega )

Example 4
Prove that f(n) = 2n² + 3n + 4 ∈ Ω(n²)
f(n) ≥ c·g(n)
2n² + 3n + 4 ≥ c·n²
Hence f(n) = Ω(n²) for c ≤ 2 and n ≥ 1

n    1   2   3   4   10   100
c    9   4   3   3    2     2
Asymptotic notation (Big Omega )

Example 5
If f(n) = 3n + 2 and g(n) = n², show that f(n) ∉ Ω(g(n))
Asymptotic notation (Big Omega )

Example 5
If f(n) = 3n + 2 and g(n) = n², show that f(n) ∉ Ω(g(n))
⟹ lim_{n→∞} f(n)/g(n) ≥ c
⟹ lim_{n→∞} (3n + 2)/n² ≥ 1
⟹ lim_{n→∞} n(3 + 2/n)/n² ≥ 1
⟹ lim_{n→∞} (3 + 2/n)/n ≥ 1
⟹ 0 ≥ 1 is false; hence f(n) ∉ Ω(g(n))
Asymptotic notation (Big Omega )

Example 6
If f(n) = 2ⁿ + n² and g(n) = 2ⁿ, show that f(n) ∈ Ω(g(n))
Asymptotic notation (Big Omega )

Example 6
If f(n) = 2ⁿ + n² and g(n) = 2ⁿ, show that f(n) ∈ Ω(g(n))
⟹ 2ⁿ + n² ≥ c·2ⁿ
true for all n ≥ 1 and c = 1
Hence, f(n) ∈ Ω(g(n)) is true

n     1     2       3        5        10
c   ≤1.5   ≤2   ≤2.125   ≤1.7812   ≤1.0976
Asymptotic notation (Theta)
Asymptotic notation (Theta)
Example 7
Show that f(n) = 10n² + 5n + 17 ∈ Θ(n²)
Asymptotic notation (Theta)
Example 7
Show that f(n) = 10n² + 5n + 17 ∈ Θ(n²)
As per the definition of Θ notation: C₁·g(n) ≤ f(n) ≤ C₂·g(n)
⟹ 10n² ≤ 10n² + 5n + 17 ≤ 10n² + 5n² + 17n²
⟹ 10n² ≤ 10n² + 5n + 17 ≤ 32n²
So C₁ = 10 and C₂ = 32 for all n ≥ 1
Hence, proved.
Asymptotic notation (Theta)
Example 8
Show that f(n) = (n + a)² ∈ Θ(n²)
Asymptotic notation (Theta)
Example 8
Show that f(n) = (n + a)² ∈ Θ(n²)
As per the limit definition of Θ notation: lim_{n→∞} f(n)/g(n) = c for some constant c > 0
⟹ lim_{n→∞} (n + a)²/n²
⟹ lim_{n→∞} n²(1 + a/n)²/n²
⟹ lim_{n→∞} (1 + a/n)²   [∵ a/∞ = 0]
⟹ 1, which is a constant
Hence f(n) = (n + a)² ∈ Θ(n²) is true
Asymptotic notation (Little Oh )
Asymptotic notation (Little Oh )

Example 9
If f(n) = 2n and g(n) = n², prove that f(n) = o(g(n))
Asymptotic notation (Little Oh )

Example 9
If f(n) = 2n and g(n) = n², prove that f(n) = o(g(n))
⟹ lim_{n→∞} f(n)/g(n) = 0
⟹ lim_{n→∞} 2n/n²
⟹ lim_{n→∞} 2/n
⟹ 0
Which is true; hence f(n) = o(g(n))
Asymptotic notation (Little Oh )

Example 10
If f(n) = 2n² and g(n) = n², prove that f(n) ≠ o(g(n))
Asymptotic notation (Little Oh )

Example 10
If f(n) = 2n² and g(n) = n², prove that f(n) ≠ o(g(n))
⟹ lim_{n→∞} f(n)/g(n) = 0
⟹ lim_{n→∞} 2n²/n²
⟹ lim_{n→∞} 2
⟹ 2 ≠ 0
Which is true; hence f(n) ≠ o(g(n))
Asymptotic notation (Little omega )
Asymptotic notation (Little omega )

Example 11
If f(n) = 2n + 16 and g(n) = n, show that f(n) ≠ ω(g(n))
Asymptotic notation (Little omega )

Example 11
If f(n) = 2n + 16 and g(n) = n, show that f(n) ≠ ω(g(n))
⟹ lim_{n→∞} f(n)/g(n) = ∞
⟹ lim_{n→∞} n(2 + 16/n)/n
⟹ lim_{n→∞} (2 + 16/n)
⟹ 2 + 0
⟹ 2
So 2 ≠ ∞; hence f(n) ≠ ω(g(n))
Asymptotic notation (Little Oh omega )
Example 12
If f(n) = n² and g(n) = log n, show that f(n) ∈ ω(g(n))
Asymptotic notation (Little Oh omega )
Example 12
If f(n) = n² and g(n) = log n, show that f(n) ∈ ω(g(n))
⟹ lim_{n→∞} f(n)/g(n) = ∞
⟹ lim_{n→∞} n²/log n
Apply L'Hôpital's rule:
⟹ lim_{n→∞} 2n/(1/n)
⟹ lim_{n→∞} 2n²
⟹ ∞
Which is true as per the ω-notation; hence f(n) ∈ ω(g(n))
Comparisons of functions
Standard notations and common functions
Example 1
A()
{
    int i;
    for(i=1; i<=n; i++)
    {
        printf("ABCD");
    }
}

Example 2
A()
{
    int i,j;
    for(i=1; i<=n; i++)
    {
        for(j=1; j<=n; j++)
        {
            printf("ABCD");
        }
    }
}
Complexity of Example 1
T = O(n)
Complexity of Example 2
T = O(n²)
Example 3 (two equivalent versions)
A()
{
    int i=1, s=1;
    scanf("%d", &n);
    while(s<n)
    {
        i++;
        s=s+i;
        printf("abcd");
    }
}

A()
{
    scanf("%d", &n);
    for(i=1, s=1; s<n; i++, s=s+i)
    {
        printf("abcd");
    }
}
Complexity of Example 3
The loop runs while s = 1 + 2 + ⋯ + i = i(i+1)/2 < n, so it stops
after about i ≈ √(2n) iterations: T = O(√n).
Example 4
A()
{
    int i=1;
    for(i=1; i*i<=n; i++)
    {
        printf("abcd");
    }
}
Complexity of Example 4
The loop runs while i² ≤ n, i.e., i ≤ √n, so T = O(√n).
Example 5
Complexity of Example 5
Design and Analysis of Algorithm
(BCS503)

Definition of Sorting problem through


exhaustive search and analysis of
Selection Sort through iteration
Method

Lecture - 2
Exhaustive Search Method

• Exhaustive search is a brute-force approach
applied to combinatorial problems. It generates
all possible combinations and checks whether they
satisfy the problem constraints.
• Let there be 4 keys in an array: 5 12 7 1
• We have to sort them using brute force, so we
check all the possible permutations.
5 12 7 1 1 12 7 5
5 12 1 7 1 12 5 7
5 7 12 1 1 7 12 5
5 7 1 12 1 7 5 12
5 1 12 7 1 5 7 12
5 12 7 1
5 1 7 12 1 5 12 7
7 12 5 1 12 7 5 1
7 12 1 5 12 7 1 5
7 5 12 1 12 5 7 1
7 5 1 12 12 5 1 7
7 1 5 12 12 1 7 5
7 1 12 5 12 1 5 7
ⁿPᵣ = n!/(n−r)!
n = 4 and r = 4
⁴P₄ = 4!/(4−4)! = 4!/0! = 4! = 24
So the total number of permutations possible is n! for n
distinct numbers.
Selection Sort (Iterative)

• scan entire list for smallest element


• swap this element with the first element
• repeat from second element, third element, …
• after n−1 passes, list is sorted
Pseudocode for Selection Sort
• SelectionSort(A[0..n-1])
• # sort given array by selection sort
• # input: array A[0..n-1] of orderable elements
• # output: array A[0..n-1] sorted in non-decreasing order
• for i=0 to n-2 do
• min = i
• for j=i+1 to n-1 do
• if A[j] < A[min]:
• min = j
• swap A[i] and A[min]
T(n) = (n−1) + (n−2) + ⋯ + 1 = n(n−1)/2
     = (1/2)n² − (1/2)n
     = O(n²)
i.e., a quadratic polynomial. (A C version is given below.)
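A minimal C sketch of the same procedure (0-based, with the array length passed explicitly; the function name is ours):

/* Sorts A[0..n-1] in non-decreasing order by selection sort. */
void selection_sort(int A[], int n)
{
    for (int i = 0; i <= n - 2; i++) {             /* n-1 passes */
        int min = i;
        for (int j = i + 1; j <= n - 1; j++)
            if (A[j] < A[min])                     /* scan for the smallest remaining key */
                min = j;
        int t = A[i]; A[i] = A[min]; A[min] = t;   /* swap A[i] and A[min] */
    }
}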
Analysis of Selection Sort
• Basic operation: key comparison.
• The number of times it executes depends only on the array size:
C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n−1) − (i+1) + 1]
     = Σ_{i=0}^{n−2} (n−1−i) = n(n−1)/2 ∈ Θ(n²)
• Selection sort is Θ(n²) for all inputs.
• The number of key swaps is only Θ(n), which makes it
suitable for swapping a small number of large items.
Design and Analysis of Algorithm

Divide and Conquer strategy


(Quick Sort)
Lecture -14
Overview

• Worst-case running time: Θ(n²)
• Expected running time: Θ(n lg n)
• Constants hidden in Θ(n lg n) are small.
• Sorts in place.
Description of quicksort
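The description slides were images that did not survive extraction; as a stand-in, here is a minimal C sketch of quicksort with Lomuto partitioning around A[r], the pivot convention the randomized discussion below refers to:

/* Partition A[p..r] around the pivot A[r]; returns the pivot's final index. */
static int partition(int A[], int p, int r)
{
    int x = A[r], i = p - 1;            /* x is the pivot */
    for (int j = p; j < r; j++)
        if (A[j] <= x) {                /* grow the region of elements <= pivot */
            i++;
            int t = A[i]; A[i] = A[j]; A[j] = t;
        }
    int t = A[i + 1]; A[i + 1] = A[r]; A[r] = t;
    return i + 1;
}

void quicksort(int A[], int p, int r)
{
    if (p < r) {
        int q = partition(A, p, r);
        quicksort(A, p, q - 1);         /* pivot A[q] is already in place */
        quicksort(A, q + 1, r);
    }
}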
Performance of quicksort

The running time of quicksort depends on


the partitioning of the subarrays:
• If the subarrays are balanced, then
quicksort can run as fast as mergesort.
• If they are unbalanced, then quicksort
can run as slowly as insertion sort.
Randomized version of quicksort
• We have assumed that all input permutations are equally likely.
• This is not always true.
• To correct this, we add randomization to quicksort.
• We could randomly permute the input array.
• Instead, we use random sampling: picking one element at random.
• Don't always use A[r] as the pivot. Instead, randomly pick an
element from the subarray that is being sorted and swap it
with A[r] before partitioning.
Analysis of quicksort

We will analyze
• the worst-case running time of QUICKSORT
and RANDOMIZED-QUICKSORT (which is the same),
and
• show that the expected (average-case) running time of
RANDOMIZED-QUICKSORT is O(n lg n).
Design and Analysis of Algorithm
(BCS503)

Introduction to Algorithms and its


analysis mechanism with the help of
Linear Search

Lecture - 1
Overview
• Discussion on algorithms and its
characteristics.
• Analysis of Algorithms and its significance
in the context of real world software
development.
• Discussion on the iterative version and
recursive version of Linear search.
What is an Algorithm ?
• An algorithm is a finite set of instructions that
if followed, accomplishes a particular task.
• An algorithm is any well defined computational
procedure that takes some value or set of
values, as input and produces some value or
set of values as output.
• We can also view an algorithm as a tool for
solving a well specified computational
problem.
Characteristics of an Algorithm
• Input: Zero or more quantities are externally
supplied.
• Output: At least one quantity is produced.
• Definiteness: Each instruction is clear and
unambiguous.
• Finiteness: It must have a finite number of steps,
and it will end after a finite time.
• Effectiveness: Every instruction must be very
basic so that it can be carried out, in principle,
by a person using only paper and pen.
• Language Independent: instructions must be
language-independent, i.e. it can be
implemented in any language, and yet the
output will be the same, as expected.

• Feasibility: instructions must be generic, and


practical, such that it can be executed with
the available resources.
Analysis of an Algorithm
What is analysis of an algorithm?
It is generally focused on CPU (time) usage, memory
usage, disk usage, and network usage during the execution
of an algorithm. The factor of most concern is CPU time.
Performance: how much time/memory/disk/etc. is used
when a program is run. This depends on the machine,
compiler, etc., as well as the code we write.
Complexity: How do the resource requirements of a
program or algorithm scale, i.e. what happens as the size
of the problem being solved by the code gets larger.

Note: Complexity affects performance but not vice-versa.


Importance of Analyse of Algorithm
• To predict the behavior of an algorithm without
implementing it on a specific computer.
• It is much more convenient to have simple measures
for the efficiency of an algorithm than to implement
the algorithm and test the efficiency every time a
certain parameter in the underlying computer system
changes.
• The analysis is only an approximation, it is impossible
to predict the exact behavior of an algorithm because
there are too many influencing factors.
• More importantly, by analyzing different algorithms, we
can compare them to determine the best one for our
purpose.
Analysis of an Algorithm
• Loop Invariant technique was done in three steps:
– Initialization
– Maintenance
– Termination
• It deals with predicting the resources that an
algorithm requires to its completion such as
memory and CPU time.
Problem: Search (Linear search)
• We are given with a list of records.
• Each record has an associated key.
• Give efficient algorithm for searching for a
record containing a particular key.
• Efficiency is quantified in terms of average
time analysis (number of comparisons) to
retrieve an item.
Search
Each record in the list has an associated key; in this example the keys are
ID numbers held in positions [0], [1], [2], [3], [4], ..., [700]
(Number 701466868, Number 281942902, Number 233667136, Number 580625685,
Number 506643548, ..., Number 155778322).
Given a particular key (say Number 580625685), how can we efficiently
retrieve the record from the list?
Serial Search

• Step through array of records, one at a


time.
• Look for record with matching key.
• Search stops when
– a record with matching key is found, or
– the search has examined all records
without success.
Pseudocode for Serial Search
Iterative Version
// Search for a desired item in the n array elements
// starting at a[first].
// Returns pointer to desired record if found.
// Otherwise, return NULL.

for(i = 0; i < n; ++i)
    if(a[first+i] is desired item)
        return &a[first+i];

// if we drop through the loop, the desired item was not found
return NULL;
Serial Search Analysis

• What are the worst and average case


running times for serial search?
• We must determine the O-notation for the
number of operations required in search.
• Number of operations depends on n, the
number of entries in the list.

T(n) = 1 + 1 + ⋯ + 1 (n times) = n = O(n),
which means linear time.
Worst Case Time for Serial Search
• For an array of n elements, the worst case time for
serial search requires n array accesses: O(n).
• Consider cases where we must loop over all n
records:
– desired record appears in the last position of
the array
– desired record does not appear in the array at
all
Average Case for Serial Search
Assumptions:
1. All keys are equally likely in a search
2. We always search for a key that is in the array
Example:
• We have an array of 10 records.
• If search for the first record, then it requires 1 array
access; if the second, then 2 array accesses. etc.
The average of all these searches is:
(1+2+3+4+5+6+7+8+9+10)/10 = 5.5
Average Case Time for Serial Search
Generalize for array size n.

Expression for average-case running time:

(1+2+…+n)/n = n(n+1)/2n = (n+1)/2

Therefore, average case time complexity for serial


search is O(n).
Pseudocode for Serial Search
Recursive Version
// Call function Search(a, start, end).
// Returns pointer to desired record if found.
// Otherwise, recurse with Search(a, start+1, end).

Search(a, start, end)
{
    if (start > end)
        return NULL;               // searched every record without success
    if (a[start] is desired item)
        return &a[start];
    return Search(a, start+1, end);
}
• T(n) = 1 + T(n−1), T(0) = 0
Substitution method:
T(n) = 1 + T(n−1) = 1 + 1 + T(n−2) = 1 + 1 + 1 + T(n−3) = ⋯
Generalised expression up to the kth term:
T(n) = k + T(n−k)
Putting k = n:
T(n) = n + T(0) = n
Design and Analysis of Algorithm

Linear Time Sorting


(Shell Sort)

Lecture -23
Overview

• The running time of shell sort in the worst case is O(n²),
or floats between O(n log n) and O(n²) depending on the
gap sequence.
• The running time of shell sort in the best case is O(n log n);
the total number of comparisons for each interval (or
increment) is then about the size of the array, O(n).
• It is not a stable sort.
Shell Sort

• Designed by Donald Shell, who named the
sorting algorithm after himself in 1959.
• Shell sort works by comparing elements
that are distant from each other, rather than
only adjacent elements as in an ordinary
insertion sort.
• Shell sort is also known as diminishing
increment sort.
Shell Sort
• Shell sort improves on the efficiency of
insertion sort by quickly shifting values to
their destination.
• This algorithm tries to decreases the
distance between comparisons (i.e. gap) as
the sorting algorithm runs and reach to its
last phase where, the adjacent elements are
compared only.
Shell Sort
• The distance between comparisons (i.e., the gap) is
maintained by one of the following methods:
• divide by 2 (two) [designed by Donald Shell]
• Knuth's method (i.e., gap ← gap*3 + 1,
where initially the gap starts with 1)
Shell Sort
• Let’s execute an example with the help of
Knuth’s gap method on the following array.

35 33 42 10 14 19 27 44

• At the beginning the gap is initialized as 1


• Hence the new gap value for iteration 1 is
calculated as follows:
𝑔𝑎𝑝 = 𝑔𝑎𝑝 ∗ 3 + 1
=1∗3+1 =4
Shell Sort
Swap count =0

35 33 42 10 14 19 27 44

swap
Shell Sort
Swap count =1

14 33 42 10 35 19 27 44

Swap
Shell Sort
Swap count =1

14 33 42 10 35 19 27 44

swap
Shell Sort
Swap count =2

14 19 42 10 35 33 27 44

swap
Shell Sort
Swap count =2

14 19 42 10 35 33 27 44

swap
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

swap
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

No swap required
Shell Sort
• After the first iteration the array looks like as
follows.

14 19 27 10 35 33 42 44

• Again we find the gap value for the next iteration.
The forward formula is
gap = gap*3 + 1
• Going back one step, we invert it:
gap = (gap − 1)/3
• So the new gap is gap = (4 − 1)/3 = 3/3 = 1
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

No swap
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

No swap
Shell Sort
Swap count =3

14 19 27 10 35 33 42 44

swap
Shell Sort
Swap count =4

14 19 10 27 35 33 42 44

swap
Shell Sort
Swap count =4

14 19 10 27 35 33 42 44

swap

14 19 10 27 35 33 42 44

swap
Shell Sort
Swap count =5

14 19 10 27 35 33 42 44

swap

14 10 19 27 35 33 42 44

swap
Shell Sort
Swap count =5

14 19 10 27 35 33 42 44

swap

14 10 19 27 35 33 42 44

swap
14 10 19 27 35 33 42 44

swap
Shell Sort
Swap count =6

14 19 10 27 35 33 42 44

swap

14 10 19 27 35 33 42 44

swap
10 14 19 27 35 33 42 44

swap
Shell Sort
Swap count =6

10 14 19 27 35 33 42 44

No swap
Shell Sort
Swap count =6

10 14 19 27 35 33 42 44

swap
Shell Sort
Swap count =7

10 14 19 27 33 35 42 44

swap
Shell Sort
Swap count =7

10 14 19 27 33 35 42 44

No swap
Shell Sort
Swap count =7

10 14 19 27 33 35 42 44

No swap
Shell Sort
Swap count =7

10 14 19 27 33 35 42 44

No swap
• Hence ,total number of swap required in
• 1st iteration= 3
• 2nd iteration= 4
• So total 7 numbers of swap required to sort the
array by shell sort.
Shell Sort
Algorithm Shell sort (Knuth Method)
1. gap=1
2. while(gap < A.length/3)
3. gap=gap*3+1
4. while( gap>0)
5. for(outer=gap; outer<A.length; outer++)
6. Ins_value=A[outer]
7. inner=outer
8. while(inner>gap-1 && A[inner-gap]≥ Ins_value)
9. A[inner]=A[inner-gap]
10. inner=inner-gap
11. A[inner]=Ins_value
12. gap=(gap-1)/3
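A C rendering of the algorithm above (0-based indexing, Knuth's gap sequence; the function name is ours):

void shell_sort(int A[], int n)
{
    int gap = 1;
    while (gap < n / 3)                 /* largest Knuth gap below n/3 */
        gap = gap * 3 + 1;              /* 1, 4, 13, 40, ... */

    while (gap > 0) {
        for (int outer = gap; outer < n; outer++) {
            int ins_value = A[outer];   /* gapped insertion sort */
            int inner = outer;
            while (inner > gap - 1 && A[inner - gap] >= ins_value) {
                A[inner] = A[inner - gap];
                inner -= gap;
            }
            A[inner] = ins_value;
        }
        gap = (gap - 1) / 3;            /* shrink the interval */
    }
}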
Shell Sort
• Let us dry run the shell sort algorithm with the same example
as already discussed.
35 33 42 10 14 19 27 44
At the beginning:
A.length = 8 and gap = 1
After the first three lines execute, the gap value changes to 4.
Now, gap > 0 (i.e., 4 > 0)
Now in the for loop: outer=4; outer<8; outer++
Ins_value = A[outer] = A[4] = 14
inner = outer, i.e., inner = 4
Now the condition in line 8 is true ⟹ a change occurs, and the updated array
looks as follows:

Swap
Shell Sort

Now in for loop outer=5 ;outer<8; outer++


Ins_value=A[outer]=A[5]=19
inner=outer i.e. inner=5
Now the condition in line 8 is true ⟹ a change occurs, and the updated
array looks as follows:

14 19 42 10 35 33 27 44

swap
Shell Sort

Now in for loop outer=6 ;outer<8; outer++


Ins_value=A[outer]=A[6]=27
inner=outer i.e. inner=6
Now the condition in line 8 is true ⟹ a change occurs, and the updated
array looks as follows:

14 19 27 10 35 33 42 44

swap
Shell Sort

Now in for loop outer=7 ;outer<8; outer++


Ins_value=A[outer]=A[7]=44
inner=outer i.e. inner=7
Now the condition in line 8 is false ⟹ no change in the array, which
looks as follows:

14 19 27 10 35 33 42 44

No swap required
Shell Sort

• Now the gap value is calculated again.
• The new gap value is 1, and the same procedure is
continued. Finally the sorted array looks as given below,
with 7 (seven) swaps in total:

10 14 19 27 33 35 42 44
Shell Sort
Analysis:
• Shell sort is efficient for medium-size lists.
• For bigger lists, this algorithm is not the best choice.
• But it is the fastest of the O(n²) sorting algorithms.
• The best case in shell sort is when the array is already sorted in
the right order, i.e., O(n).
• The worst-case time complexity depends on the gap sequence.
That is why various researchers proposed their own gap intervals:
1. Donald Shell gave the gap interval n/2^k ⟹ O(n²)
2. Knuth gave the gap interval gap ← gap*3 + 1 ⟹ O(n^(3/2))
3. Hibbard gave the gap interval 2^k − 1 ⟹ O(n^(3/2))
Shell Sort
Analysis:
In general:
• Shell sort is an unstable sorting algorithm because it does not
examine the elements lying in between the intervals.
• Worst-case complexity: less than or equal to O(n²), or floating
between O(n log n) and O(n²) depending on the gap sequence.
• Best-case complexity: O(n log n). When the array is already sorted,
the total number of comparisons for each interval (or increment)
is about O(n), i.e., the size of the array.
• Average-case complexity: O(n log n); in practice it is around O(n^1.25).

(Remark: an accurate model has not yet been discovered.)


Design and Analysis of Algorithm

Linear Time Sorting


(Radix Sort and Bucket Sort)

Lecture -22
Linear Time Sorting
(Radix Sort)
Overview

• Running time of radix sort is Θ(d(n + k)).
• Requires extra space for sorting.
• It is a stable sort.
Radix Sort
• Radix sort is a non-comparative sorting
method.
• Two classifications of radix sorts are least
significant digit (LSD) radix sorts and most
significant digit (MSD) radix sorts.
• LSD radix sorts process the integer
representations starting from the least significant
digit and move towards the most significant digit.
MSD radix sorts work the other way around.
Radix Sort (Algorithm)

Radix_Sort(A, d)
    for i ← 1 to d      // digit 1 is the lowest-order digit
        use a stable sort to sort the array A on digit i
        (i.e., Counting Sort)
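A C sketch of LSD radix sort for non-negative integers, using a stable counting sort on one decimal digit per pass (the helper name counting_sort_digit is ours):

#include <string.h>

/* Stable counting sort of A[0..n-1] on the decimal digit selected by exp (1, 10, 100, ...). */
static void counting_sort_digit(int A[], int n, int exp)
{
    int B[n], C[10] = {0};
    for (int j = 0; j < n; j++)                  /* count each digit value */
        C[(A[j] / exp) % 10]++;
    for (int i = 1; i < 10; i++)                 /* cumulative counts */
        C[i] += C[i - 1];
    for (int j = n - 1; j >= 0; j--)             /* right-to-left keeps it stable */
        B[--C[(A[j] / exp) % 10]] = A[j];
    memcpy(A, B, n * sizeof(int));
}

/* d passes, least significant digit first. */
void radix_sort(int A[], int n, int d)
{
    int exp = 1;
    for (int i = 1; i <= d; i++, exp *= 10)
        counting_sort_digit(A, n, exp);
}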
Radix Sort
• In input array A, each element is a number of d digit.
𝑹𝒂𝒅𝒊𝒙_𝑺𝒐𝒓𝒕( 𝑨, 𝒅 )
𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑑
𝑑𝑜 "use a stable sort to sort array A on digit i;
329
457
657
839
436
720
355
Radix Sort
• In input array A, each element is a number of d digit.
𝑹𝒂𝒅𝒊𝒙_𝑺𝒐𝒓𝒕( 𝑨, 𝒅 )
𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑑
𝑑𝑜 " use a stable sort to sort array A on digit i;
329 720
457 355
657 436
839 457
436 657
720 329
355 839
Radix Sort
• In input array A, each element is a number of d digit.
𝑹𝒂𝒅𝒊𝒙_𝑺𝒐𝒓𝒕( 𝑨, 𝒅 )
𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑑
𝑑𝑜 " use a stable sort to sort array A on digit i;
329 720 720
457 355 329
657 436 436
839 457 839
436 657 355
720 329 457
355 839 657
Radix Sort
• In input array A, each element is a number of d digit.
𝑹𝒂𝒅𝒊𝒙_𝑺𝒐𝒓𝒕( 𝑨, 𝒅 )
𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑑
𝑑𝑜 " use a stable sort to sort array A on digit i;
329 720 720 329
457 355 329 355
657 436 436 436
839 457 839 457
436 657 355 657
720 329 457 720
355 839 657 839
Radix Sort (Analysis)
Radix_Sort(A, d)
    for i ← 1 to d
        use a stable sort to sort the array A on digit i
        (i.e., Counting Sort)

• Here Counting Sort executes d times.
• The running time of Counting Sort is Θ(n + k).
• Hence the running-time complexity of Radix Sort is
Θ(d(n + k)).
Linear Time Sorting
(Bucket Sort)
Overview

• The average time complexity is O(n + k).
• The worst time complexity is O(n²).
• Requires extra space for sorting.
• It is a stable sort.
Bucket Sort
• Bucket sort is a distribution-based sorting algorithm
that operates on elements by dividing them
into different buckets and then returning the result.
• Buckets are assigned based on each
element's search key.
• At the time of returning the result, first
concatenate the buckets one by one and
then return the result in a single array.
Bucket Sort
• Some variations
– Make enough buckets so that each will
only hold one element, use a count for
duplicates.
– Use fewer buckets and then sort the
contents of each bucket.
• The more buckets you use, the faster the
algorithm will run but it uses more
memory.
Bucket Sort
• Time complexity is reduced when the items are
evenly distributed over the buckets, close to
one item per bucket.
• As buckets require extra space, this algorithm
trades increased space consumption for a
lower time complexity.
• In general, bucket sort beats comparison-based
sorting techniques in average-case time complexity,
but it can require a huge amount of space.
Bucket Sort
• One value per bucket:
Input: 4 2 1 2 0 3 2 1 4 0 2 3 0
Five buckets, indexed 0 to 4, are created. Each element is dropped
into the bucket matching its value, and a count keeps track of
duplicates. Processing the input one element at a time (4, then 2,
then 1, and so on) gives the final bucket counts:

value: 0 1 2 3 4
count: 3 2 4 2 2

Reading the buckets in order, repeating each value count times,
yields the sorted output:

0 0 0 1 1 2 2 2 2 3 3 4 4
Bucket Sort
• One value per bucket:
Algorithm BucketSort( S )
( values in S are between 0 and m-1 )
for j ← 0 to m-1 do          // initialize m buckets
    b[j] ← 0
for i ← 0 to n-1 do          // place elements in their
    b[S[i]] ← b[S[i]] + 1    // appropriate buckets
i ← 0
for j ← 0 to m-1 do          // place elements in buckets
    for r ← 1 to b[j] do     // back in S (concatenation)
        S[i] ← j
        i ← i + 1
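A direct, runnable Python translation of this pseudocode (variable names follow
the slide; the test call is ours):

def bucket_sort_counts(S, m):
    """Sort a list S of integers in range 0..m-1 using one counter per bucket."""
    b = [0] * m                 # initialize m buckets
    for x in S:                 # place elements in their appropriate buckets
        b[x] += 1
    i = 0
    for j in range(m):          # concatenate the buckets back into S
        for _ in range(b[j]):
            S[i] = j
            i += 1
    return S

print(bucket_sort_counts([4, 2, 1, 2, 0, 3, 2, 1, 4, 0, 2, 3, 0], 5))
# [0, 0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 4, 4]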
Bucket Sort
One value per bucket (Analysis)
• Bucket initialization: O( m )
• From array to buckets: O( n )
• From buckets to array: O( n )
• (each element is dequeued from its bucket in constant time)
• Since m will likely be small compared to n, Bucket
sort is O( n )
• Strictly speaking, time complexity is O( n + m )
Bucket Sort
• Multiple items per bucket:
Input: .20 .12 .52 .63 .64 .36 .37 .47 .58 .18 .88 .09 .99
Ten buckets, indexed 0 to 9, are created, and each element x is
inserted into bucket ⌊10x⌋ (i.e. by its first decimal digit).
Processing the input one element at a time fills the buckets as
follows:

bucket 0: .09
bucket 1: .12 .18
bucket 2: .20
bucket 3: .36 .37
bucket 4: .47
bucket 5: .52 .58
bucket 6: .63 .64
bucket 7: (empty)
bucket 8: .88
bucket 9: .99

Each bucket is then sorted internally with a stable sort
(e.g. insertion sort), and the buckets are concatenated in order,
producing the sorted output:

.09 .12 .18 .20 .36 .37 .47 .52 .58 .63 .64 .88 .99
Bucket Sort
• Multiple items per bucket:
Algorithm BucketSort( A )
1. Let B[0..(n − 1)] be a new array.
2. n ← A.length
3. for i ← 0 to n − 1
4.     make B[i] an empty list
5. for i ← 1 to n
6.     insert A[i] into list B[⌊n · A[i]⌋]
7. for i ← 0 to n − 1
8.     sort list B[i] with a stable sort (insertion sort)
9. Concatenate the lists B[0], B[1], B[2], ..., B[n − 1]
   together in order.
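A runnable Python sketch of this procedure for keys in [0, 1) (Python's built-in
stable sort stands in for the insertion sort of line 8; the example values are the
slide's):

def bucket_sort(A):
    """Sort floats in [0, 1): distribute into n buckets, sort each, concatenate."""
    n = len(A)
    B = [[] for _ in range(n)]      # n empty buckets
    for x in A:
        B[int(n * x)].append(x)     # bucket index = floor(n * x)
    result = []
    for bucket in B:
        bucket.sort()               # stable per-bucket sort
        result.extend(bucket)       # concatenate the buckets in order
    return result

print(bucket_sort([.20, .12, .52, .63, .64, .36, .37, .47, .58, .18, .88, .09, .99]))
# [0.09, 0.12, 0.18, 0.2, 0.36, 0.37, 0.47, 0.52, 0.58, 0.63, 0.64, 0.88, 0.99]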
Bucket Sort
Multiple items per bucket (Analysis)
• Observe that except for line no. 8, all other lines
take Ο(n) time in the worst case.
• Line no. 8 (i.e. insertion sort) takes Ο(n²) in the worst case.
• The average time complexity for Bucket Sort is
O(n + k) under a uniform distribution.
Bucket Sort
Characteristics of Bucket Sort
• Bucket sort assumes that the input is drawn from a
uniform distribution.
• The computational complexity estimates involve the
number of buckets.
• Bucket sort can be exceptionally fast because of the
way elements are assigned to buckets, typically using
an array where the index is the value.
Bucket Sort
Characteristics of Bucket Sort
• This means that bucket sort trades more auxiliary memory
for the buckets in exchange for a lower running time than
comparison sorts.
• The average time complexity is O(n + k).
• The worst-case time complexity is O(n²).
• The space complexity for Bucket Sort is O(n + k).
Design and Analysis of Algorithm

Recurrence Equation
(Solving Recurrences using the
Recursion Tree Method)

Lecture – 8 and 9
Overview
• A recurrence is a function that is defined in terms of
– one or more base cases, and
– itself, with smaller arguments.
Examples:
    T(n) = Θ(1)              if n = 1
    T(n) = 2T(n/2) + Θ(n)    if n > 1
Overview
• Many technical issues:
• Floors and ceilings
[Floors and ceilings can easily be removed and don’t affect
the solution to the recurrence. They are better left to a
discrete math course.]
• Exact vs. asymptotic functions
• Boundary conditions
Overview

In algorithm analysis, the recurrence and its solution are
expressed with the help of asymptotic notation.
• Example: T(n) = 2T(n/2) + Θ(n), with solution
T(n) = Θ(n lg n).
• The boundary conditions are usually expressed as
T(n) = Ο(1) for sufficiently small n.
• But when an exact, rather than an asymptotic,
solution is desired, the boundary conditions must be
dealt with.
• In practice, just use asymptotics most of the time,
and ignore boundary conditions.
Recursive Function
• Example
    A(n)
    {
        If (n > 1)
            Return A(n/2)
    }
This relation is called a recurrence relation.
The recurrence relation of the given function is written as follows:
    T(n) = T(n/2) + 1
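For intuition, the same function in runnable Python, extended with a call counter
(our addition) that makes T(n) = T(n/2) + 1 = Θ(log n) visible:

def A(n, calls=1):
    """Mirror of the slide's recursive function; 'calls' tracks the recursion depth."""
    if n > 1:
        return A(n // 2, calls + 1)   # one unit of work, then recurse on n/2
    return calls

print(A(64))   # 7: the input is halved log2(64) = 6 times, plus the initial call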
Recursive Function
• To solve a recurrence relation, the following methods
are used:
1. Substitution Method (Forward and Backward (Iteration))
2. Recursion-Tree method
3. Master Method
Recursion Tree Method
• Recursion Tree is another method for solving recurrence relations.
• In a recursion tree, each node represents the cost of a single
subproblem.
• This method works in two steps:
• First, a set of per-level costs is obtained by summing the costs
within each level of the tree, using the height of the tree.
• Second, to determine the total cost of all levels of recursion, we sum
all the per-level costs.
• This method is best used for generating a good guess.
• For generating a good guess, we can ignore floors (⌊ ⌋) and ceilings
(⌈ ⌉) when solving the recurrences, because they usually do not
affect the final guess.
Recursion Tree Method (Example 1)
Example 1

Solve the recurrence T(n) = 3T(⌊n/4⌋) + Θ(n²) by using the recursion tree method.

Ans:

We start by focusing on finding an upper bound for the solution by using a good guess.
As we know that floors and ceilings usually do not matter when solving
recurrences, we drop the floor and write the recurrence equation as follows:
    T(n) = 3T(n/4) + cn²,  c > 0

The term cn² at the root represents the cost at the top level of recursion; its
three subtrees represent the costs incurred by the subproblems of size n/4.
Recursion Tree Method (Example 1)
    T(n) = 3T(n/4) + cn²,  c > 0 is a constant

Fig (a): a single node T(n).
Figure (a) shows T(n), which progressively expands in (b)-(d) to form the recursion
tree.

Fig (b): a root of cost cn² with three children T(n/4), T(n/4), T(n/4).
Recursion Tree Method (Example 1)
    T(n) = 3T(n/4) + cn²,  c > 0 is a constant

Fig (c): the root cn² now has three children of cost c(n/4)², each of which
has three children T(n/16).
Recursion Tree Method (Example 1)
    T(n) = 3T(n/4) + cn²,  c > 0 is a constant

Fig (d): the fully expanded recursion tree of height log₄ n. Level i contains
3^i nodes, each of cost c(n/4^i)², and the bottom level consists of the
leaves T(1), T(1), ..., T(1).
Recursion Tree Method (Example 1)

Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/4^i,
i.e. the subproblem size hits n = 1 when n/4^i = 1.
So, if n/4^i = 1
    ⟹ n = 4^i      (apply log on both sides)
    ⟹ log₄ n = log₄ 4^i
    ⟹ i = log₄ n
So the height of the tree is log₄ n.
Recursion Tree Method (Example 1)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 3^i.
Each node at depth i (i = 0, 1, 2, 3, 4, ..., log₄ n − 1) has cost c(n/4^i)².

Hence the total cost at level i is 3^i · c(n/4^i)²

    ⟹ 3^i · c · n²/(4^i)²
    ⟹ 3^i · c · n²/16^i
    ⟹ (3/16)^i · c · n²
Recursion Tree Method (Example 1)

However, the bottom level is special. Each of the bottom nodes
contributes cost T(1).
Hence the cost of the bottom level is 3^i · T(1)
    ⟹ 3^(log₄ n)     (as i = log₄ n, the height of the tree)
    ⟹ Θ(n^(log₄ 3))
So, the total cost of the entire tree is
    T(n) = cn² + (3/16)cn² + (3/16)²cn² + (3/16)³cn² + ... + (3/16)^(log₄ n − 1) cn² + Θ(n^(log₄ 3))

    T(n) = Σ_{i=0}^{log₄ n − 1} (3/16)^i cn² + Θ(n^(log₄ 3))
Recursion Tree Method (Example 1)
The left term is just the sum of a geometric series, so the value of T(n) is as follows:

    T(n) = ((3/16)^(log₄ n) − 1)/((3/16) − 1) · cn² + Θ(n^(log₄ 3))

The above equation looks very complicated, so we use an infinite geometric series as an upper
bound. Hence the new form of the equation is given below:

    T(n) = Σ_{i=0}^{log₄ n − 1} (3/16)^i cn² + Θ(n^(log₄ 3))

    T(n) ≤ Σ_{i=0}^{∞} (3/16)^i cn² + Θ(n^(log₄ 3))

    T(n) ≤ 1/(1 − 3/16) · cn² + Θ(n^(log₄ 3))

    T(n) ≤ (16/13) cn² + Θ(n^(log₄ 3))

    T(n) = Ο(n²)
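A quick numeric sanity check of this bound, assuming c = 1 and T(1) = 1 (both
choices are ours): evaluate the recurrence with memoization and watch T(n)/n²
stay below 16/13.

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 3*T(floor(n/4)) + n^2 (taking c = 1), with T(1) = 1."""
    if n <= 1:
        return 1
    return 3 * T(n // 4) + n * n

for n in [4**k for k in range(2, 8)]:
    print(n, T(n) / n**2)   # the ratio stays below 16/13 ≈ 1.2308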
Recursion Tree Method (Example 2)
Example 2

Solve the recurrence T(n) = 4T(n/2) + n by using the recursion tree
method.
    T(n) = 4T(n/2) + cn,  c > 0
The term cn at the root represents the cost at the top level of recursion;
its four subtrees represent the costs incurred by the subproblems of size n/2.

Construction of the recursion tree:

Fig (a): a single node T(n).

Figure (a) shows T(n), which progressively expands in (b) to form the
recursion tree.
Recursion Tree Method (Example 2)
    T(n) = 4T(n/2) + cn,  c > 0

Fig (b): the fully expanded recursion tree of height log₂ n. The root costs
cn, level 1 has four nodes of cost c(n/2), level i has 4^i nodes of cost
c(n/2^i), and the bottom level consists of the leaves T(1), T(1), ..., T(1).
Recursion Tree Method (Example 2)

Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/2^i,
i.e. the subproblem size hits n = 1 when n/2^i = 1.
So, if n/2^i = 1
    ⟹ n = 2^i      (apply log on both sides)
    ⟹ log₂ n = log₂ 2^i
    ⟹ i = log₂ n
So the height of the tree is log₂ n.
Recursion Tree Method (Example 2)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 4^i.
Each node at depth i (i = 0, 1, 2, 3, 4, ..., log₂ n − 1) has cost
c(n/2^i).
Hence the total cost at level i is 4^i · c(n/2^i)
    ⟹ 4^i · c · n/2^i
    ⟹ (4/2)^i · cn
    ⟹ 2^i · cn
Recursion Tree Method (Example 2)
However, the bottom level is special. Each of the bottom nodes contributes
cost T(1).
Hence the cost of the bottom level is 4^i · T(1)
    ⟹ 4^(log₂ n)     (as i = log₂ n, the height of the tree)
    ⟹ Θ(n²)

So, the total cost of the entire tree is

    T(n) = cn + 2cn + 2²cn + 2³cn + ... + 2^(log₂ n − 1) cn + Θ(n²)

    T(n) = Σ_{i=0}^{log₂ n − 1} 2^i cn + Θ(n²)
Recursion Tree Method (Example 2)
The left term is just the sum of a geometric series, so the value of T(n) is as follows:

    T(n) = ((2^(log₂ n) − 1)/(2 − 1)) · cn + Θ(n²)

    T(n) = cn(n − 1) + cn²

    T(n) = cn² − cn + cn²

    T(n) = 2cn² − cn

    Hence, T(n) = Θ(n²)
Recursion Tree Method (Example 3)
Example 3
Solve the recurrence T(n) = 2T(n/2) + Θ(n) by using the recursion tree
method.
    T(n) = 2T(n/2) + cn,  c > 0
The term cn at the root represents the cost at the top level of recursion;
its two subtrees represent the costs incurred by the subproblems of size n/2.
Construction of the recursion tree:
Fig (a): a single node T(n).
Figure (a) shows T(n), which progressively expands in (b) to form the
recursion tree.
Recursion Tree Method (Example 3)
    T(n) = 2T(n/2) + cn,  c > 0

Fig (b): the fully expanded recursion tree of height log₂ n. The root costs
cn, level 1 has two nodes of cost c(n/2), level 2 has four nodes of cost
c(n/4), and the bottom level consists of the leaves T(1), T(1), ..., T(1).
Recursion Tree Method (Example 3)
Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/2^i,
i.e. the subproblem size hits n = 1 when n/2^i = 1.
So, if n/2^i = 1
    ⟹ n = 2^i      (apply log on both sides)
    ⟹ log₂ n = log₂ 2^i
    ⟹ i = log₂ n
So the height of the tree is log₂ n.
Recursion Tree Method (Example 3)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 2^i.
Each node at depth i (i = 0, 1, 2, 3, 4, ..., log₂ n − 1) has cost
c(n/2^i).
Hence the total cost at level i is 2^i · c(n/2^i) = cn
However, the bottom level is special. Each of the bottom nodes
contributes cost T(1).
Hence the cost of the bottom level is 2^i · T(1)
    ⟹ 2^(log₂ n)     (as i = log₂ n, the height of the tree)
    ⟹ n
    ⟹ Θ(n)
Recursion Tree Method (Example 3)

So, the total cost of the entire tree is

    T(n) = cn + cn + cn + cn + ... + cn

    T(n) = cn · Σ_{i=0}^{log₂ n} 1

    T(n) = cn (log₂ n + 1)

    T(n) = cn log₂ n + cn

    Hence, T(n) = Θ(n log n)
Recursion Tree Method (Example 4)
Example 4
Solve the recurrence T(n) = T(n/2) + T(n/4) + T(n/8) + Θ(n) by using the
recursion tree method.
    T(n) = T(n/2) + T(n/4) + T(n/8) + cn,  c > 0
The term cn at the root represents the cost at the top level of recursion;
its subtrees represent the costs incurred by the subproblems of size
n/2, n/4, and n/8.
Construction of the recursion tree:
Fig (a): a single node T(n).
Figure (a) shows T(n), which progressively expands in (b) to form the
recursion tree.
Recursion Tree Method (Example 4)
    T(n) = T(n/2) + T(n/4) + T(n/8) + cn,  c > 0

Fig (b): the root costs cn and has children of cost c(n/2), c(n/4), and
c(n/8); each of these expands into three subproblems in turn, down to the
leaves T(1), T(1), ..., T(1). The longest (leftmost) path has length log₂ n.
Recursion Tree Method (Example 4)
Analysis
First, we find the height of the recursion tree.

Here the problem divides into three subproblems of size n/2, n/4, and n/8.

For calculating the height of the tree, we consider the longest path of the tree. It has
been observed that the leftmost path (which always divides by 2) is the longest path of
the tree. Hence a node at depth i on this path reflects a subproblem of size n/2^i,

i.e. the subproblem size hits n = 1 when n/2^i = 1.

So, if n/2^i = 1
    ⟹ n = 2^i      (apply log on both sides)
    ⟹ log₂ n = log₂ 2^i
    ⟹ i = log₂ n
So the height of the tree is log₂ n.
Recursion Tree Method (Example 4)
Second, we determine the cost of the tree at level i: (1/2 + 1/4 + 1/8)^i cn = (7/8)^i cn.
So, the total cost of the tree is:

    T(n) = cn + (7/8)cn + (7/8)²cn + (7/8)³cn + ...

For simplicity we take the infinite geometric series as an upper bound:

    T(n) ≤ Σ_{i=0}^{∞} (7/8)^i cn

    T(n) ≤ 1/(1 − 7/8) · cn

    T(n) ≤ 8cn

    Hence, T(n) = Ο(n)
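A similar numeric check for this recurrence, again assuming c = 1 and T(1) = 1:
the ratio T(n)/n indeed stays below the bound of 8.

from functools import lru_cache

@lru_cache(maxsize=None)
def T4(n):
    """T(n) = T(n/2) + T(n/4) + T(n/8) + n (taking c = 1), with T(1) = 1."""
    if n <= 1:
        return 1
    return T4(n // 2) + T4(n // 4) + T4(n // 8) + n

for n in [2**k for k in range(4, 20, 3)]:
    print(n, T4(n) / n)   # the ratio stays below 8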
Design and Analysis of Algorithm
(BCS503)

Implementation of Iterative Approach
Insertion Sort with Analysis

Lecture - 3
How do we analyze an algorithm's running
time?
• Input size: Depends on the problem being studied.
– Usually, the number of items in the input. Like the size n of the
array being sorted.
– But could be something else. If multiplying two integers, could
be the total number of bits in the two integers.
– Could be described by more than one number. For example,
graph algorithm running times are usually expressed in terms of
the number of vertices and the number of edges in the input
graph.
• Running time: On a particular input, it is the number
of primitive operations (steps) executed.
– Want to define steps to be machine-independent.
– Figure that each line of pseudo code requires a constant
amount of time.
– One line may take a different amount of time than another,
but each execution of line i takes the same amount of time
ci .
– This is assuming that the line consists only of primitive
operations.
• If the line is a subroutine call, then the actual call takes
constant time, but the execution of the subroutine being
called might not.
• If the line specifies operations other than primitive ones,
then it might take more than constant time.
A Sorting Problem (Incremental Approach)
Input: A sequence of n numbers ⟨a1, a2, . . . , an⟩.
Output: A permutation (reordering) ⟨a1′, a2′, . . . , an′⟩ of the input
sequence such that a1′ ≤ a2′ ≤ · · · ≤ an′.

The sequences are typically stored in arrays.

We also refer to the numbers as keys. Along with each key may be
additional information, known as satellite data. (Note that “satellite
data” does not necessarily come from a satellite!)

We will see several ways to solve the sorting problem. Each way will be
expressed as an algorithm: a well-defined computational procedure that
takes some value, or set of values, as input and produces some value, or
set of values, as output.
Insertion sort
• A good algorithm for sorting a small number of elements.

• It works the way you might sort a hand of playing cards:

– Start with an empty left hand and the cards face down on the
table.
– Then remove one card at a time from the table, and insert it
into the correct position in the left hand.
– To find the correct position for a card, compare it with each of
the cards already in the hand, from right to left.
– At all times, the cards held in the left hand are sorted, and
these cards were originally the top cards of the pile on the
table.
Insertion sort (Example)
Insertion sort (Algorithm)
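For reference alongside the correctness proof that follows, here is a runnable
Python version of the standard insertion sort (a 0-indexed adaptation of the
1-indexed pseudocode the proof refers to; the test array is ours):

def insertion_sort(a):
    """Standard insertion sort: grow a sorted prefix one element at a time."""
    for j in range(1, len(a)):        # a[0..j-1] is already sorted
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:  # shift larger elements one position right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                # insert key into its proper position
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]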
Correctness Proof of Insertion Sort
• Initialization: Just before the first iteration, j = 2. The sub array
A[1 . . j − 1] is the single element A[1], which is the element originally
in A[1], and it is trivially sorted.

• Maintenance: To be precise, we would need to state and prove a loop
invariant for the “inner” while loop. Rather than getting bogged down
in another loop invariant, we instead note that the body of the inner
while loop works by moving A[ j − 1], A[ j − 2], A[ j − 3], and so on, by
one position to the right until the proper position for key (which has the
value that started out in A[ j ]) is found. At that point, the value of key is
placed into this position.

• Termination: The outer for loop ends when j > n; this occurs when j
= n + 1. Therefore, j −1 = n. Plugging n in for j −1 in the loop invariant,
the sub array A[1 . . n] consists of the elements originally in A[1 . . n]
but in sorted order. In other words, the entire array is sorted!
Analysis of insertion sort

• Assume that the i-th line takes time ci, which is a
constant. (Since the third line is a comment, it
takes no time.)
• For j = 2, 3, . . . , n, let tj be the number of times
that the while loop test is executed for that value
of j .
• Note that when a for or while loop exits in the
usual way-due to the test in the loop header-the
test is executed one time more than the loop body.
Running Time of Insertion Sort
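The slide tallies cost times count per line; in the usual CLRS notation (the
symbols c1, ..., c8 and tj are the textbook's, assumed here rather than taken
from the slide), the total running time is:

    T(n) = c1·n + c2(n − 1) + c4(n − 1) + c5 Σ_{j=2}^{n} tj
         + c6 Σ_{j=2}^{n} (tj − 1) + c7 Σ_{j=2}^{n} (tj − 1) + c8(n − 1)

Best case (array already sorted, tj = 1 for all j): T(n) is a linear function
of n, i.e. Θ(n). Worst case (array in reverse order, tj = j): T(n) is a
quadratic function of n, i.e. Θ(n²).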
Home Assignment

• Write the algorithm of Bubble sort and
calculate the time complexity.
Bubble sort

• compare adjacent elements of the list
• exchange them if they are out of order:
the largest element “bubbles up” to the end of the list
• next pass: the 2nd largest element bubbles up
• repeat n−1 times until all elements are
sorted
Pseudocode for Bubble Sort
BubbleSort(A[0..n-1])
# sorts a given array by bubble sort
# input: array A[0..n-1] of orderable elements
# output: array A[0..n-1] sorted in non-decreasing order
for i = 0 to n-2 do
    for j = 0 to n-2-i do
        if A[j+1] < A[j]:
            swap A[j] and A[j+1]
Analysis of Bubble Sort
• basic operation: key comparison
• number of key comparisons same for all arrays
C(n) = Σ_{i=0}^{n−2} Σ_{j=0}^{n−2−i} 1 = Σ_{i=0}^{n−2} [(n−2−i) − 0 + 1]
     = Σ_{i=0}^{n−2} (n−1−i) = n(n−1)/2 ∈ Θ(n²)
• number of key swaps is dependent on the input
• in worst case (decreasing array): same as the number
of key comparisons
• can make a simple tweak to improve the algorithm: if
there are no exchanges during a pass, the list is sorted
and we can stop. It is still Θ(n2) on worst and average
cases.
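The tweak mentioned above, as a runnable Python sketch (the swapped flag and
test array are ours):

def bubble_sort(a):
    """Bubble sort with the early-exit tweak: stop when a pass makes no swaps."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):        # largest remaining element bubbles up
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                   # no exchanges: the list is sorted
            break
    return a

print(bubble_sort([89, 45, 68, 90, 29, 34, 17]))   # [17, 29, 34, 45, 68, 89, 90]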