DAA (College)
Lecture -21
Overview
Counting Sort (placement loop)
for j = A.length down to 1
    B[C[A[j]]] = A[j];
    C[A[j]] = C[A[j]] - 1;

Trace with input A[1..8] = 2 5 3 0 2 3 0 3 and the prefix-sum array
C[0..5] = 2 2 4 7 7 8:

j = 8: A[8] = 3 → B[7] = 3, C becomes 2 2 4 6 7 8
j = 7: A[7] = 0 → B[2] = 0, C becomes 1 2 4 6 7 8
j = 6: A[6] = 3 → B[6] = 3, C becomes 1 2 4 5 7 8
j = 5: A[5] = 2 → B[4] = 2, C becomes 1 2 3 5 7 8
j = 4: A[4] = 0 → B[1] = 0, C becomes 0 2 3 5 7 8
j = 3: A[3] = 3 → B[5] = 3, C becomes 0 2 3 4 7 8
j = 2: A[2] = 5 → B[8] = 5, C becomes 0 2 3 4 7 7
j = 1: A[1] = 2 → B[3] = 2, C becomes 0 2 2 4 7 7

After the loop, B[1..8] = 0 0 2 2 3 3 3 5, which is the sorted output.
Counting Sort
Counting-Sort(A, B, k)
1. Let C[0..k] be a new array
2. for i = 0 to k
3.     C[i] = 0;
4. for j = 1 to A.length
5.     C[A[j]] = C[A[j]] + 1;
6. for i = 1 to k
7.     C[i] = C[i] + C[i-1];
8. for j = A.length down to 1
9.     B[C[A[j]]] = A[j];
10.    C[A[j]] = C[A[j]] - 1;
Counting Sort
Counting-Sort(A, B, k)
1. Let C[0..k] be a new array
2. for i = 0 to k                    [Loop 1]
3.     C[i] = 0;
4. for j = 1 to A.length             [Loop 2]
5.     C[A[j]] = C[A[j]] + 1;
6. for i = 1 to k                    [Loop 3]
7.     C[i] = C[i] + C[i-1];
8. for j = A.length down to 1        [Loop 4]
9.     B[C[A[j]]] = A[j];
10.    C[A[j]] = C[A[j]] - 1;
Complexity Analysis
Counting-Sort(A, B, k)
1. Let C[0..k] be a new array
2. for i = 0 to k                    [Loop 1]  O(k) times
3.     C[i] = 0;
4. for j = 1 to A.length             [Loop 2]  O(n) times
5.     C[A[j]] = C[A[j]] + 1;
6. for i = 1 to k                    [Loop 3]  O(k) times
7.     C[i] = C[i] + C[i-1];
8. for j = A.length down to 1        [Loop 4]  O(n) times
9.     B[C[A[j]]] = A[j];
10.    C[A[j]] = C[A[j]] - 1;
Complexity Analysis
• So counting sort takes a total time of O(n + k).
• Counting sort is a stable sort.
(A sorting algorithm is stable when numbers with the same value appear in
the output array in the same order as they do in the input array.)
Pros and Cons of Counting Sort
• Pros
  • Asymptotically fast - O(n + k)
  • Simple to code
• Cons
  • Doesn't sort in place.
  • Requires O(n + k) extra storage space.
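The full Counting-Sort pseudocode can be rendered in Python as follows (a hedged sketch: 0-based lists replace the slides' 1-based arrays, so the decrement happens just before the placement):

```python
def counting_sort(A, k):
    """Stable counting sort of A, whose values lie in 0..k; returns B."""
    n = len(A)
    C = [0] * (k + 1)            # steps 1-3: counts, initialised to 0
    for x in A:                  # steps 4-5: C[x] = occurrences of x
        C[x] += 1
    for i in range(1, k + 1):    # steps 6-7: prefix sums -> final positions
        C[i] += C[i - 1]
    B = [0] * n
    for x in reversed(A):        # steps 8-10: right-to-left keeps it stable
        C[x] -= 1                # 0-based: decrement first, then place
        B[C[x]] = x
    return B

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))  # [0, 0, 2, 2, 3, 3, 3, 5]
```

Running it on the slides' input reproduces the traced output B.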
Design and Analysis of Algorithm
Recurrence Equation
(Solving Recurrence using
Iteration Methods)
Lecture – 6 and 7
Overview
• A recurrence is a function that is defined in terms of
– one or more base cases, and
– itself, with smaller arguments.
Examples:
Overview
• Many technical issues:
• Floors and ceilings
[Floors and ceilings can easily be removed and don’t affect
the solution to the recurrence. They are better left to a
discrete math course.]
• Exact vs. asymptotic functions
• Boundary conditions
Head Recursion: the recursive call is made before
any other operations in the function. Uses more
stack space as each call must wait for the
subsequent calls to complete. Generally harder to
convert into iterative loops.
Tail Recursion: the recursive call is the last
operation in the function. More efficient in terms
of stack space as the compiler can optimize tail-
recursive calls to reuse the same stack frame.
Easier to convert into iterative loops.
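The distinction can be illustrated with two versions of factorial (illustrative examples of my own, not from the slides):

```python
def fact_head(n):
    # Head-style recursion: the recursive call happens first and the
    # multiplication waits on the stack for it to return.
    if n <= 1:
        return 1
    return n * fact_head(n - 1)

def fact_tail(n, acc=1):
    # Tail recursion: the recursive call is the last operation; the running
    # product travels in the accumulator `acc`, so a compiler could reuse
    # the stack frame (or the whole function could become a loop).
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

print(fact_head(5), fact_tail(5))  # 120 120
```

Note that CPython does not actually optimise tail calls; the point is the structural difference.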
Selection Sort
• Iterative
SelectionSort(A[0..n-1])
# sort given array by selection sort
# input: array A[0..n-1] of orderable elements
# output: array A[0..n-1] sorted in non-decreasing order
for i = 0 to n-2 do
    min = i
    for j = i+1 to n-1 do
        if A[j] < A[min]:
            min = j
    swap A[i] and A[min]

• Recursive (Tail recursion)
void selectionSort(int array[]) { sort(array, 0); }
void sort(int[] array, int i) {
    if (i < array.length - 1) {
        int j = smallest(array, i);   // T(n)
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
        sort(array, i + 1);           // T(n)
    }
}
int smallest(int[] array, int j) {    // T(n - k)
    if (j == array.length - 1)
        return array.length - 1;
    int k = smallest(array, j + 1);
    return array[j] < array[k] ? j : k;
}

Recurrence:
T(n) = T(n-1) + O(n)   if n > 1
T(n) = 1               if n = 1
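The recursive (tail-recursive) version above can be sketched in Python with the same sort/smallest structure (a hedged translation using 0-based indexing):

```python
def smallest(arr, j):
    # Index of the minimum element in arr[j:], found recursively.
    if j == len(arr) - 1:
        return j
    k = smallest(arr, j + 1)
    return j if arr[j] < arr[k] else k

def selection_sort(arr, i=0):
    # Place the minimum of arr[i:] at position i, then recurse on i+1.
    if i < len(arr) - 1:
        j = smallest(arr, i)
        arr[i], arr[j] = arr[j], arr[i]
        selection_sort(arr, i + 1)
    return arr

print(selection_sort([35, 33, 42, 10, 14]))  # [10, 14, 33, 35, 42]
```

Each level does a linear scan via `smallest`, which is where the O(n) term of the recurrence comes from.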
• Try to recall the recursive version of Linear Search.

Solving the selection-sort recurrence by iteration:
T(n) = T(n-1) + O(n)   if n > 1
T(n) = 1               if n = 1

Write the O(n) term as cn:
T(n) = T(n-1) + cn ................(1)
T(n-1) = T(n-2) + c(n-1) ..........(2)
Putting eq 2 in 1:
T(n) = T(n-2) + c(n-1) + cn .......(3)
T(n-2) = T(n-3) + c(n-2) ..........(4)
Putting eq 4 in 3:
T(n) = T(n-3) + c(n-2) + c(n-1) + c(n)
kth term: T(n) = T(n-k) + c((n-k+1) + ... + (n-1) + n)
Putting n-k = 1 => k = n-1:
T(n) = T(1) + c(2 + 3 + ... + n) = 1 + c(n(n+1)/2 - 1) = O(n²)
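The closed form can be sanity-checked numerically (my own check, taking c = 1 and T(1) = 1, so T(n) = n(n+1)/2):

```python
def T(n, c=1):
    # Direct evaluation of T(n) = T(n-1) + c*n with T(1) = 1.
    t = 1
    for m in range(2, n + 1):
        t += c * m
    return t

# With c = 1 the derivation gives T(n) = 1 + (2 + 3 + ... + n) = n(n+1)/2,
# i.e. Theta(n^2) growth.
for n in (1, 5, 10, 100):
    assert T(n) == n * (n + 1) // 2
print(T(100))  # 5050
```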
Iteration Method (Example 3)
Solve T(n) = 8T(n/2) + n², T(1) = 1 (n a power of 2).

Iterating k times:
T(n) = 8^k T(n/2^k) + n²(2^(k-1) + ... + 2² + 2 + 1)
T(n) = 8^k T(n/2^k) + n² · (2^k - 1)/(2 - 1)
Putting n/2^k = 1 ⟹ k = log₂ n:
T(n) = 8^(log₂ n) · T(1) + n²(n - 1)
T(n) = n^(log₂ 8) + n²(n - 1)        (as log₂ 8 = 3 and log₂ 2 = 1)
T(n) = n³ + n³ - n²
T(n) = 2n³ - n²
Hence T(n) = Ο(n³)
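The closed form T(n) = 2n³ - n² can be verified for powers of two (a quick numeric check of the derivation above):

```python
def T(n):
    # T(n) = 8*T(n/2) + n^2 for n > 1, T(1) = 1 (n a power of two).
    if n == 1:
        return 1
    return 8 * T(n // 2) + n * n

for n in (1, 2, 4, 8, 16, 64):
    assert T(n) == 2 * n**3 - n**2   # matches the iteration-method result
print(T(8))  # 960
```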
Iteration Method (Example 4)
Example 4:
Solve the following recurrence relation by using the Iteration method.
T(n) = 7T(n/2) + n²   if n > 1
T(n) = 1              if n = 1
(i.e. Strassen's Algorithm)
Iteration Method (Example 4)
It means T(n) = 7T(n/2) + n² if n > 1 and T(n) = 1 when n = 1 ----(1)
Put n = n/2 in equation 1, we get
T(n/2) = 7T(n/4) + (n/2)²
Put the value of T(n/2) in equation 1, we get
T(n) = 7[7T(n/4) + (n/2)²] + n²
T(n) = 7²T(n/4) + 7(n²/4) + n² -------------(2)
Put n = n/4 in equation 1, we get
T(n/4) = 7T(n/8) + (n/4)²
Iteration Method (Example 4)
Put the value of T(n/4) in equation 2, we get
T(n) = 7²[7T(n/8) + (n/4)²] + 7(n²/4) + n²
T(n) = 7³T(n/8) + 7²(n²/4²) + 7(n²/4) + n² -------------(3)
......
T(n) = 7^k T(n/2^k) + 7^(k-1) n²/4^(k-1) + ... + 7(n²/4) + n² ---(kth term)
T(n) = 7^k T(n/2^k) + n²[(7/4)^(k-1) + ... + (7/4)² + 7/4 + 1]
T(n) = 7^k T(n/2^k) + n² · ((7/4)^k - 1)/(7/4 - 1) ----(4)
Iteration Method (Example 4)
As we know, the sum of a finite geometric series is
a + ar + ar² + . . . + ar^(m-1) = a(r^m - 1)/(r - 1)
Here, the total number of terms is log₂ n, a = 1 and r = 7/4.
Putting k = log₂ n and T(1) = 1 in equation (4):
T(n) = 7^(log₂ n) + n² · ((7/4)^(log₂ n) - 1)/(7/4 - 1)
     = n^(log₂ 7) + (4/3)(n^(log₂ 7) - n²)     (since n²·(7/4)^(log₂ n) = n^(log₂ 7))
     = (7/3)n^(log₂ 7) - (4/3)n²
Hence T(n) = Ο(n^(log₂ 7)) ≈ Ο(n^2.81).
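The resulting closed form (7·7^(log₂ n) - 4n²)/3 can be verified for powers of two (a numeric check; the closing of the sum is my own step):

```python
def T(n):
    # T(n) = 7*T(n/2) + n^2 for n > 1, T(1) = 1 (n a power of two).
    if n == 1:
        return 1
    return 7 * T(n // 2) + n * n

for k in range(7):
    n = 2 ** k
    # Closed form from the geometric series with r = 7/4:
    # 3*T(n) = 7 * 7**k - 4 * n**2, i.e. T(n) = Theta(n^(log2 7)).
    assert 3 * T(n) == 7 * 7**k - 4 * n**2
print(T(4))  # 93
```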
Lecture -10 - 11
Overview
BUILD-MAX-HEAP(A, n)
for i ← ⌊n/2⌋ downto 1
    do MAX-HEAPIFY(A, i, n)

Input array A[1..8] = 9 6 5 0 8 2 1 3
(viewed as a binary tree: root 9; children 6 and 5; next level 0, 8, 2, 1; leaf 3)
Tighter analysis Proof
• For easy understanding, let us take a complete binary tree.
The cost of MAX-HEAPIFY on a node of height h is O(h), and a heap of n
elements has at most ⌈n/2^(h+1)⌉ nodes of height h, so
⟹ Ο( Σ_{h=0}^{⌊log n⌋} ⌈n/2^(h+1)⌉ · h )
⟹ Ο( n Σ_{h=0}^{⌊log n⌋} h/2^h )
⟹ Ο( n Σ_{h=0}^{∞} h/2^h )
⟹ Ο( n · (1/2)/(1 - 1/2)² )
⟹ Ο( 2n )
T(n) ⟹ Ο(n)
Hence the running time of BUILD-MAX-HEAP(A, n) is Ο(n) in tight bound.
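BUILD-MAX-HEAP can be sketched in Python (a hedged translation; 0-based indices, so the loop starts at ⌊n/2⌋ - 1):

```python
def max_heapify(A, i, n):
    # Sink A[i] until the subtree rooted at i satisfies the max-heap property.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and A[left] > A[largest]:
        largest = left
    if right < n and A[right] > A[largest]:
        largest = right
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, n)

def build_max_heap(A):
    n = len(A)
    for i in range(n // 2 - 1, -1, -1):   # for i <- n/2 downto 1 (1-based)
        max_heapify(A, i, n)
    return A

heap = build_max_heap([9, 6, 5, 0, 8, 2, 1, 3])
# Every parent dominates its children:
assert all(heap[(i - 1) // 2] >= heap[i] for i in range(1, len(heap)))
print(heap)
```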
Tighter analysis Proof
Σ_{k=0}^{∞} x^k = 1/(1 - x)    (value of infinite GP series)
Differentiating both sides and multiplying by x:
Σ_{k=0}^{∞} k·x^k = x/(1 - x)²
With x = 1/2 this gives Σ h/2^h = 2, the sum used in the proof above.
The block form of the product C = A·B is
[C11 C12]   [A11 A12]   [B11 B12]
[C21 C22] = [A21 A22] · [B21 B22]
with input matrices
    [4 2 0 1]        [2 1 3 2]
A = [3 1 2 5]    B = [5 4 2 3]
    [3 2 1 4]        [1 4 0 2]
    [5 2 6 7]        [3 2 4 1]
Matrix Multiplication
Example 2
First we partition the input matrices into submatrices as shown below:
A11 = [4 2; 3 1], A12 = [0 1; 2 5]    B11 = [2 1; 5 4], B12 = [3 2; 2 3]
A21 = [3 2; 5 2], A22 = [1 4; 6 7]    B21 = [1 4; 3 2], B22 = [0 2; 4 1]
Matrix Multiplication
Example 2
Calculate the values of P, Q, R, S, T, U and V:
P = (A11 + A22)·(B11 + B22) = [5 6; 9 8]·[2 3; 9 5] = [64 45; 90 67]
Q = (A21 + A22)·B11 = [4 6; 11 9]·[2 1; 5 4] = [38 28; 67 47]
Matrix Multiplication
Example 2
R = A11·(B12 - B22) = [4 2; 3 1]·[3 0; -2 2] = [8 4; 7 2]
S = A22·(B21 - B11) = [1 4; 6 7]·[-1 3; -2 -2] = [-9 -5; -20 4]
Matrix Multiplication
Example 2
T = (A11 + A12)·B22 = [4 3; 5 6]·[0 2; 4 1] = [12 11; 24 16]
U = (A21 - A11)·(B11 + B12) = [-1 0; 2 1]·[5 3; 7 7] = [-5 -3; 17 13]
V = (A12 - A22)·(B21 + B22) = [-1 -3; -4 -2]·[1 6; 7 3] = [-22 -15; -18 -30]
Matrix Multiplication
Example 2
Now, compute C11, C12, C21, and C22:
C11 = P + S - T + V
    = [64 45; 90 67] + [-9 -5; -20 4] - [12 11; 24 16] + [-22 -15; -18 -30]
    = [21 14; 28 25]
C12 = R + T = [8 4; 7 2] + [12 11; 24 16] = [20 15; 31 18]
Matrix Multiplication
Example 2
C21 = Q + S = [38 28; 67 47] + [-9 -5; -20 4] = [29 23; 47 51]
C22 = P + R - Q + U
    = [64 45; 90 67] + [8 4; 7 2] - [38 28; 67 47] + [-5 -3; 17 13]
    = [29 18; 47 35]
Matrix Multiplication
Example 2
So the values of C11, C12, C21, and C22 are:
C11 = [21 14; 28 25], C12 = [20 15; 31 18], C21 = [29 23; 47 51], C22 = [29 18; 47 35]
Hence the resultant matrix C is
    [C11 C12]   [21 14 20 15]
C = [C21 C22] = [28 25 31 18]
                [29 23 29 18]
                [47 51 47 35]
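The seven products and the block recombination can be checked mechanically (a sketch for this single 2×2-block partition, using plain Python lists; the helper names are my own, the variable names P..V follow the slides):

```python
def madd(X, Y): return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]
def msub(X, Y): return [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]
def mmul(X, Y):  # ordinary 2x2 multiply, used for the seven products
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A11, A12 = [[4, 2], [3, 1]], [[0, 1], [2, 5]]
A21, A22 = [[3, 2], [5, 2]], [[1, 4], [6, 7]]
B11, B12 = [[2, 1], [5, 4]], [[3, 2], [2, 3]]
B21, B22 = [[1, 4], [3, 2]], [[0, 2], [4, 1]]

P = mmul(madd(A11, A22), madd(B11, B22))
Q = mmul(madd(A21, A22), B11)
R = mmul(A11, msub(B12, B22))
S = mmul(A22, msub(B21, B11))
T = mmul(madd(A11, A12), B22)
U = mmul(msub(A21, A11), madd(B11, B12))
V = mmul(msub(A12, A22), madd(B21, B22))

C11 = madd(msub(madd(P, S), T), V)   # C11 = P + S - T + V
C12 = madd(R, T)                     # C12 = R + T
C21 = madd(Q, S)                     # C21 = Q + S
C22 = madd(msub(madd(P, R), Q), U)   # C22 = P + R - Q + U

assert C11 == [[21, 14], [28, 25]]
assert C12 == [[20, 15], [31, 18]]
assert C21 == [[29, 23], [47, 51]]
assert C22 == [[29, 18], [47, 35]]
print("Strassen blocks verified")
```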
Design and Analysis of Algorithm
Lecture – 4 & 5
Overview
• A way to describe behaviour of functions in the limit. We’re
studying Asymptotic efficiency.
• Describe growth of functions.(i.e. The order of growth of the
running time of an algorithm)
• Focus on what’s important by abstracting away low-order
terms and constant factors.
• How we indicate running times of algorithms.
• A way to compare “sizes” of functions through different
notations (i.e. Asymptotic Notations):
Overview
Growth of function
Asymptotic Notations
Growth of function
• The growth of a function refers to how the size of the function's values
changes as the input increases.
• Growth is determined by the highest-order term among the multiple terms
of the algorithm's running time.
• Growth functions estimate how many steps an algorithm will take as the
input increases.
Some common types of growth
functions include:
Constant complexity: O(1)
Logarithmic complexity: O(log(n))
Radical complexity: O(√n)
Linear complexity: O(n)
Linearithmic complexity: O(nlog(n))
Quadratic complexity: O(n2)
Cubic complexity: O(n3)
Exponential complexity: O(bn), b>1
Factorial complexity: O(n!)
Constant <= log(log n) <= log n <= log n² <= (log n)^k <= √n <= n <=
n·log n <= n·√n <= n² <= n²·log n <= n³ <= n^(log n) <= 2^n <= 3^n <=
n! <= n^n
Complexity of an Algorithm
• Complexity of an Algorithm is a function, f(n) which
gives the running time and storage space requirement of
the algorithm in terms of size n of the input data.
n    1     2     3        10      1000
c   >=2   >=1   >=0.667  >=0.2   >=0.002
Asymptotic notation (Big Oh)
Example 1
Prove that f(n) = 2n + 3 ∈ O(n)
⟹ 2n + 3 ≤ c·g(n)
⟹ 2n + 3 ≤ c·n
⟹ 2n + 3 ≤ 5n, for n ≥ 1, c ≥ 5
Hence f(n) = O(n)

n    1     2      3     10     100
c   >=5   >=3.5  >=3   >=2.3  >=2.03
Asymptotic notation (Big Oh)
Example 2
Prove that f(n) = 2n² + 3n + 4 ∈ O(n²)
⟹ 2n² + 3n + 4 ≤ 2n² + 3n² + 4n²
⟹ 2n² + 3n + 4 ≤ 9n², where c ≥ 9 and n ≥ 1
Hence f(n) = O(n²)
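The witness constants in Examples 1 and 2 can be checked numerically (my own sanity check, not a proof: it only samples finitely many n):

```python
def holds_big_oh(f, g, c, n0, upto=10_000):
    # Check f(n) <= c*g(n) for every sampled n in [n0, upto].
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

assert holds_big_oh(lambda n: 2 * n + 3, lambda n: n, c=5, n0=1)                # Example 1
assert holds_big_oh(lambda n: 2 * n**2 + 3 * n + 4, lambda n: n**2, c=9, n0=1)  # Example 2
print("witnesses hold on the sampled range")
```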
Asymptotic notation (Big Oh)
Example 3
If f(n) = 2^(n+1) and g(n) = 2^n, then prove that f(n) ∈ Ο(g(n))
⟹ 2^(n+1) = 2·2^n
So, as per the definition of Big Oh,
f(n) ≤ c·g(n)
Hence
⟹ 2^(n+1) ≤ c·2^n
⟹ 2·2^n ≤ c·2^n, for all n > 0 and c ≥ 2
Hence, f(n) ∈ Ο(g(n))
Asymptotic notation (Big Omega)
Example 4
Prove that f(n) = 2n² + 3n + 4 ∈ Ω(n²)
f(n) ≥ c·g(n)
2n² + 3n + 4 ≥ c·n²
Hence f(n) = Ω(n²) for c ≤ 2 and n ≥ 1

n    1    2    3    4    10   100
c   <=9  <=4  <=3  <=3  <=2  <=2
Asymptotic notation (Big Omega)
Example 5
If f(n) = 3n + 2, g(n) = n², show that f(n) ∉ Ω(g(n))
For f(n) ∈ Ω(g(n)) we would need
⟹ lim_{n→∞} f(n)/g(n) ≥ c, for some constant c > 0 (say c = 1)
⟹ lim_{n→∞} (3n + 2)/n² ≥ 1
⟹ lim_{n→∞} n(3 + 2/n)/n² ≥ 1
⟹ lim_{n→∞} (3 + 2/n)/n ≥ 1
⟹ 0 ≥ 1 is false. Hence f(n) ∉ Ω(g(n))
Asymptotic notation (Big Omega)
Example 6
If f(n) = 2^n + n² and g(n) = 2^n, show that f(n) ∈ Ω(g(n))
⟹ 2^n + n² ≥ c·2^n, for c ≤ 1 and n ≥ 1
Hence, f(n) ∈ Ω(g(n)) is true

n     1      2    3        5         10
c   <=1.5  <=2  <=2.125  <=1.7812  <=1.0976
Asymptotic notation (Theta)
Example 7
Show that f(n) = 10n² + 5n + 17 ∈ Θ(n²)
As per the definition of θ notation, C₁·g(n) ≤ f(n) ≤ C₂·g(n)
⟹ 10n² ≤ 10n² + 5n + 17 ≤ 10n² + 5n² + 17n²
⟹ 10n² ≤ 10n² + 5n + 17 ≤ 32n²
So, C₁ = 10 and C₂ = 32 for all n ≥ 1
Hence, proved.
Asymptotic notation (Theta)
Example 8
Show that f(n) = (n + a)² ∈ Θ(n²)

Example 9
If f(n) = 2n, g(n) = n², prove that f(n) = ο(g(n))
Asymptotic notation (Little Oh)
Example 9
If f(n) = 2n, g(n) = n², prove that f(n) = ο(g(n))
⟹ lim_{n→∞} f(n)/g(n) = 0 must hold
⟹ lim_{n→∞} 2n/n²
⟹ lim_{n→∞} 2/n
⟹ 0
Which is true; hence f(n) = ο(g(n))
Asymptotic notation (Little Oh)
Example 10
If f(n) = 2n², g(n) = n², prove that f(n) ≠ ο(g(n))
⟹ lim_{n→∞} f(n)/g(n) = 0 must hold
⟹ lim_{n→∞} 2n²/n²
⟹ lim_{n→∞} 2
⟹ 2 ≠ 0
Which is true; hence f(n) ≠ ο(g(n))
Asymptotic notation (Little omega)
Example 11
If f(n) = 2n + 16 and g(n) = n, show that f(n) ≠ ω(g(n))
For f(n) ∈ ω(g(n)) we would need
⟹ lim_{n→∞} f(n)/g(n) = ∞
⟹ lim_{n→∞} n(2 + 16/n)/n
⟹ lim_{n→∞} (2 + 16/n)
⟹ lim_{n→∞} 2 + 0
⟹ 2
So 2 ≠ ∞; hence f(n) ≠ ω(g(n))
Asymptotic notation (Little omega)
Example 12
If f(n) = n and g(n) = log n, show that f(n) ∈ ω(g(n))
⟹ lim_{n→∞} f(n)/g(n) = ∞ must hold
⟹ lim_{n→∞} n/log n = ∞ (the linear term dominates the logarithm)
Hence f(n) ∈ ω(g(n))
T = O(n)
Complexity of Example 2
T = O(n²)

Example 3
A()
{
    int i=1, s=1;
    scanf("%d", &n);
    while(s<n)
    {
        i++;
        s=s+i;
        printf("abcd");
    }
}

Equivalently:
A()
{
    scanf("%d", &n);
    for(i=1, s=1; s<n; i++, s=s+i)
    {
        printf("abcd");
    }
}
Complexity of Example 3
• s takes the values 1, 3, 6, 10, ..., i.e. s = i(i+1)/2 after i steps; the
loop stops when i(i+1)/2 ≥ n, i.e. after about √(2n) iterations. Hence
T(n) = O(√n).
Example 4
A()
{
    int i=1;
    for(i=1; i*i<=n; i++)
    {
        printf("abcd");
    }
}
Complexity of Example 4
• The loop runs while i² ≤ n, i.e. for i = 1, 2, ..., ⌊√n⌋. Hence
T(n) = O(√n).
Example 5
Complexity of Example 5
Design and Analysis of Algorithm
(BCS503)
Lecture - 2
Exhaustive Search Method
it is in a quadratic polynomial.
Analysis of Selection Sort
• basic operation: key comparison
• number of times executed depends only on array
size
• C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n−1) − (i+1) + 1]
= Σ_{i=0}^{n−2} (n−1−i) = n(n−1)/2 ∈ Θ(n²)
• selection sort is Θ(n²) for all inputs
• number of key swaps is only Θ(n), which makes it suitable for swapping a
small number of large items
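The count C(n) = n(n−1)/2 can be confirmed by instrumenting the algorithm (a sketch; the comparison counter is my own addition):

```python
def selection_sort_count(A):
    # Returns (sorted list, number of key comparisons performed).
    a, comparisons = list(A), 0
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            comparisons += 1          # one key comparison per inner step
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a, comparisons

for n in (2, 5, 10):
    _, c = selection_sort_count(list(range(n, 0, -1)))
    assert c == n * (n - 1) // 2      # matches the Theta(n^2) analysis
print(selection_sort_count([3, 1, 2]))  # ([1, 2, 3], 3)
```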
Design and Analysis of Algorithm
• Sorts in place.
Description of quicksort
Performance of quicksort
We will analyze
• the worst-case running time of QUICKSORT and RANDOMIZED-QUICKSORT
(which is the same), and
• the expected (average-case) running time of RANDOMIZED-QUICKSORT,
which is O(n lg n).
Design and Analysis of Algorithm
(BCS503)
Lecture - 1
Overview
• Discussion on algorithms and its
characteristics.
• Analysis of Algorithms and its significance
in the context of real world software
development.
• Discussion on the iterative version and
recursive version of Linear search.
What is an Algorithm ?
• An algorithm is a finite set of instructions that
if followed, accomplishes a particular task.
• An algorithm is any well defined computational
procedure that takes some value or set of
values, as input and produces some value or
set of values as output.
• We can also view an algorithm as a tool for
solving a well specified computational
problem.
Characteristics of an Algorithm
• Input: Zero or more quantities are externally
supplied.
• Output: At least one quantity is produced.
Search(a, start, end)
{
    if (start > end) return 0;      // desired item was not found
    if (a[start] is desired item)
        return &a[start];
    else
        return Search(a, start+1, end);
}
// (the bounds check must come first, and the recursive result must be returned)
• T(n) = 1 + T(n-1), T(0) = 0
substitution method
T(n) = 1 + T(n-1) = 1 + 1 + T(n-2) = 1 + 1 + 1 + T(n-3) = … = n + T(0) = n,
so T(n) = O(n)
Lecture -23
Overview
Shell Sort
Initial array: 35 33 42 10 14 19 27 44

First iteration (gap = 4), comparing elements 4 positions apart:
35 ↔ 14 swap → 14 33 42 10 35 19 27 44   (swap count = 1)
33 ↔ 19 swap → 14 19 42 10 35 33 27 44   (swap count = 2)
42 ↔ 27 swap → 14 19 27 10 35 33 42 44   (swap count = 3)
10 ↔ 44 no swap required
• After the first iteration the array looks as follows:
14 19 27 10 35 33 42 44

Second iteration (gap = 1), an ordinary insertion pass:
14, 19: no swap; 19, 27: no swap
27 ↔ 10 swap → 14 19 10 27 35 33 42 44   (swap count = 4)
19 ↔ 10 swap → 14 10 19 27 35 33 42 44   (swap count = 5)
14 ↔ 10 swap → 10 14 19 27 35 33 42 44   (swap count = 6)
27, 35: no swap
35 ↔ 33 swap → 10 14 19 27 33 35 42 44   (swap count = 7)
35, 42: no swap; 42, 44: no swap

• Hence, the total number of swaps required is
• 1st iteration = 3
• 2nd iteration = 4
• So a total of 7 swaps is required to sort the array by shell sort.
Shell Sort
Algorithm Shell sort (Knuth Method)
1. gap = 1
2. while (gap < A.length/3)
3.     gap = gap*3 + 1
4. while (gap > 0)
5.     for (outer = gap; outer < A.length; outer++)
6.         Ins_value = A[outer]
7.         inner = outer
8.         while (inner > gap-1 && A[inner-gap] ≥ Ins_value)
9.             A[inner] = A[inner-gap]
10.            inner = inner - gap
11.        A[inner] = Ins_value
12.    gap = (gap-1)/3
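The Knuth-gap algorithm above can be rendered in Python (a hedged translation with 0-based indexing):

```python
def shell_sort(A):
    a = list(A)
    gap = 1
    while gap < len(a) // 3:       # build Knuth's sequence 1, 4, 13, 40, ...
        gap = gap * 3 + 1
    while gap > 0:
        for outer in range(gap, len(a)):
            ins_value = a[outer]
            inner = outer
            while inner > gap - 1 and a[inner - gap] >= ins_value:
                a[inner] = a[inner - gap]   # shift gap-distant elements right
                inner -= gap
            a[inner] = ins_value
        gap = (gap - 1) // 3        # shrink: 40 -> 13 -> 4 -> 1 -> 0
    return a

print(shell_sort([35, 33, 42, 10, 14, 19, 27, 44]))
# [10, 14, 19, 27, 33, 35, 42, 44]
```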
Shell Sort
• Let us dry run the shell sort algorithm with the same example as already
discussed.
35 33 42 10 14 19 27 44
At the beginning, A.length = 8 and gap = 1.
After the first three lines execute, the gap value changes to 4.
Now gap > 0 (i.e. 4 > 0).
Now in the for loop: outer = 4; outer < 8; outer++
Ins_value = A[outer] = A[4] = 14
inner = outer, i.e. inner = 4
Now line 8 is true ⟹ a change occurs, and the updated array looks as follows:
14 33 42 10 35 19 27 44
Continuing in the same way:
14 19 42 10 35 33 27 44
14 19 27 10 35 33 42 44   (no swap required for the pair 10, 44)
After the gap = 1 pass completes, the array is sorted:
10 14 19 27 33 35 42 44
Shell Sort
Analysis:
• Shell sort is efficient for medium-size lists.
• For bigger lists, this algorithm is not the best choice.
• But it is the fastest of all O(n²) sorting algorithms.
• The best case in shell sort is when the array is already sorted in the
right order, i.e. O(n).
• The worst-case time complexity depends on the gap sequence. That's why
various scientists gave their own gap intervals. They are:
1. Donald Shell gave the gap interval n/2^k ⟹ O(n²)
2. Knuth gave the gap interval gap ← gap*3 + 1 ⟹ O(n^(3/2))
3. Hibbard gave the gap interval 2^k − 1 ⟹ O(n^(3/2))
Shell Sort
Analysis:
In general:
• Shell sort is an unstable sorting algorithm, because it does not examine
the elements lying in between the intervals.
• Worst Case Complexity: less than or equal to O(n²), floating between
O(n log n) and O(n²) depending on the gap sequence.
• Best Case Complexity: O(n log n)
When the array is already sorted, the total number of comparisons for each
interval (or increment) is equal to O(n), i.e. the size of the array.
• Average Case Complexity: O(n log n)
It is around O(n^1.25).
Lecture -22
Linear Time Sorting
(Radix Sort)
Overview
Radix_Sort(A, d)
for i ← d down to 1
    Use a stable sort to sort the array A on digit i
    (i.e. Counting Sort)
Radix Sort
• In the input array A, each element is a number of d digits.
Radix_Sort(A, d)
for i ← 1 to d
    do "use a stable sort to sort array A on digit i"

Trace (sort by units digit, then tens, then hundreds):

Input   Pass 1   Pass 2   Pass 3
329     720      720      329
457     355      329      355
657     436      436      436
839     457      839      457
436     657      355      657
720     329      457      720
355     839      657      839
Radix Sort (Analysis)
Radix_Sort(A, d)
for i ← d down to 1
    Use a stable sort to sort the array A on digit i
    (i.e. Counting Sort)
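Radix sort with counting sort as the stable inner sort can be sketched as follows (a hedged Python version for non-negative decimal integers; function names are my own):

```python
def counting_sort_by_digit(A, exp):
    # Stable counting sort of A on the decimal digit selected by exp.
    C, B = [0] * 10, [0] * len(A)
    for x in A:
        C[(x // exp) % 10] += 1
    for d in range(1, 10):
        C[d] += C[d - 1]            # prefix sums -> final positions
    for x in reversed(A):           # right-to-left keeps the sort stable
        d = (x // exp) % 10
        C[d] -= 1
        B[C[d]] = x
    return B

def radix_sort(A, d):
    exp = 1
    for _ in range(d):              # for i <- 1 to d: sort on digit i
        A = counting_sort_by_digit(A, exp)
        exp *= 10
    return A

print(radix_sort([329, 457, 657, 839, 436, 720, 355], 3))
# [329, 355, 436, 457, 657, 720, 839]
```

Stability of the inner sort is what makes each later pass preserve the ordering produced by the earlier ones.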
Bucket Sort
• One value per bucket:
Input: 4 2 1 2 0 3 2 1 4 0 2 3 0

Each value v is placed in bucket b[v] (buckets 0 1 2 3 4) by incrementing a
counter; after all 13 insertions the bucket counts are:

bucket:  0  1  2  3  4
count:   3  2  4  2  2

Reading the buckets back in order gives the sorted array:
0 0 0 1 1 2 2 2 2 3 3 4 4
Bucket Sort
• One value per bucket:
Algorithm BucketSort(S)
(values in S are between 0 and m-1)
for j ← 0 to m-1 do          // initialize m buckets
    b[j] ← 0
for i ← 0 to n-1 do          // place elements in their
    b[S[i]] ← b[S[i]] + 1    // appropriate buckets
i ← 0
for j ← 0 to m-1 do          // place elements in buckets
    for r ← 1 to b[j] do     // back in S (concatenation)
        S[i] ← j
        i ← i + 1
Bucket Sort
One Value per bucket (Analysis)
• Bucket initialization: 𝑂( 𝑚 )
• From array to buckets: 𝑂( 𝑛 )
• From buckets to array: 𝑂( 𝑛 )
• Due to the implementation of dequeue.
• Since 𝑚 will likely be small compared to 𝑛, Bucket
sort is 𝑂( 𝑛 )
• Strictly speaking, time complexity is 𝑂 ( 𝑛 + 𝑚 )
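The one-value-per-bucket algorithm can be rendered in Python (a hedged translation of the pseudocode above):

```python
def bucket_sort_counts(S, m):
    # Values in S are between 0 and m-1; sorts S in place and returns it.
    b = [0] * m                    # initialize m buckets
    for x in S:                    # place elements in their buckets
        b[x] += 1
    i = 0
    for j in range(m):             # concatenate buckets back into S
        for _ in range(b[j]):
            S[i] = j
            i += 1
    return S

print(bucket_sort_counts([4, 2, 1, 2, 0, 3, 2, 1, 4, 0, 2, 3, 0], 5))
# [0, 0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 4, 4]
```

The two passes over S and one pass over the m buckets give the O(n + m) total.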
Bucket Sort
• Multiple items per bucket:
Input: .20 .12 .52 .63 .64 .36 .37 .47 .58 .18 .88 .09 .99

Each value x goes into bucket ⌊10x⌋ (buckets 0..9), and within a bucket the
values are kept in sorted order:

bucket 0: .09        bucket 1: .12 .18    bucket 2: .20
bucket 3: .36 .37    bucket 4: .47        bucket 5: .52 .58
bucket 6: .63 .64    bucket 7: (empty)    bucket 8: .88
bucket 9: .99

Concatenating the buckets in order gives:
.09 .12 .18 .20 .36 .37 .47 .52 .58 .63 .64 .88 .99
Bucket Sort
• Multiple items per bucket:
Algorithm BucketSort(A)
1. Let B[0..(n-1)] be a new array.
2. n ← A.length
3. for i ← 0 to n-1
4.     make B[i] an empty list
5. for i ← 1 to n
6.     insert A[i] into list B[⌊n·A[i]⌋]
7. for i ← 0 to n-1
8.     sort list B[i] with a stable sort (insertion sort)
9. Concatenate the lists B[0], B[1], B[2], ..., B[n-1] together in order.
Bucket Sort
Multiple items per bucket (Analysis)
• It was observed that except line no. 8, all other lines take Ο(n) time in
the worst case.
• Line no. 8 (i.e. insertion sort) takes Ο(n²) in the worst case.
• The average time complexity for Bucket Sort is O(n + k) under a uniform
distribution.
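The multiple-items-per-bucket pseudocode for keys uniformly distributed in [0, 1) can be sketched as follows (a hedged version; Python's built-in stable sort stands in for the per-bucket insertion sort):

```python
def bucket_sort(A):
    n = len(A)
    B = [[] for _ in range(n)]             # lines 1-4: B[0..n-1], empty lists
    for x in A:
        B[int(n * x)].append(x)            # lines 5-6: bucket index floor(n*x)
    for bucket in B:
        bucket.sort()                      # lines 7-8: stable sort per bucket
    return [x for bucket in B for x in bucket]   # line 9: concatenate in order

data = [.20, .12, .52, .63, .64, .36, .37, .47, .58, .18, .88, .09, .99]
print(bucket_sort(data))
# [0.09, 0.12, 0.18, 0.2, 0.36, 0.37, 0.47, 0.52, 0.58, 0.63, 0.64, 0.88, 0.99]
```

With uniformly distributed keys each bucket holds O(1) elements in expectation, which is where the linear average case comes from.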
Bucket Sort
Characteristics of Bucket Sort
• Bucket sort assumes that the input is drawn from a
uniform distribution.
• The computational complexity estimates involve the
number of buckets.
• Bucket sort can be exceptionally fast because of the
way elements are assigned to buckets, typically using
an array where the index is the value.
Bucket Sort
Characteristics of Bucket Sort
• This means that more auxiliary memory is required for the buckets; bucket
sort trades extra space for running time compared with comparison sorts.
• The average time complexity is O(n + k).
• The worst time complexity is O(n²).
• The space complexity for Bucket Sort is O(n + k).
Design and Analysis of Algorithm
Recurrence Equation
(Solving Recurrence using
Recursion Tree Methods)
Lecture – 8 and 9
Overview
• A recurrence is a function that is defined in terms of
– one or more base cases, and
– itself, with smaller arguments.
Examples:
Overview
• Many technical issues:
• Floors and ceilings
[Floors and ceilings can easily be removed and don’t affect
the solution to the recurrence. They are better left to a
discrete math course.]
• Exact vs. asymptotic functions
• Boundary conditions
Overview
Ans:
We start by focusing on finding an upper bound for the solution by using a
good guess. As we know that floors and ceilings usually do not matter when
solving recurrences, we drop the floor and write the recurrence equation as
follows:
T(n) = 3T(n/4) + cn², c > 0
The term cn² at the root represents the cost at the top level of recursion;
the three children represent the costs incurred by the subproblems of size n/4.
Recursion Tree Method (Example 1)
T(n) = 3T(n/4) + cn², c > 0 is a constant
Fig (a): T(n)
Figure (a) shows T(n), which progressively expands in (b)-(d) to form the
recursion tree.
Fig (b): root cn², with three children T(n/4), T(n/4), T(n/4)
Recursion Tree Method (Example 1)
T(n) = 3T(n/4) + cn², c > 0 is a constant
Fig (c): root cn²; three children c(n/4)²; nine grandchildren
T(n/16), T(n/16), ..., T(n/16)
Recursion Tree Method (Example 1)
T(n) = 3T(n/4) + cn², c > 0 is a constant
Fig (d): root cn²; three nodes c(n/4)²; nine nodes c(n/16)²; ...;
leaves T(1) T(1) T(1) ................... T(1)
Recursion Tree Method (Example 1)
Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/4^i,
i.e. the subproblem size hits n = 1 when n/4^i = 1.
So, if n/4^i = 1
⟹ n = 4^i    (apply log both sides)
⟹ log₄ n = log₄ 4^i
⟹ i = log₄ n
So the height of the tree is log₄ n.
Recursion Tree Method (Example 1)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 3^i.
So, each node at depth i (i = 0, 1, 2, 3, 4, ..., log₄ n − 1) has cost
c(n/4^i)², and the total cost of level i is 3^i·c(n/4^i)² = (3/16)^i cn².
However, the bottom level is special. Each of the bottom nodes contributes
cost T(1).
Hence the cost of the bottom level is 3^(log₄ n)·T(1)
⟹ 3^(log₄ n) = n^(log₄ 3)    (as i = log₄ n, the height of the tree)
⟹ Θ(n^(log₄ 3))
So, the total cost of the entire tree is
T(n) = cn² + (3/16)cn² + (3/16)²cn² + (3/16)³cn² + ... +
       (3/16)^(log₄ n − 1)cn² + Θ(n^(log₄ 3))
T(n) = Σ_{i=0}^{log₄ n − 1} (3/16)^i cn² + Θ(n^(log₄ 3))
Recursion Tree Method (Example 1)
The left term is just the sum of a geometric series. So the value of T(n)
is as follows:
T(n) = ((3/16)^(log₄ n) − 1)/((3/16) − 1) · cn² + Θ(n^(log₄ 3))
The above equation looks very complicated. So, we use an infinite geometric
series as an upper bound. Hence the new form of the equation is given below:
T(n) = Σ_{i=0}^{log₄ n − 1} (3/16)^i cn² + Θ(n^(log₄ 3))
T(n) ≤ Σ_{i=0}^{∞} (3/16)^i cn² + Θ(n^(log₄ 3))
T(n) ≤ 1/(1 − 3/16) · cn² + Θ(n^(log₄ 3))
T(n) ≤ (16/13)cn² + Θ(n^(log₄ 3))
T(n) = Ο(n²)
Recursion Tree Method (Example 2)
Example 2: T(n) = 4T(n/2) + cn, c > 0
Fig (a): T(n)
Fig (b): root cn; four children c(n/2); sixteen grandchildren c(n/4); ...;
height log₂ n; leaves T(1) T(1) T(1) ................... T(1)
Recursion Tree Method (Example 2)
Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/2^i,
i.e. the subproblem size hits n = 1 when n/2^i = 1.
So, if n/2^i = 1
⟹ n = 2^i    (apply log both sides)
⟹ log₂ n = log₂ 2^i
⟹ i = log₂ n
So the height of the tree is log₂ n.
Recursion Tree Method (Example 2)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 4^i.
So, each node at depth i (i = 0, 1, 2, 3, 4, ..., log₂ n − 1) has cost
c(n/2^i).
Hence the total cost at level i is 4^i · c(n/2^i)
⟹ 4^i · c · n/2^i
⟹ (4/2)^i · cn
⟹ 2^i · cn
Recursion Tree Method (Example 2)
However, the bottom level is special. Each of the bottom nodes contributes
cost T(1).
Hence the cost of the bottom level is 4^(log₂ n)·T(1)
⟹ 4^(log₂ n) = n^(log₂ 4) = n²    (as i = log₂ n, the height of the tree)
Summing the geometric series of level costs:
T(n) = cn · (2^(log₂ n) − 1)/(2 − 1) + Θ(n²)
T(n) = cn(n − 1)/(2 − 1) + cn²
T(n) = cn(n − 1) + cn²
T(n) = cn² − cn + cn²
T(n) = 2cn² − cn
Hence, T(n) = Θ(n²)
Recursion Tree Method (Example 3)
Example 3
Solve the recurrence T(n) = 2T(n/2) + Θ(n) by using the recursion tree
method.
T(n) = 2T(n/2) + cn, c > 0
The term cn at the root represents the cost incurred at the top level; the
subproblems have size n/2.
Construction of the recursion tree:
Fig (a): T(n)
Figure (a) shows T(n), which progressively expands in (b) to form the
recursion tree.
Recursion Tree Method (Example 3)
T(n) = 2T(n/2) + cn, c > 0
Fig (b): root cn; two children c(n/2); four grandchildren T(n/4); ...;
height log₂ n; leaves T(1) T(1) T(1) ................... T(1)
Recursion Tree Method (Example 3)
Analysis
First, we find the height of the recursion tree.
Observe that a node at depth i reflects a subproblem of size n/2^i,
i.e. the subproblem size hits n = 1 when n/2^i = 1.
So, if n/2^i = 1
⟹ n = 2^i    (apply log both sides)
⟹ log₂ n = log₂ 2^i
⟹ i = log₂ n
So the height of the tree is log₂ n.
Recursion Tree Method (Example 3)
Second, we determine the cost of each level of the tree.
The number of nodes at depth i is 2^i.
So, each node at depth i (i = 0, 1, 2, 3, 4, ..., log₂ n − 1) has cost
c(n/2^i).
Hence the total cost at level i is 2^i · c(n/2^i) = cn
However, the bottom level is special. Each of the bottom nodes contributes
cost T(1).
Hence the cost of the bottom level is 2^(log₂ n)·T(1)
⟹ 2^(log₂ n)    (as i = log₂ n, the height of the tree)
⟹ n
⟹ Θ(n)
Recursion Tree Method (Example 3)
T(n) = cn + cn + cn + ... + cn    (one cn per level)
T(n) = cn · Σ_{i=0}^{log₂ n} 1
T(n) = cn(log₂ n + 1)
T(n) = cn·log₂ n + cn
Hence, T(n) = Θ(n log n)
Recursion Tree Method (Example 4)
T(n) = T(n/2) + T(n/4) + T(n/8) + cn
Fig (b): root cn; children c(n/2), c(n/4), c(n/8); their children
c(n/4), c(n/8), c(n/16), c(n/8), c(n/16), c(n/32), c(n/16), c(n/32),
c(n/64); ...; the longest path (through the n/2 branch) has height log₂ n;
leaves T(1) T(1) T(1) ................... T(1)
Recursion Tree Method (Example 4)
Analysis
First, we find the height of the recursion tree
Lecture - 3
How do we analyze an algorithm's running
time?
• Input size: Depends on the problem being studied.
– Usually, the number of items in the input. Like the size n of the
array being sorted.
– But could be something else. If multiplying two integers, could
be the total number of bits in the two integers.
– Could be described by more than one number. For example,
graph algorithm running times are usually expressed in terms of
the number of vertices and the number of edges in the input
graph.
• Running time: On a particular input, it is the number
of primitive operations (steps) executed.
– Want to define steps to be machine-independent.
– Figure that each line of pseudo code requires a constant
amount of time.
– One line may take a different amount of time than another,
but each execution of line i takes the same amount of time
ci .
– This is assuming that the line consists only of primitive
operations.
• If the line is a subroutine call, then the actual call takes
constant time, but the execution of the subroutine being
called might not.
• If the line specifies operations other than primitive ones,
then it might take
• more than constant time.
A Sorting Problem (Incremental Approach)
Input: A sequence of n numbers a1, a2, . . . , an.
Output: A permutation (reordering) a1′, a2′, . . . , an′ of the input
sequence such that a1′ ≤ a2′ ≤ · · · ≤ an′.
We also refer to the numbers as keys. Along with each key may be additional
information, known as satellite data. (You might want to clarify that
"satellite data" does not necessarily come from a satellite!)
We will see several ways to solve the sorting problem. Each way will be
expressed as an algorithm: a well-defined computational procedure that
takes some value, or set of values, as input and produces some value, or
set of values, as output.
Insertion sort
• A good algorithm for sorting a small number of elements.
– Start with an empty left hand and the cards face down on the
table.
– Then remove one card at a time from the table, and insert it
into the correct position in the left hand.
– To find the correct position for a card, compare it with each of
the cards already in the hand, from right to left.
– At all times, the cards held in the left hand are sorted, and
these cards were originally the top cards of the pile on the
table.
Insertion sort (Example)
Insertion sort (Example)
Insertion sort (Algorithm)
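The card-sorting procedure above translates directly into code. The slides' own algorithm listing was on an image, so this is a reconstruction of the textbook INSERTION-SORT with 0-based indices:

```python
def insertion_sort(A):
    for j in range(1, len(A)):         # A[0..j-1] is the sorted "hand"
        key = A[j]                     # next card taken from the table
        i = j - 1
        while i >= 0 and A[i] > key:   # scan the hand right to left
            A[i + 1] = A[i]            # shift larger cards one place right
            i -= 1
        A[i + 1] = key                 # insert the card at its position
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

At every moment the prefix A[0..j-1] is sorted, which is exactly the loop invariant used in the correctness proof below.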
Correctness Proof of Insertion Sort
• Initialization: Just before the first iteration, j = 2. The subarray
A[1 . . j − 1] is the single element A[1], which is the element originally
in A[1], and it is trivially sorted.
• Termination: The outer for loop ends when j > n; this occurs when
j = n + 1. Therefore, j − 1 = n. Plugging n in for j − 1 in the loop
invariant, the subarray A[1 . . n] consists of the elements originally in
A[1 . . n] but in sorted order. In other words, the entire array is sorted!
Analysis of insertion sort