CS3401 Algorithms Unit III

Course Overview

Mr.C.Karthikeyan
Assistant Professor, Department of CSE
Karpagam Institute of Technology
CS3401 Algorithms | Unit III 1
Course Objectives
• Understand and apply the algorithm analysis techniques on
searching and sorting algorithms
• Critically analyze the efficiency of graph algorithms
• Understand different algorithm design techniques
• Solve programming problems using state space tree
• Understand the concepts behind NP Completeness,
Approximation algorithms and randomized algorithms.
Course Outcomes
• Develop algorithms for various computing problems and time and space
complexity analysis.
• Apply graph algorithms to solve problems and analyze their efficiency.
• Analyze the different design techniques like divide and conquer, dynamic
programming and Greedy techniques to solve problems.
• Make use of the state space tree method for solving computational
problems.
• Solve problems using approximation algorithms and randomized
algorithms.
Course Syllabus



Text Books
1. Thomas H. Cormen, Charles E. Leiserson,
Ronald L. Rivest and Clifford Stein,
"Introduction to Algorithms", 3rd Edition,
Prentice Hall of India, 2009.
2. Ellis Horowitz, Sartaj Sahni, Sanguthevar
Rajasekaran, "Computer Algorithms/C++",
Orient Blackswan, 2nd Edition, 2019.



Reference Books
1. Anany Levitin, "Introduction to the Design
and Analysis of Algorithms", Third Edition,
Pearson Education, 2012.
2. Alfred V. Aho, John E. Hopcroft and Jeffrey
D. Ullman, "Data Structures and
Algorithms", Reprint Edition, Pearson
Education, 2006.
3. S. Sridhar, "Design and Analysis of
Algorithms", Oxford University Press, 2014.



Unit III : Algorithm Design Techniques

Mr.C.Karthikeyan
Assistant Professor, Department of CSE
Karpagam Institute of Technology
Divide and Conquer methodology: Finding maximum and minimum - Merge sort - Quick sort. Dynamic programming: Elements of dynamic programming - Matrix-chain multiplication - Multistage graph - Optimal Binary Search Trees. Greedy Technique: Elements of the greedy strategy - Activity-selection problem - Optimal merge pattern - Huffman Trees.


Divide and Conquer : Introduction
• Divide and conquer solves a problem by dividing it into subproblems, solving the subproblems recursively, and combining their solutions. Merge sort, which sorts a sequence of n elements into non-decreasing order, is a classic example of this approach.
• Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
• Conquer: Sort the two subsequences recursively using merge sort.
• Combine: Merge the two sorted subsequences to produce the sorted answer.
[Diagram: a problem of size n is divided into subproblem 1 and subproblem 2, each of size n/2; the solutions to the two subproblems are combined into the solution to the original problem.]
Finding Maximum and Minimum



Finding Maximum and Minimum : Introduction
• Given an array of size N, find the maximum and the minimum element of the array using the divide and conquer approach.
Divide: Divide the array into two halves.
Conquer: Recursively find the maximum and minimum of both halves.
Combine: Compare the maxima of both halves to get the overall maximum, and compare the minima of both halves to get the overall minimum.

Finding Maximum and Minimum : Steps
1. Call maxmin(X[], l, r) to return the maximum and minimum of the array. X[] denotes the array; l and r are the left and right ends.
2. Divide the array by calculating the mid index, i.e. mid = (l + r)/2.
3. Recursively find the maximum and minimum of the left part by calling the same function, i.e. maxmin(X, l, mid).
4. Recursively find the maximum and minimum of the right part by calling the same function, i.e. maxmin(X, mid + 1, r).
5. Finally, get the overall maximum and minimum by comparing the min and max of both halves, store them, and return.
Base case (single element):
if(l==r){
    if(X[mid]<min)
        min=X[mid];
    if(X[mid]>max)
        max=X[mid];
    return;
}
Finding Maximum and Minimum : Program

#include<stdio.h>
void maxmin(int[], int, int);
int max, min;

int main(){
    int X[] = {134, 123, 111, 5115, 48};
    int l = 0, r = 4;
    max = min = X[0];
    maxmin(X, l, r);
    printf("%d %d", max, min);
    return 0;
}

void maxmin(int X[], int l, int r){
    int mid = (l + r)/2;
    if(l == r){
        if(X[mid] < min)
            min = X[mid];
        if(X[mid] > max)
            max = X[mid];
        return;
    }
    maxmin(X, l, mid);
    maxmin(X, mid + 1, r);
}
Finding Maximum and Minimum : Analysis
There are two recursive calls made in this algorithm, one for each half of the divided list.
The time required for computing max and min satisfies the recurrence
T(n) = T(n/2) + T(n/2) + 2 = 2T(n/2) + 2 for n > 2, with T(2) = 1.
Expanding:
T(n) = 2T(n/2) + 2
     = 2[2T(n/4) + 2] + 2 = 4T(n/4) + 6
     = 4[2T(n/8) + 2] + 6 = 8T(n/8) + 14
Continuing in this fashion and putting n = 2^k:
T(n) = 2^(k-1) T(2) + Σ_{i=1}^{k-1} 2^i = 2^(k-1) + 2^k − 2
T(n) = 3n/2 − 2
Ignoring constant factors, the time complexity is O(n).
Merge Sort



Merge Sort : Introduction
• Sort a sequence of n elements into non-decreasing order. This algorithm is an example of
divide and conquer approach.
• Divide: Divide the n-element sequence to be sorted into two subsequences of n/2
elements each
• Conquer: Sort the two subsequences recursively using merge sort.
• Combine: Merge the two sorted subsequences to produce the sorted answer.
[Diagram: a problem of size n is divided into subproblem 1 and subproblem 2, each of size n/2; the solutions to the two subproblems are combined into the solution to the original problem.]
Merge Sort : Algorithm



Merge Sort : Implementation

/* Implementation of Merge Sort */
#include <stdio.h>
#define size 100
void merge(int a[], int, int, int);
void merge_sort(int a[], int, int);

int main(){
    int arr[size], i, n;
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements of the array: ");
    for(i = 0; i < n; i++)
        scanf("%d", &arr[i]);
    merge_sort(arr, 0, n-1);
    printf("\n The sorted array is:\n");
    for(i = 0; i < n; i++)
        printf(" %d\t", arr[i]);
    return 0;
}
Merge Sort : Implementation

void merge(int arr[], int beg, int mid, int end){
    int i=beg, j=mid+1, index=beg, temp[size], k;
    while((i<=mid) && (j<=end)){
        if(arr[i] < arr[j]){
            temp[index] = arr[i];
            i++;
        }
        else{
            temp[index] = arr[j];
            j++;
        }
        index++;
    }
    if(i>mid){
        while(j<=end){
            temp[index] = arr[j];
            j++;
            index++;
        }
    }
    else{
        while(i<=mid){
            temp[index] = arr[i];
            i++;
            index++;
        }
    }
    for(k=beg;k<index;k++)
        arr[k] = temp[k];
}

void merge_sort(int arr[], int beg, int end){
    int mid;
    if(beg<end){
        mid = (beg+end)/2;
        merge_sort(arr, beg, mid);
        merge_sort(arr, mid+1, end);
        merge(arr, beg, mid, end);
    }
}
Merge Sort : Analysis
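The analysis can be summarized by the standard merge sort recurrence. A sketch of the unrolling, assuming the divide step is constant time and merging two sorted halves of total size n costs cn, with n = 2^k:

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + cn, \qquad T(1) = c\\
     &= 4\,T(n/4) + 2cn\\
     &= \cdots = 2^{k}\,T(n/2^{k}) + k\,cn\\
     &= cn + cn\log_2 n \;\in\; O(n \log n)
\end{aligned}
```

The recursion tree has log₂ n levels and each level does cn total work, which gives the same bound.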



Merge Sort : Example
Sort the elements 35, 21, 40, 2, 8, 75 and 11.
Index   0  1  2  3  4  5  6
Values  35 21 40 2  8  75 11
Divide: while the left index is less than the right index, split the array in two at mid = (left + right)/2:
[35 21 40 2] [8 75 11]
[35 21] [40 2] [8 75] [11]
[35] [21] [40] [2] [8] [75] [11]
Merge (combine the sorted halves):
[21 35] [2 40] [8 75] [11]
[2 21 35 40] [8 11 75]
[2 8 11 21 35 40 75]


Merge Sort : Example
Sort the following elements using merge sort: 35, 21, 40, 2, 8, 75 and 11.
Final merge of the two sorted sub-arrays:
Left sub-array L (index 0..3):  2 21 35 40
Right sub-array R (index 0..2): 8 11 75
With i indexing L, j indexing R and k indexing the result array A (initially i = 0, j = 0, k = 0):
Comparison                      Value assigned
L[0] <= R[0] (2 <= 8, true)     A[0] = L[0] = 2;  ++i, ++k
L[1] > R[0]  (21 > 8)           A[1] = R[0] = 8;  ++j, ++k
L[1] > R[1]  (21 > 11)          A[2] = R[1] = 11; ++j, ++k
L[1] <= R[2] (21 <= 75, true)   A[3] = L[1] = 21; ++i, ++k
L[2] <= R[2] (35 <= 75, true)   A[4] = L[2] = 35; ++i, ++k
L[3] <= R[2] (40 <= 75, true)   A[5] = L[3] = 40; ++i, ++k
i reached the end of L          A[6] = R[2] = 75; ++j, ++k
Result array A:
Index   0  1  2  3  4  5  6
Values  2  8  11 21 35 40 75


Merge Sort : Time and Space Complexity

Time complexity
• Best case: O(n log n)
• Worst case: O(n log n)
• Average case: O(n log n)

The space complexity of merge sort is O(n).

Merge sort is a stable but not an in-place algorithm.


Merge Sort : Advantages
• It can be applied to files of any size.
• Reading of the input during the run-creation step is sequential, so there is not much seeking.
• If heap sort is used for the in-memory part of the merge, its operation can be overlapped with I/O.


Merge Sort : Disadvantages
• Requires extra space proportional to n.
• Merge sort requires more space than most other sorting algorithms.
• For small inputs, merge sort is less efficient than simpler in-place sorts.


Merge Sort : Applications
• Inversion count problem
• External sorting
• E-commerce applications


Merge Sort : Additional Examples
Example : Arrange the following elements in increasing order using merge sort.
Example : Write an algorithm to sort a given list of elements using merge sort. Show the operation of the algorithm on the list 38, 27, 43, 3, 9, 82 and 10.


Quick Sort



Quick Sort : Introduction
• Quick sort is an extremely efficient sorting technique that divides a large data array
into smaller ones.
• A large array is divided into two arrays, one of which contains values smaller than the
pivot value and the other of which contains values greater than the pivot value.
• Quicksort partitions an array and then calls itself recursively twice to sort the two
resulting subarrays.
• This algorithm is highly efficient for large-sized data sets.



Quick Sort : Algorithm



Quick Sort : Analysis
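The recurrences behind this analysis (a sketch, assuming partitioning n elements costs cn): the worst case arises when the pivot splits off a single element each time, e.g. on an already-sorted input with the leftmost element as pivot; the best case arises when the pivot splits the array into two equal halves:

```latex
\begin{aligned}
\text{Worst case:} \quad T(n) &= T(n-1) + cn
  = c\sum_{i=1}^{n} i = \frac{c\,n(n+1)}{2} \;\in\; O(n^{2})\\[4pt]
\text{Best case:} \quad T(n) &= 2\,T(n/2) + cn \;\in\; O(n \log n)
\end{aligned}
```

The average case also solves to O(n log n), since a random pivot gives a balanced split in expectation.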


Quick Sort : Example
Sort the following elements using quick sort: 5, 3, 1, 9 and 8.
Consider the leftmost element as the pivot.
Rules :
1. i moves forward while the element at i is less than the pivot.
2. j moves backward while the element at j is greater than the pivot.
3. When i and j stop with i < j, swap the elements at i and j; once i and j cross each other, swap the pivot with the jth element.

5 3 1 9 8 (pivot = 5)
• Since 3 < 5 and 1 < 5, i moves forward and stops at 9 (9 > 5); since 8 > 5 and 9 > 5, j moves backward and stops at 1 (1 < 5).
• i and j have crossed each other, so swap the pivot with the jth element: 1 3 5 9 8.
Apply the same technique to both sub-arrays.
Left sub-array [1 3] (pivot = 1): since 3 > 1, i stops there and j moves back to the pivot; the sub-array is already in order.
Right sub-array [9 8] (pivot = 9): since 8 < 9, i moves forward; j stops at 8; i and j have crossed, so swap the pivot with 8, giving 8 9.
Final sorted array: 1 3 5 8 9.


Quick Sort : Implementation

/* Implementation of Quick Sort */
#include <stdio.h>
#define size 100
int partition(int a[], int beg, int end);
void quick_sort(int a[], int beg, int end);

int main(){
    int arr[size], i, n;
    printf("\n Enter the number of elements in the array:");
    scanf("%d", &n);
    printf("\n Enter the elements of the array:");
    for(i = 0; i < n; i++)
        scanf("%d", &arr[i]);
    quick_sort(arr, 0, n-1);
    printf("\n The sorted array is: \n");
    for(i = 0; i < n; i++)
        printf(" %d\t", arr[i]);
    return 0;
}
Quick Sort : Implementation

int partition(int a[], int beg, int end){
    int left, right, temp, loc, flag;
    loc = left = beg;
    right = end;
    flag = 0;
    while(flag != 1){
        while((a[loc] <= a[right]) && (loc != right))
            right--;
        if(loc == right)
            flag = 1;
        else if(a[loc] > a[right]){
            temp = a[loc];
            a[loc] = a[right];
            a[right] = temp;
            loc = right;
        }
        if(flag != 1){
            while((a[loc] >= a[left]) && (loc != left))
                left++;
            if(loc == left)
                flag = 1;
            else if(a[loc] < a[left]){
                temp = a[loc];
                a[loc] = a[left];
                a[left] = temp;
                loc = left;
            }
        }
    }
    return loc;
}


Quick Sort : Time and Space Complexity

Time complexity
• Best case: O(n log n)
• Worst case: O(n²)
• Average case: O(n log n)

The space complexity of quick sort is O(log n), for the recursion stack.

Quick sort is an unstable but in-place algorithm.


Quick Sort : Advantages
• Quick sort is regarded as one of the best general-purpose sorting algorithms.
• It is able to deal well with a huge list of items.
• Because it sorts in place, no additional storage is required.


Quick Sort : Disadvantages
• The slight disadvantage of quick sort is that its worst-case performance is similar to the average performance of bubble, insertion or selection sort.
• If the list is already sorted, bubble sort is much more efficient than quick sort.
• If the elements to be sorted are integers, radix sort can be more efficient than quick sort.


Quick Sort : Applications
The quicksort algorithm is used when
• the programming language is good for recursion,
• time complexity matters, and
• space complexity matters.


Quick Sort : Additional Examples
Example : Perform Quicksort on the following list.
Dynamic programming : Introduction
• Dynamic programming was invented by Richard Bellman in the 1950s.
• It is a technique for solving problems with overlapping subproblems.
• In this method each subproblem is solved only once. The result of each subproblem is recorded in a table from which we can obtain a solution to the original problem.
• The following are the two main properties of a problem that suggest that it can be solved using dynamic programming:
• Overlapping subproblems
• Optimal substructure


Dynamic programming : Introduction
Elements of Dynamic Programming:
• Overlapping subproblems
• Optimal substructure
Overlapping subproblems:
• The problem is broken down into smaller problems, called subproblems.
• Overlapping subproblems arise when a bigger problem shares the same smaller problems. Unlike divide and conquer, the subproblems overlap and therefore cannot be handled independently.
Two ways to handle overlapping subproblems:
• Top-down approach
• Bottom-up approach


Dynamic programming : Introduction
Top-down approach
• Memoization is a top-down approach where we cache the results of function calls and return the cached result if the function is called again with the same inputs.
• It is used when we can divide the problem into subproblems and the subproblems have overlapping subproblems.
• Memoization is typically implemented using recursion and is well-suited for problems that have a relatively small set of inputs.
Bottom-up approach
• Tabulation is a bottom-up approach where we store the results of the subproblems in a table and use these results to solve larger subproblems until we solve the entire problem.
• It is used when we can define the problem as a sequence of subproblems.
• Tabulation is typically implemented using iteration and is well-suited for problems that have a large set of inputs.
Dynamic programming : Introduction
Comparison of memoization and tabulation:
S.No | Top-down approach (memoization) | Bottom-up approach (tabulation)
1. Caches the results of function calls | Stores the results of subproblems in a table
2. Recursive implementation | Iterative implementation
3. Well-suited for problems with a relatively small set of inputs | Well-suited for problems with a large set of inputs
4. Used when the subproblems have overlapping subproblems | Used when the subproblems do not overlap
Common example: calculating the nth number in the Fibonacci sequence.


Dynamic programming : Introduction
S.No | Divide and conquer | Dynamic programming
1. Follows a top-down approach. | Follows a bottom-up approach.
2. Used to solve decision problems. | Used to solve optimization problems.
3. The solution of a subproblem may be computed recursively more than once. | The solution of each subproblem is computed once and stored in a table for later use.
4. It is used to obtain a solution to the given problem; it does not aim for the optimal solution. | It always generates an optimal solution.
5. Recursive in nature. | Recursive (memoization) or iterative (tabulation) in nature.
6. Less efficient and slower. | More efficient, but slower than greedy techniques.
7. Some memory is required. | More memory is required, to store subproblems for later use.
8. Examples: merge sort, quick sort, Strassen's matrix multiplication. | Examples: 0/1 knapsack, all-pairs shortest path, matrix-chain multiplication.
Matrix – Chain Multiplication



Matrix – Chain Multiplication
• The matrix-chain multiplication problem is a classic dynamic programming problem.
• "Chain" means that the number of columns of one matrix is always equal to the number of rows of the next matrix.
In general:
• If A = [aij] is a p × q matrix and B = [bij] is a q × r matrix, then C = AB is a p × r matrix, and computing it takes p · q · r scalar multiplications.
• Given the matrices {A1, A2, A3, ..., An}, we have to perform the matrix multiplication A1 × A2 × A3 × ... × An, which can be accomplished by a series of matrix multiplications.


Matrix – Chain Multiplication
• The matrix multiplication operation is associative in nature, but not commutative.
Step 1: Structure of an optimal parenthesization:
• Our first step in the dynamic programming paradigm is to find the optimal substructure and then use it to construct an optimal solution to the problem from optimal solutions to subproblems.
• Let Ai...j, where i ≤ j, denote the matrix that results from evaluating the product Ai Ai+1 ... Aj.
• If i < j, then any parenthesization of the product Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for some integer k in the range i ≤ k < j. That is, for some value of k, we first compute the matrices Ai...k and Ak+1...j and then multiply them together to produce the final product Ai...j. The cost of the parenthesization is the cost of computing Ai...k, plus the cost of computing Ak+1...j, plus the cost of multiplying them together.
Matrix – Chain Multiplication
Step 2: A recursive solution. Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix Ai...j:
m[i, j] = 0 if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + p(i−1) p(k) p(j) } if i < j
• To construct an optimal solution, define s[i, j] to be the value of k at which we split the product Ai Ai+1 ... Aj.
Step 3: Computing optimal costs
• To obtain an optimal parenthesization, record s[i, j] = k such that m[i, j] = m[i, k] + m[k+1, j] + p(i−1) p(k) p(j).


Matrix – Chain Multiplication : Algorithm & Analysis
• The basic operation in this algorithm is the computation of m[i,j] and s[i,j], which lies within three nested for loops. Hence the time complexity is O(n³).
Matrix – Chain Multiplication : Example


Multi stage Graph

Multi stage Graph : Example

Multi stage Graph : Additional Examples

Optimal Binary Search Trees (OBST)
Definition :
Let {a1, a2, a3, ..., an} be a set of identifiers such that a1 < a2 < a3 < ... < an. Let p(i) be the probability with which we search for ai (a successful search). Let q(i), 0 ≤ i ≤ n, be the probability of searching for an element x such that ai < x < ai+1 (an unsuccessful search). The tree built with optimum cost from Σ_{i=1}^{n} p(i) and Σ_{i=0}^{n} q(i) is called an optimal binary search tree.


Optimal Binary Search Trees (OBST)

Time complexity : The algorithm requires O(n³) time, since three nested for loops are used and each of these loops takes at most n values.
Space complexity : The extra space needed for the cost and root tables is O(n²). The time and space complexity are polynomial in the size of the input.
Optimal Binary Search Trees (OBST)
Example: Construct the optimal binary search tree for the following:
Key         A    B    C    D
Probability 0.1  0.2  0.4  0.3
Analyze the time efficiency and space efficiency of the OptimalBST algorithm.

Initial tables:
Cost table C (rows i = 1..5, columns j = 0..4): the diagonal entries C[i, i−1] are 0 and C[i, i] = p(i):
C[1,0]=0, C[1,1]=0.1; C[2,1]=0, C[2,2]=0.2; C[3,2]=0, C[3,3]=0.4; C[4,3]=0, C[4,4]=0.3; C[5,4]=0
Root table R (rows i = 1..4, columns j = 1..4): initially empty.
Optimal Binary Search Trees (OBST)
Using the formulas:
C[i, i−1] = 0, so C[1,0] = C[2,1] = C[3,2] = C[4,3] = C[5,4] = 0
C[i, i] = p(i), so C[1,1] = 0.1; C[2,2] = 0.2; C[3,3] = 0.4; C[4,4] = 0.3
R[i, i] = i, so R[1,1] = 1; R[2,2] = 2; R[3,3] = 3; R[4,4] = 4
Let us compute C[i, j] using the following formula:
C[i, j] = min over i ≤ k ≤ j of { C[i, k−1] + C[k+1, j] } + Σ_{s=i}^{j} p(s)

C[1,2]:
k=1: C[1,0] + C[2,2] + p1 + p2 = 0 + 0.2 + 0.1 + 0.2 = 0.5
k=2: C[1,1] + C[3,2] + p1 + p2 = 0.1 + 0 + 0.1 + 0.2 = 0.4
Minimum at k=2, so C[1,2] = 0.4 and R[1,2] = 2.

C[2,3]:
k=2: C[2,1] + C[3,3] + p2 + p3 = 0 + 0.4 + 0.2 + 0.4 = 1.0
k=3: C[2,2] + C[4,3] + p2 + p3 = 0.2 + 0 + 0.2 + 0.4 = 0.8
Minimum at k=3, so C[2,3] = 0.8 and R[2,3] = 3.
Optimal Binary Search Trees (OBST)

C[3,4]:
k=3: C[3,2] + C[4,4] + p3 + p4 = 0 + 0.3 + 0.4 + 0.3 = 1.0
k=4: C[3,3] + C[5,4] + p3 + p4 = 0.4 + 0 + 0.4 + 0.3 = 1.1
Minimum at k=3, so C[3,4] = 1.0 and R[3,4] = 3.

C[1,3]:
k=1: C[1,0] + C[2,3] + p1 + p2 + p3 = 0 + 0.8 + 0.1 + 0.2 + 0.4 = 1.5
k=2: C[1,1] + C[3,3] + p1 + p2 + p3 = 0.1 + 0.4 + 0.1 + 0.2 + 0.4 = 1.2
k=3: C[1,2] + C[4,3] + p1 + p2 + p3 = 0.4 + 0 + 0.1 + 0.2 + 0.4 = 1.1
Minimum at k=3, so C[1,3] = 1.1 and R[1,3] = 3.
Optimal Binary Search Trees (OBST)

C[2,4]:
k=2: C[2,1] + C[3,4] + p2 + p3 + p4 = 0 + 1.0 + 0.2 + 0.4 + 0.3 = 1.9
k=3: C[2,2] + C[4,4] + p2 + p3 + p4 = 0.2 + 0.3 + 0.2 + 0.4 + 0.3 = 1.4
k=4: C[2,3] + C[5,4] + p2 + p3 + p4 = 0.8 + 0 + 0.2 + 0.4 + 0.3 = 1.7
Minimum at k=3, so C[2,4] = 1.4 and R[2,4] = 3.

C[1,4]:
k=1: C[1,0] + C[2,4] + p1 + p2 + p3 + p4 = 0 + 1.4 + 0.1 + 0.2 + 0.4 + 0.3 = 2.4
k=2: C[1,1] + C[3,4] + p1 + p2 + p3 + p4 = 0.1 + 1.0 + 0.1 + 0.2 + 0.4 + 0.3 = 2.1
k=3: C[1,2] + C[4,4] + p1 + p2 + p3 + p4 = 0.4 + 0.3 + 0.1 + 0.2 + 0.4 + 0.3 = 1.7
k=4: C[1,3] + C[5,4] + p1 + p2 + p3 + p4 = 1.1 + 0 + 0.1 + 0.2 + 0.4 + 0.3 = 2.1
Minimum at k=3, so C[1,4] = 1.7 and R[1,4] = 3.
Optimal Binary Search Trees (OBST)
Final tables:
Cost table C:
      j=0  j=1  j=2  j=3  j=4
i=1    0   0.1  0.4  1.1  1.7
i=2         0   0.2  0.8  1.4
i=3              0   0.4  1.0
i=4                   0   0.3
i=5                        0
Root table R:
      j=1  j=2  j=3  j=4
i=1    1    2    3    3
i=2         2    3    3
i=3              3    3
i=4                   4
To build the tree, start from R[1,4] = 3: here i=1, j=4 and k=3, so the 3rd key (C) is the root. R[1,2] = 2 makes B the root of the left subtree, with A as its left child, and D becomes the right child of C.
Optimal cost: C[1,4] = 1.7.


Greedy Technique



Greedy Technique
• A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It does not worry about whether the current best result will bring the overall optimal result.
• The algorithm never reverses an earlier decision, even if the choice is wrong.
• It works in a top-down approach.
• This algorithm may not produce the best result for all problems, because it always goes for the locally best choice in the hope of producing the globally best result.
• We can determine whether a greedy algorithm can be used for a problem if the problem has the following properties:
1. Greedy choice property
• If an optimal solution to the problem can be found by choosing the best choice at each step without reconsidering the previous steps once chosen, the problem can be solved using a greedy approach. This property is called the greedy choice property.
2. Optimal substructure
• If the optimal overall solution to the problem corresponds to optimal solutions to its subproblems, then the problem can be solved using a greedy approach. This property is called optimal substructure.
Greedy Technique
• A greedy algorithm has the following five components:
• Candidate set: the set of items from which a solution is created.
• Selection function: chooses the best candidate to be added to the solution.
• Feasibility function: used to determine whether a candidate can be used to contribute to a solution.
• Objective function: assigns a value to the solution or a partial solution.
• Solution function: indicates when we have discovered a complete solution.
Following are a few algorithms that make use of the greedy approach/technique:
• Knapsack problem
• Kruskal's algorithm
• Prim's algorithm
• Dijkstra's algorithm
• Huffman tree building
• Traveling salesman problem, etc.
Activity Selection Problem



Activity Selection Problem
• The activity selection problem is a mathematical optimization problem.
• Our first illustration is the problem of scheduling a resource among several competing activities.
• A greedy algorithm provides a well-designed and simple method for selecting a maximum-size set of mutually compatible activities.
• Suppose S = {1, 2, ..., n} is the set of n proposed activities. The activities share a resource which can be used by only one activity at a time, e.g., a tennis court, a lecture hall, etc. Each activity i has a start time si and a finish time fi, where si ≤ fi. If selected, activity i takes place during the half-open time interval [si, fi). Activities i and j are compatible if the intervals [si, fi) and [sj, fj) do not overlap (i.e., i and j are compatible if si ≥ fj or sj ≥ fi). The activity selection problem is to choose a maximum-size set of mutually compatible activities.
Activity Selection Problem : Algorithm

GREEDY-ACTIVITY-SELECTOR (s, f)
1. n ← length[s]
2. A ← {1}
3. j ← 1
4. for i ← 2 to n
5.     do if si ≥ fj
6.         then A ← A ∪ {i}
7.               j ← i
8. return A
Activity Selection Problem : Example
Given 10 activities along with their start and finish times:
S = (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10)
si = (1, 2, 3, 4, 7, 8, 9, 9, 11, 12)
fi = (3, 5, 4, 7, 10, 9, 11, 13, 12, 14)
Solution:
• Arrange the activities in increasing order of finish time.
Activity Selection Problem : Example
• Now, schedule A1
• Next schedule A3 as A1 and A3 are non-interfering.
• Next skip A2 as it is interfering.
• Next, schedule A4 as A1 A3 and A4 are non-interfering, then next, schedule A6 as
A1 A3 A4 and A6 are non-interfering.
• Skip A5 as it is interfering.
• Next, schedule A7 as A1 A3 A4 A6 and A7 are non-interfering.
• Next, schedule A9 as A1 A3 A4 A6 A7 and A9 are non-interfering.
• Skip A8 as it is interfering.
• Next, schedule A10 as A1 A3 A4 A6 A7 A9 and A10 are non-interfering.
• Thus the final Activity schedule is:

Optimal Merge Pattern



Optimal Merge Pattern
• Merge a set of sorted files of different lengths into a single sorted file.
• We need to find an optimal solution, where the resultant file is generated in minimum time. When a number of sorted files are given, there are many ways to merge them into a single sorted file.
• The merge can be performed pair-wise, so this type of merging is called a 2-way merge pattern. As different pairings require different amounts of time, in this strategy we want to determine an optimal way of merging many files together. At each step, the two shortest sequences are merged.
• Merging a p-record file and a q-record file requires possibly p + q record moves, the obvious choice being to merge the two smallest files together at each step. Two-way merge patterns can be represented by binary merge trees. Consider a set of n sorted files {f1, f2, f3, ..., fn}. Initially, each element of this set is considered as a single-node binary tree.
Optimal Merge Pattern
Algorithm: TREE (n)
for i := 1 to n – 1 do
    declare new node
    node.leftchild := least (list)
    node.rightchild := least (list)
    node.weight := ((node.leftchild).weight) + ((node.rightchild).weight)
    insert (list, node)
return least (list)
Example :
• Consider the given files f1, f2, f3, f4 and f5 with 20, 30, 10, 5 and 30 elements respectively.
• If merge operations are performed according to the provided sequence, then
M1 = merge f1 and f2 => 20 + 30 = 50
M2 = merge M1 and f3 => 50 + 10 = 60
M3 = merge M2 and f4 => 60 + 5 = 65
M4 = merge M3 and f5 => 65 + 30 = 95
Hence, the total number of operations is 50 + 60 + 65 + 95 = 270.
Optimal Merge Pattern
Sorting the files according to their size in ascending order, we get the sequence f4, f3, f1, f2, f5. Merge operations can be performed on this sequence:
M1 = merge f4 and f3 => 5 + 10 = 15
M2 = merge M1 and f1 => 15 + 20 = 35
M3 = merge M2 and f2 => 35 + 30 = 65
M4 = merge M3 and f5 => 65 + 30 = 95
Therefore, the total number of operations is 15 + 35 + 65 + 95 = 210. Obviously, this is better than the previous one.
Solving the problem with the specified algorithm:
Initial set: f4 (5), f3 (10), f1 (20), f2 (30), f5 (30)
Optimal Merge Pattern
Step 1: merge the two smallest, f4 and f3 => 5 + 10 = 15
Step 2: merge 15 and f1 (20) => 35
Step 3: merge f2 and f5 (30 + 30) => 60
Step 4: merge 35 and 60 => 95
Hence, the greedy solution takes 15 + 35 + 60 + 95 = 205 comparisons, which is better still.
Huffman Trees : Introduction
• Huffman trees are constructed for encoding a given text: each character is associated with some bit sequence. Such a bit sequence is called a code word.
Encoding is of two types:
• Fixed-length encoding: each character is associated with a bit string of some fixed length.
• Variable-length encoding: each character is associated with a code word of possibly different length.
Example:
Character   A    B    C    D    E
Probability 0.40 0.1  0.25 0.2  0.15