Sem 4 AoA

This document contains 12 experiments on various algorithms, conducted from February 2021 to May 2021: insertion sort, selection sort, quick sort, merge sort, finding minimum and maximum values, Dijkstra's algorithm, the fractional knapsack problem, the 0/1 knapsack problem, the longest common subsequence, the N-queen problem, the sum of subsets problem, and the Knuth-Morris-Pratt algorithm. For each experiment, the problem statement, theory, algorithm, example, complexity analysis and code are provided.

Contents

Sr. No.    Title of Experiment                              Date

1 Insertion sort, Selection sort 08.02.2021

2 Quick sort, Merge sort 08.02.2021

3 Finding Minimum and Maximum 22.02.2021

4 Single Source Shortest Path- Dijkstra 01.03.2021

5 Fractional Knapsack 15.03.2021

6 0/1 Knapsack 12.04.2021

7 Longest Common Subsequence (LCS) 26.04.2021

8 N-queen 26.04.2021

9 Sum Of Subsets 26.04.2021

10 Knuth Morris Pratt (KMP) Algorithm 05.04.2021

11 Assignment No 1 03.05.2021

12 Assignment No 2 10.05.2021
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22
Experiment 1(a)

Problem Statement: Analyse Insertion sort.

Theory:
The array is virtually split into a sorted and an unsorted part. Values from
the unsorted part are picked and placed at the correct position in the
sorted part. It is much less efficient on large lists than more advanced
algorithms such as quicksort, heapsort, or merge sort. However, insertion
sort provides several advantages: it is efficient for (quite) small data sets,
much like other quadratic sorting algorithms.

Algorithm:
Insertion-Sort(A)
for j = 2 to A.length
    key = A[j]
    i = j - 1
    while i > 0 and A[i] > key
        A[i+1] = A[i]
        i = i - 1
    A[i+1] = key

Example:
Consider an array of size 5
54321
Sequence after iteration 1 is: 4 5 3 2 1
Sequence after iteration 2 is: 3 4 5 2 1
Sequence after iteration 3 is: 2 3 4 5 1
Sequence after iteration 4 is: 1 2 3 4 5
Sorted array:
12345
Complexity:
i) Time Complexity:
Worst case: O(n^2), when the elements are arranged in reverse order.
Best case: O(n). The best case occurs if the array is already sorted.
Average case: O(n^2)
ii) Space Complexity: is O(1) since the same array is maintained without
need of another copied array.
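
For reference (this short derivation is not part of the original write-up), the O(n^2) worst-case bound follows from counting comparisons: when the input is in reverse order, inserting the element at position i requires i - 1 comparisons, so the total is

1 + 2 + ... + (n - 1) = n(n - 1)/2 = O(n^2).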

Code:
#include <iostream>
#define MAX 25
using namespace std;

void printArr(int arr[], int n)


{
int i;
for (i = 0; i < n; i++)
cout << arr[i] << " ";
}

void insSort(int arr[], int n)


{
int i, key, j;
for (i = 1; i < n; i++)
{
key = arr[i];
j = i - 1;

while (j >= 0 && arr[j] > key)


{
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = key;

cout<<"Iteration "<<i+1<<" is: ";


printArr(arr,n);
cout<<" key: "<<key<<endl;
}
}

int main()
{
int arr[MAX], n;

cout << "Enter size: ";


cin >> n;

for(int i=0;i<n;i++){
cin>>arr[i];
}

insSort(arr, n);

cout << "Array after sorting: \n";


printArr(arr, n);
cout<<endl;

return 0;
}

Output:
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22
Experiment 1(b)

Problem Statement: Analyse Selection sort.

Theory:
The Selection Sort algorithm is an in-place comparison-based algorithm.
The idea behind it is to divide an array into two segments:
The sorted part at the left end.
The remaining unsorted part at the right end.
First, the algorithm finds either the minimum or the maximum element in
our input array. Then, the algorithm moves this element to its correct
position in the array.

Algorithm:
Algorithm: SelectSort (A)
{
for i = 1 to n-1 do
{
min = i;
for j = i + 1 to n do
if A[j] < A[min]
min = j; // j ends
temp = A[i];
A[i] = A[min];
A[min] = temp;

} // i ends

}
Example:
Consider an array of size 5
54321
Sequence for iteration 1 is: 5 4 3 2 1
Sequence for iteration 2 is: 1 4 3 2 5
Sequence for iteration 3 is: 1 2 3 4 5
Sequence for iteration 4 is: 1 2 3 4 5
Sorted array:
12345

Complexity:
i) Time Complexity:
Worst case: O(n^2), when the elements are arranged in reverse order.
Best case: O(n^2). Even when the array is already sorted, all comparisons are still performed (see the note below).
Average case: O(n^2)
ii) Space Complexity: is O(1) since the same array is maintained without
need of another copied array.
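
A note on the best case above (not part of the original write-up): unlike insertion sort, selection sort performs the same number of comparisons regardless of the initial order of the input, because the inner loop always scans the entire unsorted part. The comparison count is therefore always

(n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 = O(n^2),

which is why the best case is also quadratic; only the number of swaps (at most n - 1) is small.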

Code:
#include <iostream>
#define MAX 25
using namespace std;

void swap(int *x, int *y)


{
int temp = *x;
*x = *y;
*y = temp;
}

void printArr(int arr[], int size)


{
int i;
for (i = 0; i < size; i++)
cout << arr[i] << " ";
cout << endl;
}
void selectSort(int arr[], int n)
{
int i, j, min_idx;

for (i = 0; i < n - 1; i++)


{

min_idx = i;

for (j = i + 1; j < n; j++)


if (arr[j] < arr[min_idx])
min_idx = j;

cout<<"Iteration "<<i+1<<" is: ";


printArr(arr,n);
swap(&arr[min_idx], &arr[i]);
}
}

int main()
{
int arr[MAX],n;

cout<<"Enter size: ";


cin>>n;

for(int i=0;i<n;i++){
cin>>arr[i];
}
selectSort(arr, n);
cout << "Array after sorting: \n";
printArr(arr, n);

return 0;
}
Output:
Name: Manuja Vedant
Roll.No: 1902094
Batch: C22

Experiment 2(a)
Problem Statement: Analyse Quick Sort.

Theory:
QuickSort is a Divide and Conquer algorithm. It picks an element as pivot
and partitions the given array around the picked pivot. The key process in
quickSort is partition(). The target of partition() is: given an array and an
element x of the array as the pivot, put x at its correct position in the sorted
array, put all smaller elements (smaller than x) before x, and put all
greater elements (greater than x) after x. All this should be done in linear
time.

Algorithm:
PARTITION (A,p,r)
{
x = A[r]
i = p- 1
for j = p to r - 1
if A[j] <= x
{
i = i + 1;
exchange A[i] with A[j]
}
exchange A[i+1] with A[r]
return (i + 1)
}
QUICKSORT (A,p,r)
{
if p < r
{
q = PARTITION(A,p,r)
QUICKSORT(A,p,q-1)
QUICKSORT(A,q+1,r)
}
}
Example:
Consider array 5 4 3 2 1
low: 0 high: 4 pivot: 1
low: 1 high: 4 pivot: 5
low: 1 high: 3 pivot: 2
low: 2 high: 3 pivot: 4
Sorted array is: 1 2 3 4 5
Time complexity:
Worst-case partitioning
The worst-case behaviour for quick sort occurs when the partitioning routine
produces one sub-problem with (n - 1) elements and one with 0 elements.
Let us assume that this unbalanced partitioning arises in each recursive call.
The partitioning itself costs Θ(n) time, and the recursive call on an array of
size 0 just returns, so T(0) = Θ(1).

The recurrence for the running time is

T(n) = T(n-1) + T(0) + Θ(n)
     = T(n-1) + Θ(n)

If the partitioning is maximally unbalanced at every recursive level of the
algorithm, this recurrence sums to Θ(n^2).
Therefore the worst-case running time of quick sort is Θ(n^2).

Best-case partitioning
In the most even possible split, PARTITION produces two sub-problems,
one of size floor(n/2) and one of size ceil(n/2) - 1.
In this case, quick sort runs much faster.
The recurrence for the running time is then

T(n) = 2T(n/2) + Θ(n),

which solves to T(n) = Θ(n log n).
Code:
#include<iostream>
using namespace std;
void swap(int* x, int* y)
{
int temp;
temp = *x;
*x = *y;
*y = temp;
}
void printArray(int arr[], int size)
{
for (int i = 0; i < size; i++)
cout << arr[i] << " ";
cout << endl;
}
int partition(int arr[], int low, int high){
int pivot = arr[high];
int i = low - 1;
for(int j = low; j<=high-1; j++){
if(arr[j] < pivot){
i++;
swap(&arr[i], &arr[j]);
}
}
swap(&arr[i+1], &arr[high]);
cout<<"low: "<<low<<" high: "<<high<<" pivot: "<<pivot<<endl;
return i+1;
}
void quickSort(int arr[], int low, int high)
{
if (low < high)
{
int pi = partition(arr, low, high);
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}
int main(){
int arr[50],n;
cout<<"Enter size: ";
cin>>n;
for(int i=0;i<n;i++){
cin>>arr[i];
}
quickSort(arr, 0, n-1);
cout<<"Sorted array: ";
printArray(arr,n);
system("pause");
return 0;
}
Output:
Experiment 2(b)

Problem Statement: Analyse Merge Sort.


Theory:
The divide-and-conquer approach The merge sort algorithm closely follows
the divide-and-conquer paradigm. Intuitively, it operates as follows.
Divide: Divide the n-element sequence to be sorted into two subsequences
of n/2 elements each.
Conquer: Sort the two subsequences recursively using merge sort.
Combine: Merge the two sorted subsequences to produce the sorted
answer.
Merging is done by calling an auxiliary procedure MERGE(A, p, q, r), where A is an array
and p, q, and r are indices into the array such that p <= q < r.
It merges the two sorted subarrays A[p..q] and A[q+1..r] to form a single
sorted subarray that replaces the current subarray A[p..r].
Merge sort can be implemented using the following two procedures.
MERGE-SORT(A, p, r) sorts the elements in A[p..r].
If p >= r, the subarray has at most one element and is already sorted.
Otherwise, it simply computes an index q that partitions A[p..r] into two
subarrays: A[p..q], containing ceil(n/2) elements, and A[q+1..r], containing
floor(n/2) elements.

Algorithm:
MERGE(A, p, q, r)
    n1 = q - p + 1
    n2 = r - q
    let L[1..n1+1] and R[1..n2+1] be new arrays
    for i = 1 to n1
        L[i] = A[p+i-1]
    for j = 1 to n2
        R[j] = A[q+j]
    L[n1+1] = infinity
    R[n2+1] = infinity
    i = 1
    j = 1
    for k = p to r
        if L[i] <= R[j]
            A[k] = L[i]
            i = i + 1
        else
            A[k] = R[j]
            j = j + 1
MERGE-SORT(A,p,r)
If p<r
q=(p+r)/2
MERGE-SORT(A,p,q)
MERGE-SORT(A,q+1,r)
MERGE(A,p,q,r)
Example:
Consider array:
38 27 43 3 9 82 10
l: 0 r: 6 m:3
l: 0 r: 3 m:1
l: 0 r: 1 m:0
left array:
38
right array:
27
l: 2 r: 3 m:2
left array:
43
right array:
3
left array:
27 38
right array:
3 43
l: 4 r: 6 m:5
l: 4 r: 5 m:4
left array:
9
right array:
82
left array:
9 82
right array:
10
left array:
3 27 38 43
right array:
9 10 82
Sorted array is
3 9 10 27 38 43 82
Time Complexity:
Assume that the problem size n is a power of 2, so that each divide step
yields two subsequences of size exactly n/2.
Let T(n) be the worst-case running time of merge sort on n numbers.
Merge sort on just one element takes constant time.
For n > 1 elements, break down the running time as follows: the divide step
takes D(n) = Θ(1) time, the two recursive calls contribute 2T(n/2), and the
combine (MERGE) step takes C(n) = Θ(n) time.
Adding the functions D(n) and C(n) to the 2T(n/2) term gives the
recurrence for the worst-case running time T(n) of merge sort:

T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1
Code:
#include <iostream>
using namespace std;
void printArray(int A[], int size)
{
for (int i = 0; i < size; i++)
cout << A[i] << " ";
cout<<endl;
}
void merge(int arr[], int l, int m, int r)
{
int n1 = m - l + 1;
int n2 = r - m;
int L[n1], R[n2];
for (int i = 0; i < n1; i++)
L[i] = arr[l + i];
for (int j = 0; j < n2; j++)
R[j] = arr[m + 1 + j];
cout<<"left array:\n";
printArray(L,sizeof(L)/sizeof(L[0]));
cout<<"right array:\n";
printArray(R,sizeof(R)/sizeof(R[0]));
int i = 0;
int j = 0;
int k = l;
while (i < n1 && j < n2)
{
if (L[i] <= R[j])
{
arr[k] = L[i];
i++;
}
else
{
arr[k] = R[j];
j++;
}
k++;
}
while (i < n1)
{
arr[k] = L[i];
i++;
k++;
}
while (j < n2)
{
arr[k] = R[j];
j++;
k++;
}
}
void mergeSort(int arr[], int l, int r)
{
if (l >= r)
{
return;
}
int m = l + (r - l) / 2;
cout<<"l: "<<l<<" r: "<<r<<" m:"<<m<<endl;
mergeSort(arr, l, m);
mergeSort(arr, m + 1, r);
merge(arr, l, m, r);
}
int main()
{
int arr[50],arr_size;
cout<<"Enter size: ";
cin>>arr_size;
for(int i=0;i<arr_size;i++){
cin>>arr[i];
}
cout << "Array before sorting: \n";
printArray(arr, arr_size);
cout<<endl;
mergeSort(arr, 0, arr_size - 1);
cout << "\nSorted array: \n";
printArray(arr, arr_size);
return 0;
}
Output:
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22

Experiment 3

Problem Statement: Find the minimum and maximum elements in an array
through the divide and conquer approach.

Theory:
Just like the merge sort, we could divide the array into two equal parts and
recursively find the maximum and minimum of those parts. After this,
compare the maximum and minimum of those parts to get the maximum
and minimum of the whole array.
Write a recursive function accepting the array and its start and end index
as parameters.
1. The base cases: if the segment contains only one element, it is both the
maximum and the minimum; if it contains two elements, one comparison
decides the maximum and the minimum.
2. The recursive part: divide the array into two halves, recursively find the
maximum and minimum of the left and right parts, and combine the
results by comparing them.

Return max and min.


Algorithm:

Algorithm MAXMIN(i, j, max, min)


{
// initially i and j point to the start and end positions of the array
if ( i == j )               // only one element case
then max = min = a[i];
else if ( i == j - 1 )      // only 2 elements case
then
{
if ( a[i] < a[j] ) then
{
max = a[j];
min = a[i];
}
else
{
max = a[i];
min = a[j];
}
}
else // n>2 case
{
mid = ( i + j )/2;
MAXMIN(i, mid, max, min);
MAXMIN(mid+1, j, max1, min1);
if( max < max1 )
then max = max1;
if( min > min1 )
then min = min1;
}
}
Example:
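(A small illustrative trace, not part of the original submission.) For the array {5, 2, 9, 1}:

MAXMIN(1, 4): mid = 2
    MAXMIN(1, 2): two elements, so max = 5, min = 2
    MAXMIN(3, 4): two elements, so max = 9, min = 1
    combine: max = max(5, 9) = 9, min = min(2, 1) = 1

For the array used in the code below, {4, 12, 36, 18, 14, 122, 97, 56, 87, 78}, the same process yields minimum 4 and maximum 122. The number of comparisons satisfies T(n) = 2T(n/2) + 2 with T(2) = 1, which gives 3n/2 - 2 comparisons when n is a power of 2.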
Code:
#include <iostream>
using namespace std;
int* findMinMax(int A[], int start, int end)
{
int max;
int min;
if (start == end)
{
max = A[start];
min = A[start];
}
else if (start == end - 1)
{
if (A[start] < A[end])
{
max = A[end];
min = A[start];
}
else
{
max = A[start];
min = A[end];
}
}
else
{
int mid = start + (end - start) / 2;
int *left = findMinMax(A, start, mid);      // {max, min} of the left half
int *right = findMinMax(A, mid + 1, end);   // {max, min} of the right half

max = (left[0] > right[0]) ? left[0] : right[0];
min = (left[1] < right[1]) ? left[1] : right[1];
delete[] left;
delete[] right;
}
int *ans = new int[2];
ans[0] = max;
ans[1] = min;
return ans;
}
int main()
{
const int size = 10;
int arr[size] = {4, 12, 36, 18, 14, 122, 97, 56, 87, 78};
int *ans = findMinMax(arr, 0, size - 1);
cout << "Minimum is: " << ans[0] << "\nMaximum is: " << ans
[1] << "\n\n";
return 0;
}
Output:
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22

Experiment 4

Problem statement: Given a graph G = (V, E) and a source vertex S in the


graph, find the shortest paths from the source vertex to all the vertices in the
given graph using Dijkstra's algorithm.

Theory:
o Dijkstra's algorithm is a greedy algorithm.
o It solves the single-source shortest path problem for a directed graph
G = (V, E) with non-negative edge weights, i.e., w(u, v) >= 0 for each
edge (u, v) in E.
o It finds a shortest path tree for a weighted directed or undirected
graph. This means it finds the shortest paths between nodes in a graph.
o For a given source node in the graph, the algorithm finds the shortest
path between the source node and every other node.
o Dijkstra's algorithm works only for connected graphs.
o It works for directed as well as undirected graphs.
o This algorithm works only for those graphs that do not contain any
negative weight edge.

Limitation of Dijkstra's Algorithm:
Dijkstra's algorithm can only work with graphs that have positive weights.
This is because, during the process, the weights of the edges have to be
added to find the shortest path.
If there is a negative weight in the graph, then the algorithm will not work
properly. Once a node has been marked as "visited", the current path to that
node is marked as the shortest path to reach that node. Negative weights
can alter this if the total weight can be decremented after this step has
occurred.
For a graph,
G = (V, E) where - V is a set of vertices.
- E is a set of edges.

Dijkstra's algorithm keeps two sets of vertices:


S : the set of vertices whose shortest paths from the source have already been determined
V - S : the remaining vertices.

The other data structures needed are:


d : an array of best estimates of the shortest path to each vertex
pi : an array of predecessors for each vertex

The basic mode of operation is:

Initialize d and pi
Set S to empty,
While there are still vertices in V-S,
Sort the vertices in V-S according to the current best estimate of their distance from the source,
Add u, the closest vertex in V-S, to S,
Relax all the vertices still in V-S connected to u

Relaxation Process

The relaxation process updates the costs of all the vertices, v, connected to a vertex, u, if
we could improve the best estimate of the shortest path to v by including (u, v) in the
path to v. The algorithm repeatedly selects the vertex u in V - S with the minimum
shortest-path estimate, inserts u into S, and relaxes all edges leaving u.

(Because it always chooses the "lightest" or "closest" vertex in V - S to insert into set S, it
is called the greedy strategy.)
Algorithm:

1. Create the cost matrix C[][] from the adjacency matrix adj[][]. C[i][j] is the cost of going
   from vertex i to vertex j. If there is no edge between vertices i and j, then C[i][j] is
   infinity.
2. Array visited[] is initialized to zero.
   for(i=0; i<n; i++)
       visited[i]=0;
3. If the vertex 0 is the source vertex, then visited[0] is marked as 1.
4. Create the distance array by storing the cost of vertices from vertex no. 1 to n-1
   from the source vertex 0.
   for(i=1; i<n; i++)
       distance[i]=cost[0][i];
   Initially, the distance of the source vertex is taken as 0, i.e. distance[0]=0;
5. for(i=1; i<n; i++)
   Choose a vertex w such that distance[w] is minimum and visited[w] is 0. Mark
   visited[w] as 1.
6. Recalculate the shortest distance of the remaining vertices from the source.
   Only the vertices not marked as 1 in array visited[] should be considered for
   recalculation of distance, i.e. for each vertex v
   if(visited[v] == 0):
       distance[v] = min(distance[v], distance[w]+cost[w][v])

Example: Find the shortest path from source vertex B to all other vertices in the graph:
Step 1: Assign d[i] to all vertices.
d[source_vertex] = 0
d[other vertices] = infinity

Step 2: Calculate the minimum cost for the neighbors of the selected source.

For each neighbor A, C and D of the selected source vertex (B), calculate the cost associated
to reach them from B. Once this is done, mark the source vertex as visited.

d[i] = Minimum(current cost of neighbor vertex, cost(B) + edge_value(neighbor, B))

For neighbor A: cost = Min(infinity, 0+3) = 3
For neighbor C: cost = Min(infinity, 0+1) = 1
For neighbor D: cost = Min(infinity, 0+6) = 6

Step 3: Select the next vertex with the smallest cost from the unvisited list.
Choose the unvisited vertex with minimum cost (here, it would be C) and consider
all its unvisited neighbors (A, E and D) and calculate the minimum cost for them.
Once this is done, mark C as visited.

For neighbor A: cost = Min(3, 1+2) = 3
For neighbor E: cost = Min(infinity, 1+4) = 5
For neighbor D: cost = Min(6, 1+4) = 5

Step 4: Repeat step 3 for all the remaining unvisited nodes.

Repeat until there are no unvisited nodes left. The final state of the graph would
be like below.
Complexity Analysis:

G[i,j] stores the information about edge (i,j).


Time taken for selecting the vertex i with the smallest dist is O(V).
For each neighbor of i, the time taken for updating dist[j] is O(1), and there will be at
most V neighbors.
Time taken for each iteration of the loop is O(V), and one vertex is deleted from Q.
Thus, the total time complexity becomes O(V^2).

The time complexity of the implementation is O(V^2). If the input graph is represented using
an adjacency list, it can be reduced to O(E log V) with the help of a binary heap (a sketch of
this version is given below).
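
To illustrate the O(E log V) claim, here is a minimal C++ (C++17) sketch, not part of the original submission, of Dijkstra's algorithm on an adjacency list using std::priority_queue as a binary heap; the function name dijkstraHeap, the graph layout and the sample edges are chosen only for this illustration.

#include <iostream>
#include <vector>
#include <queue>
#include <limits>
using namespace std;

// adj[u] holds pairs (v, w) for each edge u -> v of weight w.
vector<int> dijkstraHeap(const vector<vector<pair<int,int>>> &adj, int src)
{
    const int INF = numeric_limits<int>::max();
    int n = adj.size();
    vector<int> dist(n, INF);
    // min-heap ordered by (distance, vertex)
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;              // stale heap entry, skip it
        for (auto [v, w] : adj[u])              // relax all edges leaving u
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

int main()
{
    // small undirected example graph: 0-1 (4), 0-2 (1), 2-1 (2), 1-3 (5)
    vector<vector<pair<int,int>>> adj(4);
    auto addEdge = [&](int u, int v, int w) {
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    };
    addEdge(0, 1, 4); addEdge(0, 2, 1); addEdge(2, 1, 2); addEdge(1, 3, 5);

    vector<int> dist = dijkstraHeap(adj, 0);
    for (int i = 0; i < (int)dist.size(); i++)
        cout << "Vertex " << i << " distance " << dist[i] << endl;   // 0 3 1 8
    return 0;
}

Each edge is relaxed at most once and each relaxation costs O(log V) heap work, which gives the stated O(E log V) bound.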

Dijkstra's Algorithm Applications:


To find the shortest path
In social networking applications
In a telephone network
To find the locations in the map
Code:
#include<stdio.h>
#include<conio.h>
#define INFINITY
999
#define MAX 10

void dijkstra(int G[MAX][MAX], int n, int source)


{
int cost[MAX][MAX], d[MAX], pi[MAX], visited[MAX], count, min_d, nextnode, i,
j;for(i = 0; i < n; i ++)
for(j = 0; j < n; j ++)
if(G[i][j] == 0)
cost[i][j] = INFINITY; //create the cost (weight) matrix
else
cost[i][j] = G[i][j];

for(i = 0; i < n; i ++)


{
d[i] = cost[source][i]; //initialize pi[],d[] and
visited[]pi[i] = source;
visited[i] = 0;
}
d[source] = 0;
visited[source] = 1;
count = 1;
while(count < n-
1)
{
min_d = INFINITY;
for(i = 0; i < n; i ++)
if( d[i] < min_d && !visited[i] )
{
min_d = d[i];
nextnode =
i;
}
//check if a better path exists through nextnode (relaxation)
visited[nextnode] = 1;
for(i = 0; i < n; i ++)
if(!visited[i])
if(min_d + cost[nextnode][i] < d[i])
{
d[i] = min_d +
cost[nextnode][i];pi[i] =
nextnode;
count ++; }
}

//print the path and distance of each node


printf("\nVertex \t Distance \t Path from
source");for(i = 0; i < n; i ++)
if(i != source)
{
printf("\n%c \t %d", (char)(i+65), d[i]);
printf(" \t\t %c",
(char)(i+65));j=i;
do {
j = pi[j];
printf(" <- %c", (char)(j+65));
}while(j != source);
}
}
int main()
{

int G[MAX][MAX], i, j, n;
char start;
printf("Enter no. of vertices:
");scanf("%d", &n);
printf("\nEnter the adjacency matrix:\n");
//adjacency
matrix for(i = 0; i <
n; i ++)
for(j = 0; j < n; j ++)
scanf("%d", &G[i][j]);

fflush(stdin);
printf("\nEnter the starting node: ");
scanf("%c", &start);
dijkstra(G, n, start-
'A');return 0;
}
Output: (for below graph)

Shortest path from B


Name: Manuja Vedant
Roll. No: 1902094
Batch: C22

Experiment 5

Problem statement: To Implement fractional knapsack.

Theory:
Given a set of items, each with a weight and a value, determine a subset of
items to include in a collection so that the total weight is less than or equal
to a given limit and the total value is as large as possible.

The knapsack problem is a combinatorial optimization problem. It appears
as a subproblem in many, more complex mathematical models of real-
world problems. One general approach to difficult problems is to identify
the most restrictive constraint, ignore the others, solve a knapsack
problem, and somehow adjust the solution to satisfy the ignored
constraints.

The fractional knapsack problem, in which an integer solution maximizing a
linear fractional objective function under the constraint of a single linear
inequality is sought, has also been studied. A modification of Dinkelbach's
algorithm [3] has been proposed to exploit the fact that good feasible
solutions are easily obtained for both the fractional knapsack problem and
the ordinary knapsack problem, and an upper bound on the number of
iterations has been derived. In particular, it has been clarified how optimal
solutions depend on the right-hand side of the constraint; a fractional
knapsack problem reduces to an ordinary knapsack problem if the right-hand
side exceeds a certain bound.

Since the main time-consuming step is sorting the items by their value/weight
ratio, sorting dominates the time complexity of the code. The time complexity
will therefore be O(n log n) if we use quick sort (or any O(n log n) sort) for
sorting; a short illustration follows.
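
As an illustration of the O(n log n) sorting step (this sketch is not part of the original submission, which uses a simple O(n^2) bubble sort in the C program below; the struct name Item and the sample data are chosen only for this example), the items can be ordered by value/weight ratio with a library sort, for example in C++:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

struct Item { int weight; int value; };

int main()
{
    vector<Item> items = {{10, 60}, {20, 100}, {30, 120}};   // sample data
    double remaining = 50;                                   // knapsack capacity W

    // sort by value/weight ratio, highest density first: O(n log n)
    sort(items.begin(), items.end(), [](const Item &a, const Item &b) {
        return (double)a.value / a.weight > (double)b.value / b.weight;
    });

    double totalValue = 0;
    for (const Item &it : items) {
        if (it.weight <= remaining) {                 // take the whole item
            remaining -= it.weight;
            totalValue += it.value;
        } else {                                      // take a fraction and stop
            totalValue += it.value * (remaining / it.weight);
            break;
        }
    }
    cout << "Maximum value: " << totalValue << endl;  // prints 240 for this data
    return 0;
}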
Algorithm:

Step 1: Node root represents the initial state of the knapsack, where you
have not selected any package. o TotalValue = 0. o The upper bound of the
root node UpperBound = M * Maximum unit cost.

Step 2: Node root will have child nodes corresponding to the ability to
select the package with the largest unit cost. For each node, you re-
calculate the parameters: o TotalValue = TotalValue (old) + number of
selected packages * value of each package. o M = M (old) - number of
packages selected * weight of each package. o UpperBound = TotalValue +
M (new) * the unit cost of the package to be considered next.

Step 3: In child nodes, you will prioritize branching for the node having the
larger upper bound. The children of this node correspond to the ability of
selecting the next package having large unit cost. For each node, you must
re-calculate the parameters TotalValue, M, UpperBound according to the
formula mentioned in step 2.

Step 4: Repeat Step 3 with the note: for nodes with upper bound is lower or
equal values to the temporary maximum cost of an option found, you do
not need to branch for that node anymore.

Step 5: If all nodes are branched or cut off, the most expensive option is the
one to look for.

Applications:
In many cases of resource allocation along with some constraint, the
problem can be modelled in a way similar to the knapsack problem. A
well-known example is portfolio optimization.

Conclusion: Fractional Knapsack is implemented.


Code:

#include <stdio.h>
typedef struct Item
{
int id;
int weight;
int value;
float density;
}Item;
void fractionalKnapsack(struct Item items[], int n, int W);
int main()
{
int i,j,n,W;
Item temp;
printf("Enter the number of items : \n");
scanf("%d",&n);
Item items[n];
printf("Enter the weight and values of the items : \n");
for(i=0;i<n;i++)
{
scanf("%d",&items[i].weight);
scanf("%d",&items[i].value);
items[i].id=i+1;
items[i].density=0;
}
printf("Enter capacity of knapsack :\n");
scanf("%d",&W);
//compute density = (value/weight)
for(i = 0; i < n; i++) {
items[i].density = ((float)items[i].value)/(items[i].weight);
}
//sort by density in descending order
for(i = 1; i < n; i++) {
for(j = 0; j < n - i; j++) {
if(items[j+1].density > items[j].density) {
temp = items[j+1];
items[j+1] = items[j];
items[j] = temp;
}
}
}
printf("Items are presorted in the decreasing order of value/weight :\n");
for(i=0;i<n;i++)
{
printf("Selected Item: i%d |\tWeight: %d |\tValue: %d\t\n", items[i].id,
items[i].weight,
items[i].value);
}
printf("\n\n");
fractionalKnapsack(items, n, W);
return 0;
}
void fractionalKnapsack( struct Item items[], int n, int W)
{
int i, wt;
float value;
float totalWeight = 0, totalBenefit = 0;
for(i = 0; i < n; i++)
{
if(items[i].weight + totalWeight <= W)
{
totalWeight += items[i].weight;
totalBenefit += items[i].value;
printf("Selected Item: i%d |\tWeight: %d |\tValue: %d |\tTotal Weight: %f
|\tTotal Benefit: %f\n", items[i].id, items[i].weight, items[i].value,
totalWeight, totalBenefit);
}
else
{
wt = (W - totalWeight);
value = wt * ((float)items[i].value) / (items[i].weight);
totalWeight += wt;
totalBenefit += value;
printf("Selected Item: i%d |\tWeight: %d |\tValue: %f |\tTotal Weight: %f
|\tTotal Benefit: %f\n", items[i].id, wt, value, totalWeight, totalBenefit);
break;
}
}
printf("Total Weight: %f\n", totalWeight);
printf("Total Benefit: %f\n", totalBenefit);
}
Output:
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22

Experiment 6

Problem statement: Implementing Knapsack 0/1 Dynamic programming

Theory:
Given two integer arrays representing the weights and profits of items, we need to
find a subset of these items which will give us maximum profit, such that their
cumulative weight does not exceed the knapsack capacity. Each item can only be
selected once, which means we either put an item in the knapsack or skip it.

When analyzing 0/1 Knapsack problem using Dynamic programming, you


can find some noticeable points. The value of the knapsack algorithm
depends on two factors:

1. How many packages are being considered

2. The remaining weight which the knapsack can store.

Therefore, you have two variable quantities.

With dynamic programming, you have useful information:

1. the objective function will depend on two variable quantities


2. the table of options will be a 2-dimensional table.

If we call B[i][j] the maximum possible value obtained by selecting among
packages {1, 2, ..., i} with weight limit j, then:

The maximum value when selecting among n packages with the weight
limit M is B[n][M]. In other words: when there are i packages to
choose from, B[i][j] is the optimal value when the maximum weight of
the knapsack is j.
The optimal weight is always less than or equal to the maximum weight:
B[i][j] <= j.

For example: B[4][10] = 8. It means that in the optimal case, the total
weight of the selected packages is 8, when there are 4 first packages to
choose from (1st to 4th package) and the maximum weight of the knapsack
is 10. It is not necessary that all 4 items are selected.
Algorithm:
KNAPSACK (n, W)
1. for w = 0 to W
2.     do V[0, w] = 0
3. for i = 1 to n
4.     do V[i, 0] = 0
5.     for w = 1 to W
6.         do if (wi <= w and vi + V[i-1, w - wi] > V[i-1, w])
7.             then V[i, w] = vi + V[i-1, w - wi]
8.             else V[i, w] = V[i-1, w]
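
A minimal C++ sketch of the same table-filling recurrence (illustrative only; the actual submission is the Python program further below, and the function name knapsack01 and the sample data here are chosen just for this example):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// B[i][j] = best value using the first i items with weight limit j
int knapsack01(const vector<int> &wt, const vector<int> &val, int W)
{
    int n = wt.size();
    vector<vector<int>> B(n + 1, vector<int>(W + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= W; j++) {
            B[i][j] = B[i - 1][j];                    // skip item i
            if (wt[i - 1] <= j)                       // or take item i if it fits
                B[i][j] = max(B[i][j], val[i - 1] + B[i - 1][j - wt[i - 1]]);
        }
    return B[n][W];
}

int main()
{
    vector<int> wt  = {1, 3, 4, 5};                   // sample weights
    vector<int> val = {1, 4, 5, 7};                   // sample values
    cout << knapsack01(wt, val, 7) << endl;           // prints 9 (items of weight 3 and 4)
    return 0;
}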

Example:
The maximum weight the knapsack can hold is W = 11. There are five
items to choose from. Their weights and values are presented in the
following table:
The [i, j] entry here will be V[i, j], the best value obtainable using the first
i rows of items if the maximum capacity were j. We begin with the
initialization and the first row.

V[i, j] = max {V[i-1, j], vi + V[i-1, j - wi]}

The value of V[3, 7] was computed as follows:

V[3, 7] = max {V[3-1, 7], v3 + V[3-1, 7 - w3]}
        = max {V[2, 7], 18 + V[2, 7 - 5]}
        = max {7, 18 + 6}
        = 24
Finally, the output is:

Complexity:
Time complexity: The running time of the brute force approach is O(2^n).
The running time using dynamic programming is O(n*M), where M is the
capacity of the knapsack.
Space Complexity: The space complexity is O(n*M), where M is the
knapsack capacity.

Applications:
1.Home Energy Management.
2.Cognitive Radio Networks.
3.Resource management in
software.

4.Power allocation management.


5.Selecting adverts garden city radio.
Code:
wt = []
n = int(input("Enter the no of elements: "))
print("Enter the weight of elements: ")
for i in range(n):
    wt.append(int(input()))

val = []
print("Enter the profits: ")
for i in range(n):
    val.append(int(input()))
w = int(input("Enter the maximum weight: "))

def knapsack(W, wt, val, n):
    K = [[0 for x in range(W + 1)] for x in range(n + 1)]
    print("Weight limit\t|", end="")
    for j in range(W + 1):
        print(j, "\t|", end="")
    print()

    for i in range(n + 1):
        if i > 0:
            print("w=", wt[i - 1], " v=", val[i - 1], "\t|", end="")
        for w in range(W + 1):
            if i == 0 or w == 0:
                K[i][w] = 0
            elif wt[i - 1] <= w:
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]], K[i - 1][w])
            else:
                K[i][w] = K[i - 1][w]
            if i > 0:
                print(K[i][w], "\t|", end=" ")
        print()

    return K[n][W]

print("The Maximum profit is: ", knapsack(w, wt, val, n))

Output:
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22

Experiment 7

Aim: Implementing longest common subsequence algorithm

Theory:
The longest common subsequence (LCS) is defined as the longest
subsequence that is common to all the given sequences, provided that the
elements of the subsequence are not required to occupy consecutive
positions within the original sequences.
If S1 and S2 are the two given sequences, then Z is a common
subsequence of S1 and S2 if Z is a subsequence of both S1 and S2.
Furthermore, Z must be a strictly increasing sequence of the indices of
both S1 and S2.
In a strictly increasing sequence, the indices of the elements chosen from
the original sequences must be in ascending order in Z.
If S1 = {B, C, D, A, A, C, D}
S2 = {A, C, D, B, A, C}
then the common subsequences are {B, C}, {C, D, A, C}, {D, A, C}, {A, A, C}, {A, C},
{C, D}, ...
Among these subsequences, {C, D, A, C} is the longest common
subsequence. We are going to find this longest common subsequence
using dynamic programming.
Algorithm:

Let X and Y be the two given sequences.

Initialize a table LCS of dimension X.length * Y.length.
X.label = X
Y.label = Y
LCS[0][] = 0
LCS[][0] = 0
Start from LCS[1][1].
Compare X[i] and Y[j]:
    If X[i] = Y[j]
        LCS[i][j] = 1 + LCS[i-1][j-1]
        Point an arrow to LCS[i][j]
    Else
        LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1])
        Point an arrow to max(LCS[i-1][j], LCS[i][j-1])
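
The same table-filling logic as a compact C++ sketch (illustrative only; the function name lcsLength is chosen for this example, and the full Python implementation used for this experiment appears further below). The test strings are the S1 and S2 from the theory section, whose LCS has length 4:

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// returns the length of the longest common subsequence of X and Y
int lcsLength(const string &X, const string &Y)
{
    int n = X.size(), m = Y.size();
    vector<vector<int>> LCS(n + 1, vector<int>(m + 1, 0));   // row 0 and column 0 stay 0
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            if (X[i - 1] == Y[j - 1])
                LCS[i][j] = 1 + LCS[i - 1][j - 1];           // characters match
            else
                LCS[i][j] = max(LCS[i - 1][j], LCS[i][j - 1]);
    return LCS[n][m];
}

int main()
{
    cout << lcsLength("BCDAACD", "ACDBAC") << endl;   // prints 4 (e.g. the subsequence CDAC)
    return 0;
}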

Example:
Longest Common Subsequence initialize table

Fill each cell of the table using the following logic.


If the character corresponding to the current row and current column
are matching, then fill the current cell by adding one to the diagonal
element.Point an arrow to the diagonal cell.
Else take the maximum value from the previous column and previous
row element for filling the current cell. Point an arrow to the cell with the
maximum value. If they are equal, point to any of them.
Longest Common Subsequence fill the values

Step 2 is repeated until the table is filled.


The bottom right corner is the length of the LCS

In order to find the longest common subsequence, start from the last
element and follow the direction of the arrows. The elements corresponding
to the diagonal-arrow symbol form the longest common subsequence.
Longest Common Subsequence - create a path
Create a path according to the arrows.

Thus, the longest common subsequence for the illustrated example is CD.

Time Complexity
Worst case time complexity: O(n*m)
Average case time complexity: O(n*m)
Best case time complexity: O(n*m)

Space complexity: O(n*m)

Longest Common Subsequence Applications:


1. In compressing genome resequencing data
2. To authenticate users within their mobile phone through
in-airsignatures
Code:
str_one = input('Enter 1st string : ')
str_two = input('Enter 2nd string : ')
len_one = len(str_one)
len_two = len(str_two)

# make the matrix according to the lengths of the strings
M = [[0 for c in range(len_two + 1)] for r in range(len_one + 1)]
for i in range(len_one + 1):
    for j in range(len_two + 1):
        if i == 0 or j == 0:
            M[i][j] = 0
        elif str_one[i - 1] == str_two[j - 1]:
            M[i][j] = M[i - 1][j - 1] + 1
        else:
            M[i][j] = max(M[i - 1][j], M[i][j - 1])

index = M[len_one][len_two]
res = [""] * (index + 1)
res[index] = ""
i = len_one
j = len_two
while i > 0 and j > 0:
    if str_one[i - 1] == str_two[j - 1]:
        res[index - 1] = str_one[i - 1]
        i -= 1
        j -= 1
        index -= 1
    elif M[i - 1][j] > M[i][j - 1]:
        i -= 1
    else:
        j -= 1

print('\nMatrix Representation \u2193')
for k in range(len_two + 2):
    if k == 0:
        print(' ', end=' ')
    elif k == 1:
        print('n', end=' ')
    else:
        print(str_two[k - 2], end=' ')
print()
for i in range(len(M)):
    if i < 1:
        print('n', end=' ')
    elif i < len_one + 1:
        print(str_one[i - 1], end=' ')
    for j in range(len(M[i])):
        print(M[i][j], end=' ')
    print()

print("\nLCS of " + str_one + " and " + str_two + " is : ", end='')
for i in res:
    print(i, end='')
print()
Output:
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22

Experiment 8

Aim: Implementing n-queens algorithm

Theory:
The N-Queens problem is one of the most common examples of backtracking.
This problem is to find an arrangement of N queens on a chess board, such
that no queen can attack any other queen on the board.
A chess queen can attack in the horizontal, vertical and diagonal directions.
A binary matrix is used to display the positions of the N queens, where no
queen can attack another queen. So, we start by placing the first queen
anywhere arbitrarily and then place the next queen in any of the safe
places.
We continue this process until the number of unplaced queens becomes
zero (a solution is found) or no safe place is left. If no safe place is left,
then we change the position of the previously placed queen.

Algorithm:
Place (k, i)
{
    for j = 1 to k - 1
        do if (x[j] = i)                        // same column
            or (Abs(x[j] - i) = Abs(j - k))     // same diagonal
            then return false;
    return true;
}
Place(k, i) returns true if a queen can be placed in the kth row and ith
column, otherwise it returns false.

x[] is a global array whose first k - 1 values have been set. Abs(r)
returns the absolute value of r.

N - Queens (k, n)
{
    for i = 1 to n
        do if Place (k, i) then
        {
            x[k] = i;
            if (k == n) then
                write (x[1...n]);
            else
                N - Queens (k + 1, n);
        }
}
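
As an illustration of the Place(k, i) test above, here is a small C++ sketch (not part of the original submission; the Python program below uses a full N x N matrix instead) that uses the one-dimensional representation x[1..N], where x[k] is the column of the queen in row k:

#include <iostream>
#include <cstdlib>    // abs
using namespace std;

const int N = 4;
int x[N + 1];         // x[k] = column of the queen placed in row k

// can a queen be placed in row k, column i?
bool place(int k, int i)
{
    for (int j = 1; j <= k - 1; j++)
        if (x[j] == i || abs(x[j] - i) == abs(j - k))   // same column or same diagonal
            return false;
    return true;
}

void nQueens(int k)
{
    for (int i = 1; i <= N; i++)
        if (place(k, i)) {
            x[k] = i;
            if (k == N) {                               // all N queens placed: print a solution
                for (int r = 1; r <= N; r++) cout << x[r] << " ";
                cout << endl;
            } else
                nQueens(k + 1);
        }
}

int main()
{
    nQueens(1);       // for N = 4 this prints the solutions 2 4 1 3 and 3 1 4 2
    return 0;
}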
Example:
The implicit tree for 4 - queen problem for a solution (2, 4, 1, 3) is as
follows:

Time Complexity:
The recurrence of the N-Queen problem is defined as
T(n) = n*T(n-1) + n^2
Thus, the solution is O(n!).
Space Complexity:
For this algorithm it is O(N).
The algorithm uses an auxiliary array of length N to store just N positions.
Code:
n=int(input("Enter number of

Queens:"))matrix = [[0]*n for _ in

range(n)]

def is_empty(i, j):


#checking if there is a queen in row or
columnfor k in range(0,n):
if matrix[i][k]==1 or
matrix[k][j]==1:return True
#checking
diagonalsfor k in
range(0,n):
for l in range(0,n):
if (k+l==i+j) or (k-
l==i-j):if
matrix[k][l]==1:
return
Truereturn False

def
n_queen(x):
if x==0:
return True
for i in range(0,n):
for j in
range(0,n):
if (not(is_empty(i,j))) and
(matrix[i][j]!=1):matrix[i][j] = 1
if n_queen(x-
1)==True:return
True
matrix[i][j] =

0return False

n_queen(n)
for i in
matrix:
for j in i:
print(j,end=' ')
print()
Output:
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22

Experiment 9

Aim: To implement sum of subsets problem.

Theory:
Subset sum problem is to find subset of elements that are selected from a
given set whose sum adds up to a given number K. We are considering the
set contains non-negative values. It is assumed that the input set is unique
(no duplicates are presented).
The Subset-Sum Problem can be solved by using the backtracking
approach. In this approach, the implicit tree is a binary tree. The root of the
tree is selected in such a way that it represents that no decision has yet been
taken on any input.

Algorithm:
subsetSum(set, subset, n, subSize, total, node, sum)
Begin
   if total = sum, then
      display the subset
      // go for finding the next subset
      subsetSum(set, subset, n, subSize-1, total-set[node], node+1, sum)
      return
   else
      for all elements i in the set, do
         subset[subSize] := set[i]
         subsetSum(set, subset, n, subSize+1, total+set[i], i+1, sum)
   done
End
Example:

1. In the above tree, a node represents function call and a branch


represents candidate element. The root node contains 4 children.
In other words, root considers every element of the set as different
branch.
2. The next level sub-trees correspond to the subsets that includes the
parent node. The branches at each level represent tuple element to
be considered.
3. For example, if we are at level 1, tuple vector [1] can take any value
of four branches generated. If we are at level 2 of left most
node, tuple vector [2] can take any value of three branches
generated, and so on.

4. For example, the left most child of root generates all those subsets
that include w[1]. Similarly, the second child of root generates all
those subsets that includes w[2] and excludes w[1].
5. As we go down along depth of tree, we add elements so far, and if
the added sum is satisfying explicit constraints, we will continue to
generate child nodes further.
6. Whenever the constraints are not met, we stop further generation of
sub-trees of that node, and backtrack to previous node to explore
the nodes not yet explored. In many scenarios, it saves
considerable amount of processing time.

Time complexity:
In the state space tree, at level i, the tree has 2^i nodes. So, given n items,
the total number of nodes in the tree is

T(n) = 1 + 2 + 2^2 + 2^3 + ... + 2^n = 2^(n+1) - 1,

so the worst-case time complexity is O(2^n).

Space complexity: O(1) auxiliary space (apart from the recursion stack).
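
A small illustrative instance (not from the original write-up): for the set {3, 5, 6, 7} and sum K = 15, the only solution is {3, 5, 7}, since 3 + 5 + 7 = 15. The branch that includes 3, 5 and 6 reaches weight 14; including 7 as well would give 21 > 15, and excluding it leaves 14 != 15, so that branch is abandoned and the search backtracks to exclude 6 and include 7 instead.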

Code:
#include<stdio.h>
#include<conio.h>
#define TRUE 1
#define FALSE 0
int inc[50],w[50],sum,n;
int promising(int i,int wt,int total) {
return(((wt+total)>=sum)&&((wt==sum)||(wt+w[i+1]<=sum)));
}
void main() {
int i,j,n,temp,total=0;
printf("Enter number of elements in the set:");
scanf("%d",&n);
printf("Enter %d numbers of the set:\n",n);
for (i=0;i<n;i++) {
scanf("%d",&w[i]);
total+=w[i];
}
printf("Input the sum value to create sub set:");
scanf("%d",&sum);
//sort in ascending order
for (i=0;i<=n;i++)
for (j=0;j<n-1;j++)
if(w[j]>w[j+1]) {
temp=w[j];
w[j]=w[j+1];
w[j+1]=temp;
}
printf("\nThe given %d numbers in ascending order: ",n);
for (i=0;i<n;i++)
printf("%d,",w[i]);
if((total<sum))
printf("\nSubset cannot be made."); else {
for (i=0;i<n;i++)
inc[i]=0;
printf("\nThe solutions is/are:\n");
sumOfSubset(-1,0,total);
}
return 0;
}
void sumOfSubset(int i,int wt,int total) {
int j;
if(promising(i,wt,total)) {
if(wt==sum) {
printf("\n{");
for (j=0;j<=i;j++)
if(inc[j])
printf("%d,",w[j]);
printf("}\n");
} else {
inc[i+1]=TRUE;
sumOfSubset(i+1,wt+w[i+1],total-w[i+1]);
inc[i+1]=FALSE;
sumOfSubset(i+1,wt,total-w[i+1]);
}
}}
Output:
Name: Manuja Vedant
Roll. No: 1902094
Batch: C22

Experiment 10

Aim: To implement string matching using KMP algorithm.

Theory:
Knuth Morris Pratt (KMP) is an algorithm which checks the characters
from left to right. When a pattern has a sub-pattern that appears more
than once in the pattern, it uses that property to improve the time
complexity, even in the worst case. The basic idea behind the KMP
algorithm is: whenever we detect a mismatch (after some matches), we
already know some of the characters in the text of the next window. We
take advantage of this information to avoid re-matching the characters that
we know will anyway match.

Algorithm:
KMP:

Begin
   n := size of text
   m := size of pattern
   call findPrefix(pattern, m, prefArray)
   while i < n, do
      if text[i] = pattern[j], then
         increase i and j by 1
      if j = m, then
         print the location (i-j) as there is the pattern
         j := prefArray[j-1]
      else if i < n and pattern[j] is not equal to text[i], then
         if j is not 0, then
            j := prefArray[j-1]
         else
            increase i by 1
   done
End

LPS:

Begin
   length := 0
   prefArray[0] := 0
   for i := 1 to m - 1, do
      if pattern[i] = pattern[length], then
         increase length by 1
         prefArray[i] := length
      else
         if length is not 0, then
            length := prefArray[length - 1]
            decrease i by 1
         else
            prefArray[i] := 0
   done
End
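
For reference (worked out here, not in the original write-up), the prefix (LPS) array for the example pattern used below, P = a c b a c d a b c y, is:

Index:   0 1 2 3 4 5 6 7 8 9
Pattern: a c b a c d a b c y
LPS:     0 0 0 1 2 0 1 0 0 0

For instance, LPS[4] = 2 because the prefix "ac" is also a suffix of "acbac"; this is exactly the value the algorithm falls back to on a mismatch. (The C program below stores a shifted variant of this table, with ptr[0] = -1.)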

Example:
Text S = a c b a c x a b c d a b x a b c d a c b a c d a b c
Pattern P = a c b a c d a b c y
o Step 1:

Index 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Text S= a c b a c x a b c d a b x a b c d a c b a c d a b c
Pattern P= a c b a c d a b c y
o Step 2: When a mismatched character is found, the KMP algorithm
examines the substring before the mismatched character for matching
suffixes and prefixes.
The proper prefixes are a, ac, acb, acba.
The proper suffixes are c, ac, bac, cbac.
The KMP algorithm finds the suffix and prefix that are common. Here, "ac"
is the longest common substring such that it is both a suffix and a prefix.
o Step 3: The prefix "ac" of the pattern has already been matched in the
text, so those characters do not need to be compared again.

Index 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Text S= a c b a c x a b c d a b x a b c d a c b a c d a b c
Pattern P= a c b a c d a b c y

The pattern is shifted so that this prefix lines up with the matching suffix in
the text. Therefore, we can directly start searching from there; hence, index 3
is the current starting position now.
o Step 4: The first characters of the pattern already match the text at the
new position in the given string, so we can directly skip those characters
and move ahead with the comparison.

o Step 5: At the next mismatch, we again find the longest common
substring of the portion before the mismatched character, which is
common to both suffix and prefix. Here, there is nothing common
between suffix and prefix, so the pattern is shifted past the portion
matched so far.

o Step 6: The comparison restarts from the beginning of the pattern,
and the algorithm now increments the position.


Index 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Text S= a c b a c x a b c d a b x a b c d a c b a c d a b c
Pattern P= a c b a c d a b c y
o Step 7:
Index 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Text S= a c b a c x a b c d a b x a b c d a c b a c d a b c
Pattern P= a c b a c d a b c y
o Step 8: By the same rule, find the substring before the unmatched
character and find the longest common substring that is common to both
suffix and prefix. There is no such common substring here.

Time complexity and Space complexity:

The time complexity of the KMP algorithm is O(n + m), where n is the length
of the text and m is the length of the pattern; the prefix (LPS) array takes
O(m) auxiliary space.

Applications:
DNA pattern matching
Searching a particular keyword in any paragraph

Code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
void KMP(char *str, char *word, int *ptr)
{
int i = 0, j = 0;
while ((i + j) < strlen(str))
{
if (word[j] == str[i + j])
{
if (j == (strlen(word) - 1))
{
printf("%s located at the index %d\n", word, i);
}
j = j + 1;
}
else
{
i = i + j - ptr[j];
if (ptr[j] > -1)
{
j = ptr[j];
}
else
{
j = 0;
}
}
}
}
void findOverlap(char *word, int *ptr)
{
int i = 2, j = 0, len = strlen(word);
ptr[0] = -1;
ptr[1] = 0;
while (i < len)
{
if (word[i - 1] == word[j])
{
j = j + 1;
ptr[i] = j;
i = i + 1;
}
else if (j > 0)
{
j = ptr[j];
}
else
{
ptr[i] = 0;
i = i + 1;
}
}
}
int main()
{
char word[256], str[1024];
int *ptr, i;
printf("Enter Text :--> ");
fgets(str, 1024, stdin);
str[strlen(str) - 1] = '\0';
printf("Enter Pattern :--> ");
fgets(word, 256, stdin);
word[strlen(word) - 1] = '\0';
ptr = (int *)calloc(1, sizeof(int) * (strlen(word)));
findOverlap(word, ptr);
KMP(str, word, ptr);
return 0;
}

Output:
