Lab Manual: Analysis of Algorithms (AOA)
Experiment No. 1
AIM: Implement the insertion sort algorithm.
THEORY:
Insertion sort is an in-place sorting algorithm: it uses no auxiliary data structures while
sorting. It is inspired by the way in which we sort playing cards.
Consider that the following elements are to be sorted in ascending order-
6, 2, 11, 7, 5
Firstly, it selects the second element (2). Since 2 < 6, it shifts 6 towards the right and places 2 before it. The list becomes 2, 6, 11, 7, 5.
Secondly, it selects the third element (11). Since 11 is greater than every element before it, the list remains 2, 6, 11, 7, 5.
Thirdly, it selects the fourth element (7). Since 7 < 11, it shifts 11 towards the right and places 7 before it. The list becomes 2, 6, 7, 11, 5.
Fourthly,
• It selects the fifth element (5).
• It checks whether it is smaller than any of the elements before it.
• Since 5 < (6, 7, 11), it shifts (6, 7, 11) towards the right and places 5 before them.
• The resulting list is 2, 5, 6, 7, 11.
INSERTION-SORT(A)
for j = 2 to A.length
    key ← A[j]
    // Insert A[j] into the sorted sequence A[1 .. j-1]
    i ← j - 1
    while i > 0 and A[i] > key
        A[i+1] ← A[i]
        i ← i - 1
    A[i+1] ← key
Step-01: For i = 1, key = 2. Since 2 < 6, 6 shifts right: 2, 6, 11, 7, 5.
Step-02: For i = 2, key = 11. No element shifts: 2, 6, 11, 7, 5.
Step-03: For i = 3, key = 7. Since 7 < 11, 11 shifts right: 2, 6, 7, 11, 5.
Step-04: For i = 4, key = 5. Since 5 < 6, 7 and 11, they all shift right: 2, 5, 6, 7, 11.
The loop gets terminated as 'i' becomes 5. The state of the array after the loops are finished is 2, 5, 6, 7, 11.
Time Complexity:
Best Case: O(n)
Average Case: O(n²)
Worst Case: O(n²)
Space Complexity:
• Insertion sort works in place, so the space complexity works out to be O(1).
SAMPLE CODE:
OUTPUT:
CONCLUSION:
This way one can implement insertion sort; it works best with a small number of elements.
Experiment No. 2
AIM: Implement iterative binary search algorithm using divide & conquer method.
THEORY:
Searching algorithms are a family of algorithms used to locate a given element in a collection.
The searching of an element in the given array may be carried out in the following two ways: linear search or binary search.
Example:
Let x = 4 be the element to be searched.
1. Set two pointers, low and high, at the lowest and the highest positions respectively.
2. Find the middle element mid of the array, i.e. arr[(low + high)/2] = 6.
3. If x == mid, the search is over.
4. If x > mid, compare x with the middle element of the elements on the right side of mid. This is done by setting low = mid + 1.
5. Else, compare x with the middle element of the elements on the left side of mid. This is done by setting high = mid - 1.
6. Repeat steps 2-5 until x is found or the interval becomes empty.
x = 4 is found.
Algorithm:
Time Complexities:
Best case: O(1), when the element is found at the middle on the first comparison.
Average and worst case: O(log n), since the search interval is halved at every step.
Space Complexity
• The space complexity of the binary search is O(1).
SAMPLE CODE:
OUTPUT:
CONCLUSION:
This way one can implement binary search using the divide and conquer strategy.
Experiment No. 3
AIM: Find minimum and maximum number from the list using Divide & Conquer
methodology. Compare performance with traditional way of finding minimum and maximum.
THEORY:
Max-Min problem is to find a maximum and minimum element from the given array.
We can effectively solve it using divide and conquer approach.
In the traditional approach, the maximum and minimum element can be found by
comparing each element and updating Max and Min values as and when required.
This approach is simple but it does (n – 1) comparisons for finding max and the same
number of comparisons for finding the min. It results in a total of 2(n – 1)
comparisons. Using a divide and conquer approach, we can reduce the number of
comparisons.
Divide and conquer approach for the Max-Min problem works in three stages.
▪ If a1 is the only element in the array, a1 is both the maximum and the minimum.
▪ If the array contains only two elements a1 and a2, then a single comparison between the two elements can decide the minimum and maximum of them.
▪ If there are more than two elements, the algorithm divides the array from the middle and creates two subproblems. Both subproblems are treated as independent problems and the same recursive process is applied to them. This division continues until the subproblem size becomes one or two.
After solving two subproblems, their minimum and maximum numbers are compared
to build the solution of the large problem. This process continues in a bottom-up
fashion to build the solution of a parent problem.
Algorithm:
Time Complexities:
The number of comparisons is the same in the best, worst and average case. For n = 2, MAXMIN does one comparison; for larger n, it does two comparisons to determine the minimum and maximum after the two subproblems are solved. The recurrence for the number of comparisons is:
T(n) = 1, if n = 2
T(n) = 2T(n/2) + 2, if n > 2 … (1)
⇒ T(n) = 2(2T(n/4) + 2) + 2
       = 4T(n/4) + 4 + 2 … (2)
By substituting n/4 for n in Equation (1),
T(n/4) = 2T(n/8) + 2
so T(n) = 4[2T(n/8) + 2] + 4 + 2
        = 8T(n/8) + 8 + 4 + 2
        = 2^3 T(n/2^3) + 2^3 + 2^2 + 2^1
        .
        .
After k – 1 iterations, taking n = 2^k,
T(n) = 2^(k-1) T(n/2^(k-1)) + 2^(k-1) + … + 2^2 + 2^1
     = (n/2) T(2) + (2^k – 2)
     = n/2 + n – 2
     = (3n/2) – 2
It can be observed that the divide and conquer approach does only [(3n/2) – 2] comparisons
compared to 2(n – 1) comparisons of the conventional approach.
For any random pattern, this algorithm takes the same number of comparisons.
SAMPLE CODE:
OUTPUT:
CONCLUSION:
This way one can find minimum and maximum from given list of numbers using divide and
conquer strategy.
Experiment No. 4
AIM: Solve the fractional knapsack problem using the greedy method.
THEORY:
A problem that requires either a minimum or a maximum result is known as an
optimization problem.
The greedy method is one of the strategies used for solving optimization problems.
A greedy algorithm, as the name suggests, always makes the choice that seems to be
the best at that moment.
1. The feasible solution: A subset of given inputs that satisfies all specified
constraints of a problem is known as a “feasible solution”.
2. Optimal solution: The feasible solution that achieves the desired extremum is
called an “optimal solution”. In other words, the feasible solution that either
minimizes or maximizes the objective function specified in a problem is known as
an “optimal solution”.
You are given the following-
• A knapsack (kind of shoulder bag) with limited weight capacity.
• A set of items, each having some weight and value.
Fractional knapsack problem is solved using greedy method in the following steps-
1. Read number of items to variable n
2. Read capacity of knapsack to variable m
3. Initialize remaining capacity of the knapsack as u=m (initially remaining capacity is
full capacity)
4. Initialize solution array x[] with value 0 in indices 0 to n-1
5. Read the weights and profits of each item into two separate arrays w[] and p[]
6. Find the Pi/Wi ratio of each item and store in array ratio[]
7. Sort the ratio[] array in its descending order. Rearrange the corresponding values in
other arrays p[] and w[] along with it.
8. Display the sorted table (Print the arrays p[] ,w[] and ratio[])
9. Calculate Solution array x[]
10. For each weight in w[] that is less than or equal to the value of u (i.e. the remaining
capacity), set x[i] = 1 and reduce the value of u by the weight value w[i] (i.e. u = u - w[i]).
11. If w[i] becomes greater than u, then simply break out of the loop and check whether i
is less than or equal to n.
12. If i <= n, then calculate u/w[i], store it in a variable xr, and set x[i] = xr (the
fraction of item i that fits in the remaining capacity).
13. Display the solution array x[]
14. Calculate total profit and total weight by simply multiplying and accumulating
x[i]*p[i] and x[i]*w[i]
15. Display the Total profit and Total weight
Time Complexities:
• The main time-consuming step is sorting all items in decreasing order of their
value/weight ratio.
• If the items are already arranged in the required order, the while loop takes O(n) time.
• The average time complexity of Quick Sort is O(n log n).
• Therefore, the total time taken including the sort is O(n log n).
Algorithm:
GreedyKnapsack(m, n)
{
    For j ← 1 to n do
        X[j] ← 0
    profit ← 0    // Total profit of items filled in the knapsack
    weight ← 0    // Total weight of items packed in the knapsack
    j ← 1
    While (j <= n and weight < m) do
    {
        if (weight + w[j] <= m) then
            X[j] ← 1
            weight ← weight + w[j]
        else
            X[j] ← (m - weight) / w[j]
            weight ← m
        profit ← profit + p[j] * X[j]
        j++
    }
}
SAMPLE CODE:
#include<stdio.h>
int main()
{
    float weight[50], profit[50], ratio[50], Totalvalue = 0, temp, capacity;
    int n, i, j;
    printf("Enter the number of items :");
    scanf("%d", &n);
    for (i = 0; i < n; i++)
    {
        printf("Enter Weight and Profit for item[%d] :\n", i);
        scanf("%f %f", &weight[i], &profit[i]);
    }
    printf("Enter the capacity of knapsack :\n");
    scanf("%f", &capacity);
    /* Profit/weight ratio of every item. */
    for (i = 0; i < n; i++)
        ratio[i] = profit[i] / weight[i];
    /* Sort items in descending order of ratio. */
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (ratio[i] < ratio[j])
            {
                temp = ratio[j]; ratio[j] = ratio[i]; ratio[i] = temp;
                temp = weight[j]; weight[j] = weight[i]; weight[i] = temp;
                temp = profit[j]; profit[j] = profit[i]; profit[i] = temp;
            }
    printf("Knapsack problems using Greedy Algorithm:\n");
    for (i = 0; i < n; i++)
    {
        if (weight[i] > capacity)
            break;
        else
        {
            Totalvalue = Totalvalue + profit[i];
            capacity = capacity - weight[i];
        }
    }
    /* Take a fraction of the next item to fill the remaining capacity. */
    if (i < n)
        Totalvalue = Totalvalue + (ratio[i] * capacity);
    printf("\nThe maximum value is :%f\n", Totalvalue);
    return 0;
}
OUTPUT:
Enter the number of items :4
Enter Weight and Profit for item[0] :
2 12
Enter Weight and Profit for item[1] :
1 10
Enter Weight and Profit for item[2] :
3 20
Enter Weight and Profit for item[3] :
2 15
Enter the capacity of knapsack :
5
Knapsack problems using Greedy Algorithm:
The maximum value is :38.333332
Experiment No. 5
AIM: Implement Kruskal's algorithm to find a minimum spanning tree using the greedy method.
THEORY:
Spanning tree
• A spanning tree can be defined as a subgraph of an undirected connected graph.
• A spanning tree consists of (n-1) edges, where 'n' is the number of vertices
(or nodes).
• A complete undirected graph can have n^(n-2) spanning trees, where n is the
number of vertices in the graph.
• There can be more than one spanning tree of a connected graph G.
• A spanning tree does not have any cycles or loops.
• A spanning tree is minimally connected, so removing one edge from the tree
will make the graph disconnected.
Minimum Spanning tree
A spanning tree has n-1 edges, where 'n' is the number of nodes.
A minimum spanning tree can be defined as the spanning tree in which the sum of
the weights of the edge is minimum. The weight of the spanning tree is the sum of
the weights given to the edges of the spanning tree. In the real world, this weight
can be considered as the distance, traffic load, congestion, or any random value.
So, the minimum spanning tree that is selected from the above spanning trees
for the given weighted graph is –
A minimum spanning tree can be found from a weighted graph by using the
algorithms given below -
• Prim's Algorithm
• Kruskal's Algorithm
In this experiment, we will learn to implement Kruskal's Algorithm using greedy
method.
Kruskal's Algorithm
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge and check whether it forms a cycle with the spanning
tree formed so far. If it does not, include the edge; otherwise, discard it.
3. Repeat step 2 until the tree contains (V - 1) edges.
Example:
Given graph:
We choose the next shortest edge that does not create a cycle, i.e. 0-3.
The next step is again to choose the shortest edge so that it doesn't form a cycle; this is 0-1.
We have now covered all the vertices, and we have a spanning tree with minimum cost.
SAMPLE CODE:
OUTPUT:
TIME COMPLEXITY:
O(E log E) or, equivalently, O(E log V) is the time complexity of Kruskal's algorithm. Here E
indicates the number of edges, and V indicates the number of vertices.
Auxiliary Space: O(V + E), where V is the number of vertices and E is the number of
edges in the graph
CONCLUSION:
By using Kruskal's algorithm, one can find a minimum spanning tree using the greedy
method.
Experiment No. 6
AIM: Implement the Bellman Ford algorithm to find single-source shortest paths in a weighted graph.
THEORY:
Bellman Ford algorithm works by overestimating the length of the path from the starting
vertex to all other vertices. Then it iteratively relaxes those estimates by finding new paths
that are shorter than the previously overestimated paths.
The Bellman Ford algorithm is a single-source shortest path algorithm. It is used to
find the shortest distance from a single vertex to all the other vertices of a weighted graph.
In contrast to Dijkstra's algorithm, the Bellman Ford algorithm guarantees the correct answer
even if the weighted graph contains negative weight values.
Relaxing means:
If (d(u) + c(u , v) < d(v))
d(v) = d(u) + c(u , v)
First, initialize the distance from the source to all vertices as infinite and the distance to the
source itself as 0. Create an array distance[] of size |V| with all values as infinite except
distance[src], where src is the source vertex. Then relax every edge |V| - 1 times:
for i ← 1 to |V| - 1
    for each edge (u, v) with cost c(u, v)
        tempDistance ← distance[u] + c(u, v)
        if tempDistance < distance[v]
            distance[v] ← tempDistance
return distance[]
SAMPLE CODE:
OUTPUT:
TIME COMPLEXITY:
O(V * E), where V is the number of vertices in the graph and E is the number of
edges in the graph
CONCLUSION:
By using the Bellman Ford Algorithm, one can find the minimum cost distance
from the source vertex to all other vertices.
Experiment No. 7
AIM: Implement the Floyd Warshall algorithm for the All Pairs Shortest Path Problem using dynamic programming.
THEORY:
• Floyd Warshall Algorithm is an example of the dynamic programming approach.
• It computes the shortest path between every pair of vertices of the given graph.
• It is used to solve the All Pairs Shortest Path Problem.
• It is best suited for dense graphs, because its complexity depends only on the
number of vertices in the given graph.
• The algorithm also works for graphs with negative edge weights, but negative
cycles are not allowed. A negative cycle is one in which the overall sum of the
edge weights of the cycle is negative.
For every pair (i, j) of the source and destination vertices respectively, there are two possible
cases.
• k is not an intermediate vertex in the shortest path from i to j: we keep the value of
dist[i][j] as it is.
• k is an intermediate vertex in the shortest path from i to j: we update the value of
dist[i][j] to dist[i][k] + dist[k][j] if dist[i][j] > dist[i][k] + dist[k][j].
ALGORITHM:
SAMPLE CODE:
OUTPUT:
TIME COMPLEXITY:
• Floyd Warshall Algorithm consists of three loops over all the nodes.
• The innermost loop consists of only constant-complexity operations.
• Hence, the asymptotic complexity of the Floyd Warshall algorithm is O(n³),
where n is the number of nodes in the given graph.
CONCLUSION:
By using the Floyd Warshall Algorithm, one can find the minimum cost distance between all
pairs of vertices.
Experiment No. 8
AIM: Implement the N-Queens problem using backtracking.
THEORY:
Given a normal 8x8 chessboard, find a way to place N queens on the board in such
a way that no queen is in danger from another. The objective of the eight queens
puzzle is to place eight chess queens on an 8×8 chessboard so that no two queens
threaten each other.
Backtracking is a technique used to solve problems with a large search space, by
systematically trying and eliminating possibilities.
ALGORITHM:
N-Queens (k, n)
{
    For i ← 1 to n do
        if Place (k, i) then
        {
            x[k] ← i;
            if (k == n) then
                write (x[1....n]);
            else
                N-Queens (k + 1, n);
        }
}
SAMPLE CODE:
OUTPUT:
TIME COMPLEXITY:
For finding a single solution, the first queen has been assigned the first column
and can be put on N positions, the second queen has been assigned the second
column and would choose from N - 1 possible positions, and so on; the time
complexity is O(N * (N - 1) * (N - 2) * … * 1), i.e. the worst-case time
complexity is O(N!).
CONCLUSION:
With the help of the backtracking approach, one can find the proper positioning of
queens on the chess board.
Experiment No. 9
AIM: Implement 15 puzzle problem using branch and bound design method.
THEORY:
Given a 4×4 board with 15 tiles (every tile has one number from 1 to 15) and one
empty space. The objective is to place the numbers on tiles in order using the
empty space. We can slide four adjacent (left, right, above and below) tiles into the
empty space.
Here X marks the empty spot to which adjacent tiles can be shifted, and the final (goal)
configuration always remains the same if the puzzle is solvable.
A branch and bound algorithm is an optimization technique to get an optimal solution to the
problem. It looks for the best solution for a given problem in the entire space of solutions.
The bounds on the function to be optimized are compared with the value of the latest best
solution, which allows the algorithm to discard parts of the solution space that cannot
contain a better solution.
The purpose of a branch and bound search is to maintain the lowest-cost path to a target. Once
a solution is found, it can keep improving the solution. Branch and bound search is
implemented in depth-bounded search and depth-first search.
Analysis of algorithms
Node x is assigned a rank using a function ĉ(·) such that ĉ(x) = f(h(x)) + ĝ(x), where h(x) is
the cost of reaching x from the root and ĝ(x) is the estimated cost of reaching an answer node
from x.
SAMPLE CODE:
#include<stdio.h>

int n = 4;

/* cal() returns the number of misplaced tiles in temp[][]
   with respect to the goal configuration t[][]. */
int cal(int temp[10][10], int t[10][10])
{
    int i, j, m = 0;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            if (temp[i][j] != t[i][j])
                m++;
    return m;
}

/* check() returns 1 when a[][] equals the goal t[][]. */
int check(int a[10][10], int t[10][10])
{
    int i, j, f = 1;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            if (a[i][j] != t[i][j])
                f = 0;
    return f;
}

int main()
{
    int p, i, j, a[10][10], t[10][10], temp[10][10], r[10][10];
    int m = 0, x = 0, y = 0, d = 1000, dmin = 0, l = 0;

    /* Read the initial and goal configurations (0 marks the blank). */
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            scanf("%d", &a[i][j]);
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            scanf("%d", &t[i][j]);

    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++)
            printf("%d\t", a[i][j]);
        printf("\n");
    }
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++)
            printf("%d\t", t[i][j]);
        printf("\n");
    }

    while (!check(a, t)) {
        l++;
        d = 1000;

        /* Locate the blank tile. */
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                if (a[i][j] == 0) {
                    x = i;
                    y = j;
                }

        /* Try sliding the blank up: cost = moves so far + misplaced tiles. */
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                temp[i][j] = a[i][j];
        if (x != 0) {
            p = temp[x][y];
            temp[x][y] = temp[x - 1][y];
            temp[x - 1][y] = p;
            m = cal(temp, t);
            dmin = l + m;
            if (dmin < d) {
                d = dmin;
                for (i = 0; i < n; i++)
                    for (j = 0; j < n; j++)
                        r[i][j] = temp[i][j];
            }
        }

        /* Try sliding the blank down. */
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                temp[i][j] = a[i][j];
        if (x != n - 1) {
            p = temp[x][y];
            temp[x][y] = temp[x + 1][y];
            temp[x + 1][y] = p;
            m = cal(temp, t);
            dmin = l + m;
            if (dmin < d) {
                d = dmin;
                for (i = 0; i < n; i++)
                    for (j = 0; j < n; j++)
                        r[i][j] = temp[i][j];
            }
        }

        /* Try sliding the blank right. */
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                temp[i][j] = a[i][j];
        if (y != n - 1) {
            p = temp[x][y];
            temp[x][y] = temp[x][y + 1];
            temp[x][y + 1] = p;
            m = cal(temp, t);
            dmin = l + m;
            if (dmin < d) {
                d = dmin;
                for (i = 0; i < n; i++)
                    for (j = 0; j < n; j++)
                        r[i][j] = temp[i][j];
            }
        }

        /* Try sliding the blank left. */
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                temp[i][j] = a[i][j];
        if (y != 0) {
            p = temp[x][y];
            temp[x][y] = temp[x][y - 1];
            temp[x][y - 1] = p;
            m = cal(temp, t);
            dmin = l + m;
            if (dmin < d) {
                d = dmin;
                for (i = 0; i < n; i++)
                    for (j = 0; j < n; j++)
                        r[i][j] = temp[i][j];
            }
        }

        /* The least-cost neighbour becomes the current board. */
        printf("Value:\n");
        for (i = 0; i < n; i++) {
            for (j = 0; j < n; j++)
                printf("%d\t", r[i][j]);
            printf("\n");
        }
        printf("Minimum cost: %d\n", d);
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                a[i][j] = r[i][j];
    }
    return 0;
}
OUTPUT:
1234
5608
9 10 7 11
13 14 15 12
1234
5678
9 10 11 12
13 14 15 0
Value: 1 2 3 4
5678
9 10 0 11
13 14 15 12
Minimum cost: 4
Value: 1 2 3 4
5678
9 10 11 0
13 14 15 12
Minimum cost: 4
Value: 1 2 3 4
5678
9 10 11 12
13 14 15 0
Minimum cost: 3
TIME COMPLEXITY:
The time complexity of this algorithm is O(N^2 * N!), where N is the number of
tiles in the puzzle, and the space complexity is O(N^2).
CONCLUSION:
With the help of the branch and bound approach, one can solve the 15 puzzle
problem.
Experiment No. 10
AIM: Implement the Naive and Rabin-Karp string matching algorithms.
THEORY:
Naive string-matching:
This is the simplest method, which works using the brute force approach. The algorithm
checks all the positions in the text from 0 to n-m, testing whether an occurrence of
the pattern starts there or not. After each attempt, it shifts the pattern exactly
one position to the right.
If a match is found, it is reported; otherwise the matching process continues
by shifting one character to the right. Even when the pattern does not occur in the
text at all, every alignment still has to be checked.
ALGORITHM :
SAMPLE CODE:
#include <stdio.h>
#include <string.h>
int flag;
#define d 256

/* Naive search: try every alignment of pattern in text. */
void naive_search(char pattern[], char text[])
{
    int n = strlen(text), m = strlen(pattern);
    flag = 1;
    for (int i = 0; i <= n - m; i++) {
        int j;
        for (j = 0; j < m; j++)
            if (text[i + j] != pattern[j])
                break;
        if (j == m) {
            printf("Pattern found at index %d\n", i);
            flag = 0;
        }
    }
}

/* Rabin-Karp: compare rolling hashes modulo a prime q,
   verifying character by character on a hash match. */
void rabin_karp_search(char pattern[], char text[], int q)
{
    int n = strlen(text), m = strlen(pattern);
    int i, j, p = 0, t = 0, h = 1;
    flag = 1;
    // calculate h = d^(m-1) % q
    for (i = 0; i < m - 1; i++)
        h = (h * d) % q;
    // initial hash values of the pattern and the first window
    for (i = 0; i < m; i++) {
        p = (d * p + pattern[i]) % q;
        t = (d * t + text[i]) % q;
    }
    for (i = 0; i <= n - m; i++) {
        if (p == t) {
            for (j = 0; j < m; j++)
                if (text[i + j] != pattern[j])
                    break;
            if (j == m) {
                printf("Pattern found at index %d\n", i);
                flag = 0;
            }
        }
        if (i < n - m) {   // roll the hash to the next window
            t = (d * (t - text[i] * h) + text[i + m]) % q;
            if (t < 0)
                t = t + q;
        }
    }
}
int main() {
int choice, q;
char pattern[100], text[100];
while (1) {
printf("\nString Matching Algorithms:\n");
printf("1. Naive String Matching\n");
printf("2. Rabin-Karp Algorithm\n");
printf("3. Exit\n");
printf("Enter your choice: ");
scanf("%d", &choice);
switch (choice) {
case 1:
printf("Enter the text: ");
scanf("%s", text);
printf("Enter the pattern: ");
scanf("%s", pattern);
naive_search(pattern, text);
if(flag==1)
{
printf("No match found\n");
}
break;
case 2:
printf("Enter the text: ");
scanf("%s", text);
printf("Enter the pattern: ");
scanf("%s", pattern);
printf("Enter a prime number: ");
scanf("%d", &q);
rabin_karp_search(pattern, text, q);
if(flag==1)
{
printf("No match found\n");
}
break;
case 3:
printf("Exiting program.\n");
return 0;
default:
printf("Invalid choice. Try again.\n");
}
}
return 0;
}
OUTPUT:
TIME COMPLEXITY:
The time complexity of the Naive Algorithm is O(mn), where m is the size of the pattern to
be searched and n is the size of the container string.
Rabin Karp algorithm complexity:
Best case: O(m+n)
Worst case: O(nm)
CONCLUSION: Thus, one can use both of these algorithms to find patterns in a given
string.