AOA Lab Manual
Semester : IV
Div : A&B
● To enhance students’ skills towards emerging technologies to propose solutions for engineering
and entrepreneurial pursuits, making them employable.
● To produce technology professionals with ethical values and commitment to lifelong learning.
● Graduates of the programme will be able to provide effective and efficient real-time solutions using
practical knowledge in the Computer Engineering domain.
● Graduates of the programme will be able to use engineering practices, strategies and tactics for the
development, operation and maintenance of software systems.
Course Objectives
Course Outcomes
List of Experiments
Analysis of Algorithm(CSL 401)
Sr. No. | Title | CO's
1 | Write a program to implement Linear Search & Binary Search and study its complexity. | CO1
2 | Write a program to implement Selection Sort and study its complexity. | CO1
3 | Write a program to implement Quick Sort and study its complexity. | CO1
4 | Write a program to implement Merge Sort and study its complexity. | CO1
5 | Write a program to implement all pair shortest path using Floyd Warshall's algorithm and its analysis. | CO2
6 | Write a program to implement 0/1 Knapsack Problem and its analysis. | CO2
11 | Write a program to implement the naïve string matching algorithm and its analysis. | CO3
Experiment No: 01
Theory:
Linear search is also called a sequential search algorithm. It is the simplest searching algorithm: in
linear search, we simply traverse the list completely and compare each element of the list with the item
whose location is to be found. If a match is found, then the location of the item is returned; otherwise,
the algorithm returns a "not found" indication (here, -1).
It is widely used to search for an element in an unordered list, i.e., a list in which the items are not sorted.
The worst-case time complexity of linear search is O(n).
Algorithm:
The steps used in the implementation of Linear Search are listed as follows -
1. Traverse the array from the first element to the last using a for loop.
2. In each iteration of the for loop, compare the search element with the current array element, and -
a. If the element matches, then return the index of the corresponding array element.
b. If the element does not match, then move to the next element.
3. If there is no match, i.e., the search element is not present in the given array, return -1.
Linear_Search(a, n, val) // 'a' is the given array, 'n' is the size of the array, 'val' is the value to search
Step 1: set pos = -1
Step 2: set i = 1
Step 3: repeat step 4 while i <= n
Step 4: if a[i] == val
set pos = i
print pos
go to step 6
[end of if]
set i = i + 1
[end of loop]
Step 5: if pos == -1
print "value is not present in the array"
[end of if]
Step 6: exit
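The pseudocode above can be sketched in Python as follows (a minimal version; it uses 0-based indexing instead of the pseudocode's 1-based indexing, and the function name is our own):

```python
def linear_search(a, val):
    """Return the index of val in list a, or -1 if it is not present."""
    # Compare each element in turn with the search value.
    for i, element in enumerate(a):
        if element == val:
            return i  # match found: return its position
    return -1  # traversed the whole list without a match
```

For example, `linear_search([10, 4, 25, 7], 25)` returns index 2.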
A Binary search algorithm finds the position of a specified value (the input "key") within a sorted
array. At each stage, the algorithm compares the input key value with the key value of the middle element
of the array. If the keys match, then a matching element has been found, so its index, or position, is
returned. Otherwise, if the sought key is less than the middle element's key, then the algorithm repeats
its action on the sub-array to the left of the middle element or, if the input key is greater, on the sub-array
to the right. If the remaining array to be searched is reduced to zero, then the key cannot be found in the
array and a special "Not found" indication is returned.
Algorithm:
Now we should define when the iterations should stop. The first case is when the searched element is
found. The second is when the sub-array has no elements; in this case, we can conclude that the searched
value is not present in the array.
Binary_Search(A[0..N-1], value, low, high):
if (high < low):
return -1 // not found
mid = (low + high) / 2 // integer division
if (A[mid] > value):
return Binary_Search(A, value, low, mid-1)
else if (A[mid] < value):
return Binary_Search(A, value, mid+1, high)
else:
return mid // found
Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 > 6): the search continues in {-1, 5, 6, 18}
Step 2 (middle element is 5 < 6): the search continues in {6, 18}
Step 3 (middle element is 6 == 6): the element is found at index 2
The search space is reduced by half at each step, and this guides us in computing the time complexity.
For an array with n elements, we check whether the middle-most element matches the target; if so, we
return its index and terminate the search.
The time complexity of binary search is, therefore, O(log n). This is much more efficient than the linear
time O(n), especially for large values of n.
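The recursive pseudocode translates directly into Python (a minimal sketch; the function name is our own, and the array must already be sorted):

```python
def binary_search(a, value, low, high):
    """Recursive binary search on sorted list a; returns an index or -1."""
    if high < low:
        return -1  # sub-array is empty: value is not present
    mid = (low + high) // 2  # integer division for the middle index
    if a[mid] > value:
        return binary_search(a, value, low, mid - 1)   # search left half
    elif a[mid] < value:
        return binary_search(a, value, mid + 1, high)  # search right half
    else:
        return mid  # found
```

Running it on Example 1, `binary_search([-1, 5, 6, 18, 19, 25, 46, 78, 102, 114], 6, 0, 9)` returns index 2, matching the three steps traced above.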
Aim: Write a program to implement Selection Sort and study its complexity.
Theory:
Selection sort is a simple sorting algorithm that works by repeatedly selecting the smallest
(or largest) element from the unsorted portion of the list and moving it to the sorted portion of the list.
The algorithm repeatedly selects the smallest (or largest) element from the unsorted portion of the list
and swaps it with the first element of the unsorted portion. This process is repeated for the remaining
unsorted portion of the list until the entire list is sorted.
1. Set the first element of the unsorted portion as the minimum.
2. Compare the minimum with the second element. If the second element is smaller than the minimum,
assign the second element as the minimum.
3. Compare the minimum with the third element. Again, if the third element is smaller, then assign it
as the minimum; otherwise do nothing. The process goes on until the last element.
After each iteration, the minimum is placed at the front of the unsorted list.
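The steps above can be sketched in Python (a minimal in-place version; the function name is our own):

```python
def selection_sort(a):
    """Sort list a in place by repeatedly selecting the minimum of the unsorted part."""
    n = len(a)
    for i in range(n - 1):
        # Assume the first unsorted element is the minimum.
        min_idx = i
        # Scan the rest of the unsorted portion for a smaller element.
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # Place the minimum at the front of the unsorted portion.
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```

For example, `selection_sort([64, 25, 12, 22, 11])` yields `[11, 12, 22, 25, 64]`.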
Analysis
Computer Engineering Analysis of Algorithms (CSL401) Sem IV
Average Case and Worst Case
The time complexity of Selection Sort is O(N²) as there are two nested loops:
One loop to select an element of the array one by one = O(N)
Another loop to compare that element with every other array element = O(N)
Therefore the overall complexity = O(N) * O(N) = O(N*N) = O(N²)
Best Case
The best case is when the array is already sorted. Even if the array is already sorted, we still
traverse the entire array, and in each iteration we perform the full search for the minimum;
only the swaps may be skipped, as the elements are already in the correct positions. Since
a swap takes only a constant amount of time, i.e. O(1), the best-case time complexity of selection sort
still comes out to be O(N²).
Time complexity: O(N²) in the best, average and worst cases.
Conclusion:
Thus we implemented Selection Sort and understood its time complexity.
Aim: Write a program to implement Quick Sort and study its complexity.
Theory:
Quick sort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array,
an element, called a pivot, is chosen; all smaller elements are moved before the pivot, and all greater
elements are moved after it. This can be done efficiently in linear time and in place. The lesser and
greater sub-lists are then sorted recursively. Efficient implementations of quick sort (with in-place
partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting
algorithms in practice. Together with its modest O(log n) space usage, this makes quick sort one of the
most popular sorting algorithms, available in many standard libraries. The most complex issue in quick
sort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower
(O(n²)) performance, but if at each step we choose the median as the pivot, then it works in O(n log n).
Quick sort sorts by employing a divide and conquer strategy to divide a list into two sub-lists.
Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
partition(a, left, right, pivotIndex)
pivotValue := a[pivotIndex]
swap(a[pivotIndex], a[right]) // Move pivot to end
storeIndex := left
for i from left to right-1
if a[i] ≤ pivotValue
swap(a[storeIndex], a[i])
storeIndex := storeIndex + 1
swap(a[right], a[storeIndex]) // Move pivot to its final place
return storeIndex
ANALYSIS
The partition routine examines every item in the array at most once, so its complexity is clearly O(n).
Usually, the partition routine will divide the problem into two roughly equal-sized partitions. We know
that we can divide n items in half log₂ n times, which gives the average-case complexity O(n log n).
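The partition pseudocode and the recursive sort can be sketched in Python as follows (a minimal version using the last element as the pivot; function names are our own):

```python
def partition(a, left, right, pivot_index):
    """Lomuto-style partition: place the pivot in its final sorted position."""
    pivot_value = a[pivot_index]
    a[pivot_index], a[right] = a[right], a[pivot_index]  # move pivot to end
    store_index = left
    for i in range(left, right):
        if a[i] <= pivot_value:
            a[store_index], a[i] = a[i], a[store_index]
            store_index += 1
    a[right], a[store_index] = a[store_index], a[right]  # move pivot to its final place
    return store_index

def quick_sort(a, left, right):
    """Recursively sort a[left..right] in place."""
    if left < right:
        p = partition(a, left, right, right)  # choose last element as pivot
        quick_sort(a, left, p - 1)   # sort the lesser sub-list
        quick_sort(a, p + 1, right)  # sort the greater sub-list
    return a
```

For example, `quick_sort([5, 2, 9, 1, 7, 3], 0, 5)` yields `[1, 2, 3, 5, 7, 9]`. Choosing the last element as pivot is the simplest option; as noted above, consistently bad pivots degrade this to O(n²).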
Conclusion: Thus we implemented Quick Sort, one of the fastest sorting algorithms in practice, whose
worst-case time complexity is O(N²).
Aim: Write a program to implement Merge sort and study its complexities.
Theory:
Merge sort is based on the divide-and-conquer paradigm. Its worst-case running time has a
lower order of growth than insertion sort. Since we are dealing with sub-problems, we state each sub-
problem as sorting a sub-array A[p .. r]. Initially, p = 1 and r = n, but these values change as we recurse
through sub-problems.
1. Divide Step
If a given array A has zero or one element, simply return; it is already sorted. Otherwise,
split A[p .. r] into two sub arrays A[p .. q] and A[q + 1 .. r], each containing about half of the
elements of A[p .. r]. That is, q is the halfway point of A[p .. r].
2. Conquer Step
Conquer by recursively sorting the two sub arrays A[p .. q] and A[q + 1 .. r].
3. Combine Step
Combine the elements back in A[p .. r] by merging the two sorted sub arrays A[p .. q] and
A[q + 1 .. r] into a sorted sequence. To accomplish this step, we will define a procedure
MERGE (A, p, q, r).
Note that the recursion bottoms out when the sub array has just one element, so that it is trivially sorted.
mergesort(m)
var list left, right
if length(m) ≤ 1
return m
else
middle = length(m) / 2
for each x in m up to middle
add x to left
for each x in m after middle
add x to right
left = mergesort(left)
right = mergesort(right)
return merge(left, right)
There are several variants for the merge() function, the simplest variant could look like this:
merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append left to result
if length(right) > 0
append right to result
return result
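The mergesort and merge pseudocode above can be sketched in Python (a minimal version; function names are our own):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the lists is exhausted; append the remainder of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(m):
    """Divide m in half, recursively sort each half, then merge."""
    if len(m) <= 1:
        return m  # recursion bottoms out: trivially sorted
    middle = len(m) // 2
    return merge(merge_sort(m[:middle]), merge_sort(m[middle:]))
```

For example, `merge_sort([38, 27, 43, 3, 9, 82, 10])` yields `[3, 9, 10, 27, 38, 43, 82]`. Note the Θ(n) auxiliary space used by `result`, matching the drawback discussed below.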
ANALYSIS
The straightforward version of the merge function requires at most 2n steps (n steps for copying the sequence
to the intermediate array b, and at most n steps for copying it back to array a). The time complexity of
mergesort is therefore
T(n) ≤ 2n + 2·T(n/2) and
T(1) = 0
The solution of this recurrence yields
T(n) ≤ 2n·log₂(n) ∈ O(n log(n))
Thus, the Mergesort algorithm is optimal, since the lower bound of Ω(n log(n)) for the sorting problem
is attained.
In the more efficient variant, function merge requires at most 1.5n steps (n/2 steps for copying the first
half of the sequence to the intermediate array b, n/2 steps for copying it back to array a, and at most n/2
steps for processing the second half). This yields a running time of mergesort of at most 1.5n log(n)
steps. Algorithm Mergesort has a time complexity of Θ(n log(n)) which is optimal.
A drawback of Mergesort is that it needs an additional space of Θ(n) for the temporary array b.
Example: Bottom-up view of the above procedure for n = 8.
Merging
What remains is the MERGE procedure. The following is the input and output of the MERGE
procedure.
INPUT: Array A and indices p, q, r such that p ≤ q ≤ r and sub array A[p .. q] is sorted and
sub array A[q + 1 .. r] is sorted. By restrictions on p, q, r, neither sub array is empty.
OUTPUT: The two sub arrays are merged into a single sorted sub array in A[p .. r].
We implement it so that it takes Θ(n) time, where n = r − p + 1, which is the number of elements being
merged.
Conclusion: Thus we implemented Merge Sort, whose time complexity is O(n log n).
Aim: Write a program to implement all pair shortest path using Floyd Warshall's algorithm and its
analysis.
Theory:
The all pairs shortest path problem involves finding the shortest path from each node in the graph to
every other node in the graph. A directed graph is presented in Figure 2.1. This can be calculated by
using a number of different algorithms. One way is simply to take a single-source shortest path
algorithm, which calculates the shortest path from one node to every other node, and apply it to all the
nodes in the graph. Another way is to represent the graph as a matrix; this can be seen in Figure 2.1, which shows a
graph. The corresponding matrix representation of the graph is given in Figure 2.2. Once this matrix has
been constructed, distance matrix multiplication can be performed on it, distance matrix multiplication
is explained in section 2.3. The method of repeated squaring is used on the matrix log n times. Once this
repeated squaring has been completed the matrix will then contain the solution to the all pairs shortest
path problem.
There are other methods for calculating the all pairs shortest path problem; the one used here is Floyd's
algorithm, which is based on dynamic programming.
Given a directed graph G = (V,E), where each edge (v,w) has a nonnegative cost C[v,w], for all pairs of
vertices (v,w) find the cost of the lowest cost path from v to w.
● We must recover the path itself, and not just the cost of the path.
Floyd's Algorithm
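Floyd's algorithm computes all-pairs shortest path costs with three nested loops over the vertices, asking at each step whether going through intermediate vertex k improves the path from i to j. A minimal Python sketch (the function name and INF convention are our own):

```python
INF = float('inf')  # represents "no edge" in the cost matrix

def floyd_warshall(dist):
    """dist: n x n matrix of edge costs (INF where no edge, 0 on the diagonal).
    Returns the matrix of all-pairs shortest path costs."""
    n = len(dist)
    d = [row[:] for row in dist]  # copy so the input matrix is not modified
    for k in range(n):            # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Recovering the actual paths (as the problem statement requires) can be done by additionally keeping a predecessor matrix that is updated whenever d[i][j] improves.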
ANALYSIS
The time complexity is Θ (V³).
Aim: Write a program to implement 0/1 Knapsack Problem and its analysis.
Theory:
The knapsack problem is a problem in combinatorial optimization. It derives its name from the
maximization problem of choosing possible essentials that can fit into one bag (of maximum weight) to
be carried on a trip. A similar problem very often appears in business, combinatorics, complexity theory,
cryptography and applied mathematics. Given a set of items, each with a cost and a value, determine
the number of each item to include in a collection so that the total cost is less than some given cost and
the total value is as large as possible.
● As the name suggests, items are indivisible i.e. we can not take the fraction of any item.
● We have to either take an item completely or leave it completely.
● It is solved using dynamic programming approach.
For this algorithm let c[i,w] = value of the best solution for items 1..i and maximum weight w. Then
c[i,w] = 0                                        if i = 0 or w = 0
c[i,w] = c[i-1, w]                                if w_i > w
c[i,w] = max(v_i + c[i-1, w - w_i], c[i-1, w])    otherwise
DP-01K(v, w, n, W)
1 for w = 0 to W
2   c[0,w] = 0
3 for i = 1 to n
4   c[i,0] = 0
5   for w = 1 to W
6     if w[i] ≤ w
7       then if v[i] + c[i-1, w-w[i]] > c[i-1, w]
8         then c[i,w] = v[i] + c[i-1, w-w[i]]
9         else c[i,w] = c[i-1, w]
10    else c[i,w] = c[i-1, w]
● It takes Θ(nW) time to fill the (n+1)(W+1) table entries, where each entry takes constant time Θ(1) for its
computation.
● It takes Θ(n) time to trace the solution, as the tracing process traces the n rows starting from row n
and then moving up by one row at every step.
● Thus, overall Θ(nW) time is taken to solve the 0/1 knapsack problem using the dynamic programming
approach.
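The recurrence and the DP-01K pseudocode can be sketched in Python as follows (a minimal version returning only the optimal value; the function name is our own, and 0-based item lists are used):

```python
def knapsack_01(v, wt, n, W):
    """c[i][w] = best value using items 1..i with capacity w."""
    # Row 0 and column 0 are zero: no items or no capacity gives value 0.
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if wt[i - 1] <= w:
                # Either take item i (gain v_i, lose its weight) or skip it.
                c[i][w] = max(v[i - 1] + c[i - 1][w - wt[i - 1]], c[i - 1][w])
            else:
                c[i][w] = c[i - 1][w]  # item i does not fit
    return c[n][W]
```

For example, with values [60, 100, 120], weights [10, 20, 30] and W = 50, `knapsack_01` returns 220 (taking the second and third items).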
Conclusion: Thus we implemented 0/1 Knapsack problem using Dynamic programming approach.
Aim: Write a program to implement Minimum cost Spanning tree using Prim’s algorithm and its
analysis.
Theory:
Prim’s algorithm is a greedy algorithm that is used to find a minimum spanning tree (MST) of a given
connected weighted graph. This algorithm is preferred when the graph is dense, i.e., when there is a
large number of edges in the graph. The algorithm can be applied only to an undirected connected
graph, and there should not be any negative weighted edge. In this case, the algorithm is quite efficient.
The steps to find minimum spanning tree using Prim’s algorithm are as follows:
1. If the graph has loops or parallel edges, then remove them.
2. Randomly choose any node, labelling it with a distance of 0 and all other nodes as ∞. The chosen node
is treated as current node and considered as visited. All other nodes are considered as unvisited.
3. Identify all the unvisited nodes that are presently connected to the current node. Calculate the distance
from the unvisited nodes to the current node.
4. Label each of the vertices with its corresponding weight to the current node, but relabel a node only
if the new weight is less than the previous value of the label. Each time the nodes are labelled with their
weights, keep track of the path with the smallest weight.
5. Mark the current node as visited by colouring over it. Once a node is visited, we need not look at
it again.
6. From all the unvisited nodes, find the node which has the minimum weight to the current node,
mark it as visited and treat it as the new current node.
7. Repeat steps 3 to 6 until all nodes are visited.
8. After all steps are completed, the selected edges form the desired MST.
Algorithm:
/* S = set of visited nodes, Q = priority queue, G = graph, w = weight */
Prim_MST (G, w, S)
{
Initialization (G, S)
S ← Ø // the set of visited nodes is initially empty
Q ← V[G] // the queue initially contains all nodes
while (Q ≠ Ø) // queue not empty
do u ← extract_min(Q) // select the minimum-distance node of Q
S ← S ∪ {u} // u is added to the visited set S
for each vertex v ϵ adj(u) // relax the edges leaving u
do if v ϵ Q and w(u,v) < d[v]
then π[v] ← u
d[v] ← w(u,v)
}
Initialization (G, S)
{
For each vertex v ϵ V[G]
d[v] ← ∞ // unknown distance from source node
π[v] ← nil // predecessor node, initially nil
d[s] ← 0 // distance of source node is zero
}
Finding the minimum distance is O(V), and the overall complexity with an adjacency list representation
is O(V²).
If the queue is kept as a binary heap, relaxing an edge needs a decrease-key operation, which is O(log V),
and the overall complexity is O(V log V + E log V), i.e. O(E log V).
If the queue is kept as a Fibonacci heap, decrease-key has an amortized complexity of O(1) and the overall
complexity is O(E + V log V).
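Prim's algorithm can be sketched in Python with the standard-library `heapq` module (a minimal version returning only the total MST weight; the function name and the adjacency-dict format are our own). Instead of the decrease-key operation of the pseudocode, this sketch pushes duplicate entries and skips already-visited nodes, which is a common heap-based simplification:

```python
import heapq

def prim_mst(graph, start=0):
    """graph: adjacency dict {u: [(v, weight), ...]} of an undirected graph.
    Returns the total weight of a minimum spanning tree."""
    visited = set()
    heap = [(0, start)]  # (edge weight to reach node, node)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)  # cheapest edge crossing the cut
        if u in visited:
            continue  # stale duplicate entry
        visited.add(u)
        total += w
        for v, wt in graph[u]:
            if v not in visited:
                heapq.heappush(heap, (wt, v))
    return total
```

For example, with edges 0-1 (weight 1), 1-2 (weight 2) and 0-2 (weight 4), the MST weight is 3.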
Aim: Write a program to implement Minimum cost Spanning tree using Kruskal's algorithm and its
analysis.
Kruskal’s Algorithm:
Kruskal’s algorithm is the following: first, sort the edges in increasing (or rather non-decreasing) order
of costs, so that c(e1) ≤ c(e2) ≤ . . . ≤ c(em); then, starting with an initially empty tree T, go through the
edges one at a time, putting an edge in T if it will not cause a cycle, but throwing the edge out if it would
cause a cycle.
Below are the steps for finding the MST using Kruskal’s algorithm:
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree
formed so far. If a cycle is not formed, include this edge; else, discard it.
3. Repeat step 2 until there are (V-1) edges in the spanning tree.
Algorithm:
MST-Kruskal(G, w)
1. A ← ∅ // initially A is empty
2. for each vertex v ∈ V[G] // line 2-3 takes O(V) time
3. do Create-Set(v) // create set for each vertex
4. sort the edges of E by non-decreasing weight w
5. for each edge (u, v) ∈ E, in order of non-decreasing weight
6. do if Find-Set(u) ≠ Find-Set(v) // u&v on different trees
7. then A ← A ∪ {(u, v)}
8. Union(u,v)
9. return A
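The pseudocode's Find-Set and Union operations can be realised with a simple union-find (disjoint-set) structure. A minimal Python sketch (function names and the edge-tuple format are our own):

```python
def kruskal_mst(n, edges):
    """n: number of vertices 0..n-1; edges: list of (weight, u, v).
    Returns the list of MST edges as (u, v, weight) tuples."""
    parent = list(range(n))  # Create-Set for each vertex

    def find(x):
        # Find-Set with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):  # edges in non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:               # u and v are in different trees: no cycle
            parent[ru] = rv        # Union
            mst.append((u, v, w))
    return mst
```

For example, with edges (weight, u, v) = (1, 0, 1), (2, 1, 2), (4, 0, 2), the MST keeps the first two edges (total weight 3) and discards the third, which would form a cycle.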
Conclusion: Thus we implemented the minimum spanning tree using Prim’s and Kruskal’s algorithm.
Aim: Write a program to implement Single source shortest path using Dijkstra algorithm and its analysis.
Theory:
Dijkstra's algorithm solves the single-source shortest-path problem when all edges have non-negative
weights. It is a greedy algorithm similar to Prim's algorithm. The algorithm starts at the source vertex
s and grows a tree T that ultimately spans all vertices reachable from s. Vertices are added to T in order
of distance: first s, then the vertex closest to s, then the next closest, and so on. The following
implementation assumes that graph G is represented by adjacency lists.
DIJKSTRA (G, w, s)
1. INITIALIZE-SINGLE-SOURCE (G, s)
2. S ← { } // S will ultimately contain the vertices with final shortest-path weights from s
3. Initialize priority queue Q, i.e., Q ← V[G]
4. while priority queue Q is not empty do
5. u ← EXTRACT_MIN(Q) // pull out new vertex
6. S ← S ∪ {u}
7. for each vertex v in Adj[u] do // perform relaxation for each vertex v adjacent to u
8. Relax (u, v, w)
ANALYSIS
Like Prim's algorithm, Dijkstra's algorithm runs in O(|E| lg |V|) time when the priority queue is kept as a binary heap.
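The algorithm can be sketched in Python with a `heapq`-based priority queue (a minimal version; the function name and adjacency-dict format are our own). As with Prim's sketch, duplicate heap entries replace the decrease-key operation:

```python
import heapq

def dijkstra(graph, s):
    """graph: {u: [(v, weight), ...]} with non-negative weights.
    Returns a dict of shortest distances from source s."""
    dist = {u: float('inf') for u in graph}
    dist[s] = 0
    pq = [(0, s)]  # (current distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)  # EXTRACT_MIN
        if d > dist[u]:
            continue  # stale entry: a shorter path to u was already found
        for v, w in graph[u]:
            if d + w < dist[v]:   # Relax(u, v, w)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

For example, on the graph s→a (1), s→b (4), a→b (2), a→c (6), b→c (3), the distances from s are {s: 0, a: 1, b: 3, c: 6}.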
Conclusion: Thus we implemented the Single source shortest path using the Dijkstra algorithm.
Aim: Write a program to implement N-Queen Problem using Backtracking and its analysis.
Theory:
One of the most common examples of backtracking is to arrange N queens on an N×N chessboard
such that no queen can strike down any other queen. A queen can attack horizontally, vertically, or
diagonally. The solution to this problem is also attempted in a similar way.
We first place the first queen anywhere arbitrarily and then place the next queen in any of the safe places.
We continue this process until the number of unplaced queens becomes zero (a solution is found) or no
safe place is left. If no safe place is left, then we change the position of the previously placed queen.
● Select an item and set it to one of its options such that it meets current constraints
● Recursively try to set next item
● If you reach a point where all items are assigned and meet constraints, done…return through
recursion stack with solution
● If no viable value for an item exists, backtrack to previous item and repeat from the top
● If viable options for the 1st item are exhausted, no solution exists
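The backtracking procedure above can be sketched in Python, placing one queen per row and checking column and diagonal attacks against the queens already placed (a minimal version; function names are our own):

```python
def solve_n_queens(n):
    """Return all solutions; each solution lists the queen's column per row."""
    solutions = []

    def safe(cols, row, col):
        # A placement is safe if no earlier queen shares the column
        # or sits on the same diagonal (equal row/column offsets).
        for r in range(row):
            if cols[r] == col or abs(cols[r] - col) == row - r:
                return False
        return True

    def place(cols, row):
        if row == n:
            solutions.append(cols[:])  # all queens placed: record solution
            return
        for col in range(n):
            if safe(cols, row, col):   # option meets current constraints
                cols.append(col)
                place(cols, row + 1)   # recursively try the next row
                cols.pop()             # backtrack and try the next option

    place([], 0)
    return solutions
```

For example, `len(solve_n_queens(4))` is 2, and `len(solve_n_queens(8))` is 92, the well-known count for the eight queens puzzle.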
Analysis:
The backtracking algorithm is a slight improvement on the permutation method of solving the N × N
queens problem:
● It constructs the search tree by considering one row of the board at a time, eliminating most non-
solution board positions at a very early stage in their construction.
● Because it rejects row and diagonal attacks even on incomplete boards, it examines only 15,720
possible queen placements (for N = 8).
● A further improvement, which examines only 5,508 possible queen placements, is to combine the
permutation-based method with the early pruning method: the permutations are generated depth-first,
and the search space is pruned if the partial permutation produces a diagonal attack.
Aim: Write a program to implement the naïve string matching algorithm and its analysis.
Theory:
The problem of string matching is a prevalent and important problem in computer science today. There
are really two forms of string matching. The first, exact string matching, finds instances of some pattern
in a target string.
For example, if the pattern is "go" and the target is "agogo", then two instances of the pattern appear in
the text (at the second and fourth characters, respectively). The second, inexact string matching or string
alignment, attempts to find the "best" match of a pattern to some target. Usually, the match of a pattern
to a target is either probabilistic or evaluated based on some fixed criteria (for example, the pattern
"aggtgc" matches the target "agtgcggtg" pretty well in two places, located at the first character of the
string and the sixth character).
Both string matching algorithms are used extensively in bioinformatics to isolate structurally similar
regions of DNA or a protein (usually in the context of a gene map or a protein database).
Naive Algorithm:
The idea of the naive solution is just to make a character-by-character comparison of the text
T[s .. s + m − 1] with the pattern P[0 .. m − 1] for all shifts s ∈ {0, . . . , n − m}. It returns all the valid
shifts found. Below is an example of the operation of the naive string matcher on a DNA string.
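The naive matcher can be sketched in Python as follows (a minimal version; the function name is our own):

```python
def naive_string_match(text, pattern):
    """Return all valid shifts s at which pattern occurs in text."""
    n, m = len(text), len(pattern)
    shifts = []
    # Try every shift of the pattern over the text.
    for s in range(n - m + 1):
        # Character-by-character comparison at shift s.
        if text[s:s + m] == pattern:
            shifts.append(s)
    return shifts
```

For example, `naive_string_match("agogo", "go")` returns `[1, 3]`, matching the two occurrences described earlier.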
Analysis:
The problem with this approach is its inefficiency. The time complexity of the naïve algorithm in its
worst case is O(M × N). For example, if the pattern to search is a^m (the letter 'a' repeated m times) and
the text is a^n, then we need M comparison operations per shift. For the whole text, we need
(N − M + 1) × M operations; since M is generally very small compared to N, we can simply consider the
complexity as O(M × N).