
LAB MANUAL

Course Name : Analysis of Algorithm

Course Code : CSC402

Lab Code : CSL401

Class : B.E. Computer Engineering

Semester : IV

Div : A&B

Academic Year : 2023-2024 (REV- ‘C’ Scheme)

Asst. Prof. Shefali Raina Dr. Rais A. Mulla


(SUB. IN-CHARGE) (H.O.D., Comp. Engg. Dept.)



Vision of Computer Department

● To develop a center of excellence in computer engineering and produce globally competent engineers who contribute towards the progress of the engineering community and society as a whole.

Mission of Computer Department

● To provide students with diversified engineering knowledge to work in a multidisciplinary environment.
● To provide a platform to cultivate research, innovation, and entrepreneurial skills.
● To produce world-class computer engineering professionals with moral values
and leadership abilities for the sustainable development of society.



Program Outcomes (POs)
PO1: Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals,
and an engineering specialization to the solution of complex engineering problems.
PO2: Problem analysis: Identify and formulate complex engineering problems, reaching substantiated
conclusions using principles of Computer Engineering.
PO3: Design/development of solutions: Design / develop solutions for complex engineering problems
and design system components or processes that meet the specified needs with appropriate consideration
for the society.
PO4: Conduct investigations of complex problems: Use knowledge for the design of experiments,
analysis, interpretation of data, and synthesis of the information to provide valid conclusions.
PO5: Modern tool usage: Create, select and apply appropriate techniques and modern engineering tools,
including predictions and modeling to complex engineering activities with an understanding of the
limitations.
PO6: The engineer and society: Apply the knowledge to assess social issues and the responsibilities
relevant to engineering practices.
PO7: Environment and sustainability: Understand the impact of the professional engineering solutions
in social and environmental contexts, and demonstrate the knowledge for sustainable development.
PO8: Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms
of the engineering practice.
PO9: Individual and teamwork: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.
PO10: Communication: Communicate effectively such as being able to comprehend and write effective
reports and design documentation, make effective presentations.
PO11: Project management and finance: Demonstrate knowledge and understanding of the engineering
and management skills and apply the skills to manage projects effectively.
PO12: Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.

Program Educational Objectives


● To create graduates with sound fundamental knowledge of computer engineering.

● To enhance students’ skills towards emerging technologies to propose solutions for engineering
and entrepreneurial pursuits, making them employable.

● To produce technology professionals with ethical values and commitment to lifelong learning.

Program Specific Outcomes

● Graduates of the programme will be able to provide effective and efficient real-time solutions using
practical knowledge in the Computer Engineering domain.

● Graduates of the programme will be able to use engineering practices, strategies and tactics for the
development, operation and maintenance of software systems.

Course Objectives



1. To introduce the methods of designing and analyzing algorithms
2. Design and implement efficient algorithms for a specified application
3. Strengthen the ability to identify and apply the suitable algorithm for the given real-world
problem.
4. Analyze worst-case running time of algorithms and understand fundamental algorithmic
problems.

Course Outcomes

1. Implement the algorithms using different approaches.


2. Analyze the complexities of various algorithms.
3. Analyze the complexity of algorithms for a specific problem.



Syllabus

Suggested Practical List:


Sr No Suggested Experiment List
1 Introduction
1.1 Selection sort, Insertion sort
2 Divide and Conquer Approach
2.1 Finding Minimum and Maximum, Merge sort, Quick sort, Binary search
3 Greedy Method Approach



3.1 Single source shortest path - Dijkstra's algorithm
    Fractional Knapsack problem
    Job sequencing with deadlines
    Minimum cost spanning trees - Kruskal's and Prim's algorithms
4 Dynamic Programming Approach
4.1 Single source shortest path - Bellman-Ford
    All pair shortest path - Floyd-Warshall
    0/1 Knapsack
    Travelling salesperson problem
    Longest common subsequence
5 Backtracking and Branch and Bound
5.1 N-queen problem
    Sum of subsets
    Graph coloring
6 String Matching Algorithms
6.1 The naïve string-matching algorithm
    The Rabin-Karp algorithm
    The Knuth-Morris-Pratt algorithm
Term Work:
1 Term work should consist of 10 experiments.
2 The journal must include at least 2 assignments on the content of the theory and
practical of "Analysis of Algorithms".
3 The final certification and acceptance of term work requires satisfactory performance
of laboratory work and minimum passing marks in term work.
4 Total 25 Marks (Experiments: 15 marks, Attendance for
Theory & Practical: 05 marks, Assignments: 05 marks)

Oral & Practical exam


Based on the entire syllabus of CSC402: Analysis of
Algorithms

Academic Year 2023-24

List of Experiments
Analysis of Algorithms (CSL401)

Sr. No.  Title                                                                                               CO's
1   Write a program to implement Linear Search & Binary Search and study their complexity.                   CO1
2   Write a program to implement Selection Sort and study its complexity.                                    CO1
3   Write a program to implement Quick Sort and study its complexity.                                        CO1
4   Write a program to implement Merge Sort and study its complexity.                                        CO1
5   Write a program to implement all pair shortest path using Floyd-Warshall's algorithm and its analysis.   CO2
6   Write a program to implement the 0/1 Knapsack Problem and its analysis.                                  CO2
7   Write a program to implement Minimum cost Spanning Tree using Prim's algorithm and its analysis.         CO2
8   Write a program to implement Minimum cost Spanning Tree using Kruskal's algorithm and its analysis.      CO2
9   Write a program to implement Single source shortest path using Dijkstra's algorithm and its analysis.    CO2
10  Write a program to implement the N-Queen Problem using Backtracking and its analysis.                    CO3
11  Write a program to implement the naïve string matching algorithm and its analysis.                       CO3

Prof. Shefali Raina Dr. Rais A. Mulla


Subject In-charge HOD
Computer Engineering

Experiment No: 01



Aim: Write a program to implement Linear Search & Binary search and study its complexity.

Theory:

Linear search is also called a sequential search algorithm. It is the simplest searching algorithm. In
Linear search, we simply traverse the list completely and match each element of the list with the item
whose location is to be found. If the match is found, then the location of the item is returned; otherwise,
the algorithm returns NULL.

It is widely used to search an element from the unordered list, i.e., the list in which items are not sorted.
The worst-case time complexity of linear search is O(n).
Algorithm:

The steps used in the implementation of Linear Search are listed as follows -

1. First, we have to traverse the array elements using a for loop.

2. In each iteration of for loop, compare the search element with the current array element, and -

a. If the element matches, then return the index of the corresponding array element.

b. If the element does not match, then move to the next element.

3. If there is no match or the search element is not present in the given array, return -1.

Linear_Search(a, n, val) // 'a' is the given array, 'n' is the size of given array, 'val' is the value to search
Step 1: set pos = -1
Step 2: set i = 1
Step 3: repeat step 4 while i <= n
Step 4: if a[i] == val
set pos = i
print pos
go to step 6
[end of if]
set i = i + 1
[end of loop]
Step 5: if pos = -1
print "value is not present in the array "
[end of if]



Step 6: exit
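
A minimal C realization of the above procedure, for reference (a sketch; it uses 0-based indices and the array contents in main are only illustrative):

#include <stdio.h>

/* Return the index of val in a[0..n-1], or -1 if it is not present. */
int linear_search(const int a[], int n, int val)
{
    for (int i = 0; i < n; i++)
        if (a[i] == val)
            return i;      /* match found: return its position */
    return -1;             /* traversed the whole array without a match */
}

int main(void)
{
    int a[] = {70, 40, 30, 11, 57, 41, 25, 14, 52};
    int pos = linear_search(a, 9, 41);
    if (pos == -1)
        printf("value is not present in the array\n");
    else
        printf("value found at index %d\n", pos);
    return 0;
}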

Analysis of Time Complexity:


● Best Case: In the best case, the key might be present at the first index, so the best-case
complexity is O(1).
● Worst Case: In the worst case, the key might be present at the last index, i.e., opposite to
the end from which the search has started in the list. So the worst-case complexity is O(N),
where N is the size of the list.
● Average Case: O(N)

A Binary search algorithm finds the position of a specified value (the input "key") within a sorted
array. At each stage, the algorithm compares the input key value with the key value of the middle element
of the array. If the keys match, then a matching element has been found, so its index, or position, is
returned. Otherwise, if the sought key is less than the middle element's key, then the algorithm repeats
its action on the sub-array to the left of the middle element or, if the input key is greater, on the sub-array
to the right. If the remaining array to be searched is reduced to zero, then the key cannot be found in the
array and a special "Not found" indication is returned.
Algorithm:



The basic idea of binary search is quite straightforward, and it can be done either recursively or
iteratively:

1. get the middle element;


2. if the middle element equals the searched value, the algorithm stops;
3. otherwise, two cases are possible:
○ The searched value is less than the middle element. In this case, repeat from step 1 on the part
of the array before the middle element.
○ The searched value is greater than the middle element. In this case, repeat from step 1 on the
part of the array after the middle element.

The iteration stops in two cases: either the searched element is found, or the subarray has no elements
left. In the latter case, we can conclude that the searched value is not present in the array.
Binary_Search(A[0..N-1], value, low, high):
if (high < low):
return -1 // not found
mid = (low + high) / 2
if (A[mid] > value):
return Binary_Search(A, value, low, mid-1)
else if (A[mid] < value):
return Binary_Search (A, value, mid+1, high)
else:
return mid // found
Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 > 6): -1 5 6 18 19 25 46 78 102 114
Step 2 (middle element is 5 < 6): -1 5 6 18 19 25 46 78 102 114
Step 3 (middle element is 6 == 6): -1 5 6 18 19 25 46 78 102 114
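
A recursive C version corresponding to the pseudocode above (a sketch; it assumes the array is already sorted in ascending order and uses 0-based indices):

#include <stdio.h>

/* Search value in sorted a[low..high]; return its index or -1 if absent. */
int binary_search(const int a[], int value, int low, int high)
{
    if (high < low)
        return -1;                       /* empty subarray: not found */
    int mid = low + (high - low) / 2;    /* avoids overflow of low + high */
    if (a[mid] > value)
        return binary_search(a, value, low, mid - 1);
    else if (a[mid] < value)
        return binary_search(a, value, mid + 1, high);
    else
        return mid;                      /* found */
}

int main(void)
{
    int a[] = {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114};
    printf("%d\n", binary_search(a, 6, 0, 9));   /* prints 2, the index of 6 */
    return 0;
}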

Analysis of Binary Search Algorithm

The search space is reduced by half at each step and this guides us in computing the time complexity.
For an array with n elements, we check if the middle-most element matches the target. If so, we return
True and terminate the search.



But if the middle element does not match the target, we perform binary search on a subarray of size at
most n/2. In the next step, we have to search through an array of size at most n/4. And we continue this
recursively until we can make a decision in a constant time (when the subarray is empty).
At step k, we need to search through an array of size at most n/(2^k). And we need to find the smallest
such k for which we have no subarray to search through.
Mathematically, the search stops when n/(2^k) drops to 1, i.e., after k = log2(n) halving steps.

The time complexity of binary search is, therefore, O(log n). This is much more efficient than the linear
time O(n), especially for large values of n.



Experiment No: 02

Aim: Write a program to implement Selection Sort and study its complexity.
Theory:

Objective: Implement Selection Sort program using C


Outcome: The students are able to use Selection Sort algorithm

Selection sort is a simple, in-place sorting algorithm that works by repeatedly selecting the smallest
(or largest) element from the unsorted portion of the list and moving it to the sorted portion of the list.
In each pass, the algorithm finds the smallest (or largest) element of the unsorted portion and swaps it
with the first element of that portion. This process is repeated for the remaining unsorted portion of the
list until the entire list is sorted.

1. Set the first element as minimum.


2. Compare minimum with the second element. If the second element is smaller than minimum,
assign the second element as minimum.

3. Compare minimum with the third element. Again, if the third element is smaller, then assign it as
minimum; otherwise do nothing. The process goes on until the last element.

After each iteration, minimum is placed in the front of the unsorted list.



For each iteration, indexing starts from the first unsorted element. Steps 1 to 3 are repeated until all the
elements are placed at their correct positions.

Pseudocode:
array : array of items
n : size of list
-------------------------------------------------------------------------
for i = 1 to n - 1
/* set current element as minimum*/
min_pos= i
/* check the element to be minimum */
for j = i+1 to n
if array[j] < array[min_pos] then
min_pos = j;
end if
end for
/* swap the minimum element with the current element*/
if min_pos != i then
swap array[min_pos] and array[i]
end if
end for
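
One possible C translation of this pseudocode (a sketch; 0-indexed, and the array contents in main are illustrative):

#include <stdio.h>

void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min_pos = i;                     /* assume the current element is the minimum */
        for (int j = i + 1; j < n; j++)      /* scan the unsorted part for a smaller element */
            if (a[j] < a[min_pos])
                min_pos = j;
        if (min_pos != i) {                  /* swap only when a smaller element was found */
            int tmp = a[i];
            a[i] = a[min_pos];
            a[min_pos] = tmp;
        }
    }
}

int main(void)
{
    int a[] = {20, 12, 10, 15, 2};
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}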

Analysis
Average Case and Worst Case
The time complexity of Selection Sort is O(N²), as there are two nested loops:
One loop to select an element of the array one by one = O(N)
Another loop to compare that element with every other array element = O(N)
Therefore the overall complexity = O(N) × O(N) = O(N²)
Best Case

The best case is when the array is already sorted. Even then, we still traverse the entire unsorted portion
in each iteration to search for the minimum; only the swaps are skipped, because the elements are already
at their correct positions. Since a swap takes only a constant amount of time, i.e. O(1), the best-case time
complexity of selection sort still comes out to be O(N²).
Time complexity

Case              Time Complexity

Best Case         O(N²)

Average Case      O(N²)

Worst Case        O(N²)

Conclusion:
Thus we implemented Selection Sort and understood its time complexity.



Experiment No: 03

Aim: Write a program to implement Quick Sort and study its complexity.

Theory:

Quick sort is a divide and conquer algorithm which relies on a partition operation: to partition an array,
an element, called a pivot is chosen, all smaller elements are moved before the pivot, and all greater
elements are moved after it. This can be done efficiently in linear time and in-place. Then recursively
sorting can be done for the lesser and greater sub lists. Efficient implementations of quick sort (with in-
place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting
algorithms in practice. Together with its modest O(log n) space usage, this makes quick sort one of the
most popular sorting algorithms, available in many standard libraries. The most complex issue in quick
sort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower
(O(n²)) performance, but if at each step we choose the median as the pivot then it works in O(n log n).
Quick sort sorts by employing a divide and conquer strategy to divide a list into two sub-lists.

Pick an element, called a pivot, from the list.


Reorder the list so that all elements which are less than pivot come before the pivot and so that all
elements greater than the pivot come after it (equal values can go either way). After this partitioning, the
pivot is in its final position. This is called the partition operation.

Recursively sort the sub-list of lesser elements and the sub-list of greater elements.

Pseudo code For partition(a, left, right, pivotIndex)

pivotValue := a[pivotIndex]
swap(a[pivotIndex], a[right]) // Move pivot to end
storeIndex := left
for i from left to right-1
if a[i] ≤ pivotValue
swap(a[storeIndex], a[i])
storeIndex := storeIndex + 1
swap(a[right], a[storeIndex]) // Move pivot to its final place
return storeIndex

Pseudocode For quicksort(a, left, right)

if right > left


select a pivot value a[pivotIndex]
pivotNewIndex := partition(a, left, right, pivotIndex)

quicksort(a, left, pivotNewIndex-1)
quicksort(a, pivotNewIndex+1, right)
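
A C sketch of the above, choosing the last element of each range as the pivot (i.e. pivotIndex = right, so the "move pivot to end" swap is implicit); names and the sample array are illustrative:

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[left..right] around a[right] and return the pivot's final index. */
static int partition(int a[], int left, int right)
{
    int pivot = a[right];
    int store = left;
    for (int i = left; i < right; i++)
        if (a[i] <= pivot)
            swap(&a[store++], &a[i]);    /* grow the "smaller than pivot" prefix */
    swap(&a[store], &a[right]);          /* move the pivot to its final place */
    return store;
}

void quicksort(int a[], int left, int right)
{
    if (right > left) {
        int p = partition(a, left, right);
        quicksort(a, left, p - 1);       /* sort the lesser sub-list */
        quicksort(a, p + 1, right);      /* sort the greater sub-list */
    }
}

int main(void)
{
    int a[] = {9, 4, 7, 3, 10, 1, 5};
    quicksort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}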

ANALYSIS
The partition routine examines every item in the array at most once, so complexity is clearly O(n).

Usually, the partition routine will divide the problem into two roughly equal-sized partitions. We know
that we can divide n items in half log2(n) times.

This makes quicksort an O(n log n) algorithm on average - equivalent to heapsort.

Conclusion: Thus we implemented Quick Sort, which is among the fastest sorting algorithms in practice;
its worst-case time complexity is O(N²).



Experiment No: 04

Aim: Write a program to implement Merge sort and study its complexities.

Theory:
Merge sort is based on the divide-and-conquer paradigm. Its worst-case running time has a
lower order of growth than insertion sort. Since we are dealing with subproblems, we state each
subproblem as sorting a subarray A[p .. r]. Initially, p = 1 and r = n, but these values change as we recurse
through the subproblems.

To sort A[p .. r]:

1. Divide Step

If a given array A has zero or one element, simply return; it is already sorted. Otherwise,
split A[p .. r] into two sub arrays A[p .. q] and A[q + 1 .. r], each containing about half of the
elements of A[p .. r]. That is, q is the halfway point of A[p .. r].

2. Conquer Step

Conquer by recursively sorting the two sub arrays A[p .. q] and A[q + 1 .. r].

3. Combine Step

Combine the elements back in A[p .. r] by merging the two sorted sub arrays A[p .. q] and
A[q + 1 .. r] into a sorted sequence. To accomplish this step, we will define a procedure
MERGE (A, p, q, r).

Note that the recursion bottoms out when the sub array has just one element, so that it is trivially sorted.

Pseudo code for mergesort

mergesort(m)
var list left, right
if length(m) ≤ 1
return m
else
middle = length(m) / 2
for each x in m up to middle
add x to left
for each x in m after middle
add x to right



left = mergesort(left)
right = mergesort(right)
result = merge(left, right)
return result

There are several variants of the merge() function; the simplest variant could look like this:

Pseudo code for merge

merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result

left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append left to result
if length(right) > 0
append right to result
return result
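
The same procedure in C, merging through a temporary buffer (a sketch; the Θ(n) extra space mentioned in the analysis below shows up as the malloc'd array b, and the helper names are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Merge the sorted halves a[p..q] and a[q+1..r] into sorted order. */
static void merge(int a[], int p, int q, int r)
{
    int n = r - p + 1;
    int *b = malloc(n * sizeof *b);      /* temporary buffer of size Θ(n) */
    int i = p, j = q + 1, k = 0;
    while (i <= q && j <= r)
        b[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= q) b[k++] = a[i++];      /* copy any leftovers from the left half */
    while (j <= r) b[k++] = a[j++];      /* copy any leftovers from the right half */
    for (k = 0; k < n; k++)
        a[p + k] = b[k];                 /* copy the merged run back into a */
    free(b);
}

void mergesort(int a[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;             /* halfway point */
        mergesort(a, p, q);
        mergesort(a, q + 1, r);
        merge(a, p, q, r);
    }
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    mergesort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}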

ANALYSIS
The straightforward version of function merge requires at most 2n steps (n steps for copying the sequence
to the intermediate array b, and at most n steps for copying it back to array a). The time complexity of
mergesort is therefore
T(n) ≤ 2n + 2 T(n/2) and
T(1) = 0.
The solution of this recurrence yields
T(n) ≤ 2n log(n) ∈ O(n log(n)).
Thus, the Mergesort algorithm is optimal, since the lower bound for the sorting problem of Ω(n log(n))
is attained.
In the more efficient variant, function merge requires at most 1.5n steps (n/2 steps for copying the first
half of the sequence to the intermediate array b, n/2 steps for copying it back to array a, and at most n/2
steps for processing the second half). This yields a running time of mergesort of at most 1.5n log(n)
steps. Algorithm Mergesort has a time complexity of Θ(n log(n)) which is optimal.
A drawback of Mergesort is that it needs an additional space of Θ(n) for the temporary array b.
Example: Bottom-up view of the above procedure for n = 8.

Merging

What remains is the MERGE procedure. The following is the input and output of the MERGE
procedure.

INPUT: Array A and indices p, q, r such that p ≤ q ≤ r and sub array A[p .. q] is sorted and
sub array A[q + 1 .. r] is sorted. By restrictions on p, q, r, neither sub array is empty.

OUTPUT: The two sub arrays are merged into a single sorted sub array in A[p .. r].

We implement it so that it takes Θ(n) time, where n = r − p + 1, which is the number of elements being
merged.

Conclusion: Thus we implemented Merge Sort, which has a time complexity of O(n log n).



Experiment No: 05

Aim: Write a program to implement all pair shortest path using Floyd Warshall's algorithm and its
analysis.

Theory:

The all pairs shortest path problem involves finding the shortest path from each node in the graph to
every other node in the graph. It can be solved by a number of different algorithms. One way is simply
to run a single-source shortest path algorithm, which computes the shortest path from one node to every
other node, once for each node in the graph. Another way is to represent the graph as a matrix; once this
matrix has been constructed, distance matrix multiplication can be performed on it, and the method of
repeated squaring is applied to the matrix log n times. Once this repeated squaring has been completed,
the matrix contains the solution to the all pairs shortest path problem. There are other methods for
calculating the all pairs shortest path problem as well.
Given a directed graph G = (V,E), where each edge (v,w) has a nonnegative cost C[v,w], for all pairs of
vertices (v,w) find the cost of the lowest cost path from v to w.

● A generalization of the single-source-shortest-path problem.


● Use Dijkstra's algorithm, varying the source node among all the nodes in the graph. We will
consider a slight extension to this problem: find the lowest cost path between each pair of
vertices.

● We must recover the path itself, and not just the cost of the path.

Floyd's Algorithm

Floyd's algorithm takes as input the cost matrix C[v,w]


● C[v,w] = ∞ if (v,w) is not in E
It returns as output
● a distance matrix D[v,w] containing the cost of the lowest cost path from v to w
o initially D[v,w] = C[v,w]



● a path matrix P, where P[v,w] holds the intermediate vertex k on the least cost path between v
and w that led to the cost stored in D[v,w].
We iterate N times over the matrix D, using k as an index. On the kth iteration, the D matrix contains
the solution to the APSP problem, where the paths only use vertices numbered 1 to k.
On the next iteration, we compare the cost of going from i to j using only vertices numbered 1..k (stored
in D[i,j] on the kth iteration) with the cost of using the k+1th vertex as an intermediate step, which is
D[i,k+1] (to get from i to k+1) plus D[k+1,j] (to get from k+1 to j).
If this results in a lower cost path, we remember it. After N iterations, all possible paths have been
examined, so D[v,w] contains the cost of the lowest cost path from v to w using all vertices if necessary.
The Algorithm
FloydAPSP (int N, rmatrix &C, rmatrix &D, imatrix &P)
{
    int i, j, k;
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            D[i][j] = C[i][j];
            P[i][j] = -1;
        }
        D[i][i] = 0.0;
    }
    for (k = 0; k < N; k++) {
        for (i = 0; i < N; i++) {
            for (j = 0; j < N; j++) {
                if (D[i][k] + D[k][j] < D[i][j]) {
                    D[i][j] = D[i][k] + D[k][j];
                    P[i][j] = k;
                }
            }
        }
    }
} /* FloydAPSP */

ANALYSIS
The time complexity is Θ (V³).
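
A self-contained C sketch on a small illustrative graph (INF stands in for "no edge"; the predecessor matrix P is omitted here for brevity):

#include <stdio.h>

#define N 4
#define INF 1000000      /* stands in for "no edge"; large enough that sums do not overflow */

int main(void)
{
    /* cost matrix of a small directed graph; INF means no direct edge */
    int D[N][N] = {
        {0,   5,   INF, 10 },
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1  },
        {INF, INF, INF, 0  }
    };

    /* Floyd-Warshall: after iteration k, D[i][j] is the shortest i->j
       distance using only intermediate vertices 0..k. */
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (D[i][k] + D[k][j] < D[i][j])
                    D[i][j] = D[i][k] + D[k][j];

    for (int i = 0; i < N; i++) {        /* print the final distance matrix */
        for (int j = 0; j < N; j++)
            printf("%8d", D[i][j]);
        printf("\n");
    }
    return 0;
}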

Conclusion: Thus we implemented the all pairs shortest path algorithm using Floyd-Warshall's algorithm.



Experiment No: 06
Aim: Implementation of 0/1 Knapsack problem using Dynamic programming approach.

Theory:
The knapsack problem is a problem in combinatorial optimization. It derives its name from the
maximization problem of choosing possible essentials that can fit into one bag (of maximum weight) to
be carried on a trip. A similar problem very often appears in business, combinatorics, complexity theory,
cryptography and applied mathematics. Given a set of items, each with a cost and a value, determine
the number of each item to include in a collection so that the total cost does not exceed a given limit and
the total value is as large as possible.

Dynamic Programming for 0-1 Knapsack Problem


In 0/1 Knapsack Problem,

● As the name suggests, items are indivisible, i.e., we cannot take a fraction of any item.
● We have to either take an item completely or leave it completely.
● It is solved using dynamic programming approach.

For this algorithm let c[i,w] = the value of an optimal solution for items 1..i and maximum weight w. Then

c[i,w] = 0                                          if i = 0 or w = 0
c[i,w] = c[i-1, w]                                  if w[i] > w
c[i,w] = max( v[i] + c[i-1, w-w[i]], c[i-1, w] )    otherwise

Algorithm for 0/1 Knapsack problem:

DP-01K(v, w, n, W)
1 for w = 0 to W
2 c[0,w] = 0
3 for i = 1 to n

4 c[i,0] = 0
5 for w = 1 to W



6 if w[i] <= w
7 then if v[i] + c[i-1,w-w[i]] > c[i-1,w]
8 then c[i,w] = v[i] + c[i-1,w-w[i]]
9 else c[i,w] = c[i-1,w]
10 else c[i,w] = c[i-1,w]
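
A compact C version of DP-01K (a sketch; the item arrays here are 0-indexed, so item i of the recurrence corresponds to v[i-1] and w[i-1], and the table limits MAX_N and MAX_W are arbitrary illustrative bounds):

#include <stdio.h>

#define MAX_N 100
#define MAX_W 1000

int c[MAX_N + 1][MAX_W + 1];   /* c[i][w] = best value using items 1..i with capacity w */

static int max(int a, int b) { return a > b ? a : b; }

int knapsack(const int v[], const int w[], int n, int W)
{
    for (int cap = 0; cap <= W; cap++) c[0][cap] = 0;   /* no items -> value 0 */
    for (int i = 1; i <= n; i++) {
        c[i][0] = 0;                                    /* zero capacity -> value 0 */
        for (int cap = 1; cap <= W; cap++) {
            if (w[i - 1] <= cap)    /* item i fits: take the better of skipping or taking it */
                c[i][cap] = max(c[i - 1][cap], v[i - 1] + c[i - 1][cap - w[i - 1]]);
            else                    /* item i does not fit: skip it */
                c[i][cap] = c[i - 1][cap];
        }
    }
    return c[n][W];
}

int main(void)
{
    int v[] = {60, 100, 120}, w[] = {10, 20, 30};
    printf("%d\n", knapsack(v, w, 3, 50));   /* prints 220 */
    return 0;
}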

Analysis of 0/1 Knapsack:

● It takes Θ(nW) time to fill the (n+1)(W+1) table entries, where each entry takes constant time Θ(1) for its
computation.
● It takes Θ(n) time for tracing the solution, as the tracing process traces the n rows starting from row n
and then moving up by 1 row at every step.
● Thus, overall Θ(nW) time is taken to solve the 0/1 knapsack problem using the dynamic programming
approach.

Conclusion: Thus we implemented 0/1 Knapsack problem using Dynamic programming approach.



Experiment No: 07

Aim: Write a program to implement Minimum cost Spanning tree using Prim’s algorithm and its
analysis.

Theory:

Prim’s algorithm is a greedy algorithm that is used to find a minimum spanning tree (MST) of a given
connected weighted graph. This algorithm is preferred when the graph is dense, i.e., when the graph
contains a large number of edges. The algorithm can be applied only to undirected connected graphs, and
in this case it is quite efficient.

The steps to find a minimum spanning tree using Prim’s algorithm are as follows:
1. If the graph has loops or parallel edges, remove them.
2. Randomly choose any node, labelling it with a distance of 0 and all other nodes with ∞. The chosen node
is treated as the current node and considered visited. All other nodes are considered unvisited.
3. Identify all the unvisited nodes that are directly connected to the current node. Calculate the distance
from these unvisited nodes to the current node.
4. Label each of these vertices with its corresponding weight to the current node, but relabel a node only
if the new value is less than the previous value of its label. Each time the nodes are labelled with their
weights, keep track of the path with the smallest weight.
5. Mark the current node as visited. Once a vertex is visited, we need not look at it again.
6. From all the unvisited nodes, find the node which has the minimum weight label, mark it as visited and
treat it as the current working node.
7. Repeat steps 3 to 6 until all nodes are visited.
8. After all steps are completed, we get the desired MST.

Algorithm:
/* S= set of visited node, Q= Queue, G=Graph, w=Weight */
Prim_MST (G, w, S)
{
Initialization (G, S)
S ← Ø                    // the set of visited nodes is initially empty
Q ← V[G]                 // the queue initially contains all nodes
while (Q ≠ Ø)            // while the queue is not empty
do u ← extract_min(Q)    // select the node of minimum distance in Q
S ← S ∪ {u}              // u is added to the visited set S



For each vertex v ϵ adj(u)
do relax(u, v, w)
}

Initialization (G, S)
{
For each vertex v ∈ V[G]
d[v] ← ∞        // unknown distance from the source node
π[v] ← nil      // predecessor node is initially nil
d[s] ← 0        // distance of the source node is zero
}

Relax (u, v, w)

{
If d[v] > w[u,v]      // comparing the new distance with the existing value
{
d[v] ← w[u,v]
π[v] ← u
}
}
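
A simple O(V²) C sketch using an adjacency matrix, which mirrors the steps above (key[] plays the role of d[], parent[] the role of π[]; the example graph is illustrative and 0 means "no edge"):

#include <stdio.h>

#define V 5
#define INF 1000000

int main(void)
{
    /* undirected weighted graph as an adjacency matrix; 0 means no edge */
    int g[V][V] = {
        {0, 2, 0, 6, 0},
        {2, 0, 3, 8, 5},
        {0, 3, 0, 0, 7},
        {6, 8, 0, 0, 9},
        {0, 5, 7, 9, 0}
    };
    int key[V], parent[V], visited[V];

    for (int i = 0; i < V; i++) { key[i] = INF; visited[i] = 0; parent[i] = -1; }
    key[0] = 0;                                /* start from vertex 0 */

    for (int count = 0; count < V; count++) {
        int u = -1;
        for (int i = 0; i < V; i++)            /* extract the unvisited vertex of minimum key */
            if (!visited[i] && (u == -1 || key[i] < key[u]))
                u = i;
        visited[u] = 1;
        for (int v = 0; v < V; v++)            /* relax the edges out of u */
            if (g[u][v] && !visited[v] && g[u][v] < key[v]) {
                key[v] = g[u][v];
                parent[v] = u;
            }
    }

    for (int v = 1; v < V; v++)                /* print the MST edges */
        printf("edge %d - %d  weight %d\n", parent[v], v, g[parent[v]][v]);
    return 0;
}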

Analysis of Prim’s Algorithm:

Finding the minimum distance by a linear scan is O(V), so the overall complexity with an adjacency list
representation is O(V²).
If the queue is kept as a binary heap, relax needs a decrease-key operation which is O(log V), and the
overall complexity is O(V log V + E log V), i.e. O(E log V).
If the queue is kept as a Fibonacci heap, decrease-key has an amortized complexity of O(1) and the overall
complexity is O(E + V log V).



Experiment No: 08

Aim: Write a program to implement Minimum cost Spanning tree using Kruskal's algorithm and its
analysis.

Kruskal’s Algorithm:

Kruskal’s algorithm is the following: first, sort the edges in increasing (or rather non decreasing) order
of costs, so that c(e1) ≤ c(e2) ≤ . . . ≤ c(em); then, starting with an initially empty tree T, go through the
edges one at a time, putting an edge in T if it will not cause a cycle, but throwing the edge out if it would
cause a cycle.

Below are the steps for finding MST using Kruskal’s algorithm
1. Sort all the edges in non-decreasing order of their weight.

2. Pick the smallest edge. Check if it forms a cycle with the spanning tree
formed so far. If cycle is not formed, include this edge. Else, discard it.

3. Repeat step#2 until there are (V-1) edges in the spanning tree.

Algorithm:

MST-Kruskal(G, w)
1. A ← ∅ // initially A is empty
2. for each vertex v ∈ V[G] // line 2-3 takes O(V) time
3. do Create-Set(v) // create set for each vertex
4. sort the edges of E by non decreasing weight w
5. for each edge (u, v) ∈ E, in order by non decreasing weight
6. do if Find-Set(u) ≠ Find-Set(v) // u&v on different trees
7. then A ← A ∪ {(u, v)}
8. Union(u,v)
9. return A
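
A possible C implementation using a simple union-find (union by rank with path compression); the edge list in main is only an example:

#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

int parent[100], rnk[100];

int find_set(int x)                      /* find with path compression */
{
    if (parent[x] != x)
        parent[x] = find_set(parent[x]);
    return parent[x];
}

void union_set(int a, int b)             /* union by rank */
{
    a = find_set(a); b = find_set(b);
    if (a == b) return;
    if (rnk[a] < rnk[b]) { int t = a; a = b; b = t; }
    parent[b] = a;
    if (rnk[a] == rnk[b]) rnk[a]++;
}

int cmp(const void *x, const void *y)    /* sort edges by non-decreasing weight */
{
    return ((const Edge *)x)->w - ((const Edge *)y)->w;
}

int main(void)
{
    int n = 4;                           /* vertices 0..3 */
    Edge e[] = {{0,1,10},{0,2,6},{0,3,5},{1,3,15},{2,3,4}};
    int m = 5, total = 0;

    for (int i = 0; i < n; i++) { parent[i] = i; rnk[i] = 0; }
    qsort(e, m, sizeof(Edge), cmp);

    for (int i = 0; i < m; i++)
        if (find_set(e[i].u) != find_set(e[i].v)) {   /* no cycle: keep the edge */
            union_set(e[i].u, e[i].v);
            printf("edge %d - %d  weight %d\n", e[i].u, e[i].v, e[i].w);
            total += e[i].w;
        }
    printf("MST cost = %d\n", total);
    return 0;
}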



Analysis:
Sorting the edges by weight takes O(E lg E) time and dominates the running time, while the disjoint-set
(union-find) operations take nearly constant amortized time per edge. Since |E| < |V|², O(E lg E) is the
same as O(E lg V), so Kruskal's algorithm runs in O(E lg V) time.

Conclusion: Thus we implemented the minimum spanning tree using Prim’s and Kruskal’s algorithms.



Experiment No: 09

Aim: Write a program to implement Single source shortest path using Dijkstra algorithm and its analysis.

Theory:

Dijkstra's algorithm solves the single-source shortest-path problem when all edges have non-negative
weights. It is a greedy algorithm, similar to Prim's algorithm. The algorithm starts at the source vertex s
and grows a tree T that ultimately spans all vertices reachable from s. Vertices are added to T in order
of distance, i.e., first s, then the vertex closest to s, then the next closest, and so on. The following
implementation assumes that graph G is represented by adjacency lists.

DIJKSTRA (G, w, s)
1. INITIALIZE-SINGLE-SOURCE (G, s)
2. S ← { }                          // S will ultimately contain the vertices whose final
                                    // shortest-path weights from s have been determined
3. Initialize priority queue Q, i.e., Q ← V[G]
4. while priority queue Q is not empty do
5.     u ← EXTRACT_MIN(Q)           // pull out a new vertex
6.     S ← S ∪ {u}
7.     for each vertex v in Adj[u] do    // perform relaxation for each vertex v adjacent to u
8.         Relax (u, v, w)
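
An O(V²) array-based C sketch of the same procedure (the directed example graph is illustrative; a binary heap would bring the running time down to O(E lg V), as noted in the analysis below):

#include <stdio.h>

#define V 5
#define INF 1000000

int main(void)
{
    /* directed graph with non-negative weights; 0 means no edge (off the diagonal) */
    int g[V][V] = {
        {0, 10, 0, 5, 0},
        {0, 0, 1, 2, 0},
        {0, 0, 0, 0, 4},
        {0, 3, 9, 0, 2},
        {7, 0, 6, 0, 0}
    };
    int dist[V], visited[V], s = 0;

    for (int i = 0; i < V; i++) { dist[i] = INF; visited[i] = 0; }
    dist[s] = 0;

    for (int count = 0; count < V; count++) {
        int u = -1;
        for (int i = 0; i < V; i++)            /* EXTRACT_MIN over the unvisited set */
            if (!visited[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        visited[u] = 1;                        /* add u to S */
        for (int v = 0; v < V; v++)            /* relax every edge (u, v) */
            if (g[u][v] && !visited[v] && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }

    for (int i = 0; i < V; i++)
        printf("dist(%d -> %d) = %d\n", s, i, dist[i]);
    return 0;
}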

ANALYSIS
Like Prim's algorithm, Dijkstra's algorithm runs in O(|E|lg|V|) time.

Conclusion: Thus we implemented the Single source shortest path using the Dijkstra algorithm.



Experiment No: 10

Aim: Write a program to implement N-Queen Problem using Backtracking and its analysis.

Theory:

One of the most common examples of backtracking is to arrange N queens on an N×N chessboard
such that no queen can strike down any other queen. A queen can attack horizontally, vertically, or
diagonally. The solution to this problem is also attempted using backtracking, as follows.

We first place the first queen anywhere arbitrarily and then place the next queen in any of the safe places.
We continue this process until the number of unplaced queens becomes zero (a solution is found) or no
safe place is left. If no safe place is left, then we change the position of the previously placed queen.

[Figure: initial state and a solution of the 4 × 4 problem]

General Backtrack Search Approach

● Select an item and set it to one of its options such that it meets current constraints
● Recursively try to set next item
● If you reach a point where all items are assigned and meet constraints, done…return through
recursion stack with solution
● If no viable value for an item exists, backtrack to previous item and repeat from the top
● If viable options for the 1st item are exhausted, no solution exists



Algorithm:

1) Start in the leftmost column


2) If all queens are placed
return true
3) Try all rows in the current column. Do the following for every tried row.
a) If the queen can be placed safely in this row then mark this [row, column] as part of the
solution and recursively check if placing queen here leads to a solution.
b) If placing the queen in [row, column] leads to a solution then return
true.
c) If placing queen doesn't lead to a solution then unmark this [row, column] (Backtrack) and
go to step (a) to try other rows.
4) If all rows have been tried and nothing worked, return false to trigger backtracking.
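
A compact C sketch of this procedure for N = 4, placing one queen per column and backtracking when no safe row exists (the board representation and names are illustrative):

#include <stdio.h>

#define N 4

int board[N][N];

/* Can a queen go at board[row][col], given queens already placed in columns 0..col-1? */
int is_safe(int row, int col)
{
    for (int i = 0; i < col; i++)
        if (board[row][i]) return 0;                       /* same row */
    for (int i = row, j = col; i >= 0 && j >= 0; i--, j--)
        if (board[i][j]) return 0;                         /* upper-left diagonal */
    for (int i = row, j = col; i < N && j >= 0; i++, j--)
        if (board[i][j]) return 0;                         /* lower-left diagonal */
    return 1;
}

int solve(int col)
{
    if (col == N) return 1;                 /* all queens placed: solution found */
    for (int row = 0; row < N; row++)
        if (is_safe(row, col)) {
            board[row][col] = 1;            /* place the queen and recurse */
            if (solve(col + 1)) return 1;
            board[row][col] = 0;            /* backtrack: unmark this square */
        }
    return 0;                               /* no row works in this column */
}

int main(void)
{
    if (!solve(0)) { printf("no solution\n"); return 1; }
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%c ", board[i][j] ? 'Q' : '.');
        printf("\n");
    }
    return 0;
}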

Analysis:

The backtracking algorithm is a slight improvement on the permutation method for solving the N × N
queens problem:
● It constructs the search tree by considering one row of the board at a time, eliminating most non-
solution board positions at a very early stage in their construction.
● Because it rejects row and diagonal attacks even on incomplete boards, for the standard 8 × 8 board it
examines only 15,720 possible queen placements.
● A further improvement, which examines only 5,508 possible queen placements, is to combine the
permutation-based method with this early pruning method:
● The permutations are generated depth-first, and the search space is pruned if the partial
permutation produces a diagonal attack.

Conclusion: Thus we implemented N-Queen Problem using the Backtracking method.



Experiment No: 11
Aim: Implementation of the naïve string matching algorithm and its analysis.

Theory:
The problem of string matching is a prevalent and important problem in computer science today. There
are really two forms of string matching. The first, exact string matching, finds instances of some pattern
in a target string.
For example, if the pattern is "go" and the target is "agogo", then two instances of the pattern appear in
the text (at the second and fourth characters, respectively). The second, inexact string matching or string
alignment, attempts to find the "best" match of a pattern to some target. Usually, the match of a pattern
to a target is either probabilistic or evaluated based on some fixed criteria (for example, the pattern
"aggtgc" matches the target "agtgcggtg" pretty well in two places, located at the first character of the
string and the sixth character).
Both string matching algorithms are used extensively in bioinformatics to isolate structurally similar
regions of DNA or a protein (usually in the context of a gene map or a protein database).

Naive Algorithm:
The idea of the naive solution is just to compare, character by character, the text window T[s...s + m − 1]
with the pattern P[0...m − 1] for every shift s ∈ {0, . . . , n − m}. It returns all the valid shifts found.



Algorithm:

// Return an array which contains all valid shifts of the pattern in the text


Naïve_Method(text, pattern)
{
    N = length(text);
    M = length(pattern);
    limit = N - M;
    arrayOfValidShift[];
    for (i = 0; i <= limit; i++)
    {
        j = 0;
        k = i;
        for (j = 0; j < M AND text[k] == pattern[j]; j++)
            k++;
        if (j == M)                      // the whole pattern matched at shift i
            Add i to arrayOfValidShift;
    }
    return arrayOfValidShift;
}
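
The same procedure as a runnable C function (a sketch; it simply prints each valid shift instead of returning an array, and the strings in main are the example from the theory above):

#include <stdio.h>
#include <string.h>

/* Print every shift s at which pattern occurs in text. */
void naive_match(const char *text, const char *pattern)
{
    int n = strlen(text), m = strlen(pattern);
    for (int s = 0; s <= n - m; s++) {
        int j = 0;
        while (j < m && text[s + j] == pattern[j])   /* compare character by character */
            j++;
        if (j == m)                                  /* the whole pattern matched */
            printf("pattern occurs with shift %d\n", s);
    }
}

int main(void)
{
    naive_match("agogo", "go");   /* prints shifts 1 and 3 (0-indexed) */
    return 0;
}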

Analysis:

The problem with this approach is its efficiency. In fact, the time complexity of the naïve algorithm in
its worst case is O(M × N). For example, if the pattern to search for is a^m (the letter 'a' repeated m times)
and the text is a^n, then we need M comparison operations per shift. For the whole text, we need
(N − M + 1) × M operations; generally M is very small compared to N, which is why we can simply state
the complexity as O(M × N).

Conclusion: Thus we implemented the Naïve string matching Algorithm.

