
ADA Lab File

The document outlines various algorithms and their time complexities, including sorting algorithms (Bubble Sort, Insertion Sort, Selection Sort, Merge Sort, Quick Sort) and searching algorithms (Linear Search, Binary Search). It also discusses matrix operations (addition and multiplication), dynamic programming problems (Longest Common Subsequence, Matrix Chain Multiplication), and graph algorithms (Dijkstra’s and Bellman Ford). Each program aims to analyze the time complexity associated with its respective algorithm.

Uploaded by Bhumika Piplani

PROGRAM 1

AIM: To find the time complexity of displaying a table up to ‘n’ iterations.


THEORY:
Asymptotic Notations:
Asymptotic notations are mathematical tools that allow you to analyze an algorithm’s
running time by describing its behavior as the input size grows.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)

Time complexity is the running time of an algorithm expressed as a function of the size of its input.
PROGRAM CODE:
OUTPUT:
PROGRAM 2

AIM: To implement and analyse the time complexity of the Bubble Sort algorithm.


THEORY: Bubble sort is a simple comparison-based sorting algorithm.
It has a worst-case time complexity of O(n²) and a space complexity of
O(1). The number of swaps performed equals the number of inversion pairs
in the given array.
In this algorithm,
• traverse the array from the left, compare adjacent elements, and place the larger one on the right;
• in this way, the largest element moves to the rightmost end in the first pass;
• the process is repeated to place the second largest element, and so on,
until the array is sorted.
PROGRAM CODE:
OUTPUT:
PROGRAM 3

AIM: To implement and analyse the time complexity of the Insertion Sort algorithm.
THEORY: The array is scanned sequentially and each unsorted item is inserted
into its correct position in the sorted sub-list (within the same array). This
algorithm is not suitable for large data sets, as its average-case and
worst-case complexity are O(n²), where n is the number of items.
To sort an array of size N in ascending order, iterate over the array and
compare the current element (the key) to its predecessor. If the key is smaller
than its predecessor, compare it to the elements before that, moving each
greater element one position up to make space for the key.
PROGRAM CODE:
OUTPUT:
PROGRAM 4

AIM: To implement and analyse the time complexity of the Selection Sort algorithm.
THEORY: Selection sort is a simple comparison-based sorting algorithm that works
by repeatedly selecting the smallest (or largest) element from the unsorted
portion of the list and moving it to the end of the sorted portion of the list.
Algorithm
Step 1 − Set MIN to location 0
Step 2 − Search the minimum element in the list
Step 3 − Swap with value at location MIN
Step 4 − Increment MIN to point to next element
Step 5 − Repeat until list is sorted
PROGRAM CODE:
OUTPUT:
PROGRAM 5

AIM: To implement and analyse the time complexity of the Merge Sort algorithm.
THEORY: Merge sort is defined as a sorting algorithm that works by dividing an
array into smaller subarrays, sorting each subarray, and then merging the sorted
subarrays back together to form the final sorted array.
Merge sort is a recursive algorithm that continuously splits the array in half until it
cannot be further divided i.e., the array has only one element left (an array with one
element is always sorted). Then the sorted subarrays are merged into one sorted array.
PROGRAM CODE:
OUTPUT:
PROGRAM 6

AIM: To implement and analyse the time complexity of the Quick Sort algorithm.
THEORY: QuickSort is a sorting algorithm based on the divide-and-conquer
strategy: it picks an element as a pivot and partitions the given array around
it, placing the pivot at its correct position in the sorted array.
The key step in QuickSort is partition(). Its goal is to place the pivot (any
element can be chosen as the pivot) at its correct position in the sorted
array, with all smaller elements to the left of the pivot and all greater
elements to the right.
QuickSort is then applied recursively to the sub-arrays on each side of the
pivot, which finally sorts the array.

PROGRAM CODE:
OUTPUT:
PROGRAM 7

AIM: To implement and analyse the time complexity of the Linear Search algorithm.


THEORY: Linear search, also called sequential search, is the simplest searching
algorithm. We traverse the list from the beginning and compare each element of
the list with the item whose location is to be found. If a match is found, the
location of the item is returned; otherwise, the algorithm reports that the
item is not present (commonly by returning -1).
PROGRAM CODE:
OUTPUT:
PROGRAM 8

AIM: To implement and analyse the time complexity of the Binary Search algorithm.

THEORY: Binary search is the search technique that works efficiently on sorted lists.
Hence, to search an element into some list using the binary search technique, we must
ensure that the list is sorted.

Binary search follows the divide and conquer approach in which the list is divided into
two halves, and the item is compared with the middle element of the list. If the match is
found then, the location of the middle element is returned. Otherwise, we search into
either of the halves depending upon the result produced through the match.

PROGRAM CODE:
OUTPUT:
PROGRAM 9

AIM: To implement and analyse the time complexity of Matrix Addition.
THEORY: Two matrices can be added only if they have the same number of rows
and the same number of columns.
Step 1: Start
Step 2: Declare matrix mat1[row][col];
and matrix mat2[row][col];
and matrix sum[row][col]; row= no. of rows, col= no. of columns
Step 3: Read row, col, mat1[][] and mat2[][]
Step 4: Declare variables i=0, j=0
Step 5: Repeat until i < row
5.1: Set j=0; repeat until j < col
sum[i][j]=mat1[i][j] + mat2[i][j]
Set j=j+1
5.2: Set i=i+1
Step 6: sum is the required matrix after addition
Step 7: Stop

PROGRAM CODE:
OUTPUT:
PROGRAM 10

AIM: To implement and analyse the time complexity of Matrix Multiplication.
THEORY: Matrix multiplication can only be performed if the dimensions are
compatible. Suppose the two matrices are A and B, with dimensions A (m x n) and
B (p x q); the resultant matrix can be found if and only if n = p. The order of
the resultant matrix C will then be (m x q).
Algorithm
matrixMultiply(A, B):
Assume the dimension of A is (m x n) and the dimension of B is (p x q)
Begin
if n is not the same as p, then exit
otherwise define C as an (m x q) matrix initialised to zero
for i in range 0 to m - 1, do
for j in range 0 to q - 1, do
for k in range 0 to n - 1, do
C[i, j] = C[i, j] + (A[i, k] * B[k, j])
done
done
done
End
PROGRAM CODE:
OUTPUT:
PROGRAM 11

AIM: To implement and analyse the time complexity of Matrix Chain Multiplication.
THEORY: Two matrices of size m*n and n*p when multiplied, they generate a
matrix of size m*p and the number of multiplications performed are m*n*p.
Now, for a given chain of N matrices, the first partition can be done in N-1 ways. For
example, sequence of matrices A, B, C and D can be grouped as (A)(BCD),
(AB)(CD) or (ABC)(D) in these 3 ways.
So a range [i, j] can be broken into two groups like {[i, i+1], [i+1, j]}, {[i, i+2], [i+2,
j]}, . . . , {[i, j-1], [j-1, j]}.
• Each of the groups can be further partitioned into smaller groups and we can find
the total required multiplications by solving for each of the groups.
• The minimum number of multiplications among all the first partitions is the
required answer.
PROGRAM CODE:
OUTPUT:
PROGRAM 12

AIM: To implement and analyse the time complexity of the Longest Common Subsequence Problem.
THEORY: Here, ‘longest’ means the subsequence should be as long as possible,
‘common’ means its characters appear in both strings, and ‘subsequence’ means
the characters are taken from a string in their original left-to-right order,
though not necessarily contiguously.
The longest common subsequence (LCS) is defined as the longest subsequence that
is common to all the given sequences, where the elements of the subsequence are
not required to occupy consecutive positions within the original sequences.

PROGRAM CODE:
OUTPUT:
PROGRAM 13

AIM: To implement and analyse the time complexity of the Optimal Binary Search Tree Problem.
THEORY: An Optimal Binary Search Tree (OBST), also known as a Weighted
Binary Search Tree, is a binary search tree that minimizes the expected search
cost. The time complexity of constructing an OBST by dynamic programming is
O(n^3), where n is the number of keys; with Knuth's optimization this can be
reduced to O(n^2). Once the OBST is constructed, searching for a key takes time
proportional to the depth of that key in the tree.
A binary search tree is a binary tree in which, relative to the root, the left
subtree holds smaller values and the right subtree holds larger values.
PROGRAM CODE:
OUTPUT:
PROGRAM 14

AIM: To implement and analyse the time complexity of Huffman Coding.
THEORY: Huffman coding is a lossless data compression algorithm. The idea is to
assign variable-length codes to input characters, lengths of the assigned codes are
based on the frequencies of corresponding characters.
Algorithm Huffman(C)
{
n = |C|
Q = C // min-priority queue keyed on frequency f
for i <- 1 to n-1
do
{
temp <- GetNode()
left[temp] <- Get_min(Q)
right[temp] <- Get_min(Q)
a = left[temp]
b = right[temp]
f[temp] <- f[a] + f[b]
Insert(Q, temp)
}
return Get_min(Q) // root of the Huffman tree
}
PROGRAM CODE:
OUTPUT:
PROGRAM 15

AIM: To implement and analyse the time complexity of Dijkstra’s Algorithm.
THEORY: Dijkstra’s algorithm is a single-source shortest path algorithm: given
a source node, it finds the shortest path from that node to all other nodes.
It works on the principle that any subpath B -> D of a shortest path A -> D
between vertices A and D is itself a shortest path between vertices B and D.
PROGRAM CODE:
OUTPUT:
PROGRAM 16

AIM: To implement and analyse the time complexity of the Bellman Ford Algorithm.
THEORY: The Bellman Ford algorithm finds the shortest path from a source
vertex to all other vertices of a weighted graph. It is similar to Dijkstra’s
algorithm, but it can also handle graphs in which edges have negative weights.
The Bellman Ford algorithm works by overestimating the length of the path from
the starting vertex to all other vertices, then iteratively relaxing those
estimates by finding new paths that are shorter than the previously
overestimated ones.
PROGRAM CODE:
OUTPUT:
PROGRAM 17

AIM: To implement and analyse the time complexity of the Naïve String Matching Algorithm.
THEORY: The naïve string matching algorithm is the simplest and most basic
algorithm for matching a given pattern against a given text.
Naive Algorithm:
i) It is the simplest method and uses a brute-force approach.
ii) It is a straightforward way of solving the problem.
iii) It compares the first character of the pattern with the text. If a match
is found, the pointers in both strings are advanced; if not, the text pointer
is incremented and the pattern pointer is reset. This process is repeated until
the end of the text.
iv) It requires no pre-processing; it directly compares the two strings
character by character.
v) Time Complexity = O(m * (n - m + 1)), where n is the text length and m is
the pattern length.
PROGRAM CODE:
OUTPUT:
PROGRAM 18

AIM: To implement and analyse the time complexity of the Rabin Karp Algorithm.
THEORY: Rabin Karp is also a string matching algorithm; it matches the pattern
against the text with the help of a hash function.
RABIN-KARP-MATCHER (T, P, d, q)
1. n ← length [T]
2. m ← length [P]
3. h ← d^(m-1) mod q
4. p ← 0
5. t0 ← 0
6. for i ← 1 to m
7. do p ← (d·p + P[i]) mod q
8. t0 ← (d·t0 + T[i]) mod q
9. for s ← 0 to n-m
10. do if p = ts
11. then if P[1..m] = T[s+1..s+m]
12. then print "Pattern occurs with shift" s
13. if s < n-m
14. then ts+1 ← (d·(ts - T[s+1]·h) + T[s+m+1]) mod q
PROGRAM CODE:
OUTPUT:
PROGRAM 19

AIM: To implement and analyse the time complexity of the Knuth Morris Pratt Algorithm.
THEORY: 1. The Prefix Function (Π): the prefix function Π for a pattern
encapsulates knowledge about how the pattern matches against shifts of itself.
This information can be used to avoid useless shifts of the pattern 'p'; in
other words, it enables avoiding backtracking over the string 'S'.

2. The KMP Matcher: with the string 'S', pattern 'p' and prefix function 'Π' as
inputs, it finds the occurrences of 'p' in 'S' and returns the shifts of 'p' at
which occurrences are found.

PROGRAM CODE:
OUTPUT:
