Week 1 Merged Compressed
Algorithm:
Step 1: Input the size n and read the n elements of the array.
Step 2: Input the target element x to be searched.
Step 3: Record the start time just before the search begins (startTime).
Step 4: Starting from index 0, run a loop over the array:
for (int i = 0; i < n; i++)
Step 5: Inside the loop, use an if condition to compare each array element with the key:
if (arr[i] == x)
Step 6: If the target element is found, print the index at which it was found.
Step 7: Otherwise, print "element not found".
Step 8: After the search finishes, stop the timer and display:
Execution time (in nanoseconds)
Approximate memory used (in bytes)
Flow chart:
Program:
import java.util.Scanner;
public class SearchPerformance {
public static int search (int arr[], int n, int x) {
for (int i = 0; i < n; i++) {
if (arr[i] == x)
return i;
}
return -1;
}
public static void main(String[] args) {
    Scanner sc = new Scanner(System.in);
    System.out.print("Enter the number of elements: ");
    int n = sc.nextInt();
    int arr[] = new int[n];
    for (int i = 0; i < n; i++) {
        arr[i] = i + 1;
    }
    System.out.print("Enter the element to search: ");
    int x = sc.nextInt();
    long startTime = System.nanoTime();
    int index = search(arr, n, x);
    long endTime = System.nanoTime();
    if (index == -1) {
        System.out.println("No");
    } else {
        System.out.println("Yes, found at index: " + index);
    }
    long duration = endTime - startTime;
    System.out.println("Execution time (nanoseconds): " + duration);
    long spaceUsed = (long) n * 4;
    System.out.println("Approximate memory used (bytes): " + spaceUsed);
    sc.close();
}
}
Analysis:
The best case: The best-case time complexity of linear search is O(1), because the target is at index 0 of the array and only one comparison is required:
T(n) = 1 —> O(1)
The worst case: The worst-case time complexity is O(n), because the element is either at the last index or not present at all, so all n elements must be compared. If the size of the array is n, it performs n comparisons:
T(n) = n —> O(n)
The average case: On average, finding the 1st element takes 1 comparison, the 2nd takes 2 comparisons, and the n-th takes n comparisons, giving (n + 1)/2 comparisons on average, which is O(n).
Space Complexity:
—> The auxiliary space of linear search is O(1), for variables such as the loop counter i and the target. Storing the array of n elements itself takes O(n).
1b) Aim: Design and perform analysis (time and space complexity) to find the presence of an
element or not in a given data type /database/data set using the Fibonacci Search technique
Algorithm:
Step 1: Input the array
Step 2: Input the target element
Step 3: Record the start time just before the search begins (startTime).
Step 4: As the first step, find the immediate Fibonacci number that is greater than or equal
to the size of the input array. Then, also hold the two preceding numbers of the selected
Fibonacci number, that is, we hold Fm, Fm-1, and Fm-2 numbers from the Fibonacci Series.
Step 5: Initialize the offset value as -1, as we are considering the entire array as the searching
range in the beginning.
Step 6: Until Fm-2 is greater than 0, compare the key element to be found with the element
at index [min(offset+Fm-2,n-1)]. If a match is found, print the index. If the key element is
found to be of lesser value than this element, we reduce the range of the input from 0 to the
index of this element. The Fibonacci numbers are also updated with Fm= Fm-2. But if the key
element is greater than the element at this index, we remove the elements before this element
from the search range. The Fibonacci numbers are updated as Fm= Fm-1. The offset value is
set to the index of this element.
Step 7: Because the Fibonacci series contains two 1s, a case arises where Fm-1 becomes 1, meaning only one element is left to be checked. Compare the key element with that element and print its index if they match; otherwise, print "element not found".
Step 8: After the search finishes, stop the timer and display:
Execution time (nanoseconds)
Approximate memory used (bytes)
Flow Chart:
Program:
import java.util.*;
public class FibonacciSearchPerformance {
public static int fibSearch(int arr[], int x, int n) {
    int fibM2 = 0;                // (m-2)th Fibonacci number
    int fibM1 = 1;                // (m-1)th Fibonacci number
    int fibM = fibM2 + fibM1;     // m-th Fibonacci number
    // Find the smallest Fibonacci number greater than or equal to n
    while (fibM < n) {
        fibM2 = fibM1;
        fibM1 = fibM;
        fibM = fibM2 + fibM1;
    }
    int offset = -1;
    while (fibM > 1) {
        int i = Math.min(offset + fibM2, n - 1);
        if (arr[i] < x) {         // Key lies in the right part: move one Fibonacci step down
            fibM = fibM1;
            fibM1 = fibM2;
            fibM2 = fibM - fibM1;
            offset = i;
        } else if (arr[i] > x) {  // Key lies in the left part: move two Fibonacci steps down
            fibM = fibM2;
            fibM1 -= fibM2;
            fibM2 = fibM - fibM1;
        } else {
            return i;
        }
    }
    // One element may remain to be checked
    if (fibM1 == 1 && offset + 1 < n && arr[offset + 1] == x) {
        return offset + 1;
    }
    return -1;
}
public static void main(String[] args) {
    Scanner sc = new Scanner(System.in);
    System.out.print("Enter the number of elements: ");
    int n = sc.nextInt();
    int arr[] = new int[n];
    for (int i = 0; i < n; i++) {
        arr[i] = i + 1;
    }
    System.out.print("Enter the element to search: ");
    int x = sc.nextInt();
    long startTime = System.nanoTime();
    int index = fibSearch(arr, x, n);
    long endTime = System.nanoTime();
    if (index == -1) {
        System.out.println("No");
    } else {
        System.out.println("Yes, found at index: " + index);
    }
    long duration = endTime - startTime;
    System.out.println("Execution time (nanoseconds): " + duration);
    long spaceUsed = (long) n * 4;
    System.out.println("Approximate memory used (bytes): " + spaceUsed);
    sc.close();
}
}
Output:
Analysis:
The Best Case: Fibonacci search has a best-case time complexity of O(1), because if the first probed element is the target, only one comparison is required:
T(n) = 1 —> O(1)
Worst Case: For the worst case, Fibonacci search has a time complexity of O (log n) because
the search space is divided based on Fibonacci numbers after each comparison. If the element
is not present, it takes log(n) comparisons, where n is the size of the dataset. So,
T(n) = log n —> O (log n)
The Average Case: In the average case, the search time depends on the position of the target
relative to the calculated Fibonacci points. Each iteration reduces the search space
logarithmically, so:
T(n) = log n —> O (log n)
This logarithmic growth reflects the Fibonacci sequence's nature, making Fibonacci search
suitable for large datasets with uniformly distributed elements.
Space Complexity:
The iterative Fibonacci search keeps only a constant number of variables (fibM, fibM1, fibM2, offset), so its auxiliary space is O(1); storing the array of n elements takes O(n).
2a) Aim: Design and perform analysis (time and space complexity) to sort the given element
in ascending or descending order for a given data type /database/data set using Bubble sort or
exchange sort technique
Algorithm:
Step 1: Define the array
We have an unsorted array that needs to be sorted using Bubble Sort.
Step 2: Start the Timer and iterate through the array
Compare adjacent elements and swap them if they are in
the wrong order. Continue this process for the entire array.
function bubbleSort(array):
    n = length(array)
    for i in range(n - 1):              # Number of passes
        swapped = False
        for j in range(n - i - 1):      # Traverse unsorted part
            if array[j] > array[j + 1]: # If current element is greater than next
                swap(array[j], array[j + 1])
                swapped = True
        if not swapped:
            break                       # Optimization: stop if no swaps occurred
Step 3: Repeat the process for all elements
Continue swapping adjacent elements until the entire array is sorted.
for i in range(n - 1):
    for j in range(n - i - 1):
        if array[j] > array[j + 1]:
            swap(array[j], array[j + 1])
Step 4: The array is now sorted
After completing all passes, the array is sorted in ascending order.
Step 5: Stop the Timer and Calculate the Execution Time
Measure the time taken for sorting.
startTime = currentTime()
bubbleSort(array)
endTime = currentTime()
executionTime = endTime - startTime
print("Execution Time:", executionTime)
Flow chart:
Program:
import java.util.*;
sc.close();
}
}
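The bubble sort listing above is incomplete in this copy; a self-contained sketch in the style of the other weeks' programs (the class name BubbleSortPerformance and the fixed-size random driver are assumptions, standing in for the Scanner input used elsewhere) might look like:

```java
import java.util.Arrays;
import java.util.Random;

public class BubbleSortPerformance {
    // Repeatedly swap adjacent out-of-order elements; stop early if a pass makes no swaps
    static void bubbleSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            boolean swapped = false;
            for (int j = 0; j < n - i - 1; j++) {
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                    swapped = true;
                }
            }
            if (!swapped) break;   // Optimization: array already sorted
        }
    }

    public static void main(String[] args) {
        int size = 12;             // fixed size instead of Scanner input
        int[] arr = new int[size];
        Random random = new Random();
        for (int i = 0; i < size; i++) {
            arr[i] = random.nextInt(1000);
        }
        System.out.println("Original Array: " + Arrays.toString(arr));
        long startTime = System.nanoTime();
        bubbleSort(arr);
        long endTime = System.nanoTime();
        System.out.println("Sorted Array: " + Arrays.toString(arr));
        System.out.println("Approximate Space Complexity: " + (arr.length * 4) + " bytes");
        System.out.println("Execution Time: " + (endTime - startTime) + " nanoseconds");
    }
}
```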
Output:
Enter the size of the array: 12
Original Array: [275, 418, 410, 680, 292, 958, 571, 433, 707, 841, 290, 408]
Sorted Array: [275, 290, 292, 408, 410, 418, 433, 571, 680, 707, 841, 958]
Approximate Space Complexity: 96 bytes
Execution Time: 7340 nanoseconds
Analysis:
1. Time Complexity:
Best Case (Already Sorted): O(n)
→ One full pass without swaps confirms the array is sorted.
Average Case: O(n^2)
→ On average, the algorithm performs n(n-1)/2 comparisons and swaps.
Worst Case (Reverse Sorted): O(n^2)
→ Requires maximum swaps for each adjacent pair.
2. Space Complexity:
Auxiliary Space: O (1)
→ In-place sorting (no extra space required).
2b) Aim: Design and perform analysis (time and space complexity) to sort the given element
in ascending or descending order for the given data type /database/data set using the insertion
sort technique
Algorithm:
Step 1: Define the array
We have an unsorted array that needs to be sorted using Insertion Sort.
Step 2: Start the Timer and Begin Sorting
Start from the second element (index 1) and insert it into its correct position in the sorted part
of the array.
function insertionSort(array):
    n = length(array)
    for i in range(1, n):                # Start from the second element
        key = array[i]                   # Pick the current element
        j = i - 1                        # Start comparing with elements before it
        while j >= 0 and array[j] > key: # Shift larger elements to the right
            array[j + 1] = array[j]
            j -= 1
        array[j + 1] = key               # Place key at the correct position
Step 3: Insert elements one by one into the correct position
Compare the current element with previous elements and shift them to make space.
for i in range(1, n):
    key = array[i]
    j = i - 1
    while j >= 0 and array[j] > key:
        array[j + 1] = array[j]   # Shift larger elements right
        j -= 1
    array[j + 1] = key            # Insert the key element at its correct position
Step 4: The array is now sorted
After all insertions, the array will be sorted.
Step 5: Stop the Timer and Calculate Execution Time
Measure the time taken for sorting.
startTime = currentTime()
insertionSort(array)
endTime = currentTime()
executionTime = endTime - startTime
print("Execution Time:", executionTime)
Flow chart
Program:
import java.util.*;
public class WEK {
void sort (int arr[]) {
int n = arr.length;
for (int i = 1; i < n; i++) {
int key = arr[i];
int j = i - 1;
while (j >= 0 && arr[j] > key) {
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = key;
}}
void printArray(int arr[]) {
int n = arr.length;
for (int i = 0; i < n; i++) {
System.out.print(arr[i] + " ");
}
System.out.println();
}
public static void main (String [] args) {
WEK obj = new WEK ();
Scanner sc = new Scanner (System.in);
System.out.print("Enter the size of the array: ");
int size = sc.nextInt();
int [] arr = new int[size];
Random random = new Random();
for (int i = 0; i < size; i++) {
    arr[i] = random.nextInt(1000);
}
System.out.println("Original Array: " + Arrays.toString(arr));
long startTime = System.nanoTime();
obj.sort(arr);
long endTime = System.nanoTime();
System.out.println("Sorted Array: " + Arrays.toString(arr));
int spaceComplexity = arr.length * 4;
System.out.println("Approximate Space Complexity: " + spaceComplexity + " bytes");
System.out.println("Execution Time: " + (endTime - startTime) + " nanoseconds");
sc.close();
}
}
Output:
Enter the size of the array: 12
Original Array: [864, 764, 35, 456, 154, 205, 303, 933, 325, 272, 800, 768]
Sorted Array: [35, 154, 205, 272, 303, 325, 456, 764, 768, 800, 864, 933]
Approximate Space Complexity: 48 bytes
Execution Time: 6320 nanoseconds
Analysis:
1. Time Complexity:
Best Case (Already Sorted): O(n)
→ Only one comparison per element (no shifting required).
Average Case: O(n^2)
→ Each element is compared and shifted on average.
Worst Case (Reverse Sorted): O(n^2)
→ Each element requires maximum comparisons and shifts.
2. Space Complexity:
Auxiliary Space: O (1)
→ In-place sorting (no additional space required).
Week 2: Tree and Graph Traversal Techniques:
Aim: Design and perform analysis (time and space complexity) to perform tree traversal techniques (3 types
of traversals: inorder, preorder, and postorder) for a given data type/database/data set.
Algorithm:
Inorder Traversal Algorithm (Left, Root, Right)
Step 1: Define the Binary Tree
Each node has a left child, a right child, and a value.
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None
Step 2: Start Inorder Traversal (Recursive Approach)
Traverse the left subtree, then visit the root, and finally traverse the right subtree.
function inorderTraversal(root):
    if root is not None:
        inorderTraversal(root.left)    # Visit left subtree
        print(root.value)              # Process the current node
        inorderTraversal(root.right)   # Visit right subtree
Step 3: Recursively Visit All Nodes
Call the function recursively for left and right subtrees.
inorderTraversal(root.left)
print(root.value)
inorderTraversal(root.right)
Step 4: Inorder Traversal is Complete
Once all recursive calls are done, the traversal is finished.
Step 5: Measure Execution Time
Record the time taken for traversal.
startTime = currentTime()
inorderTraversal(root)
endTime = currentTime()
executionTime = endTime - startTime
print ("Execution Time:", executionTime)
Program:
class Node {
    int data;
    Node left, right;
scanner.close();
}
}
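Only the opening of the traversal program survives above; a self-contained sketch of the three recursive traversals (the class name TreeTraversal and the hard-coded sample tree, built to match the run that follows, are assumptions) could be:

```java
public class TreeTraversal {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Left, Root, Right
    static void inorder(Node root, StringBuilder out) {
        if (root == null) return;
        inorder(root.left, out);
        out.append(root.data).append(' ');
        inorder(root.right, out);
    }

    // Root, Left, Right
    static void preorder(Node root, StringBuilder out) {
        if (root == null) return;
        out.append(root.data).append(' ');
        preorder(root.left, out);
        preorder(root.right, out);
    }

    // Left, Right, Root
    static void postorder(Node root, StringBuilder out) {
        if (root == null) return;
        postorder(root.left, out);
        postorder(root.right, out);
        out.append(root.data).append(' ');
    }

    public static void main(String[] args) {
        // Sample tree: root 34, left child 24 (with left child 23), right child 45 (with right child 56)
        Node root = new Node(34);
        root.left = new Node(24);
        root.left.left = new Node(23);
        root.right = new Node(45);
        root.right.right = new Node(56);

        StringBuilder sb = new StringBuilder();
        long start = System.nanoTime();
        inorder(root, sb);
        long end = System.nanoTime();
        System.out.println("Inorder Traversal: " + sb.toString().trim());
        System.out.println("Time taken: " + (end - start) + " ns");

        sb.setLength(0);
        preorder(root, sb);
        System.out.println("Preorder Traversal: " + sb.toString().trim());

        sb.setLength(0);
        postorder(root, sb);
        System.out.println("Postorder Traversal: " + sb.toString().trim());
    }
}
```

Each traversal visits every node exactly once, which is where the O(n) time bound in the analysis comes from.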
Output:
Enter the number of nodes: 5
Enter the root value: 34
Enter value for node 2: 24
Insert 24 to the left or right of 34? (L/R): L
Enter value for node 3: 23
Insert 23 to the left or right of 34? (L/R): L
Insert 23 to the left or right of 24? (L/R): L
Enter value for node 4: 45
Insert 45 to the left or right of 34? (L/R): R
Enter value for node 5: 56
Insert 56 to the left or right of 34? (L/R): R
Inorder Traversal: 24 23 34 45 56
Time taken: 1831166 ns
Preorder Traversal: 34 24 23 45 56
Time taken: 368041 ns
Postorder Traversal: 23 24 56 45 34
Time taken: 320250 ns
Analysis:
Algorithm:
Output:
Enter number of vertices: 5
Enter number of edges: 4
Enter edges (source, destination):
0 1
0 2
1 3
1 4
Enter start node for traversal: 0
DFS Traversal: 0 1 3 4 2
Time taken: 1217970 ns
BFS Traversal: 0 1 2 3 4
Time taken: 380970 ns
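No program listing survives for this experiment; a minimal adjacency-list implementation that produces traversals like those shown above (the class name GraphTraversal and the helper methods are assumptions) could be:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class GraphTraversal {
    // Recursive DFS: visit a vertex, then recurse into each unvisited neighbor
    static void dfs(List<List<Integer>> adj, int v, boolean[] visited, StringBuilder out) {
        visited[v] = true;
        out.append(v).append(' ');
        for (int next : adj.get(v)) {
            if (!visited[next]) dfs(adj, next, visited, out);
        }
    }

    // Iterative BFS: process vertices level by level using a queue
    static String bfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        StringBuilder out = new StringBuilder();
        Queue<Integer> queue = new ArrayDeque<>();
        visited[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            out.append(v).append(' ');
            for (int next : adj.get(v)) {
                if (!visited[next]) {
                    visited[next] = true;
                    queue.add(next);
                }
            }
        }
        return out.toString().trim();
    }

    static List<List<Integer>> buildGraph(int vertices, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < vertices; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {   // undirected: add both directions
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        return adj;
    }

    public static void main(String[] args) {
        // Sample graph from the run above: 5 vertices, edges 0-1, 0-2, 1-3, 1-4
        List<List<Integer>> adj = buildGraph(5, new int[][]{{0, 1}, {0, 2}, {1, 3}, {1, 4}});
        StringBuilder dfsOut = new StringBuilder();
        dfs(adj, 0, new boolean[5], dfsOut);
        System.out.println("DFS Traversal: " + dfsOut.toString().trim());
        System.out.println("BFS Traversal: " + bfs(adj, 0));
    }
}
```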
Analysis:
1. Depth-First Search (DFS)
Time Complexity:
Best Case: O (V + E)
→ If the graph is sparse (few edges), DFS only explores a small part of the graph.
Average Case: O (V + E)
→ Each vertex and edge is processed once.
Worst Case: O (V + E)
→ When DFS traverses the entire graph, visiting every vertex and edge.
DFS visits each vertex once and explores all its edges. The time depends on the number of vertices V and edges
E.
Space Complexity:
Auxiliary Space: O(V)
→ For the visited array and recursion stack (in the worst case, the stack depth reaches V).
Adjacency List Storage: O (V + E)
→ Storing the graph itself requires space for all vertices and edges.
2. Breadth-First Search (BFS)
Time Complexity:
Best Case: O (V + E)
→ If the start node connects to most vertices directly (shallow graph).
Average Case: O (V + E)
→ Visits each vertex and edge once.
Worst Case: O (V + E)
→ Fully connected graph where BFS explores all vertices and edges.
BFS explores each vertex and processes all its neighbors using a queue, resulting in O(V + E) complexity.
Space Complexity:
Auxiliary Space: O(V)
→ For the visited array and queue.
Adjacency List Storage: O (V + E)
→ Same as DFS for graph storage
Aim: Design and perform analysis (time and space complexity) to find the presence of an element or not in a given
data type/database/data set using the Binary Search technique
Algorithm:
Step 1: Input the array in sorted order (ascending or descending order)
Step 2: Input the target element
Step 3: Record the start time just before the search begins (startTime).
Step 4: Compute the middle index to split the current range into two halves. If the size of the range is even, mid = n/2; if odd, mid = (n + 1)/2. In code:
mid = (low + high) / 2
Step 5: Run a loop over the range and use a control statement. If the target element is exactly at the middle, print the middle index of the array:
If (arr[mid] == x)
print mid
Step 6: Maintain two pointers, low (the starting index of the range) and high (the last index of the range). If the middle element is less than the target element, discard the left subarray entirely and search the right subarray:
If (arr[mid]<x)
low=mid+1
Step 7: If the middle element is greater than the target element, then we completely avoid the right subarray and we
perform a search on the left subarray.
If (arr[mid]>x)
high=mid-1
Step 8: Repeat until the target element is the middle element and print the middle index.
Step 9: If the element was not equal to the middle index after several iterations, then print element not found.
Step 10: After the search finishes, stop the timer and display:
Execution time (nanoseconds)
Approximate memory used (bytes)
Flow chart:
Program:
import java.util.*;
public class BinarySearchPerformance {
public static int binarySearch(int arr[], int low, int high, int x) {
    while (low <= high) {
        int mid = (low + high) / 2;
        if (arr[mid] == x) {
return mid;
}
if (arr[mid] < x) {
low = mid + 1;
} else {
high = mid - 1;
}
}
return -1;
}
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
System.out.print("Enter the number of elements: ");
int n = sc.nextInt();
int arr[] = new int[n];
for (int i = 0; i < n; i++) {
arr[i] = i + 1;
}
System.out.print("Enter the element to search: ");
int x = sc.nextInt();
long startTime = System.nanoTime();
int index = binarySearch(arr, 0, n - 1, x);
long endTime = System.nanoTime();
if (index == -1) {
System.out.println("No");
} else {
System.out.println("Yes, found at index: " + index);
}
long duration = endTime - startTime;
System.out.println("Execution time (nanoseconds): " + duration);
long spaceUsed = (long) n * 4;
System.out.println("Approximate memory used (bytes): " + spaceUsed);
sc.close();
}
}
Expected Output:
Enter the number of elements: 100
Enter the element to search: 74
Yes, found at index: 73
Execution time (nanoseconds): 10120
Approximate memory used (bytes): 400
Analysis:
The Best Case:
Binary search has a best-case time complexity of O(1) because if the middle element is the
target, it requires only one comparison. The time complexity is:
T(n) = 1 —> O(1)
Worst Case:
For the worst case, binary search has a time complexity of O(log n) because the search space
is divided by 2 after each comparison. If the element is not present, it takes log(n) comparisons,
where n is the size of the dataset. So, T(n) = log n —> O(log n)
The Average Case:
In the average case, the search time depends on the position of the target relative to the middle.
Each iteration reduces the search space logarithmically, so:
T(n) = log n —> O(log n)
Space Complexity:
—> The auxiliary space of iterative binary search is O(1), for variables such as low, high, mid, and the target; storing the array of n elements takes O(n).
—> Recursive binary search consumes O(log n) extra space, because each recursive call halves the range, giving a call stack of log₂(n) levels.
—> Iterative binary search is therefore more space-efficient, needing only constant auxiliary space.
This logarithmic growth reflects the halving process, making binary search much faster than linear search for large datasets.
Week 4: Divide And Conquer 2:
1a) Aim: Design and perform analysis (time and space complexity) to sort the given
element in ascending or descending order for the given data type /database/data set using
Quick sort or exchange sort technique
Algorithm:
Step 1: Define the array
We have an unsorted array that needs to be sorted using Quicksort.
Step 2: Choose a Pivot and Start the Timer
Choose a pivot element from the array (typically the last element, the first element, or a
random element). Partition the array so that elements smaller than the pivot go to the left,
and elements greater go to the right.
function quickSort(array, low, high):
    if low < high:
        pivotIndex = partition(array, low, high)
        quickSort(array, low, pivotIndex - 1)    # Recursively sort left part
        quickSort(array, pivotIndex + 1, high)   # Recursively sort right part
Step 3: Partition the array around the pivot
Rearrange the elements so that elements smaller than the pivot move to the left and greater
elements move to the right.
function partition(array, low, high):
    pivot = array[high]            # Choosing the last element as pivot
    i = low - 1                    # Pointer for smaller element
    for j in range(low, high):
        if array[j] < pivot:
            i += 1
            swap(array[i], array[j])
    swap(array[i + 1], array[high])   # Place pivot in correct position
    return i + 1                      # Return pivot index
Step 4: Recursively apply QuickSort to sub-arrays
Sort the left and right partitions recursively.
quickSort(array, low, pivotIndex - 1)
quickSort(array, pivotIndex + 1, high)
Step 5: The array is now sorted
Once all recursive calls are completed, the array is fully sorted.
Step 6: Stop Timer and Calculate Execution Time
Measure the time taken for sorting.
startTime = currentTime()
quickSort(array, 0, length(array) - 1)
endTime = currentTime()
executionTime = endTime - startTime
print("Execution Time:", executionTime)
Flow chart:
Program:
public class QuickSort {
static void swap(int[] arr, int i, int j) {
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
static int partition(int[] arr, int low, int high) {
int pivot = arr[high];
int i = (low - 1);
for (int j = low; j < high; j++) {
if (arr[j] < pivot) {
i++;
swap(arr, i, j);
}
}
swap(arr, i + 1, high);
return (i + 1);
}
static void quickSort(int[] arr, int low, int high) {
if (low < high) {
int pi = partition(arr, low, high);
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}
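The QuickSort listing above ends before its driver; a self-contained sketch of the complete program (the class name QuickSortPerformance and the timing/random-array driver, modeled on the other weeks, are assumptions) could be:

```java
import java.util.Arrays;
import java.util.Random;

public class QuickSortPerformance {
    static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    // Lomuto partition: last element as pivot, smaller elements moved to the left
    static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high);   // Place pivot in its correct position
        return i + 1;
    }

    static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);    // Sort left part
            quickSort(arr, pi + 1, high);   // Sort right part
        }
    }

    public static void main(String[] args) {
        int size = 12;
        int[] arr = new int[size];
        Random random = new Random();
        for (int i = 0; i < size; i++) arr[i] = random.nextInt(1000);
        System.out.println("Original Array: " + Arrays.toString(arr));
        long startTime = System.nanoTime();
        quickSort(arr, 0, arr.length - 1);
        long endTime = System.nanoTime();
        System.out.println("Sorted Array: " + Arrays.toString(arr));
        System.out.println("Approximate Space Complexity: " + (arr.length * 4) + " bytes");
        System.out.println("Execution Time: " + (endTime - startTime) + " nanoseconds");
    }
}
```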
Algorithm:
Merge Sort Algorithm:
Merge Sort is a divide-and-conquer algorithm that splits an array into two halves, recursively
sorts them, and then merges the sorted halves.
This simple algorithm efficiently sorts an array using the merge sort technique, leveraging
recursion and merging steps to handle the sorting process.
Step 1: Define the array:
Step 2: Split the array into two halves, and from here start the timer
function mergeSort(array):
    if length(array) > 1:
        mid = length(array) // 2
        leftHalf = array[0:mid]
        rightHalf = array[mid:length(array)]
        mergeSort(leftHalf)
        mergeSort(rightHalf)
        merge(array, leftHalf, rightHalf)
Step 3: Recursively divide the subarrays until each subarray contains a single element.
mergeSort(leftHalf)
mergeSort(rightHalf)
merge(array, leftHalf, rightHalf)
Step 4: Merge the subarrays back together, comparing elements and arranging them in sorted order.
function merge(array, leftHalf, rightHalf):
    i = j = k = 0
    while i < length(leftHalf) and j < length(rightHalf):
        if leftHalf[i] < rightHalf[j]:
            array[k] = leftHalf[i]
            i += 1
        else:
            array[k] = rightHalf[j]
            j += 1
        k += 1
Step 5: Copy any remaining elements so that the merged output from the subarrays stays in sorted order.
while i < length(leftHalf):
    array[k] = leftHalf[i]
    i += 1
    k += 1
while j < length(rightHalf):
    array[k] = rightHalf[j]
    j += 1
    k += 1
Step 6: Build the final sorted array from the merged subarrays, stop the timer, and report the execution time in nanoseconds (measured here for 50 random elements).
Step 7: Calculate the approximate space used in bytes (also reported for 50 elements).
Flow chart:
Program:
import java.util.*;
public class Week3 {
public static void mergeSort(int [] arr) {
if (arr.length <= 1) return;
int mid = arr.length / 2;
int [] left = Arrays.copyOfRange(arr, 0, mid);
int [] right = Arrays.copyOfRange(arr, mid, arr.length);
mergeSort(left);
mergeSort(right);
merge (arr, left, right);
}
public static void merge (int [] arr, int [] left, int [] right)
{
int i = 0, j = 0, k = 0;
while (i < left.length && j < right.length) {
arr[k++] = (left[i] <= right[j])? left[i++]: right[j++];
}
while (i < left.length) arr[k++] = left[i++];
while (j < right.length) arr[k++] = right[j++];
}
public static void main (String [] args) {
Scanner sc = new Scanner (System.in);
System.out.print("Enter the size of the array: ");
int size = sc.nextInt();
int [] array = new int[size];
Random random = new Random ();
for (int i = 0; i < size; i++) {
array[i] = random.nextInt(1000);
}
System.out.println("Original Array: " + Arrays.toString(array));
long startTime = System.nanoTime();
mergeSort(array);
long endTime = System.nanoTime();
System.out.println("Sorted Array: " + Arrays.toString(array));
int spaceComplexity = array.length * 4 + (array.length / 2) * 4 * 2;
System.out.println("Approximate Space Complexity: " + spaceComplexity + " bytes");
System.out.println("Execution Time: " + (endTime - startTime) + " nanoseconds");
sc.close();
}
}
Output:
Enter the size of the array: 50
Original Array: [165, 748, 751, 889, 931, 436, 782, 122, 582, 148, 170, 237, 514, 75, 779,
727, 474, 206, 173,
174, 223, 250, 152, 718, 922, 111, 806, 251, 322, 888, 216, 680, 435, 318, 261, 742, 235,
860, 920, 336, 674, 478,
613, 651, 686, 851, 777, 411, 927, 185]
Sorted Array: [75, 111, 122, 148, 152, 165, 170, 173, 174, 185, 206, 216, 223, 235, 237, 250,
251, 261, 318, 322,
336, 411, 435, 436, 474, 478, 514, 582, 613, 651, 674, 680, 686, 718, 727, 742, 748, 751,
777, 779, 782, 806, 851,
860, 888, 889, 920, 922, 927, 931]
Approximate Space Complexity: 400 bytes
Execution Time: 60250 nanoseconds
Analysis:
1. Time Complexity Analysis:
Merge Sort is a divide-and-conquer algorithm that recursively splits the array and merges
the sorted halves.
Time Complexity Breakdown:
Dividing Step: Each recursive call splits the array into two halves. This requires O(log n)
time due to the depth of the recursion tree.
Merging Step: Each level of recursion requires O(n) time to merge the two subarrays.
Total time complexity = O(n log n)
Best Case, Worst Case, and Average Case:
Best Case: O(n log n) (even when the array is already sorted or reverse-sorted)
Worst Case: O(n log n) (the split-and-merge process is the same in all cases)
Average Case: O(n log n)
2. Space Complexity Analysis:
Merge Sort requires additional space for the temporary subarrays during the merging process.
Space Usage Calculation:
Input Array: Requires 4*n bytes (each integer takes 4 bytes)
Left and Right Arrays: At each level, two subarrays of approximately n/2 size are created recursively.
Total auxiliary space required = O(n)
In the program:
Input array: 4 * n bytes
Left and right arrays: (n/2) × 4 × 2 = 4n bytes
Total Space Complexity: O(n)
Week 6
Aim:
Design and perform analysis (time and space complexity) to perform a Minimum Spanning
Tree (MST) or Minimum Weight Spanning Tree for a weighted, connected, undirected
graph in a spanning tree with a weight less than or equal to the weight of every other spanning
tree.
Algorithm:
Step 1: Input the number of vertices and edges in the graph
Step 2: Input the edges (u, v, weight)
Step 3: Sort all edges based on their weights (non-decreasing)
Step 4: Initialize DSU (Disjoint Set Union) to detect cycles
Step 5: Traverse the sorted edge list
If adding an edge doesn’t form a cycle, include it in the MST
Step 6: Stop when MST has (V-1) edges
Step 7: Print edges included in MST and total weight
Step 8: Display Execution Time (in nanoseconds) and Memory Used (in bytes)
FLOW CHART:
Program:
import java.util.*;
class DisjointSet {
int [] parent;
DisjointSet(int n) {
parent = new int[n];
for (int i = 0; i < n; i++) parent[i] = i;
}
int find (int x) {
if (parent[x] != x) parent[x] = find(parent[x]);  // path compression
return parent[x];
}
void union (int x, int y) {
parent[find(x)] = find(y);
}
}
Arrays.sort(edges);
DisjointSet ds = new DisjointSet(V);
int totalWeight = 0;
List<Edge> mst = new ArrayList<> ();
System.out.println("\nEdges in MST:");
for (Edge edge : mst) {
System.out.println(edge.u + " - " + edge.v + " : " + edge.weight);
}
System.out.println("Total Weight of MST: " + totalWeight);
System.out.println("Execution Time (nanoseconds): " + executionTime);
sc.close();
}
}
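The Kruskal listing above is missing its Edge class and the union loop of main; a self-contained sketch reproducing the first sample run (the class names KruskalMST/Edge and the fixed sample data are assumptions) could be:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class KruskalMST {
    static class Edge implements Comparable<Edge> {
        int u, v, weight;
        Edge(int u, int v, int weight) { this.u = u; this.v = v; this.weight = weight; }
        public int compareTo(Edge other) { return this.weight - other.weight; }
    }

    static class DisjointSet {
        int[] parent;
        DisjointSet(int n) {
            parent = new int[n];
            for (int i = 0; i < n; i++) parent[i] = i;
        }
        int find(int x) {
            if (parent[x] != x) parent[x] = find(parent[x]);  // path compression
            return parent[x];
        }
        void union(int x, int y) { parent[find(x)] = find(y); }
    }

    // Sort edges by weight; greedily add each edge that does not close a cycle
    static int kruskal(int V, Edge[] edges, List<Edge> mst) {
        Arrays.sort(edges);
        DisjointSet ds = new DisjointSet(V);
        int totalWeight = 0;
        for (Edge edge : edges) {
            if (ds.find(edge.u) != ds.find(edge.v)) {
                ds.union(edge.u, edge.v);
                mst.add(edge);
                totalWeight += edge.weight;
                if (mst.size() == V - 1) break;   // MST complete
            }
        }
        return totalWeight;
    }

    public static void main(String[] args) {
        // Sample graph from the first run below: 4 vertices, 5 edges
        Edge[] edges = {
            new Edge(0, 1, 10), new Edge(0, 2, 6), new Edge(0, 3, 5),
            new Edge(1, 3, 15), new Edge(2, 3, 4)
        };
        List<Edge> mst = new ArrayList<>();
        long start = System.nanoTime();
        int totalWeight = kruskal(4, edges, mst);
        long end = System.nanoTime();
        System.out.println("Edges in MST:");
        for (Edge e : mst) System.out.println(e.u + " - " + e.v + " : " + e.weight);
        System.out.println("Total Weight of MST: " + totalWeight);
        System.out.println("Execution Time (nanoseconds): " + (end - start));
    }
}
```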
Output:
Enter number of vertices: 4
Enter number of edges: 5
Enter edges (u v weight):
0 1 10
0 2 6
0 3 5
1 3 15
2 3 4
Edges in MST:
2 - 3: 4
0 - 3: 5
0 - 1: 10
Total Weight of MST: 19
Execution Time (nanoseconds): 84200
Approximate Memory Used (bytes): 68
Output:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index 5 out of
bounds for length 4
at DisjointSet.find(...)
at KruskalMST.main(...)
ANALYSIS:
Best Case Time Complexity: O(E log E)
● Occurs when:
o The edges are already sorted by weight.
o The MST is formed quickly without complex union operations.
● Sorting edges dominates: O (E log E)
● Union-Find operations are near-constant due to path compression.
Input:
Enter number of vertices: 4
Enter number of edges: 3
Enter edges (u v weight):
0 1 1
1 2 2
2 3 3
Output:
Edges in MST:
0 - 1: 1
1 - 2: 2
2 - 3: 3
Total Weight of MST: 6
Execution Time (nanoseconds): 7450
Approximate Memory Used (bytes): 60
Input:
Enter number of vertices: 6
Enter number of edges: 8
Enter edges (u v weight):
0 1 3
0 2 1
1 3 7
2 3 5
3 4 2
4 5 4
2 5 6
1 5 8
Output:
Edges in MST:
0 - 2: 1
3 - 4: 2
0 - 1: 3
4 - 5: 4
2 - 3: 5
Total Weight of MST: 15
Execution Time (nanoseconds): 18300
Approximate Memory Used (bytes): 80
Space Complexity:
Component Complexity
Edge list O(E)
DSU parent array O(V)
MST edge storage O(V)
Auxiliary variables O(1)
● 🔸 Total Space: O (E + V)
● 🔸 Approximate Memory Used (bytes):
E * 12 for edges, V * 4 for DSU, V * 12 for MST edges
→ Memory = (E * 12) + (V * 4) + (V * 12) + small constants
Week 7
Aim: - Design and perform analysis (time and space complexity) for the following, all pairs
shortest path algorithm, which is also known as the Floyd-Warshall algorithm, to generate a
matrix representing the minimum distances between nodes in a weighted graph; for an Optimal
Binary Search Tree (OBST) that minimizes search cost; and for maximizing profits by
optimally filling a bag with given items based on weight and profit constraints.
Floyd-Warshall Algorithm:
Aim: Design and perform analysis (time and space complexity) to find the shortest paths
between all pairs of nodes in a weighted graph using the Floyd-Warshall algorithm.
Algorithm:
1. Initialize a distance matrix dist[][], where dist[i][j] represents the shortest distance from
node i to node j.
2. Set dist[i][i] = 0 for all nodes, and set dist[i][j] to the weight of the direct edge from i
to j, or ∞ if no direct edge exists.
3. Iterate through all intermediate nodes k:
o For each pair of nodes (i, j), update dist[i][j] as:
dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
4. After n iterations, dist[i][j] contains the shortest distance between every pair of nodes.
Flowchart:
Program Implementation:
import java.util.Scanner;
floydWarshall(graph, n);
sc.close();
}
}
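The Floyd-Warshall listing above is truncated; a self-contained sketch of the triple loop (the class name FloydWarshall and the driver are assumptions, with 99999 standing in for INF as in the sample runs) could be:

```java
public class FloydWarshall {
    static final int INF = 99999;

    // Try every vertex k as an intermediate point between every pair (i, j)
    static int[][] floydWarshall(int[][] graph, int n) {
        int[][] dist = new int[n][n];
        for (int i = 0; i < n; i++)
            dist[i] = graph[i].clone();
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];
        return dist;
    }

    public static void main(String[] args) {
        // Sample graph from the first run below
        int[][] graph = {
            {0, 3, INF, INF},
            {2, 0, INF, INF},
            {INF, 7, 0, 1},
            {6, INF, INF, 0}
        };
        int[][] dist = floydWarshall(graph, 4);
        System.out.println("Shortest distance matrix:");
        for (int[] row : dist) {
            StringBuilder sb = new StringBuilder();
            for (int d : row) sb.append(d >= INF ? "INF" : String.valueOf(d)).append(' ');
            System.out.println(sb.toString().trim());
        }
    }
}
```

The three nested loops make the O(n^3) time bound in the analysis immediate.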
Input:
Enter number of vertices: 4
Enter the adjacency matrix (use 99999 for INF):
0 3 99999 99999
2 0 99999 99999
99999 7 0 1
6 99999 99999 0
Output:
Shortest distance matrix:
0 3 INF INF
2 0 INF INF
9 7 0 1
6 9 INF 0
Wrong input:
Enter number of vertices: 4
Enter the adjacency matrix (use 99999 for INF):
0 2 99999 3
99999 0 5 99999
1 99999 0 99999
99999 99999 99999 0
Output:
Shortest distance matrix:
0 2 7 3
3 0 5 6
1 3 0 4
5 4 3 0
● The Floyd-Warshall algorithm runs three nested loops over n vertices, leading to a time complexity of: T(n) = O(n^3)
● For a graph with n = 100 nodes, assuming each basic operation (comparison, addition, assignment) takes 1 nanosecond (ns), the estimated runtime is:
100^3 = 1,000,000 ns = 1 ms
● The n × n distance matrix of 4-byte integers requires:
For n = 100: 100^2 × 4 = 40,000 bytes = 40 KB
For n = 500: 500^2 × 4 = 1,000,000 bytes = 1 MB
Optimal Binary Search Tree (OBST)
Aim: Design and perform analysis (time and space complexity) to
construct an OBST that minimizes search cost.
Algorithm:
1. Input the sorted keys and their search frequencies freq[i].
2. Create a table cost[i][j] storing the minimum search cost of a BST built from keys i..j;
initialize cost[i][i] = freq[i].
3. For increasing subtree sizes, try every key r in i..j as the root:
cost[i][j] = min over r of (cost[i][r-1] + cost[r+1][j] + sum of freq[i..j])
4. The root that minimizes the cost is chosen as the root of the OBST.
FlowChart:
Program Implementation:
import java.util.Scanner;
int spaceUsed = (n * n * 4) + (n * 4) * 2;
System.out.println("Approximate Memory Used (bytes): " +
spaceUsed);
System.out.println("Enter keys:");
for (int i = 0; i < n; i++) {
keys[i] = sc.nextInt();
}
sc.close();
}
}
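Only the timing and input-reading lines of the OBST program remain above. A minimal sketch of the O(n³) cost computation (the class and method names are illustrative, not the original program); for the frequencies 4 2 6 3 of the sample run it computes the same Optimal Cost of 26:

```java
public class ObstSketch {
    // Classic O(n^3) DP: cost[i][j] = minimum search cost of an optimal BST over keys i..j.
    static int optimalCost(int[] freq, int n) {
        int[][] cost = new int[n][n];
        for (int i = 0; i < n; i++) cost[i][i] = freq[i];
        for (int len = 2; len <= n; len++) {
            for (int i = 0; i <= n - len; i++) {
                int j = i + len - 1;
                int fsum = 0;
                for (int s = i; s <= j; s++) fsum += freq[s]; // every key gets one level deeper
                cost[i][j] = Integer.MAX_VALUE;
                for (int r = i; r <= j; r++) {                // try each key as the root
                    int left = (r > i) ? cost[i][r - 1] : 0;
                    int right = (r < j) ? cost[r + 1][j] : 0;
                    cost[i][j] = Math.min(cost[i][j], left + right + fsum);
                }
            }
        }
        return cost[0][n - 1];
    }

    public static void main(String[] args) {
        int[] freq = {4, 2, 6, 3}; // frequencies from the sample run
        System.out.println("Optimal Cost: " + optimalCost(freq, freq.length)); // prints 26
    }
}
```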
Input:
Enter number of keys: 4
Enter keys:
10 20 30 40
Enter corresponding frequencies:
4 2 6 3
Output:
Execution Time (nanoseconds): 51800
Approximate Memory Used (bytes): 112
Optimal Cost: 26
Wrong Input:
Enter number of keys: 5
Enter keys:
5 10 15 20 25
Enter corresponding frequencies:
7 3 5 2 6
Output:
Execution Time (nanoseconds): 42666
Approximate Memory Used (bytes): 144
Optimal Cost: 42
● For n = 50, at 1 ns per basic operation:
50³ = 125,000 ns = 125 µs
● For n = 100:
100³ = 1,000,000 ns = 1 ms
0/1 Knapsack Problem
Aim: Design and perform analysis (time and space complexity) to maximize profit by
optimally filling a knapsack with given items based on weight and profit constraints.
Algorithm:
1. Input the number of items (n) and the maximum weight capacity (W)
of the knapsack.
2. Input the weight and value of each item.
3. Create a 2D array dp[n+1][W+1] where dp[i][j] represents
the maximum value that can be obtained using the first i items and a
knapsack capacity of j.
4. Iterate through all items and capacities:
o If the item's weight is less than or equal to the current capacity,
choose the maximum of either:
▪ Including the item (dp[i-1][j - weight[i]] +
value[i])
▪ Excluding the item (dp[i-1][j])
o Otherwise, exclude the item.
5. The final answer is stored in dp[n][W].
6. Display the maximum profit and the items included.
Flowchart:
Program Implementation:
import java.util.Scanner;
return dp[n][W];
}
System.out.println("Enter values:");
for (int i = 0; i < n; i++) {
val[i] = sc.nextInt();
}
System.out.println("Enter weights:");
for (int i = 0; i < n; i++) {
wt[i] = sc.nextInt();
}
sc.close();
}
}
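The knapsack listing above keeps only the input loops and the return statement. A minimal sketch of the DP table described in steps 3-5 (class and method names are illustrative); for the sample run below it yields the same Maximum Profit of 220:

```java
public class KnapsackSketch {
    // Bottom-up 0/1 knapsack; dp[i][j] = best value using the first i items at capacity j.
    static int knapsack(int[] val, int[] wt, int n, int W) {
        int[][] dp = new int[n + 1][W + 1];
        for (int i = 1; i <= n; i++) {
            for (int j = 0; j <= W; j++) {
                dp[i][j] = dp[i - 1][j];            // exclude item i
                if (wt[i - 1] <= j)                 // include item i only if it fits
                    dp[i][j] = Math.max(dp[i][j],
                            dp[i - 1][j - wt[i - 1]] + val[i - 1]);
            }
        }
        return dp[n][W];
    }

    public static void main(String[] args) {
        int[] val = {60, 100, 120}, wt = {10, 20, 30};
        System.out.println("Maximum Profit: " + knapsack(val, wt, 3, 50)); // prints 220
    }
}
```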
Input:
Enter number of items: 3
Enter values:
60 100 120
Enter weights:
10 20 30
Enter maximum weight capacity: 50
Output:
Execution Time (nanoseconds): 43400
Approximate Memory Used (bytes): 816
Maximum Profit: 220
Wrong Input:
Enter number of items: 4
Enter values:
20 40 50 100
Enter weights:
5 10 15 30
Enter maximum weight capacity: 40
Output:
Execution Time (nanoseconds): 56000
Approximate Memory Used (bytes): 1240
Maximum Profit: 160
● For n = 50 items and W = 1000, at 1 ns per basic operation:
50 × 1000 = 50,000 ns = 50 µs
● The (n × W) dp table of 4-byte integers dominates the space. For n = 50, W = 1000:
50 × 1000 × 4 = 200,000 bytes = 200 KB
● For n = 100, W = 2000:
100 × 2000 × 4 = 800,000 bytes = 800 KB
WEEK 8
(Dynamic Programming 2)
Aim: Design and perform analysis (time and space complexity) of the concepts of reliability
of flow shop scheduling by designing a system of devices connected in series or parallel to
ensure device reliability, while also scheduling m machines to process n jobs, each with n
operations, in a specific order on designated machines, ensuring optimal operation
execution.
Algorithm
Step – 1: Initialize DP Table
● Create a 2D DP table dp[m+1][n+1], where dp[i][j] stores the completion time of job i on
machine j.
Step – 2: Fill the Table
● Use the recurrence dp[i][j] = max(dp[i-1][j], dp[i][j-1]) + time[i][j], since job i can start
on machine j only after it finishes on machine j-1 and machine j is free; dp[m][n] gives
the makespan.
sc.close();
}
}
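The flow-shop program itself is missing above; a minimal sketch of the completion-time recurrence for jobs processed in the given order is shown here. Note this is an assumption about the core DP only: the full program also weighs reliability values and may reorder jobs, so this sketch will not necessarily reproduce the makespan figures in the sample runs below.

```java
public class FlowShopSketch {
    // Completion-time recurrence for a fixed job order:
    // c[i][j] = max(c[i-1][j], c[i][j-1]) + time[i][j]
    static int makespan(int[][] time, int m, int n) {
        int[][] c = new int[m][n];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                int prevJob = (i > 0) ? c[i - 1][j] : 0;     // machine j becomes free
                int prevMachine = (j > 0) ? c[i][j - 1] : 0; // job i leaves machine j-1
                c[i][j] = Math.max(prevJob, prevMachine) + time[i][j];
            }
        }
        return c[m - 1][n - 1];
    }

    public static void main(String[] args) {
        int[][] t = {{2, 3}, {4, 1}}; // tiny illustrative 2-job, 2-machine instance
        System.out.println("Makespan: " + makespan(t, 2, 2)); // prints 7
    }
}
```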
Input :
Enter number of jobs (m): 3
Enter number of machines (n): 3
Enter the processing time matrix (m x n):
Enter time for job 1 on machine 1: 5
Enter time for job 1 on machine 2: 3
Enter time for job 1 on machine 3: 2
Enter time for job 2 on machine 1: 4
Enter time for job 2 on machine 2: 6
Enter time for job 2 on machine 3: 3
Enter time for job 3 on machine 1: 7
Enter time for job 3 on machine 2: 2
Enter time for job 3 on machine 3: 5
Enter the reliability values for each job:
Enter reliability for job 1: 0.9
Enter reliability for job 2: 0.8
Enter reliability for job 3: 0.95
Output :
Minimum makespan: 17
Execution time (ns): 125000
Space used (bytes): 64
Wrong input:
Enter number of jobs (m): 3
Enter number of machines (n): 4
Enter the processing time matrix (m x n):
Enter time for job 1 on machine 1: 5
Enter time for job 1 on machine 2: 3
Enter time for job 1 on machine 3: 4
Enter time for job 1 on machine 4: 6
Enter time for job 2 on machine 1: 7
Enter time for job 2 on machine 2: 2
Enter time for job 2 on machine 3: 5
Enter time for job 2 on machine 4: 4
Enter time for job 3 on machine 1: 6
Enter time for job 3 on machine 2: 3
Enter time for job 3 on machine 3: 2
Enter time for job 3 on machine 4: 7
Enter the reliability values for each job:
Enter reliability for job 1: 0.9
Enter reliability for job 2: 0.85
Enter reliability for job 3: 0.95
Output:
Minimum makespan: 26
Execution time (ns): 135000
Space used (bytes): 80
Time Complexity:
1. Best Case Time Complexity:
The best-case scenario occurs when the jobs have very low processing times or perfect
reliability (i.e., reliability = 1), and the jobs can be processed in an optimal order without
delays.
Example:
● Jobs (m): 3
● Machines (n): 3
Time matrix (processing time on each machine):
{ {10, 20, 30},
{15, 25, 35},
{20, 30, 40} }
● Reliability: {0.5, 0.4, 0.3}
● Machines (n): 3
{ {3, 5, 7},
{4, 6, 8},
{2, 3, 5} }
● Reliability: {0.9, 0.95, 0.98}
Aim: Design and perform analysis (time and space complexity) of determining an array A
from an array B, ensuring that for all i, A[i] ≤ B[i], while maximizing the sum of the
absolute differences of consecutive pairs in A; concurrently segment array A into contiguous
pieces to store as array B, connect N ropes of varying lengths, and predict market share
prices of Wooden Orange Toothpicks Inc. for the upcoming days.
Algorithm
● Set A[0]=B[0].
● For each i from 1 to n−1:
● Find array A.
Step 8: Stop the timer, calculate execution time, and measure memory usage.
import java.util.*;
int n = B.length;
A[0] = B[0];
return A;
}
public static List<List<Integer>> segmentArray(int[] A) {
segment.add(A[i]);
segments.add(segment);
return segments;
minHeap.add(rope);
int totalCost = 0;
totalCost += cost;
minHeap.add(cost);
return totalCost;
}
public static double predictStockPrice(int[] prices, int k) {
double sum = 0;
sum += prices[i];
return sum / k;
= sc.nextInt();
B[i] = sc.nextInt();
runtime = Runtime.getRuntime();
int[] A = constructArrayA(B);
= sc.nextInt();
ropes[i] = sc.nextInt();
prices[i] = sc.nextInt();
sc.close();
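The rope-joining part of the program is fragmented above. A self-contained sketch of the greedy min-heap approach it uses (minJoinCost is an illustrative name); for the rope lengths 4 3 2 6 in the sample run it returns a total cost of 29 (2+3=5, then 4+5=9, then 6+9=15):

```java
import java.util.PriorityQueue;

public class RopeSketch {
    // Greedy: always join the two shortest ropes; the accumulated joining cost is minimal.
    static int minJoinCost(int[] ropes) {
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        for (int rope : ropes) minHeap.add(rope);
        int totalCost = 0;
        while (minHeap.size() > 1) {
            int cost = minHeap.poll() + minHeap.poll(); // join the two shortest ropes
            totalCost += cost;
            minHeap.add(cost); // the joined rope goes back into the pool
        }
        return totalCost;
    }

    public static void main(String[] args) {
        System.out.println(minJoinCost(new int[]{4, 3, 2, 6})); // prints 29
    }
}
```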
Input:
Enter the number of elements in B: 5
Enter elements of B:
10 5 8 12 15
Enter rope lengths:
4 3 2 6
Output:
Segmented Array A into contiguous pieces: [[10, 10, 10], [12], [15]]
Wrong Input:
Enter elements of B:
4 3 5 2 6
8 3 4 5
Enter the number of stock prices to consider: 3
Output:
Since heap operations are dominant when m ≈ n, the worst-case time complexity becomes:
O(n + m log m + k)
● Heap operations (rope merging) generally take O(m log m), but with random rope
lengths, it may not always be the full O(m log m).
● Stock price prediction remains O(k).
For most practical cases where m is not extremely large compared to n, the complexity
simplifies to:
O(n + m log m)
If m is significantly smaller than n, then heap operations become negligible, and the time
complexity approaches O(n).
Final Summary
Design and perform analysis (time and space complexity) for dividing an integer array of N
elements into K non-empty subsets such that the sum of elements in each subset is the same;
printing all the longest common subsequences of two strings in lexicographical order; and finding
all possible palindromic partitions of a given string, ensuring every element is part of exactly one
partition.
1. Dividing an Integer Array into K Subsets with Equal Sum:
Problem Description:
We need to divide an array of N integers into K non-empty subsets such that the sum of
elements in each subset is the same.
Algorithm:
1. Compute the total sum of the array; if it is not divisible by K, partitioning is impossible.
2. Set target = sum / K.
3. Backtrack over the elements: add unused elements to the current subset; when the
current subset reaches the target, start filling the next subset.
4. If all K subsets are filled, a valid partition exists.
import java.util.*;
private static boolean backtrack(int[] nums, boolean[] used, int k, int start, int currSum, int target) {
if (k == 0) return true;
if (currSum == target) return backtrack(nums, used, k - 1, 0, 0, target);
scanner.close();
}
}
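Only the backtrack helper survives in the listing above. A self-contained sketch that wraps it with the divisibility check (class and method names are illustrative); it reports the correct-input example arr = [4, 3, 2, 3, 5, 2, 1], K = 4 as partitionable (target 5: {5}, {4,1}, {3,2}, {3,2}):

```java
public class PartitionSketch {
    static boolean canPartition(int[] nums, int k) {
        int sum = 0;
        for (int v : nums) sum += v;
        if (k <= 0 || sum % k != 0) return false; // sum must split evenly into k parts
        return backtrack(nums, new boolean[nums.length], k, 0, 0, sum / k);
    }

    // Fill one subset up to `target` at a time, then recurse with k-1 subsets left.
    static boolean backtrack(int[] nums, boolean[] used, int k, int start, int currSum, int target) {
        if (k == 0) return true; // every subset filled
        if (currSum == target) return backtrack(nums, used, k - 1, 0, 0, target);
        for (int i = start; i < nums.length; i++) {
            if (!used[i] && currSum + nums[i] <= target) {
                used[i] = true;
                if (backtrack(nums, used, k, i + 1, currSum + nums[i], target)) return true;
                used[i] = false; // undo the choice and try the next element
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] arr = {4, 3, 2, 3, 5, 2, 1};
        System.out.println(canPartition(arr, 4) ? "Possible to partition"
                                                : "Not possible to partition");
    }
}
```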
Incorrect Input:
Enter number of elements: 5
Enter the elements:
4 3 2 3 5
Enter number of subsets (K): 4
Output:
Not possible to partition
Execution Time: 3283799 nanoseconds
Approximate Space Used: 928 bytes
Correct Example Input :
4 3 2 3
Expected Output :
Possible to partition
Execution Time: 2673400 nanoseconds
Approximate Space Used: 800 bytes
ANALYSIS:
Time Complexity:
1. Best Case:
If the array is already balanced into K equal-sum subsets, the function completes quickly with
minimal backtracking.
2. Average Case:
Some pruning occurs due to early termination when invalid subsets are detected.
Input:
Enter number of elements: 4
Enter the elements:
1 5 11 5
Enter number of subsets (K): 2
Output:
Possible to partition into 2 subsets with equal sum.
3. Worst Case: O(2^N)
In the worst case, the function explores all possible subsets due to backtracking.
Input:
Enter number of elements: 5
Enter the elements:
1 2 3 4 5
Enter number of subsets (K): 3
Output:
Not possible to partition
Space Complexity:
Example:
For arr = [4, 3, 2, 3, 5, 2, 1] and K = 4, early pruning may allow quick partitioning, making it closer to
the average case.
2. Printing All Longest Common Subsequences (LCS) in Lexicographical Order
Problem Definition:
Given two strings s1 and s2, find all longest common subsequences (LCS) and print them in
lexicographical order.
Algorithm:
• Create dp[m+1][n+1] where dp[i][j] stores the LCS length of prefixes s1[0…i-1] and s2[0…j-1].
import java.util.*;
private static Set<String> backtrackLCS(String s1, String s2, int i, int j, int[][] dp) {
if (i == 0 || j == 0)
return new HashSet<>(Collections.singleton(""));
scanner.close();
}
}
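The listing above keeps only the base case of backtrackLCS. A self-contained sketch of the full approach — build the dp table, walk it backwards collecting every distinct LCS, then sort with a TreeSet (class and helper names are illustrative). A small verifiable example: for "abc" and "acb" the two LCS are "ab" and "ac".

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class AllLcsSketch {
    // Standard LCS length table over prefixes s1[0..i-1], s2[0..j-1].
    static int[][] buildDp(String s1, String s2) {
        int m = s1.length(), n = s2.length();
        int[][] dp = new int[m + 1][n + 1];
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                dp[i][j] = (s1.charAt(i - 1) == s2.charAt(j - 1))
                        ? dp[i - 1][j - 1] + 1
                        : Math.max(dp[i - 1][j], dp[i][j - 1]);
        return dp;
    }

    // Walk the table backwards, collecting every distinct longest common subsequence.
    static Set<String> allLcs(String s1, String s2, int i, int j, int[][] dp) {
        if (i == 0 || j == 0) return new HashSet<>(Collections.singleton(""));
        Set<String> result = new HashSet<>();
        if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
            for (String p : allLcs(s1, s2, i - 1, j - 1, dp))
                result.add(p + s1.charAt(i - 1));
        } else { // follow every direction that preserves the LCS length
            if (dp[i - 1][j] >= dp[i][j - 1]) result.addAll(allLcs(s1, s2, i - 1, j, dp));
            if (dp[i][j - 1] >= dp[i - 1][j]) result.addAll(allLcs(s1, s2, i, j - 1, dp));
        }
        return result;
    }

    public static void main(String[] args) {
        String s1 = "abc", s2 = "acb";
        Set<String> sorted = new TreeSet<>(
                allLcs(s1, s2, s1.length(), s2.length(), buildDp(s1, s2)));
        System.out.println(sorted); // prints [ab, ac]
    }
}
```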
INPUT:
Enter first string: abcabcaa
Enter second string: acbacba
OUTPUT:
Longest Common Subsequences in Lexicographical Order:
abcaba
abcbba
acbcaa
Wrong Input:
Enter first string: babdc
Enter second string: abac
Output:
Longest Common Subsequences in Lexicographical Order:
abc
bac
bbc
Time Complexity:
When all subsequences are different, backtracking explores every possibility.
Input:
Enter first string: abc Enter
second string: def
Output:
Longest Common Subsequences in Lexicographical Order:
(No common subsequences)
Space Complexity:
Example:
For s1 = "abcabcaa" and s2 = "acbacba", multiple LCS exist, requiring exponential
backtracking in the worst case.
3. Finding All Possible Palindromic Partitions Problem
Definition:
Given a string s, find all possible palindromic partitions ensuring every element is part of exactly one
partition.
Algorithm:
import java.util.*;
private static void backtrack(String s, int start, List<String> current, List<List<String>> result) {
if (start == s.length()) {
result.add(new ArrayList<>(current));
return;
}
for (int end = start + 1; end <= s.length(); end++) {
String substring = s.substring(start, end);
if (isPalindrome(substring)) {
current.add(substring);
backtrack(s, end, current, result);
current.remove(current.size() - 1);
}
}
}
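The backtrack method above is complete but lacks the isPalindrome helper and a driver. A self-contained sketch filling those gaps (the class name and two-pointer palindrome check are assumptions); for the input "aab" it produces the two partitions shown in the sample output:

```java
import java.util.ArrayList;
import java.util.List;

public class PalinPartitionSketch {
    // Two-pointer palindrome check.
    static boolean isPalindrome(String s) {
        for (int i = 0, j = s.length() - 1; i < j; i++, j--)
            if (s.charAt(i) != s.charAt(j)) return false;
        return true;
    }

    // Same backtracking as in the listing above: cut off every palindromic prefix and recurse.
    static void backtrack(String s, int start, List<String> current, List<List<String>> result) {
        if (start == s.length()) {
            result.add(new ArrayList<>(current));
            return;
        }
        for (int end = start + 1; end <= s.length(); end++) {
            String substring = s.substring(start, end);
            if (isPalindrome(substring)) {
                current.add(substring);
                backtrack(s, end, current, result);
                current.remove(current.size() - 1); // undo the choice
            }
        }
    }

    public static void main(String[] args) {
        List<List<String>> result = new ArrayList<>();
        backtrack("aab", 0, new ArrayList<>(), result);
        for (List<String> p : result) System.out.println(p); // [a, a, b] then [aa, b]
    }
}
```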
Input:
aab
Output:
[a, a, b]
[aa, b]
Wrong Input:
ababa
Wrong Output:
[a, b, a, b, a]
[a, b, aba]
[a, bab, a]
[aba, b, a]
[ababa]
Time Complexity:
Example:
Input:
S = "Aba"
Output:
Step 1: Initialization:
For the binary string generation, start with an empty current_string; for feature selection,
start with an empty subset of features.
Step 2: Branching:
For the binary string generation:
● Append '0' to current_string and explore this branch.
● Append '1' to current_string and explore this branch.
Step 3: Bounding:
For the binary string generation:
● Check if the length of current_string is equal to N. If yes, consider this as a valid binary string.
Step 4: Repeat:
Continue branching and bounding for both binary string generation and feature selection:
For binary string generation, append '0' or '1' to the current string until the length is N.
For feature selection, include or exclude each feature until all subsets are evaluated.
Step 5: Termination:
For binary string generation:
● Stop when all possible binary strings of length N are explored.
scanner.close();
}
}
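The program body is largely missing above. A minimal sketch of the branch-and-bound binary string generation from Steps 2-3 (class and method names are illustrative; the feature-selection and accuracy-scoring parts of the full program are not covered here); for N = 2 it produces the [00, 01, 10, 11] shown in the sample output:

```java
import java.util.ArrayList;
import java.util.List;

public class BinaryStringSketch {
    // Branching: extend the current string with '0', then with '1'.
    // Bounding: a branch terminates as soon as the string reaches length n.
    static void generate(String current, int n, List<String> out) {
        if (current.length() == n) {
            out.add(current); // valid binary string of length n
            return;
        }
        generate(current + "0", n, out);
        generate(current + "1", n, out);
    }

    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        generate("", 2, out);
        System.out.println("Binary strings of length 2: " + out); // [00, 01, 10, 11]
    }
}
```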
INPUT:
Enter the length of the binary string (N): 2
Enter the number of features to select (numFeatures): 2
Output:
Binary strings of length 2: [00, 01, 10, 11]
Selected feature subset: [0, 1]
Best accuracy: 0.5
Wrong Input:
Enter the length of the binary string (N): 5
Enter the number of features to select (numFeatures): 3
Output:
Binary strings of length 5: [00000, 00001, 00010, ..., 11111]
Selected feature subset: [0, 2, 3]
Best accuracy: 0.75
Execution Time: 5245398 nanoseconds
Approximate Space Used: 45312 bytes
● Complexity: O(2^k).
● Space Complexity: O(N · 2^N) (store all strings of length N).
Feature Selection:
● Time Complexity: O(2^m) (evaluate all subsets of m features).